\subsection*{Introduction} \noindent Lie detection is of considerable importance to modern society, in particular in connection with police investigations, court proceedings, and security questions. For example, the U.S. government invests large amounts of money in training ``behavior detection officers'' to detect terrorists from their behavior at airports. These programs have been criticized for being irrational \cite{press_NYT_20140323} because scientific evidence suggests that humans are only correct in approximately 54\% of lie--truth judgments \cite{Bond_DePaulo_06}. This is essentially as good as flipping a coin. In this context, the recent lie detection study \cite{tenBrinke_14} presents the surprising finding that unconscious processes are much better at detecting liars than conscious processes. Consequently, the study received enormous attention\footnote{Selected press coverage (retrieved Mar--May 2014): New York Times, Apr.\ 26 \pressLink{http://www.nytimes.com/2014/04/27/business/the-search-for-our-inner-lie-detectors.html}; Science Magazine, Apr.\ 1 \pressLink{http://news.sciencemag.org/signal-noise/2014/03/spot-liar-trust-your-instinct}; BBC, Mar.\ 29 \pressLink{http://www.bbc.com/news/health-26764866}; British Psychological Society, Mar.\ 28 \pressLink{http://www.bps.org.uk/news/our-subconscious-mind-may-detect-liars}; S{\"u}ddeutsche Zeitung, Mar.\ 27 \pressLink{http://www.sueddeutsche.de/wissen/psychologie-unterbewusstsein-durchschaut-unehrlichkeit-1.1923587}; The Times, Mar.\ 26 \pressLink{http://www.thetimes.co.uk/tto/science/article4045032.ece}; Pacific Standard, Mar.\ 25 \pressLink{http://www.psmag.com/navigation/health-and-behavior/unconscious-mind-better-detecting-lies-77368}; Science Daily, Mar.\ 24 \pressLink{http://www.scidai.ly/releases/2014/03/140324104520.htm}; New Scientist, Mar.\ 23 \pressLink{http://www.newscientist.com/article/mg22129610.700-invisible-how-to-see-through-lies.html}} with potentially far--reaching practical consequences. For example, consider jurors in court being advised ``Truth or lie --- trust your instinct, says research'' \cite{press_BBC_20140329,press_science_magazine_20140325}. This could make it very difficult to have a rational debate in cases where the truth does not seem as obvious as our instinct might suggest \cite{Loftus_03}. We show that the lie detection study does not provide ``strong evidence'' that ``consciousness interfere[s] with the natural ability to detect deception'' \cite[p.~6]{tenBrinke_14}. The reasoning used by the lie detection study, as well as by many other studies in the neurosciences, is illustrated in Fig.~1. Participants watched two videos of interrogations. In one video the suspect was lying; in the other, the suspect was telling the truth. Participants did not know who was the liar. The goal of the study was to find out whether participants could tell liars apart from truth--tellers (e.g., from signs of stress). After watching the videos, participants performed two tasks. The ``direct'' task (Fig.~1A) is assumed to tap conscious processes because the participants simply see pictures of the suspects and classify these pictures as truth--tellers or liars. As expected \cite{Bond_DePaulo_06}, participants performed very poorly in this direct task (49.6\% correct, with chance level being 50\%). \begin{figure*}[!tb] \centerline{\includegraphics{./figure-explaining-task2-cropped.pdf}} \caption{Experimental rationale and fallacy: Typically there exists some hidden stimulus attribute.
In the lie detection study this was whether the picture of a suspect showed a truth--teller or a liar. In other studies this could be the numerical size of a number or the emotional expression of a face that is hidden from consciousness by masking techniques. \textbf{A.} Direct task: When participants directly classify the hidden attribute, they typically perform badly. \textbf{B.} Indirect task: Nevertheless, the hidden attribute (``prime'') can affect RTs if participants perform a task on another well visible stimulus (``target''). In the lie detection study, participants decided whether well visible target--words were related to lying or truth--telling. They were faster if the targets were preceded by a congruent but hidden picture (e.g., the word ``deceitful'' preceded by the picture of a liar). While this is only possible if the hidden attribute was somehow processed by the nervous system, the fallacy is to conclude that there was relatively good unconscious classification accuracy of the hidden attribute, better than in the direct task.} \label{FigReasoning} \end{figure*} Things seemed to change drastically when the ``indirect'' task (Fig.~1B) was performed, which is assumed to tap unconscious processes. Now the pictures of the suspects were presented only briefly (``prime'') and hidden from consciousness by special masking techniques. The participants sorted well visible words (the ``targets'') like ``honest'' or ``deceitful'' into the categories ``truth'' or ``lie''. Interestingly, participants were significantly faster if such a word was preceded by a congruent picture of a suspect (e.g., the word ``deceitful'' was preceded by a picture of a liar) than if the word was preceded by an incongruent picture (e.g., the word ``deceitful'' was preceded by a picture of a truth--teller). This can only be explained if some information about whether the masked picture shows a liar or a truth--teller has been processed. However, the authors of the lie detection study \cite{tenBrinke_14} derived further--reaching conclusions from the significant congruency effect --- as is common practice in the neurosciences. They concluded that (i) the significant congruency effect indicates ``accurate unconscious assessments'' (p. 7) of truth--tellers vs.\ liars; (ii) in parallel to this accurate unconscious processing, there exists another, inaccurate conscious process; (iii) the accurate unconscious assessments can even be ``made inaccurate [...] by conscious'' processes (p. 7), such that it might be wise to prevent ``conscious deliberation about credibility'' (p. 7). We show below that none of these conclusions is warranted by the data. More generally, we show that a significant congruency effect alone does not provide sufficient evidence for such conclusions. \subsection*{The fallacy} The main reason is that while the significant congruency effect indeed suggests that the primes have been classified to a certain extent, it does not indicate how good this classification was. The test for a significant difference between reaction times (RTs) in congruent and incongruent trials is only concerned with the question of whether a `true' difference exists in the population at all. The test does not tell us how big this difference is, nor how well it could be harnessed for classification.
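This can be checked with a short simulation. The following R sketch (an illustration added here, not one of the original analyses; the values are placeholders chosen to resemble the lie detection data: a 4.4~ms congruency effect against a 146.5~ms trial--to--trial standard deviation, 66 participants with 180 trials per condition) typically yields a clearly significant congruency effect, while a median--split classification of single trials stays close to chance level.

\begin{verbatim}
# Tiny mean difference, large trial-to-trial noise: the congruency
# effect is significant across participants, yet single trials are
# essentially unclassifiable. All parameter values are placeholders.
set.seed(1)
n_subj <- 66; n_trials <- 180
effect <- 4.4; sd_rt <- 146.5; mean_rt <- 700  # ms; mean_rt arbitrary

diffs <- numeric(n_subj); acc <- numeric(n_subj)
for (s in 1:n_subj) {
  congruent   <- rnorm(n_trials, mean_rt,          sd_rt)
  incongruent <- rnorm(n_trials, mean_rt + effect, sd_rt)
  diffs[s] <- mean(incongruent) - mean(congruent)
  # Median classifier: call a trial 'congruent' if its RT lies below
  # the participant's median RT.
  rt    <- c(congruent, incongruent)
  truth <- rep(c("con", "inc"), each = n_trials)
  pred  <- ifelse(rt <= median(rt), "con", "inc")
  acc[s] <- mean(pred == truth)
}
print(t.test(diffs))  # significant congruency effect (typically p < .05)
print(mean(acc))      # classification accuracy: about 0.51, near chance
\end{verbatim}

With a per--trial signal--to--noise ratio of $4.4/146.5 \approx 0.03$, the optimal accuracy for two equal--variance normal distributions is $\Phi(0.015) \approx 50.6\%$, consistent with the classifier results reported below.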
\paragraph*{In a nutshell:} The fallacy is to conclude from a significant effect in the indirect task that there has been good indirect classification performance of the prime (at least better than the classification performance in the direct task). However, the significant effect only indicates that {\em some} information about the stimuli has been processed, not {\em how much} information. Given enough statistical power, the indirect classification performance could be arbitrarily small while nevertheless there could be a significant congruency effect. This is not merely a remote theoretical danger, as we show with our reanalysis of the lie detection study. \paragraph*{Reanalysis of lie detection data.} For the reanalysis, we put the data of the lie detection study to the test: If the significant congruency effect on RTs is supposed to serve as evidence for good unconscious processing, then we should be able to use the RTs to decide for each trial whether the prime and target stimuli were congruent or incongruent. Short RTs would indicate a congruent trial; long RTs would indicate an incongruent trial. We applied two classifiers to the data\footnote{Of the two experiments in the lie detection study \cite{tenBrinke_14}, we concentrate on the second one, as this is the one that presents `unconscious' stimuli. For the first experiment we obtained similar results (classification accuracy: 51.1\%). All analyses were implemented twice independently, once in Matlab and once in R. The R--code is openly available; see Part 1 of Materials and Methods.}: (i) the statistically optimal classifier under the assumption that RTs follow normal or lognormal distributions \cite{Ulrich_Miller_93} and (ii) a model--free classifier trained on the data according to the standard protocol from statistical learning (for details please consult the Materials and Methods). The two classifiers achieve classification accuracies of (i) 50.6\% and (ii) 49.3\%. We also found that (iii) on the given data there cannot exist a classifier with accuracy larger than 54\% --- the same value that was interpreted as ``detection incompetence'' in the lie detection study \cite[p.~1]{tenBrinke_14}. In short: the classification accuracy in the unconscious task is just as dismal as in the conscious task and can for all practical purposes be considered to be at chance level. There is no evidence for ``accurate unconscious assessments'' \cite[p.~7]{tenBrinke_14}. \begin{figure*}[!tb] \centerline{\includegraphics{./graph14.pdf}} \caption{RT--distributions relevant for classification and significance tests. \textbf{A.} Accurate classification of congruent vs.\ incongruent trials requires distinct RT--distributions. The top panels show RT--histograms for exemplary participants (left/right: participants with median/maximal accuracy of 50.8\%/56.1\%). The large panel shows RT--distributions for an idealized participant, based on average values and lognormal distributions \protect\cite{Ulrich_Miller_93}. All distributions overlap so heavily that classification accuracy is essentially at chance, showing that the RTs convey hardly any information about congruent vs.\ incongruent trials. \textbf{B.} A significant difference requires distinct distributions for the mean RTs of congruent vs.\ incongruent trials; with the standard deviation given by the standard error of the mean (SEM; e.g., \protect\citeNP{Franz_Loftus_12}).
These distributions are clearly distinct, reflecting the significant difference (\STt{65}{2.22}{p=0.03}; mean difference: 4.4~ms, SEM: 2.0~ms, Cohen's d: 0.27; cf. \protect\citeNP{Cohen_88}). Comparing A. and B. shows that classification accuracy can be at chance level even though the means are significantly different. This is caused by the massive reduction of the relevant standard deviation when calculating the SEM (cf. Part 3 in Materials \& Methods). Note that we even had to change the scale of the abscissa in B to display the distributions appropriately. \textbf{C.} Histograms of RTs in the behavioral task of \protect\citeA{Dehaene_etal_98}. The distributions overlap heavily, suggesting that classification accuracy will be low. The histogram corresponds to Figure~2b of \protect\citeA{Dehaene_etal_98} and was electronically digitized from the printed version. In all plots, dashed/solid lines indicate congruent/incongruent conditions.} \label{FigData} \end{figure*} Fig.~2A illustrates this with the distributions relevant for classification performance. The average RT--difference between congruent and incongruent conditions was only 4.4~ms, whereas the average within--subjects standard deviation was 146.5~ms. This gives a signal--to--noise ratio of 0.03, which is much too small for a meaningful classification performance. To understand why this can happen even though the RT means are significantly different, note that the classification of whether a trial is congruent or incongruent has to be performed on a single--trial basis. In particular, the accuracy of the classifier does not improve with more data. The statistical test for the difference in population means, on the other hand, is based on the estimated variability of the sample means, which gets smaller with more data. As shown in Fig.~2B, it can easily happen that two distributions are nearly indistinguishable by a classification task, yet a tiny difference in their means becomes significant if the sample size or the number of repetitions is large enough. See Part 2 in Materials \& Methods for more details. \paragraph*{Better approaches.} What would a more appropriate approach look like? For a meaningful comparison, we have to look not only for a significant effect, but also at how much information this effect conveys for the task of classification. A straightforward way to do this is to consider the classification accuracy directly, as we did above. Other approaches are possible as well. For example, one could use signal detection theory \cite{Swets_61} on both tasks to determine and compare appropriate d--prime values --- as has been done in some studies \cite{Schmidt_02,Gegenfurtner_Franz_07,Schmidt_Vorberg_06}. Alternatively, one could apply classic information theory to both measures \cite{Shannon_1948}, an approach we are currently working on. For the lie detection study, all these methods would lead to the same conclusion: unconscious lie detection does not work any better than its conscious counterpart. Both are essentially at chance level. \subsection*{The problematic reasoning is widely used} One might argue that this is a problem limited to one single study. However, the problematic reasoning is widely and routinely used. For illustration, we sketch three highly influential studies \cite{Dehaene_etal_98,Morris_etal_98,Pessiglione_etal_07}. Many more studies exist in the literature. \citeA{Dehaene_etal_98} investigated whether humans can unconsciously process information about the magnitude of numbers.
Stimuli were numbers between 1 and 9 that were hidden from consciousness by masking. Participants categorized whether the numbers were larger or smaller than 5. In direct tasks (Fig.~1A) participants were not significantly different from chance level (52.6\% and 54\% correct). Nevertheless, the masked numbers had significant effects in indirect tasks (Fig.~1B): If participants responded to a target number that could be congruent with the prime (e.g., both smaller than 5) or incongruent (e.g., one smaller and the other larger than 5), then they showed significant effects on RTs and significant lateralizations in electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Based on the same reasoning as outlined above, \citeA{Dehaene_etal_98} concluded that in the indirect task participants ``unconsciously appl[ied] the task instructions to the prime, would therefore categorize it as smaller or larger than 5, and would even prepare a motor response appropriate to the prime'' (p. 598). The authors summarized ``that a large amount of cerebral processing [...] can be performed in the absence of consciousness'' (p. 599). However, these significant differences in RTs, EEG and fMRI measurements do not tell us whether classification accuracy in the indirect task was better than in the direct task. If not, then there would be no evidence for unconscious processing of the primes. We cannot directly evaluate the relevant classification performance because we do not have access to the data. Instead, we analyzed the published histogram of all RTs from the behavioral task (Fig.~2C). If we determine the classification accuracy based on these distributions, we obtain 55\% correct, which is discomfortingly close to the accuracy in the direct tasks. While this is only a very rough estimate, it leaves open the possibility that the study by \citeA{Dehaene_etal_98} suffers from the same problem as the lie detection study. The only way to find out would be replication studies or a reanalysis of the existing data. \citeA{Morris_etal_98} investigated emotional learning in the amygdala. Two angry faces were used as stimuli, one of which had been conditioned to an aversive event. The faces were hidden from consciousness by masking, such that participants were at chance when classifying whether such a face was shown to them. Nevertheless, activity in the right amygdala was significantly modulated by the fact that one of the two faces had been associated with the aversive event, as measured with positron emission tomography (PET). Using the same reasoning again, the authors concluded that ``we provide the first evidence that the human amygdala can discriminate the acquired behavioral significance of stimuli without the need for conscious perception'' (p. 469). Our critique is again the same: the significant modulation of amygdala activity does not show whether there is also good classification accuracy, clearly different from chance level, that would justify the conclusion of a superior process operating in parallel to the conscious process. \citeA{Pessiglione_etal_07} investigated subliminal motivation. Images of coins were presented, either one pound or one penny, and hidden from consciousness by masking, such that participants were at chance level when classifying the coins.
Nevertheless, activity in the ventral pallidum (VP) was significantly modulated by the value of the coins, as measured by fMRI (similar results were found for skin conductance and grip force). The authors concluded that there are two motivational processes, one conscious and the other unconscious: ``Thus, only the VP appeared in position to modulate behavioral activation according to subliminal incentives and hence to underpin a low--level motivational process, as opposed to a conscious cost--benefit calculation'' (p. 906). Our concerns are again the same: the significant modulation of activation does not tell us whether the information available to the VP suffices for a classification performance that is clearly better than the conscious classification performance. Therefore, it is not clear whether the authors' assumption of two processes (an unconscious and a conscious one) for cost--benefit calculation is warranted. \subsection*{Is there unconscious processing?} Because all our example studies happen to be related to the question of whether there exists unconscious processing independent of and parallel to conscious processing, we want to preclude a potential misunderstanding. We are mainly interested in describing the methodological fallacy, not in discussing unconscious processing. Such a discussion would go beyond the scope of this article and would have to take into account a long history of research \cite{Eriksen_60,Holender_86,Reingold_Merikle_88,Greenwald_etal_96,Hannula_etal_05,Kouider_Dehaene_07}. Therefore, we do not claim that unconscious processing independent of conscious processing does not exist or cannot be shown. We do, however, claim that the lie detection study does not provide evidence for a superior unconscious lie detection ability\footnote{Note that our conclusions on the lie detection study \cite{tenBrinke_14} are corroborated by a recent commentary of lie--detection experts \cite{Levine_Bond_14} who question the plausibility of the lie--detection results in the light of other research and meta--analyses in this area. While \citeA{Levine_Bond_14} had to speculate that one of the conditions is a statistical outlier, we can now show the statistical reasons behind the wrong conclusions. Hence, our findings converge with the intuition and meta--analytic data of these lie--detection experts.} and that this study shows in an exemplary way how the claims of the other studies using the same flawed rationale can go astray and need careful reconsideration using more appropriate methods. \subsection*{Conclusions} We described a reasoning that is widely used but flawed. In the case of the lie detection study \cite{tenBrinke_14}, the commendable open--data practice allowed us to show in an exemplary way how this reasoning can lead to wrong conclusions. More generally, conclusions of the many studies using this reasoning should be treated with caution and could be wrong. In the future, we should employ better statistical methods, and conclusions based on the flawed reasoning should be reconsidered. \newpage \renewcommand{\thesection}{} \renewcommand{\thesubsection}{\arabic{subsection}} \section*{Materials and Methods} \subsection{Classification and statistical optimality in more detail} \label{OptimalClassification} In this section, we describe how an optimal classification of the single--trial data in an indirect task (Fig.~1B of main text) can be performed.
We prove below that under the assumption that the RTs follow a normal distribution or a lognormal distribution \cite{Ulrich_Miller_93}, the statistically optimal classifier is given by a median split of the reaction times (``median classifier''). We also describe a typical classifier as used in machine learning that does not require any distributional assumptions (``trained classifier''). Finally, we derive a theoretical upper bound for classification performance on the given data that in principle can never be exceeded (``over--optimistic upper bound''). Before going into details, we first describe the results of applying these classifiers to the data of the lie detection study \cite{tenBrinke_14}. \paragraph*{Classification results for lie detection study.} For each participant, the goal is to classify the trials in the indirect task as 'congruent' or 'incongruent', based on that participant's RTs. We proceed as follows. (i) For the model--based median classifier, we compute the median RT, use this as the threshold of a step function classifier (see below), and compute the accuracy of this classifier over all trials. (ii) For the model--free trained classifier, we randomly split the trials into a training and test set of 50\% each (other split sizes lead to very similar results). We determine the best threshold on the training set, and compute the resulting accuracy on the test set. We repeat this procedure 10 times with different random splits of the data and report the average over these test accuracies. (iii) For the over--optimistic upper bound, we evaluate the accuracy of all possible thresholds for the step function classifier over all trials and report the best result. The following table shows means and standard deviations over the accuracies of all participants: \bigskip \noindent \begin{tabular}{lcc} \bf Method & \bf mean(accuracy) & \bf std(accuracy) \\ \hline (i) Median classifier (model: lognormal) & 50.61\% & 2.65\%\\ \hspace{0.6cm}Median classifier (model: normal) & 50.61\% & 2.65\%\\ (ii) Trained classifier (model--free) & 49.34\% & 2.64\% \\ \hline (iii) Over--optimistic upper bound & 53.73\% & 1.99\%\\ \end{tabular} \bigskip \noindent We can see that both the model--based (i) and model--free (ii) classifiers perform nearly exactly at chance level. The over--optimistic upper bound shows that on this data set, there does not exist a classifier that can obtain an accuracy higher than 54\% --- the value that was interpreted as ``detection incompetence'' in the lie detection study \cite[p.~1]{tenBrinke_14}. \paragraph*{General form of the optimal classifier.} Consider a classification task where the input is a real-valued number $x$ (e.g., a reaction time, RT), and the classifier is supposed to predict one of two labels $y$ (e.g., 'congruent' or 'incongruent'; for simplicity we use labels 1 and 2 in the following). Following the standard setup in statistical decision theory \cite[section 1.5]{Bishop06}, we assume that the input data $X$ and the output data $Y$ are drawn according to some fixed (but unknown) probability distribution $P$. This distribution can be described uniquely by the class-conditional distributions $P( X \condon Y = 1)$ and $P(X \condon Y = 2)$ and the class priors $\pi_1 = P(Y = 1)$ and $\pi_2 = P(Y=2)$. A classifier is a function $f:I\!\!R \to \{1,2\}$ that assigns a label $y$ to each input $x$. The classifier that has the smallest probability of error is called the Bayes classifier.
In case the classes have equal weight, that is, $\pi_1 = \pi_2$, the Bayes classifier has a particularly simple form: it assigns an input point $x$ to the class that has the higher class-conditional density at this point. Formally, this classifier is given by \banum \label{eq-fopt} f_{opt}(x) := \begin{cases} 1 & \text{ if } P(X = x \condon Y =1 ) > P(X = x \condon Y=2)\\ 2 & \text{ otherwise.} \end{cases} \eanum \paragraph*{Optimal classifier for normal and lognormal distributions.} We now consider the special case where the class-conditionals follow a particular distribution. Let us start with the normally distributed case. We assume that both class-conditionals are normal distributions with means $\mu_1$, $\mu_2$ and equal variance $\sigma^2$, and we denote their corresponding probability density functions (pdfs) by $\varphi_{\mu_1,\sigma}$ and $\varphi_{\mu_2,\sigma}$. Under the additional assumption that both classes have equal weights $\pi_1 = \pi_2 = 0.5$, the cumulative distribution function (cdf) of the input (marginal distribution of $X$) is given as \banum \label{eq-gaussian} &\Gamma(x) := 0.5 \cdot \Big( \Phi(\frac{x - \mu_1}{\sigma}) + \Phi(\frac{x - \mu_2}{\sigma} )\Big), \eanum where $\Phi$ denotes the cdf of the standard normal distribution. For $t \in I\!\!R$, we introduce the step function classifier with threshold $t$ by \banum \label{eq-step} f_t(x) := \begin{cases} 1 & \text{ if } x \leq t\\ 2 & \text{ otherwise.} \end{cases} \eanum In the special case where the threshold $t$ coincides with the median of the marginal distribution of $X$, we call the resulting step function classifier the {\em median classifier}. \begin{propositionnn}[Median classifier is optimal for normal model] If the input distribution is given by Eq.~\eqref{eq-gaussian}, then the optimal classifier $f_{opt}$ coincides with the median classifier. \end{propositionnn} {\em Proof.} Because both classes have the same weight of 0.5, the Bayes classifier is given by $f_{opt}$ as in Eq.~\eqref{eq-fopt}. For any choice of $\mu_1$, $\mu_2$ and $\sigma$, the class-conditional pdfs $\varphi_{\mu_1, \sigma}$ and $\varphi_{\mu_2,\sigma}$ intersect exactly once, namely at $t^* = (\mu_1 + \mu_2) / 2$. By definition of $f_{opt}$, the optimal classifier $f_{opt}$ is then the step function classifier with threshold $t^*$. We now compute the value of the cdf at $t^*$: \ba \Gamma(t^*) & = 0.5 \cdot \Big( \Phi(\frac{t^* - \mu_1}{\sigma}) + \Phi(\frac{t^* - \mu_2}{\sigma} )\Big)\\ &= 0.5 \cdot \Big( \Phi(\frac{\mu_2 - \mu_1}{2\sigma}) + \Phi(\frac{\mu_1 - \mu_2}{2\sigma})\Big)\\ &= 0.5 \cdot \Big(\Phi(\frac{\mu_2 - \mu_1}{2\sigma}) + \big(1 - \Phi(\frac{\mu_2 - \mu_1}{2\sigma})\big)\Big)\\ & = 0.5. \ea Here, the second--to--last equality comes from the fact that the standard normal distribution is symmetric about 0. This calculation shows that the optimal threshold $t^*$ indeed coincides with the median of the input distribution, which is what we wanted to prove. \hfill$\Box$ It is easy to see that this proof can be generalized to other types of symmetric probability distributions. It is, however, even possible to prove an analogous statement for lognormal distributions, which are not symmetric themselves. We introduce the notation $\lambda_{\mu,\sigma}$ for the probability density function (pdf) of a lognormal distribution, and $\Lambda_{\mu,\sigma}$ for the corresponding cdf.
These functions are defined as \ba &\lambda_{\mu,\sigma}(x) := \frac{1}{x \sigma \sqrt{2\pi}} \exp\Big(- \frac{(\log x -\mu)^2 }{2 \sigma^2} \Big) && \text{ and } &&\Lambda_{\mu,\sigma}(x) := \Phi\Big( \frac{\log x -\mu}{\sigma} \Big). \ea Consider the case where the class-conditional distributions are lognormal distributions with the same scale parameter $\sigma$ but different location parameters $\mu_1$ and $\mu_2$, and assume that both classes have the same weights $\pi_1 = \pi_2 = 0.5$. Then the pdf and cdf of the input distribution (marginal distribution of $X$) are given as \banum & g(x) = 0.5 \cdot \;(\; \lambda_{\mu_1, \sigma}(x) + \lambda_{\mu_2, \sigma}(x) \;) \nonumber \\ & G(x) = 0.5 \cdot \;(\; \Lambda_{\mu_1, \sigma}(x) + \Lambda_{\mu_2, \sigma}(x) \;). \label{eq-lognormal} \eanum \begin{propositionnn}[Median classifier is optimal for lognormal model] If the input distribution is given by Eq.~\eqref{eq-lognormal}, then the optimal classifier $f_{opt}$ coincides with the median classifier. \end{propositionnn} {\em Proof.} The proof is analogous to the previous one. For any choice of $\mu_1$, $\mu_2$ and $\sigma$, the densities $\lambda_{\mu_1, \sigma}$ and $\lambda_{\mu_2, \sigma}$ intersect exactly once. To see this, we solve the equation $\lambda_{\mu_1, \sigma}(t^*) = \lambda_{\mu_2, \sigma}(t^*)$, which leads to the unique solution $t^*= \exp( (\mu_1 + \mu_2) / 2)$. The input cdf at this value can be computed as \ba G(t^*) & = 0.5 \Big( \Lambda_{\mu_1, \sigma}(t^*) + \Lambda_{\mu_2, \sigma}(t^*) \Big) \\ & = 0.5\Big( \Phi( \frac{\mu_2 - \mu_1}{2\sigma} )+ \Phi( \frac{\mu_1 - \mu_2}{2\sigma} ) \Big)\\ & = 0.5. \ea The last step follows as above by the symmetry of the normal cdf. \hfill$\Box$ \paragraph*{Training a model--free classifier.} If we do not want to make any assumptions about the underlying probability distribution, we can follow the standard protocol of statistical learning to identify the threshold $t$ of the best step function classifier. For each participant, we are given trials in the form of input-output pairs $(X_i, Y_i)_{i=1,...,n}$, $X_i \in I\!\!R$, $Y_i \in \{1, 2\}$. We randomly split this data set into a training set and a test set, each consisting of 50\% of all trials. On the training set, we determine the threshold $t^*$ that leads to the smallest number of misclassifications (= training error). For the corresponding step function classifier $f_{t^*}$ we now compute the error on the test set (= test error). We repeat this procedure 10 times to reduce potential subsampling artifacts and report the mean over these repetitions. For readers familiar with machine learning, note that in this simple scenario, no model selection is involved, so a more complex evaluation procedure such as cross-validation is not necessary. \paragraph*{An over--optimistic upper bound on classification accuracy.} To rule out the possibility that the result of the model--free classifier is seriously sub-optimal (due to the effect of splitting the data into training and test sets, or due to overfitting or underfitting), we can derive an upper bound on the accuracy of the best step function classifier that could possibly exist on the given data. For each participant, we cycle through all possible thresholds $t$ and evaluate the accuracy of the corresponding step function classifier $f_t$ on all trials. We then select the best accuracy obtained in this way as the classification accuracy of this participant.
This accuracy is overly optimistic, as this classifier usually overfits and exploits sampling artifacts. On the other hand, it gives an upper bound on the classification accuracy that any step function classifier could potentially achieve on the data. Finally, note that in the context of the RT experiment, it would not make sense to consider classifiers that do not have the form of a step function classifier --- the general classification scenario implied by the experimental setup is to separate slow RTs from fast RTs. \subsection{Why is the relevant standard deviation for the significance test much smaller than that for classification?} \label{IntuitiveExplanation} Let us illustrate our answer with the data of the lie detection study \cite{tenBrinke_14}. Consider two probability distributions with slightly different expected values, such as the ones in Fig.~2A. The task of the classifier is to predict for each trial whether the measured RT has been generated from a congruent or an incongruent condition. More abstractly, given a real-valued sample, we want to decide which of the two distributions is more likely to have generated that sample point. In general, this will only be possible in a satisfactory manner if the two distributions have little overlap and their means are considerably different from each other. The significance test, on the other hand, assesses whether the expected values of the two distributions are different at all. It does not ask for a large difference; it just asks for any difference. The more measurements are taken, the closer each mean estimate will be to the corresponding expected value. We know from the central limit theorem that the SEM is of order $1 / \sqrt{n}$. In the limiting case of an infinite number of measurements, the SEM would approach zero. To get a feeling for this effect, consider the data of the lie detection study. For a rough estimate, let us for a moment ignore the fact that there were different participants (i.e., that between--subjects variability exists; this is not so critical because we are dealing with within--subjects designs such that the difference is mainly affected by within--subjects variability \cite{Franz_Loftus_12}). The average standard deviation for a trial was 146.5~ms (Fig.~2A). Each condition was measured about 180 times in each participant and the study had 66~participants, which leads to a factor of ${1}/{\sqrt{180 \cdot 66}}$. Taking the difference between the congruent and incongruent conditions increases the SEM by a factor of $\sqrt{2}$ (assuming for simplicity independence and equal variances), such that a rough prediction for the SEM relevant for the significance test is given as $146.5\cdot \sqrt{2} / \sqrt{180 \cdot 66}~\mbox{ms} = 1.9$~ms. This is close to the empirically obtained $2.0$~ms. \subsection{If the means are significantly different, doesn't this imply that the classification accuracy is significantly different from chance level?} \label{ss-classification-significant} Given enough statistical power, significance tests will eventually show that the classification accuracy is different from chance level if the means are truly different. (In real data, both significances might not occur at the same time because sources of noise are not exactly identical for the means and the classification results.) However, with regard to the reasoning outlined in Fig.~1 of the main paper, this question is misleading.
For the typical neuroscientific interpretation it is not only important that the classification accuracy in the indirect task is significantly different from chance level (this could also happen if the true classification performance were, say, 51\%). What counts is whether the classification accuracy is {\em considerably larger} than chance level, and in particular, considerably larger than the accuracy obtained in the direct task. This leads back to the old statistical issue of needing to distinguish between the statistical significance of effects and the size of the effects. \subsection{Many studies did not test for the difference of the effects. Isn't this also a problem?} \label{TestForDifference} The correct procedure would indeed be to test for the difference \cite{Franz_Gegenfurtner_08,Nieuwenhuis_etal_11}. This is, however, an issue independent of the general fallacy we are concerned with, so we do not discuss it further. \subsection{The lie detection study calculated Cohen's d values. Doesn't this ameliorate the problem?} \label{CohenD} While most studies used the RTs in the indirect task for their significance test, the lie detection study \cite{tenBrinke_14} used a somewhat different approach. For each participant an individual Cohen's d value \cite{Cohen_88} was computed from the RTs, resulting in $d_1, \ldots, d_{66}$ for the $66$ participants. Then, a significance test was performed on these values and a second Cohen's d value $d_{across}$ was calculated across the individual values $d_1, \ldots, d_{66}$. This value $d_{across}$ is what is shown in Fig.~2 (Exp.~2) of the lie detection study \cite{tenBrinke_14}, and it was found to be $d_{across} = 0.27$ \cite[p.~6]{tenBrinke_14}. This means that the Cohen's d--values used in this figure refer to the question of whether the means of the RTs are different from each other (which they very well can be, as we explained above). It does not say anything about the effect size of the indirect classification performance, which would be the relevant quantity. The relevant Cohen's d--values computed on the distribution that is relevant to classification (our Fig.~2A) amount on average to $0.03$ \cite[p.~6]{tenBrinke_14}, which is very small and therefore fully consistent with the results of our classifiers (for psychophysicists: this is equivalent to a very small d--prime value in signal detection theory). \newpage \section*{Acknowledgments} We thank the authors \cite{tenBrinke_14} and the editor \cite{Eich_14} of the lie detection study for their open data policy and regret that due to this commendable openness they are the first to be criticized for a method that has been used in many studies. We thank Gilles Blanchard and Frank R{\"o}sler for comments on the manuscript. U.v.L. was supported by the German Research Foundation (grant LU1718/1-1 and Research Unit 1735 ``Structural Inference in Statistics: Adaptation and Efficiency''). \subsection*{Author contributions} The major contribution to this manuscript comes from V.H.F. He discovered the methodological flaws and reanalyzed the data of the lie detection study \cite{tenBrinke_14}. The role of U.v.L. was that of a critical discussion partner. She verified all arguments and re-implemented the analyses independently in Matlab. Both authors jointly wrote the paper. \subsection*{Competing Interests} Volker H. Franz received funding from the University of Hamburg and the German Research Foundation. He currently serves on the editorial board of the British Journal of Psychology.
Ulrike von Luxburg received funding from the University of Hamburg and the German Research Foundation. She currently serves on the editorial board of the Journal of Machine Learning Research and is a board member of the International Machine Learning Society. Previously she served on the editorial board of Statistics and Computing. \newpage
\section*{Methods} {\small The device geometry as well as the edge contacts were defined using electron beam lithography and dry etching, following the method of Ref.~\onlinecite{Wang2013}. The backgate capacitance density was estimated to be $6.7\times 10^{10}~e\,\mathrm{cm^{-2}\,V^{-1}}$, where $e$ is the elementary charge. The s-SNOM used was a NeaSNOM from Neaspec GmbH, equipped with a CO$_2$ laser and cryogenic HgCdTe detector. The probes were commercially available metallized atomic force microscopy probes with an apex radius of approximately 25~nm. The tip height was modulated at a frequency of approximately $250~\mathrm{kHz}$ with an amplitude of 60--80~nm. $\sopt$ was obtained from the third harmonic interferometric pseudo-heterodyne signal.\cite{Chen2012,Fei2012} For simplicity most figures only show $\Re\sopt$; however, similar information appears in $\Im\sopt$ as described by equation \eqref{eq:fit}; all analysis (background subtraction, fitting, etc.) was performed simultaneously on $\Re\sopt$ and $\Im\sopt$. The location of the etched graphene edge ($x=0$) was determined from the simultaneously-measured topography. The theoretical model of plasmon modes was calculated using a classical electromagnetic transfer matrix method, with a thin film stack of vacuum--SiO$_2$(285~nm)--h-BN(46~nm)--graphene--h-BN(7~nm)--vacuum. Thin film and nonlocal effects reduce $\Re\ensuremath{q_{\mathrm{p}}}$ by $\sim 5$--20\% compared to the infinite-dielectric Drude model calculation (see Supplement). The zero temperature random phase approximation (RPA) result\cite{Wunsch2006,Hwang2007,Principi2009a} was used for the graphene nonlocal conductivity $\sigma(k,\omega)$. The permittivity model of Ref.~\onlinecite{Cai2007} was used for the h-BN films, modified to include dielectric losses based on Ref.~\onlinecite{Caldwell2014}. The damping effect from dielectric losses shown in Fig.~\ref{fig4} was also calculated with this method, taking phonon linewidths of 6.5~meV in-plane and 1.9~meV out-of-plane in the terminology of Ref.~\onlinecite{Caldwell2014}; their origin is discussed further in the Supplement. In Fig.~\ref{fig2}c and Fig.~\ref{fig2}d, the color quantity plotted is the imaginary part of the reflection coefficient of evanescent waves, evaluated at the top h-BN surface. In these figures the damping has been modified (e.g., reduced dielectric loss) to enhance the visibility of modes---this does not significantly modify the mode locations. }
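For orientation on the scales involved, the following R sketch (an added illustration, not part of the original analysis) converts a gate voltage into a carrier density using the backgate capacitance quoted above and evaluates the plasmon wavelength in the simple lossless, infinite-dielectric Drude limit, i.e., without the thin film and nonlocal corrections of $\sim$5--20\% discussed above. The gate voltage, effective permittivity and Fermi velocity are assumed placeholder values.

\begin{verbatim}
# Drude estimate of the graphene plasmon wavelength (local response,
# infinite-dielectric limit). Placeholder assumptions: V_g = 60 V,
# effective permittivity kappa = 4 (average of media above/below the
# sheet), Fermi velocity v_F = 1e6 m/s, CO2 laser line at 930 cm^-1.
e    <- 1.602e-19    # elementary charge (C)
hbar <- 1.055e-34    # reduced Planck constant (J s)
eps0 <- 8.854e-12    # vacuum permittivity (F/m)
c0   <- 2.998e8      # speed of light (m/s)

Cg    <- 6.7e10 * 1e4   # backgate capacitance: e cm^-2 V^-1 -> e m^-2 V^-1
Vg    <- 60             # gate voltage (placeholder)
kappa <- 4              # effective permittivity (placeholder)
vF    <- 1e6            # Fermi velocity (m/s)

n     <- Cg * Vg                   # carrier density (m^-2)
EF    <- hbar * vF * sqrt(pi * n)  # graphene Fermi energy (J)
omega <- 2 * pi * c0 * 930e2       # angular frequency at 930 cm^-1 (rad/s)

# Lossless Drude plasmon: q_p = 2*pi*kappa*eps0*hbar^2*omega^2/(e^2*EF)
qp <- 2 * pi * kappa * eps0 * hbar^2 * omega^2 / (e^2 * EF)
cat(sprintf("n = %.2e cm^-2, E_F = %.2f eV, lambda_p = %.0f nm\n",
            n / 1e4, EF / e, 2 * pi / qp * 1e9))
\end{verbatim}

The transfer matrix calculation described above replaces this estimate with the full layered dielectric response and the nonlocal RPA conductivity.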
\section{Introduction} \label{sec:intro} Theoretical models are an important and commonly used tool for interpreting and furthering our understanding of observed galaxy populations. Typically, these models are used to generate mock galaxy catalogues that can be compared to equivalent samples drawn from the real Universe. Knowledge of the models' construction, combined with their successes and failures in reproducing the observations, can often allow important inferences to be made about the physics of galaxy formation and evolution. There are a number of different methods for generating mock galaxy samples for comparison with observations. At the most advanced and complex end of the scale are full hydrodynamic simulations. These attempt to solve the physics of galaxy formation from first principles, directly modelling complex baryonic processes such as cooling and shocks in tandem with the dissipationless growth of dark matter structure. Unfortunately, the associated high computational cost prohibits the resolution of small-scale physical processes such as star formation and black hole feedback in a volume large enough to provide a cosmologically significant sample of galaxies. Hence hydrodynamic simulations often resort to parametrized approximations to deal with these unresolved ``sub-grid'' processes. In addition, they must also deal with the complex and often poorly understood numerical effects that come with modelling dissipational physics using finite physical and temporal resolutions. Semi-analytic galaxy formation models attempt to overcome the computational costs associated with hydrodynamic simulations by separating the baryonic physics of galaxy formation from the dark-matter-dominated growth of structure \citep{White1991,Kauffmann1999}. This is achieved by taking pre-generated dark matter halo merger trees and post-processing them with a series of physically motivated parametrizations that attempt to capture the mean behaviour of the dominant baryonic processes involved in galaxy formation. The resulting speed means that these models can be used to generate cosmologically significant samples of galaxies using only modest computing resources. However, semi-analytic models typically require a number of free parameters, many of which are often not well constrained by theory or observation \citep{Neistein2010}. The complicated and intertwined nature of the different physical prescriptions also means that the effects of these parameters on the final galaxy population are often highly degenerate and can be difficult to interpret \citep[e.g.][]{Lu2012}. Additionally, our relatively poor understanding of high-redshift galaxy formation means that at least some of the parametrizations used may not be appropriate at these early times \citep{Henriques2013,Mutch2013}. For many science questions and applications we are not required (or able) to include and understand all of the relevant input physics. In these cases it is often sufficient to construct simple ``toy'' models \citep[e.g.][]{Wyithe2007,Dekel2013,Tacchella2013}. These typically build an average population of galaxies using simplified approximations, and are designed to test new ideas and interpretations or to allow the investigation of particular trends or features found in observational data. One example is the ``reservoir'' model of \citet{Bouche2010}. Here, averaged dark matter halo growth histories are used to track the typical build up of cold gas in galaxies.
In their fiducial formalism, the accretion of baryons on to a galaxy is only allowed to occur when the host halo lies in a fixed mass range. However, within this range the accretion is modelled as a simple fraction of the halo growth rate. Using a standard Kennicutt--Schmidt law \citep{Kennicutt1998} for star formation, this simple model is able to reproduce the observed scaling behaviours of the star-forming main sequence and Tully--Fisher relations. Expanding upon this framework, \citet{Krumholz2012} also introduced a metallicity-dependent star formation efficiency, allowing them to straightforwardly investigate the associated effects on the star formation histories of galaxies. Rather than attempting to generate galaxy populations based on our theories of the relevant physics, an alternative method of generating mock galaxy samples is to use purely statistical methods. Halo occupation distribution (HOD) models use observed galaxy clustering measurements to constrain the number of galaxies of a particular type within a dark matter halo of a given mass \citep{Peacock2000,Zheng2005}. For the purpose of constructing mock galaxy catalogues, such a methodology has the advantage that it requires no knowledge of how each galaxy forms and is also statistically constrained to produce the correct result. However, this limits our ability to learn about the physics of galaxy evolution, as one has no way to self-consistently connect individual galaxies at any given redshift to their progenitors or descendants at other times. A similar method to HODs for creating purely statistical mock catalogues is subhalo abundance matching \citep[SHAM; e.g.][]{Conroy2006}. SHAM models are typically constructed by generating a sample of galaxies of varying masses, drawn from an observationally determined stellar mass function. Each galaxy is then assigned to a dark matter halo taken from a halo mass function generated using an $N$-body simulation. This assignment is made such that the most massive galaxy is placed in the most massive halo and so on, proceeding to lower and lower mass galaxies. Due to possible differences in the formation histories of haloes of any given mass, an artificial scatter is often added during this assignment procedure \citep[e.g.][]{Conroy2007,Behroozi2013b,Moster2013}. By leveraging the use of dark matter merger trees as the source of the halo samples at each redshift, both HOD and SHAM studies have been able to provide important constraints on the average build up of stellar mass in the Universe \citep{Zheng2007,Conroy2009,Moster2013,Behroozi2013b}. This has allowed these studies to also draw valuable conclusions about processes such as the efficiency of star formation as a function of halo mass and the role of intra-cluster light (ICL) in our accounting of the stellar mass content of galaxies. However, both HOD and SHAM models are applied independently at individual redshifts and do not self-consistently track the growth history of individual galaxies. This limits the remit of these models to considering only the averaged evolution of certain properties over large samples. Our goal in this work is to present an alternative class of galaxy formation model which allows us to achieve the ``best of both worlds'': providing a self-consistent growth history for each individual galaxy whilst also minimizing any assumptions about the physics which drives this growth.
This is achieved by tying star formation (and hence the growth of stellar mass) to the growth of the host dark matter halo in $N$-body dark matter merger trees using a simple but well-motivated parametrization that depends only on the properties of the halo itself. In this way, we are able to provide a complete formation history for every galaxy. The model we present is closely related to that of \citet{Cattaneo2011}, but with a number of important generalisations that increase its utility whilst still maintaining a high level of transparency and simplicity. This paper is laid out as follows: In \S\ref{sec:model} we introduce the framework of our new model. In particular, \S\ref{sec:baryons} focusses on how we build up the baryonic content of dark matter haloes, with the practical details of the model's application outlined in \S\ref{sec:generating_galaxy_pop}. In \S\ref{sec:results} we present some basic results, in particular investigating the model's ability to reproduce the observed galaxy stellar mass function at multiple redshifts. In \S\ref{sec:discussion} we discuss our findings as well as outline the general utility of the model and a number of possible ways in which it can be extended. Finally, we present a summary of our conclusions in \S\ref{sec:conclusions}. A first-year {\it Wilkinson Microwave Anisotropy Probe} \citep[{\it WMAP}1;][]{Spergel2003} $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with $\Omega_{\rm m}{=}0.25$, $\Omega_{\Lambda}{=}0.75$, $\Omega_{\rm b}{=}0.045$ is utilized throughout this work. In order to ease comparison with the observational data sets employed, all results are quoted with a Hubble constant of $h{=}0.7$ (where $h{\equiv}H_0/100\, \mathrm{km\,s^{-1}Mpc^{-1}}$) unless otherwise indicated. Magnitudes are presented using the Vega photometric system and a standard \citet{Salpeter1955} initial mass function (IMF) is assumed throughout. \section{The simplest model of galaxy formation} \label{sec:model} \subsection{The growth of structure} The aim of the model presented in this work is to self-consistently tie the growth of galaxy stellar mass to that of the host dark matter haloes in as simple a way as possible. In order to achieve this, we require knowledge of the properties and associated histories of a large sample of dark matter haloes spanning the full breadth of cosmic history. We obtain this in the form of merger trees constructed from the output of the $N$-body dark matter Millennium Simulation \citep{Springel2005}. Using the evolution of over $10^{10}$ particles in a cubic volume with a side length of 714 Mpc, the Millennium Simulation merger trees track the build up of dark matter haloes larger than approximately $2.9{\times} 10^{10}\,M_{\sun}$ over 64 temporal snapshots. These snapshots are logarithmically spaced in expansion factor between redshifts 127 and 0, with snapshot separations of $\sim$200--350 Myr. Each individual dark matter structure is identified using a friends-of-friends linking algorithm with further substructures (subhaloes) identified using the {\small SUBFIND} algorithm of \citet{Springel2001}. The simulation employs a concordance $\Lambda{\rm CDM}$ cosmology compatible with first-year {\it WMAP} \citep{Spergel2003} parameters: $(\Omega_{\rm m},\Omega_{\Lambda},\sigma_8,h_0) = (0.25, 0.75, 0.9, 0.73)$. \subsection{The baryonic content of dark matter haloes} \label{sec:baryons} The maximum star formation rate of a galaxy is regulated by the availability of baryonic material that can act as fuel.
In our formation history model we assume that every dark matter halo carries with it the universal fraction of baryonic material, $f_{\rm b}{=}0.17$ \citep{Spergel2003}. However, some of these baryons will already be locked up in stars or contained in reservoirs of material that are unable to participate in star formation. Therefore we use a {\it baryonic growth function}, $F_{\rm growth}$, to parametrize how the amount of newly accreted baryonic material that is available for star formation depends on the properties of the host dark matter halo. In practice, only some fraction of this available material will actually make its way into the galaxy, with an even smaller amount then successfully condensing to form stars in a suitably short time interval. The efficiency with which this occurs depends on a complex interplay of non-conservative baryonic processes, both internal and external to the galaxy--halo system. A number of important examples include shock heating, feedback from supernovae and active galactic nuclei (AGN), as well as environmental processes such as galaxy mergers and tidal stripping. Here we assume that all of these complicated and intertwined mechanisms can be distilled down into a single, arbitrarily complex {\it physics function}, $F_{\rm phys}$. Putting all of this together, we can write down a deceptively simple equation for the growth rate of stellar mass ($\dot M_{*}$) in the Universe on a per-halo basis: \begin{equation} \label{eqn:sfr} \dot M_{*} = F_{\rm growth} F_{\rm phys}\,. \end{equation} In the following sections, we discuss the form we employ for the baryonic growth and physics functions in turn. \subsubsection{The baryonic growth function} In order to explore the simplest form of our formation history model, we begin by assuming that as a dark matter halo grows, all of the fresh baryonic material it brings with it is immediately available for star formation. This corresponds to a baryonic growth function which is simply given by the rate of growth of the host dark matter halo: \begin{equation} \label{eqn:bgf} F_{\rm growth} = f_{\rm b} \frac{dM_{\rm vir}}{dt}\,. \end{equation} \begin{figure} \includegraphics[width=\columnwidth]{./figures/fig1.pdf} \caption{\label{fig:growth_diversity} The large variation in the possible growth histories of haloes which all have approximately equal masses by redshift zero. The grey shaded region indicates the amplitude of the physics function in Eqn.~\ref{eqn:physicsfunc_mvir}. The blue and red lines represent 30 randomly selected growth histories for haloes with final $M_{\rm vir}${} values of approximately $10^{12}$ and $10^{13}\, M_{\sun}$, respectively. Variations of 3--4 Gyr in the time at which these haloes reach a given mass are common. Unlike statistical techniques for tying galaxy properties to their host haloes, our formation history model implicitly includes the full range of different halo growth histories and their effects on the predicted galaxy population.} \end{figure} In practice, haloes of the same $z{=}0$ mass may show a diverse range of growth histories, all of which are captured by our model. In Fig.~\ref{fig:growth_diversity} we demonstrate this by showing the individual growth histories of a random sample of dark matter haloes selected from the Millennium Simulation in two narrow mass bins. From this figure we see that there can be significant variations in the time at which similar haloes at redshift zero reach a given mass.
For example, in the upper halo mass sample, some haloes reach $10^{12}\,{\rm M_{\sun}}$ by $z{=}5$ whilst others do not reach this value until $z{=}2$. In addition, some haloes may have complex growth histories, achieving their maximum mass at $z{>}0$. This can potentially be caused by a number of processes such as stripping during dynamical encounters with other haloes. Since the baryonic growth function maps the formation history of each individual dark matter halo to the stellar mass growth of its galaxy, this diversity in growth histories is fully captured, propagating through to the predicted galaxy populations at all redshifts. This is an important attribute of our model that sets it apart from other purely statistical methods which merely map the properties of galaxies to the instantaneous or mean properties of haloes, independently of their histories (e.g. HOD and SHAM models). These methods typically have to add artificial scatter to approximate the effects of variations in the halo histories, whereas this variation is a self-consistent input to our formation history model. \subsubsection{The physics function} \label{sec:physics_func} The physics function describes the efficiency with which baryons are converted into stars in haloes of a given mass. The form of this function may be arbitrarily complex; however, the goal of this work is to find the simplest model which successfully ties the growth of galaxy stellar mass to the properties of the host dark matter haloes. The physics function is not meant to provide an accurate reproduction of the details of the full input physics, but rather their combined {\it effects} on the growth of stellar mass in the Universe. In this spirit, we begin by assuming that there is only one input variable: the instantaneous virial mass of the halo, $M_{\rm vir}$. Although still not understood in detail, the observed relationship between dark matter halo mass and galaxy stellar mass is well documented \citep[e.g.][]{Zheng2007,Yang2012,Wang2013}. Assuming the favoured $\Lambda {\rm CDM}$ cosmology, a comparison of the observationally determined galactic stellar mass function to the theoretically determined halo mass function indicates that the averaged efficiency of stellar mass growth varies strongly as a function of halo mass. In Fig.~\ref{fig:halo_vs_stellar_MFs}, we contrast a Schechter function fit of the observed redshift zero stellar mass function{} \citep[solid blue line;][]{Bell2003} with the dark matter halo mass function of the Millennium Simulation (red dashed line). The halo mass function has been multiplied by $f_{\rm b}$ in order to approximate the total amount of baryons available for star formation in a halo of any given mass. The increased discrepancy between the stellar mass function and the halo mass function at both low and high masses indicates that the efficiency of star formation is reduced in these regimes. It is commonly held that at low masses the shallow gravitational potential provided by the dark matter haloes allows supernova feedback to efficiently eject gas and dust from the galaxy. This reduces the availability of this material to fuel further star formation episodes, hence temporarily stalling in situ stellar mass growth. Other processes such as the photoionization heating of the intergalactic medium may also play an important role in reducing the efficiency of star formation in this low-mass regime \citep[][and references therein]{Benson2002}.
At high halo masses, it is thought that inefficient cooling coupled with strong central black hole feedback also leads to a quenching of star formation \citep[e.g.][]{Croton2006}. Therefore, it is only between these two extremes, around the knee of the galactic stellar mass function, that stellar mass growth reaches its highest average efficiency. \begin{figure} \includegraphics[width=\columnwidth]{./figures/fig2.pdf} \caption{\label{fig:halo_vs_stellar_MFs} A comparison of the observed galactic stellar mass function (blue solid line) and the halo mass function of the Millennium Simulation (red dashed line). The halo mass function has been multiplied by the universal baryon fraction in order to demonstrate the maximum possible stellar mass content as a function of halo mass. The closer the stellar mass function is to this line, the more efficient star formation is in haloes of the corresponding mass. If galaxies were to form stars with a fixed efficiency at all halo masses then the slope of the stellar mass function would be identical to that of the halo mass function. The differing slopes at both high and low masses indicate that star formation (as a function of halo mass) is less efficient in these regimes. At low masses, this is commonly attributed to efficient gas ejection due to supernova feedback, whereas at high masses energy injection from central supermassive black holes is thought to be able to effectively reduce the efficiency of gas cooling. However, many other physical processes may also contribute in both regimes.} \end{figure} We begin by parametrizing the physics function as a simple log-normal distribution centred on a halo virial mass $M_{\rm peak}$ and with a standard deviation $\sigma_{M_{\rm vir}}$: \begin{equation} \label{eqn:physicsfunc_mvir} F_{\rm phys}(M_{\rm vir}/M_{\sun}) = \mathcal{E}_{M_{\rm vir}} \exp\left(-\left(\frac{\Delta M_{\rm vir}}{\sigma_{M_{\rm vir}}}\right)^2\right), \end{equation} where $\Delta M_{\rm vir} {=} \log_{10}(M_{\rm vir}/M_{\sun}){-}\log_{10}(M_{\rm peak}/M_{\sun})$ and the parameter $\mathcal{E}_{M_{\rm vir}}${} represents the maximum possible efficiency for converting infalling baryonic material into stellar mass, achieved when $M_{\rm vir}{=}M_{\rm peak}$. Such a distribution has been found by SHAM studies to provide a good match to the derived star formation rates as a function of halo mass for $z{\la}2$ \citep{Conroy2009,Bethermin2012}. \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=0.8\columnwidth]{./figures/fig3.pdf} \caption{\label{fig:cartoon} The mean virial mass ($M_{\rm vir}${}) growth histories for five samples of dark matter haloes with varying final masses (blue solid lines). Each sample is only plotted out to a redshift limit determined by where 80\% of the haloes contain more than 40 particles. The grey shaded region indicates the amplitude of the physics function in Eqn.~\ref{eqn:physicsfunc_mvir}. This is also illustrated by the log-normal curve on the right-hand side (orange shaded region). Galaxies with halo masses within the peaked region form stars efficiently; outside this mass range, the amount of star formation is negligible. All galaxies of sufficient mass at $z{=}0${} will cross the efficient star formation mass range at some point in their history, with this period typically coming earlier for more massive haloes. Also shown is the mean maximum circular velocity ($V_{\rm max}${}) evolution for haloes of varying final velocity (red dashed lines).
These samples have been chosen to have $z{=}0${} maximum circular velocity values similar to the mean values of the five mass-selected samples. There are clear differences in the evolution of $V_{\rm max}$ and $M_{\rm vir}$ which will in turn result in differences between the produced galaxy populations.} \end{center} \end{minipage} \end{figure*} This simple form of the physics function provides a number of desirable properties. In Fig.~\ref{fig:cartoon}, we present the average growth histories of five samples of dark matter haloes chosen from the Millennium Simulation merger trees by their final redshift zero masses (solid blue lines). For clarity, we only plot these histories out to redshifts where more than 80\% of the haloes in each sample have masses which are twice the resolution limit of the input merger trees. The grey shaded region indicates the amplitude of the physics function defined by Eqn.~\ref{eqn:physicsfunc_mvir} when using our fiducial parameter values (see \S\ref{sec:z0_results} for details). As the haloes grow, they pass through the region of efficient star formation at different times depending on their final masses. Galaxies hosted by the most massive $z{=}0${} haloes form the majority of their in situ stellar mass at earlier times whereas those in the lowest mass haloes have yet to reach the peak of their growth. In addition, lower mass haloes tend to spend a longer time in the efficient star forming regime compared to their high-mass counterparts. These trends qualitatively agree with the observed phenomenon of galaxy downsizing \citep[e.g.][]{Cowie1996,Cattaneo2008}. Subhalo abundance matching studies have suggested that $V_{\rm max}$ may be more tightly coupled to the stellar mass growth of galaxies than $M_{\rm vir}$ \citep[e.g.][]{Reddick2013}. This makes intuitive sense as $V_{\rm max}$ is directly related to the gravitational potential of the inner regions of the host halo, where galaxy formation occurs. Therefore, in addition to virial mass, we also consider the case of a physics function where the dependent variable is the instantaneous maximum circular velocity of the host halo, $V_{\rm max}${}: \begin{equation} \label{eqn:physicsfunc_vmax} F_{\rm phys}(V_{\rm max}/{\rm (km\,s^{-1})}) = \mathcal{E}_{V_{\rm max}} \exp\left(-\left(\frac{\Delta V_{\rm max}}{\sigma_{V_{\rm max}}}\right)^2\right), \end{equation} where $\Delta V_{\rm max} {=} \log_{10}(V_{\rm max}/{\rm (km\,s^{-1})}){-}\log_{10}(V_{\rm peak}/{\rm (km\,s^{-1})})$. To avoid confusion, from now on we will refer to the formation history model constructed using this physics function as the ``static $V_{\rm max}${} model''. Similarly, we will refer to the case of $F_{\rm phys}(M_{\rm vir})${} as the ``static $M_{\rm vir}${} model''. In Fig.~\ref{fig:cartoon} we show the average $V_{\rm max}$ growth histories for a number of different $z{=}0${} selected samples. The $y$-axis has been scaled such that the grey band also correctly depicts the changing amplitude of the $V_{\rm max}${} physics function as well as its $M_{\rm vir}${} counterpart. Additionally, each of the $V_{\rm max}${} samples in Fig.~\ref{fig:cartoon} (red dashed lines) is chosen to have mean $z{=}0${} values close to that of the five $M_{\rm vir}$ samples (blue lines). However, there are clear differences between the growth histories of these two halo properties.
In particular, the evolution of $V_{\rm max}${} is slightly flatter, resulting in haloes transitioning out of the efficient star forming region at an earlier time than the equivalent $M_{\rm vir}$ sample. Such differences will have important consequences for the time evolution of the galaxy populations generated by each of the two physics functions and we highlight some of these in \S\ref{sec:results}. By combining the baryonic growth function with a physics function of the forms presented here, our resulting model may be thought of as a simplified and extended version of that presented by \citet{Bouche2010}. Unlike their model, the scaling of gas accretion efficiency with halo mass, and the dependence of star formation on previously accreted material, are implicitly contained within our physics function. Most importantly though, \citet{Bouche2010} use statistically generated halo growth histories instead of simulated merger trees. Hence their model contains no information about the scatter due to variations in halo formation histories. Furthermore, since their growth histories do not include satellites, there is no self-consistent stellar mass growth due to mergers. The model of \citet{Cattaneo2011} also uses simulated merger trees as input and thus shares many of the same advantages as our formation history model. However, their model ties the properties of galaxies to the instantaneous properties of their host haloes alone. In contrast, we use the full information of the mass accretion history to describe the availability of baryonic material for star formation. Also, we make no attempt to motivate the precise form of our model in terms of combinations of particular physical processes and their scalings with halo properties, as \citet{Cattaneo2011} do. This allows our model to remain maximally general and flexible. \subsection{Generating the galaxy population} \label{sec:generating_galaxy_pop} Armed with the forms of our baryonic growth function (Eqn.~\ref{eqn:bgf}) and physics function (Eqn.~\ref{eqn:physicsfunc_mvir}), we now discuss the practical implementation of the formation history model to generate a galaxy population from the input dark matter merger trees. For each halo in the tree, the change in dark matter halo mass, coupled with the time between each merger tree snapshot, provides us with the value of ${\rm d}M_{\rm vir}/{\rm d}t$. This change in mass naturally includes growth due to both smooth accretion {\it and} merger events. Combined with the instantaneous value of $M_{\rm vir}${} or $V_{\rm max}${}, we can calculate a star formation rate for the occupying galaxy following Eqn.~\ref{eqn:sfr}. Some fraction of the mass formed by each new star formation episode will be contained within massive stars. The lives of these stars will be relatively short and therefore they will not contribute to the measured total stellar mass content of the galaxy. In order to model this effect we invoke the ``instantaneous recycling'' approximation \citep{Cole2000}, whereby some fraction of the mass of newly formed stars is assumed to be instantly returned to the galaxy interstellar medium (ISM). Based on a \citet{Salpeter1955} initial mass function (IMF) we take this fraction to be 30\%; however, we note that changes to this value can be trivially taken into account by appropriately scaling the value of $\mathcal{E}$ in the physics function.
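To make this prescription concrete, the short sketch below (written in Python purely for illustration; the constants, function and variable names are ours and do not correspond to any released code) shows how the baryonic growth function, the static $M_{\rm vir}${} physics function and the instantaneous recycling approximation combine into a per-halo, per-snapshot stellar mass update, using the fiducial parameter values of Table~\ref{tab:params}:
\begin{verbatim}
import numpy as np

F_B = 0.17        # universal baryon fraction
R_RECYCLE = 0.30  # instantaneously recycled fraction (Salpeter IMF)

def f_phys_mvir(m_vir, m_peak=10**11.7, sigma=0.65, eff=0.56):
    # Static log-normal physics function, fiducial M_vir values.
    delta = np.log10(m_vir / m_peak)
    return eff * np.exp(-(delta / sigma) ** 2)

def delta_mstar(m_vir_now, m_vir_prev, dt):
    # SFR = baryonic growth function times physics function.
    sfr = F_B * (m_vir_now - m_vir_prev) / dt * f_phys_mvir(m_vir_now)
    # Long-lived stellar mass formed over this snapshot interval.
    return (1.0 - R_RECYCLE) * sfr * dt
\end{verbatim}
Note that this naive update inherits the sign of ${\rm d}M_{\rm vir}/{\rm d}t$; the practical complications this raises, and our treatment of satellites, are discussed next.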
Although well motivated and conceptually simple, our use of ${\rm d}M_{\rm vir}/{\rm d}t$ in the baryonic growth function (Eqn.~\ref{eqn:bgf}) does introduce some practical considerations. For example, the change in halo mass from snapshot-to-snapshot in the input dark matter merger trees can be stochastic in nature, especially for the case of low-mass or diffuse haloes identified in regions of high density. Also, when satellite galaxies fall into larger systems their haloes are tidally stripped, leading to a negative change in halo mass and thus a reduction in stellar mass according to Eqn.~\ref{eqn:sfr}. In the real Universe, we expect that the galaxy is located deep within the potential well of its host halo and is therefore largely protected from the earliest stripping effects suffered by the dark matter \citep{Penarrubia2010}. We must therefore decide when, if at all, to allow stellar mass loss when using this formalism. For simplicity, we address this by setting the star formation rate of satellite galaxies to be zero at all times; in other words, fixing their stellar mass upon infall. This is unlikely to be true in the real Universe across all mass and environment scales \citep{Weinmann2006}; however, the assumption of little or no star formation in satellite galaxies is a reasonable approximation and is relatively common in analytic galaxy formation models \citep[e.g.][]{Kauffmann1999,Cole2000,Bower2006,Croton2006}. It is also in keeping with our goal of finding the simplest possible model. The form of the baryonic growth function presented in Eqn.~\ref{eqn:bgf} above is only one of a number of possibilities. As an example, one could use the instantaneous halo mass divided by its dynamical time, $M_{\rm vir}/t_{\rm dyn}$. This quantity grows more smoothly over the lifetime of a halo and is never negative. Additionally, one may speculate that this is a better representation of the link between stellar and halo mass build-up. However, for simplicity, we do not investigate alternative forms of the baryonic growth function, but leave this to future work. Satellite galaxies are explicitly tracked in the input merger trees until their host subhaloes can no longer be identified or fall below the imposed resolution limit of 20 particles. At this point, their position is approximated by the location of the most bound particle at the last snapshot at which the halo was identified. We then follow \citet{Croton2006} in assuming that the associated satellite galaxy will merge with the central galaxy of the parent halo/subhalo after a time-scale motivated by dynamical friction arguments \citep{Binney2008}: \begin{equation} t_{\rm merge} = \frac{1.17}{G} \frac{V_{\rm vir}r^2_{\rm sat}}{m_{\rm sat}\ln(1+M_{\rm vir}/m_{\rm sat})}\ , \end{equation} where $V_{\rm vir}$ and $M_{\rm vir}$ are the virial velocity and mass of the parent dark matter halo in $\rm km\,s^{-1}$ and $\rm M_{\sun}$, respectively, $r_{\rm sat}$ is the current radius of the satellite halo in $\rm kpc$, and $m_{\rm sat}$ is the mass of the satellite in $\rm M_{\sun}$. In these units, the gravitational constant, $G$, is given by $\rm 4.40\times 10^{-9}\: kpc{^2}\,km\,s^{-1}\,M_{\sun}^{-1}\,Myr^{-1}$. The final stellar mass of a merger remnant is given by the sum of the stellar masses of the two merging progenitor galaxies. This is a key feature of the model and allows the growth histories of the progenitors of each galaxy to affect the final stellar populations of their descendants.
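For reference, the merger time-scale above can be evaluated directly in the stated units, as in the following minimal sketch (the function name and example numbers are ours and purely illustrative):
\begin{verbatim}
import numpy as np

G = 4.40e-9  # kpc^2 km s^-1 Msun^-1 Myr^-1

def t_merge(v_vir, r_sat, m_vir, m_sat):
    # Dynamical friction merger time-scale in Myr.
    # v_vir [km/s], r_sat [kpc], m_vir and m_sat [Msun].
    return (1.17 / G) * v_vir * r_sat**2 \
        / (m_sat * np.log(1.0 + m_vir / m_sat))

# e.g. a 10^11 Msun satellite at 100 kpc inside a 10^13 Msun halo
# with V_vir = 200 km/s merges after roughly 1.2e3 Myr.
\end{verbatim}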
Our input dark matter merger trees are constructed such that the mass of central friends-of-friends haloes implicitly includes the mass of all of their associated subhaloes. Therefore, as an infalling satellite halo crosses the virial radius of its parent, the satellite mass is instantaneously added to that of the parent, thus contributing to its ${\rm d}M_{\rm vir}/{\rm d}t$ value and the amount of star formation in the central galaxy. In reality this burst of newly formed stars could be produced by a number of different mechanisms. These include star formation in the central galaxy fuelled by external smooth accretion or material stripped from the infalling satellite (e.g. hot halo gas), star formation in the satellite galaxy as it uses up its remaining cold gas reserves during infall, and the merger-driven starburst which may occur when the central and satellite galaxies eventually collide and merge. However, our simple model makes no assumptions about what contribution each of these mechanisms makes to the total amount of stars produced during a merger event. Combined with our simple baryonic growth function that assumes {\it all} of the incoming baryonic material is available for star formation (irrespective of whether or not it is already locked up in stars), our model implicitly includes merger-driven starbursts with an increased efficiency. However, since we do not explicitly account for the amount of incoming baryons which are already locked up in stars when a satellite halo infalls, there is the possibility that the resulting merger-driven starburst produces a system with a baryon fraction in excess of the universal value. For our best-fitting models below, we have found that this situation occurs in less than 0.25\% of all friends-of-friends haloes at any single snapshot. The situation is most prevalent in haloes with $M_{\rm vir}{\approx}10^{11.5}\,{\rm M_{\sun}}$, although even there, less than 1\% have baryon fractions above the cosmic value. Knowledge of the star formation rates of each galaxy and its progenitors at every time step in the simulation allows us to also calculate luminosities. For this purpose we use the simple stellar population models of \citet{Bruzual2003} and assume a \citet{Salpeter1955} IMF. In the real Universe supernova ejecta enrich the interstellar medium, altering the chemical composition of the next generation of stars and the spectrum of the light they emit. As we do not track the amount of gas or metals in our simple model, we assume all stars are of 1/3 solar metallicity. This is a common assumption when no metallicity information is available. Finally, a simple ``plane-parallel slab'' dust model \citep{Kauffmann1999} is applied to the luminosity of each galaxy in order to provide approximate dust-extincted magnitudes. These magnitudes are used below to augment our analysis by allowing us to calculate the $B{-}V$ colour for each galaxy at $z{=}0${}. However, our main focus will remain on stellar masses as these are a direct model prediction. \section{Results} \label{sec:results} Having outlined the methodology and implementation of our simple formation history model, we now present some basic results which showcase its ability to recreate observed distributions of galaxy properties. We begin by considering redshift zero alone, before moving on to investigate the results at higher redshifts.
Throughout, we contrast the variations between the predicted galaxy populations when using $M_{\rm vir}${} or $V_{\rm max}${} as the dependent variable of the physics function (Eqns.~\ref{eqn:physicsfunc_mvir} \& \ref{eqn:physicsfunc_vmax}). \subsection{Redshift zero} \label{sec:z0_results} \begin{table*} \begin{minipage}{\textwidth} \begin{center} \caption{\label{tab:params} The fiducial parameter values of the physics function when using either $M_{\rm vir}${} or $V_{\rm max}${} as the dependent variable. Values are presented for both the non-evolving (see \S\ref{sec:z0_results}) and evolving (see \S\ref{sec:evo_results}) form. Also shown are the ranges for the flat priors used during the MCMC calibration. The parameters $\log_{10}(M_{\rm peak}/\rm{M_{\sun}})$ and $\log_{10}(V_{\rm peak}/({\rm km\,s^{-1})})$ indicate the haloes which possess the peak star formation efficiency (at $z{=}0${} in the evolving case). The $\sigma$ and $\mathcal{E}$ parameters represent the width and height of the Gaussian physics function, respectively. For the evolving models, $\alpha$, $\beta$ and $\gamma$ indicate the rate of power-law evolution of the peak location, width and height of the physics function, respectively. The non-evolving model parameters were chosen to provide the best reproduction of the observed $z{=}0$ colour-split stellar mass function of \citet{Bell2003}. In the evolving case, the values were chosen to additionally reproduce the evolution of the peak stellar--halo mass relation of \citet{Moster2013}.} \center \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\mathbf{M_{\rm vir}}$ {\bf model} & $\log_{10}(M_{\rm peak}/\rm{M_{\sun}})$ & $\sigma_{M_{\rm vir}}$ & $\mathcal{E}_{M_{\rm vir}}$ & $\alpha_{M_{\rm vir}}$ & $\beta_{M_{\rm vir}}$ & $\gamma_{M_{\rm vir}}$\\ \hline Prior ranges & [$11.15, 12.65$] & [$0.2, 1.5$] & [$0.1, 1.5$] & [$-3.0, 3.0$] & [$-3.0, 3.0$] & [$-3.0, 3.0$] \\ \hline {\bf Static} {\it(\S\ref{sec:z0_mvir})} & $11.7$ & $0.65$ & $0.56$ & -- & -- & --\\ {\bf Evolving} {\it(\S\ref{sec:evo_results})} & $11.6$ & $0.56$ & $0.90$ & $0.03$ & $0.25$ & $-0.74$ \\ \hline\hline $\mathbf{V_{\rm max}}$ {\bf model} & $\log_{10}(V_{\rm peak}/({\rm km\,s^{-1})})$ & $\sigma_{V_{\rm max}}$ & $\mathcal{E}_{V_{\rm max}}$ & $\alpha_{V_{\rm max}}$ & $\beta_{V_{\rm max}}$ & $\gamma_{V_{\rm max}}$\\ \hline Prior ranges & [$1.5, 3.5$] & [$0.1, 1.5$] & [$0.1, 1.5$] & [$-3.0, 3.0$] & [$-3.0, 3.0$] & [$-3.0, 3.0$] \\ \hline {\bf Static} {\it(\S\ref{sec:z0_vmax})} & $2.2$ & $0.18$ & $0.53$ & -- & -- & -- \\ {\bf Evolving} {\it(\S\ref{sec:evo_results})} & $2.1$ & $0.17$ & $1.12$ & $0.10$ & $0.33$ & $-0.98$ \\ \hline \end{tabular} \end{center} \end{minipage} \end{table*} In order to determine the ``best'' parameter values for the $M_{\rm vir}${} model, we calibrate them against Schechter function fits of the observed red and blue galaxy stellar mass functions of \citet{Bell2003}. This calibration was done using Markov chain Monte Carlo (MCMC) parameter estimation techniques \citep[for details of our implementation see][]{Mutch2013}. The observed mass functions are constructed from a $g$-band limited sample taken from a combination of Sloan Digital Sky Survey (SDSS) early release \citep{Stoughton2002} and Two Micron All Sky Survey \citep[2MASS;][]{Jarrett2000} data, with a magnitude-dependent colour cut used to divide the red and blue galaxy populations. To similarly split the model galaxies into red and blue samples we employ a more basic mass- and magnitude-independent colour cut of $B{-}V{=}0.8$.
This is equivalent to the colour division found by the 2dF Galaxy Redshift Survey \citep{Cole2005}. For all of the results presented in this work we use a minimum of 130\,000 model calls in the integration phase of our Monte Carlo chains, where the precise number used varies in proportion to the number of free model parameters. Due to computational limitations we are unable to utilize the full Millennium Simulation volume and instead restrict ourselves to a random sampling of 1/128 of the total simulation merger trees. This is equivalent to a comoving volume of approximately $9.8\times 10^{5}\,\mathrm{h}^{-3}\,\mathrm{Mpc^3}$ (i.e. a box with a side length of approximately $100\,\mathrm{h}^{-1}\,\mathrm{Mpc}$). We use the same random merger tree set throughout. Flat priors were used for each parameter with ranges as presented in Table~\ref{tab:params}. To ensure that all of our chains are fully converged we employ the Gelman--Rubin statistic \citep{Gelman1992} as well as visually inspect the chain traces. It is important to note that, since our model relies on both the instantaneous host dark matter halo mass and its growth rate, the precise best-fitting parameter values may vary depending on the time-step spacing between the snapshots of the input simulation. For this reason one should be cautious not to over-interpret the exact parameter values of our simple model as they can be sensitive to the details of the implementation\footnote{Conversely, our method allows one to easily investigate the ramifications of varying simulation and merger tree properties, providing a direct check of such previously hidden differences.}. Using the posterior probability distributions of the MCMC fitting procedure allows us to place 68 and 95\% confidence limits on all of our model results, both constrained and predicted. In this work, confidence intervals are calculated from a large sample of model runs (60--200) that have parameter combinations randomly sampled from the relevant posterior distributions. An important consideration when statistically calibrating any model against observational data is the use of realistic observational uncertainties \citep{Mutch2013}. As discussed by \citet{Bell2003}, there are likely significant systematic uncertainties associated with their stellar mass function estimation which are not formally included in the relevant Schechter function parameter values. To overcome this we utilize the uncertainties of \citet{Baldry2008}, which are calculated by comparing the global mass functions that result from five independent stellar mass determinations of a single galaxy sample. We then partition this global uncertainty between red and blue galaxies. This is done such that the fractional contribution to the uncertainty due to red (blue) galaxies is equal to the fraction of red (blue) galaxies in each stellar mass bin. \begin{figure} \includegraphics[width=\columnwidth]{./figures/fig4.pdf} \caption{\label{fig:mvir_csmd} The colour--stellar mass diagram of the static $M_{\rm vir}${} model. The black dashed line at $B{-}V{=}0.8$ \citep{Cole2005} indicates the value used to divide the model galaxies into red and blue samples.
There is a clear ridge of over-density extending across all stellar masses at $B{-}V{\approx}0.87$ representing the red sequence.} \end{figure} \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig5_left.pdf}}\quad \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig5_right.pdf}} \caption{\label{fig:z0_smf} The red (dashed lines) and blue (solid lines) $z{=}0$ stellar mass function{}s produced by the static formation history model when using $M_{\rm vir}${} (left-hand panel) and $V_{\rm max}${} (right-hand panel) as the input variable to the physics function (Eqns.~\ref{eqn:physicsfunc_mvir} \& \ref{eqn:physicsfunc_vmax}). Galaxy colour is classified using a mass-independent colour cut of $B{-}V{=}0.8$. The free model parameters have been calibrated to provide the best possible reproduction of the observed stellar mass function{} of \citet{Bell2003} (error bars) and are presented in Table~\ref{tab:params}. Shaded regions indicate the 68 (dark) and 95 (light) \% confidence intervals of our MCMC fit. The fact that such an agreement can be achieved is an important success given the simplicity of the formation history model.} \end{center} \end{minipage} \end{figure*} \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig6_left.pdf}}\quad \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig6_right.pdf}} \caption{\label{fig:z0_smf_probs} Marginalized posterior probability distributions for the $M_{\rm vir}${} (left) and $V_{\rm max}${} (right) static model parameters when calibrating against the red and blue stellar mass functions at $z{=}0$ (Fig.~\ref{fig:z0_smf}). All panels have been zoomed in to the high-probability regions. Contours on the 2D (blue) panels indicate the 68 and 95\% confidence regions. Yellow dots mark the marginalized most probable parameter values. The diagonal panels show the marginalized 1D distributions with the 68 and 95\% confidence intervals (dark and light shaded regions, respectively). The approximately Gaussian shape of these 1D distributions indicates the well behaved nature of the model. The only parameter degeneracies are between the normalization ($\mathcal{E}_{M_{\rm vir}}${}/$\mathcal{E}_{V_{\rm max}}${}) and width ($\sigma_{M_{\rm vir}}${}/$\sigma_{V_{\rm max}}${}) of the physics functions.} \end{center} \end{minipage} \end{figure*} \subsubsection{The $M_{\rm vir}${} model} \label{sec:z0_mvir} The best-fitting parameters for the static $M_{\rm vir}${} model are presented in Table~\ref{tab:params} (see \S\ref{sec:evo_results} for the ``evolving'' model). The preferred $M_{\rm peak}${} value of $10^{11.7}\,\rm{M_{\sun}}$ implies that galaxies in haloes slightly less massive than that of the Milky Way \citep[$M_{\rm vir}\approx 10^{12}\, \rm{M_{\sun}}$;][]{Xue2008} are on average the most efficient star formers. In these haloes, 56\% of all freshly accreted baryonic material is converted into stars, as indicated by the value of $\mathcal{E}_{M_{\rm vir}}${}. We also note that both the position of the peak of the physics function and its width agree well with the star formation rate--halo mass relation obtained by the abundance matching study of \citet{Bethermin2012}. In Fig.~\ref{fig:mvir_csmd} we show the colour--stellar mass diagram produced using the best parameters of our static $M_{\rm vir}${} model. 
The black dashed line indicates the colour split used to divide the galaxies into red and blue populations. Although there is no clear colour bimodality as seen in observational data \citep[e.g.][]{Baldry2004}, we still find a clear overabundance of galaxies with $B{-}V{\approx}0.87$ corresponding to the observed ``red sequence''. The presence of this feature at approximately the correct position in colour space \citep{Cole2005} is an interesting result for such a simple model. In the left-hand panel of Fig.~\ref{fig:z0_smf} we show the red and blue model galaxy stellar mass function{}s (lines) against the corresponding constraining observations (error bars). Despite its simplicity, the chosen form of the physics function provides a good reproduction of the data. This is true across a wide range in stellar mass, indicating that the model is capable of successfully matching the integrated time evolution of stellar mass growth as a function of halo mass at $z{=}0${}. Also, since blue galaxies preferentially trace those objects which have undergone significant recent star formation, the model's reproduction of the observed blue mass function suggests that the rate of star formation as a function of stellar mass near $z{=}0${} is also in broad agreement with the observed Universe. The fact that such an agreement is attainable with this simple model should be viewed as a key success of the methodology and a validation of the general form we have chosen for the physics function, $F_{\rm phys}(M_{\rm vir})${}. Having said this, there are some differences in the left-hand panel of Fig.~\ref{fig:z0_smf} worth noting. In particular, there is an overprediction in the number density of the most massive red galaxies and a corresponding underprediction of the most massive blue galaxies. \subsubsection{The $V_{\rm max}${} model} \label{sec:z0_vmax} Having established that a physics function constructed using $M_{\rm vir}${} as the single input variable can successfully provide a good match to the observed $z{=}0${} red and blue stellar mass function{}s, we now turn our attention to the results of using $V_{\rm max}${} as the input property (Eqn.~\ref{eqn:physicsfunc_vmax}). In the right-hand panel of Fig.~\ref{fig:z0_smf} we present the colour-split stellar mass function{}s for the static $V_{\rm max}${} model. Again, a fixed colour division of $B{-}V{=}0.8$ is used to define the two colour populations and we use MCMC tools to constrain the physics function parameter values to provide the best statistical reproduction of the \citet{Bell2003} data. The resulting parameter values are presented in Table~\ref{tab:params}. Unsurprisingly, a comparison with the equivalent values of the $M_{\rm vir}${} model indicates that the peak efficiency of converting fresh baryonic material into stars in a single time-step remains similar ($\mathcal{E}_{V_{\rm max}}{=}0.53;\,\mathcal{E}_{M_{\rm vir}}{=}0.56$). However, the average virial mass of haloes with $V_{\rm max}{\approx}V_{\rm peak}{=}158\:\rm{km\,s^{-1}}$ is $10^{11.9}\,\rm{M_{\sun}}$, therefore this peak efficiency occurs in slightly more massive haloes than was the case for the $M_{\rm vir}${} model. This is a reflection of the different growth histories of these two halo properties. As was the case for the $M_{\rm vir}${} model, an excellent reproduction of the observations is attainable when using $V_{\rm max}${} as the input parameter to the physics function.
We find that the overprediction of high-mass red galaxies has been alleviated, although at the cost of now somewhat underpredicting the number density of low-mass blue galaxies. Importantly though, given a suitable choice of values for the free parameters of the physics function, both the $M_{\rm vir}${} and $V_{\rm max}${} physics functions can produce a good match to the distribution and late-time growth of stellar mass at $z{=}0${} despite the differences in their mean time evolution (cf. Fig.~\ref{fig:cartoon}). In Fig.~\ref{fig:z0_smf_probs} we present the marginalized posterior probability distributions for our MCMC calibration of both the $M_{\rm vir}${} (left-hand panel) and $V_{\rm max}${} (right-hand panel) models. For clarity we have zoomed in on the regions of high probability in all panels instead of showing the full ranges explored. The well behaved and understandable nature of the parameter distributions gives us further confidence in the validity of our model implementation. The approximately Gaussian shape of the 1D distributions (diagonal panels), coupled with their unimodal nature, indicates that all of the parameters are well constrained. Furthermore, the 2D panels demonstrate that the only degeneracies in either model are between those parameters controlling the normalization ($\mathcal{E}_{M_{\rm vir}}${}/$\mathcal{E}_{V_{\rm max}}${}) and width ($\sigma_{M_{\rm vir}}${}/$\sigma_{V_{\rm max}}${}) of the log-normal physics function. This makes intuitive sense as these parameters jointly determine the integral of the star formation rate defined by Eqn.~\ref{eqn:sfr} and therefore the approximate total amount of stellar mass formed by each galaxy. \subsection{High redshift} \label{sec:highz_results} \begin{figure} \includegraphics[width=\columnwidth]{./figures/fig7.pdf} \caption{\label{fig:3panel_smf} The $z{\approx}0.9$, 1.8, 3.3 stellar mass function{}s predicted by the static $M_{\rm vir}${} (dashed black line) and $V_{\rm max}${} (dashed orange line) models (Eqns.~\ref{eqn:physicsfunc_mvir} \& \ref{eqn:physicsfunc_vmax}). Observational data from \citet[][PG08]{Perez-Gonzalez2008} are shown for comparison. The solid lines indicate the results of convolving the model stellar masses with a normally distributed random uncertainty of 0.3 or 0.45 dex (for redshifts less than/greater than 3, respectively) in order to mimic the systematic uncertainties associated with the observed masses. The dark and light shaded regions show the 68 and 95\% confidence intervals predicted using the marginalized parameter distributions obtained from our $z{=}0$ parameter calibration (Fig.~\ref{fig:z0_smf_probs}). There are clear differences between the $M_{\rm vir}${} and $V_{\rm max}${} models at higher redshifts, despite their approximate agreement at $z{=}0${} (Fig.~\ref{fig:z0_smf}). This reflects the different time evolution of these two halo properties (see Fig.~\ref{fig:cartoon}).} \end{figure} In the previous section we demonstrated that our simple formation history model is capable of reproducing the observed red and blue galaxy stellar mass function{}s of the local Universe. We also showed that this is true independent of whether we utilize the $M_{\rm vir}${} or $V_{\rm max}${} form of the physics function (Eqns.~\ref{eqn:physicsfunc_mvir} \& \ref{eqn:physicsfunc_vmax}). However, as seen in Fig.~\ref{fig:cartoon}, there are important differences in the time evolution of these halo properties.
This suggests that we should see corresponding differences in the galaxy populations predicted at higher redshifts. In Fig.~\ref{fig:3panel_smf} we present the stellar mass function{}s of both the $M_{\rm vir}${} and $V_{\rm max}${} models (dashed lines) against the observed $z{>}0$ mass functions of \citet[][points]{Perez-Gonzalez2008}. The solid lines represent the formation history model results after a convolution with a normally distributed random error of dispersion 0.3 dex for $z{<}3$ and 0.45 dex for redshifts greater than this value \citep{Moster2013}. Such a convolution is common practice and approximates the missing uncertainties in the observational data due to systematics involved with producing stellar mass estimates from high-redshift galaxy observables \citep[e.g.][]{Fontanot2009,Guo2011,Santini2012}. There are clear quantitative differences between the stellar mass functions produced by the two models. These become more pronounced as we move to higher redshifts. At $z{\approx}3$ (bottom panel), the $V_{\rm max}${} model predicts a sharp fall off in the number density of galaxies with stellar masses greater than $10^{10.5}\,M_{\sun}$. When using $M_{\rm vir}${} to define the physics function, this drop off does not occur until $M_{\star}{\approx} 10^{11}\,M_{\sun}$, resulting in predictions for the number density of these galaxies that differ by more than two orders of magnitude at the highest masses. Despite this, both versions of the physics function predict $z{>}0$ stellar mass function{}s which are too steep at high masses, although the addition of the random uncertainties (solid lines) largely alleviates this problem. There are also notable differences at lower stellar masses, where both models overpredict the number of galaxies. The $V_{\rm max}${} model also predicts that a large fraction of galaxies with $M_{\star}{<}10^{10.5}\,M_{\sun}$ are already in place by $z{=}3$, with a correspondingly slower evolution to $z{=}0$. Many of these differing qualitative predictions can be understood by considering the differences in the time evolution of $M_{\rm vir}${} and $V_{\rm max}${} as shown in Fig.~\ref{fig:cartoon}. For example, the deficit of high stellar mass galaxies in the $V_{\rm max}${} model at $z{\approx}3$ is due to their host haloes initially being identified with $V_{\rm max}${} values greater than $V_{\rm peak}${} (at least for the mass resolution of our simulation). These haloes therefore spend little time in the efficient star-forming band. The result is a reduced amount of in situ star formation at early times, with effects that carry all the way through to $z{=}0${} as these galaxies grow, predominantly through merging. We can similarly understand the cause of the larger predicted number density of high-redshift low-mass galaxies in the $V_{\rm max}${} model. In this case, the lowest mass haloes present at high redshifts have spent a longer time close to the peak of the efficient star-forming band. This results in these haloes already hosting significant amounts of stellar mass by $z{=}3$. \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{./figures/fig8.pdf} \caption{\label{fig:no_evo_SHMrelation} The evolution of the mean stellar--halo mass relation of central galaxies. The results of the static $M_{\rm vir}${} (Eqn.~\ref{eqn:physicsfunc_mvir}; black solid line) and $V_{\rm max}${} (Eqn.~\ref{eqn:physicsfunc_vmax}; orange dashed line) models are both shown.
The dark and light shaded regions indicate the 68 and 95\% confidence intervals predicted using the marginalized parameter distributions from our calibration at $z{=}0$ (see Fig.~\ref{fig:z0_smf_probs}). A comparison with the subhalo abundance matching results of \citet{Moster2013} (blue error bars) indicates that both forms of the physics function fail to reproduce the evolution of the stellar mass growth efficiency required to match the high-redshift stellar mass function (see Fig.~\ref{fig:3panel_smf}). Coloured dots in the first and second panels show a random sampling of 20 galaxy--halo systems from each $x$-axis bin of the $M_{\rm vir}${} and $V_{\rm max}${} models, respectively.} \end{center} \end{minipage} \end{figure*} To illustrate this further, in Fig.~\ref{fig:no_evo_SHMrelation} we show the evolution of the mean stellar--halo mass relation for both models. The blue error bars represent the relations predicted by the subhalo abundance matching model of \citet{Moster2013}. We have specifically chosen to compare our results against the work of \citet{Moster2013}, as they take their halo masses from the same dark matter merger trees as used in this work \citep[as well as the higher resolution Millennium-II Simulation;][]{Boylan-Kolchin2009} and also construct their model to match the same high-redshift stellar mass functions of \citet{Perez-Gonzalez2008}. Hence, the blue error bars of Fig.~\ref{fig:no_evo_SHMrelation} represent the evolution in the integrated stellar mass growth efficiency which our model must achieve in order to successfully replicate the observed stellar mass function{}s of Fig.~\ref{fig:3panel_smf}. By construction, both the $M_{\rm vir}${} and $V_{\rm max}${} models produce extremely similar relations at $z{=}0$ but with clear differences at higher redshifts. It is these variations in the typical amount of stars formed within haloes of a given mass that drive the different predictions for the evolution of the stellar mass function. For example, the much higher average stellar mass content of low-mass haloes at $z{=}3$ when using the $V_{\rm max}${} model (Fig.~\ref{fig:no_evo_SHMrelation}) is the cause of the increased normalization of the low-mass end of the relevant stellar mass function{} in Fig.~\ref{fig:3panel_smf}. Importantly, it can be seen from Fig.~\ref{fig:no_evo_SHMrelation} that neither the $M_{\rm vir}${} nor the $V_{\rm max}${} model reproduces the evolution of the stellar--halo mass relation found by \citet{Moster2013}; in particular, the position and normalization of the peak value. The use of a redshift-independent halo mass to define the peak in situ star formation efficiency of the $M_{\rm vir}${} model results in no change to the position of the peak of the stellar--halo mass relation with redshift. Although it is not immediately obvious why this should be so, it can be understood by considering the typical evolution of a halo across the relation. At early times, haloes grow in mass rapidly; however, they typically still sit below the efficient star formation mass regime defined by the physics function (see Fig.~\ref{fig:cartoon}). In Fig.~\ref{fig:no_evo_SHMrelation}, these haloes will therefore travel almost horizontally from left to right with a low stellar--halo mass fraction. Eventually haloes will enter the mass regime of efficient star formation, causing them to rapidly increase their stellar--halo mass fractions with only a relatively modest growth in halo mass.
This phase of rapid stellar mass growth causes a ``pile-up'' of galaxies in the stellar--halo mass relation{} that peaks around the virial mass at which haloes again transition out of the efficient star forming regime. Since the mass at which this occurs is fixed in our simple static model, the position of the stellar--halo mass relation{} peak is therefore also fixed in the $M_{\rm vir}${} model. Due to the evolving $M_{\rm vir}${}--$V_{\rm max}${} relationship, the position of the peak efficiency for the $V_{\rm max}${} model does evolve, but unfortunately in the direction opposite to that required. The shallower tail of the relation towards higher halo masses is caused by the subsequent growth of galaxies due to mergers. As well as the precise shape and normalization of the mean stellar--halo mass relation{}, it is also important to consider the scatter of the distribution about this mean. For example, at high halo masses it is possible to increase the scatter of stellar--halo mass ratios to produce an increased normalization of the high-mass end of the stellar mass function whilst leaving the mean stellar--halo mass relation{} unchanged. In the first two panels of Fig.~\ref{fig:no_evo_SHMrelation} we have plotted the stellar--halo mass ratios of 20 randomly selected haloes from each of the 15 mass bins used to construct the mean relations. At halo masses above the peak value we find an approximately constant value for the scatter as a function of $M_{\rm vir}${} in both models. Below the peak halo mass, the scatter rapidly increases with decreasing $M_{\rm vir}${}. This reflects the stochastic nature of star formation for lower mass haloes whose mass evolution may not be well resolved at all times in our model. However, at $z{=}0${} we find an average 1$\sigma$ scatter of 0.15 dex for the $M_{\rm vir}${} model and 0.23 dex for the $V_{\rm max}${} model over the range of halo masses plotted. This agrees well with previous studies \citep[e.g.][]{More2009,Yang2009,Behroozi2013b}. Based purely on the inability to reproduce the required evolution in the stellar--halo mass relation, it is unlikely that the non-evolving physics function will be able to match the observed distribution of stellar masses in both the low- and high-redshift Universe simultaneously. This is true irrespective of the values of the available parameters or whether $M_{\rm vir}${} or $V_{\rm max}${} is used as the dependent variable. \subsection{Incorporating a redshift evolution} \label{sec:evo_results} \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig9_left.pdf}} \quad \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig9_right.pdf}} \caption{\label{fig:evo_prob_join} Marginalized posterior probability distributions for the $M_{\rm vir}${} (left) and $V_{\rm max}${} (right) parameters of the redshift dependent physics function. In both panels, the models were constrained to simultaneously reproduce the observed $z{=}0$ red and blue stellar mass function{}s (Fig.~\ref{fig:evo_colorsplit_SMF}) as well as the time evolution of the stellar--halo mass relation{} (Fig.~\ref{fig:evo_SHMrelation}). All panels have been zoomed in to the high-probability regions. Contours on the 2D (blue) panels indicate the 68 and 95\% confidence regions. Yellow dots mark the marginalized most probable parameter values.
The diagonal panels show the marginalized 1D distributions with the 1$\sigma$ and 2$\sigma$ confidence intervals shown by dark and light shaded regions, respectively. The approximately Gaussian shape of these 1D distributions indicates the well behaved nature of the model.} \end{center} \end{minipage} \end{figure*} \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{./figures/fig10.pdf} \caption{\label{fig:evo_SHMrelation} The evolution of the mean stellar--halo mass relation of central galaxies for the evolving $M_{\rm vir}${} (Eqn.~\ref{eqn:physicsfunc_mvir}; black solid line) and $V_{\rm max}${} (Eqn.~\ref{eqn:physicsfunc_vmax}; grey dashed line) models. Orange shaded regions indicate the subhalo abundance matching results of \citet{Moster2013}. A comparison with Fig.~\ref{fig:no_evo_SHMrelation} indicates that by shifting both the normalization ($\mathcal{E}_{M_{\rm vir}}${}, $\mathcal{E}_{V_{\rm max}}${}) and position ($M_{\rm peak}${}, $V_{\rm peak}${}) of the physics functions, we are able to reproduce the correct evolution of the stellar--halo mass relation at high redshifts in terms of both shape and amplitude. This leads to a much better agreement between the observed and predicted stellar mass functions at $z{>}0$ (see Fig.~\ref{fig:evo_3panel_smf}). Coloured dots in the first and second panels show a random sampling of 20 galaxy--halo systems from each $x$-axis bin of the $M_{\rm vir}${} and $V_{\rm max}${} models, respectively.} \end{center} \end{minipage} \end{figure*} Although capable of reproducing the observed red and blue stellar mass functions at $z{=}0${}, we showed in \S\ref{sec:highz_results} that our simple formation history model struggles to reproduce the high-redshift distribution of stellar masses. Importantly, we also concluded that there is unlikely to be any combination of physics function parameter values (see Eqns.~\ref{eqn:physicsfunc_mvir} \& \ref{eqn:physicsfunc_vmax}) which could alleviate this discrepancy. In this section we therefore look to extend our simple model by introducing a redshift dependence to the physics function. This is equivalent to the introduction of an evolution of the star formation efficiency with time for a fixed halo mass/maximum circular velocity. Such an evolution is well motivated both theoretically and observationally, suggesting the presence of alternative/additional star formation mechanisms at high redshift when compared to those of the local Universe. For example, so-called ``cold-mode'' accretion \citep{Birnboim2003,Keres2005,Brooks2009} is thought to be able to effectively fuel galaxies in massive haloes at high redshift, allowing for increased star formation. In addition, the early Universe was a more dynamic place with an enhanced prevalence of gas-rich galaxy mergers and turbulence-driven star formation \citep[e.g.][]{Dekel2009b,Wisnioski2011}.
To reproduce the evolving position and normalization of the stellar--halo mass relation as found by \citet{Moster2013}, we modify the physics function of Eqn.~\ref{eqn:physicsfunc_mvir} by introducing a simple power law dependence on redshift to each of the free parameters: \begin{eqnarray} \log_{10}(M_{\rm peak}(z)) &=& \log_{10}(M_{\rm peak}) (1+z)^{\alpha_{M_{\rm vir}}}\ ,\\ \sigma_{M_{\rm vir}}(z) &=& \sigma_{M_{\rm vir}} (1+z)^{\beta_{M_{\rm vir}}}\ ,\\ \mathcal{E}_{M_{\rm vir}}(z) &=& \mathcal{E}_{M_{\rm vir}} (1+z)^{\gamma_{M_{\rm vir}}}\ , \end{eqnarray} where $M_{\rm peak}${}, $\sigma_{M_{\rm vir}}${} and $\mathcal{E}_{M_{\rm vir}}${} denote the values at $z{=}0$. The exact values of the redshift scalings are calibrated using MCMC to provide the best simultaneous reproduction of the \citet{Moster2013} stellar--halo mass relation{} at $z{=}0$, 1, 2 and 3, as well as the $z{=}0$ red and blue stellar mass function{}s, and are presented in Table~\ref{tab:params}. In the left-hand panel of Fig.~\ref{fig:evo_prob_join} we present the relevant marginalized posterior probability distributions of the six free model parameters. Similarly to the redshift-independent case (cf. \S\ref{sec:z0_results}), the approximately Gaussian shape of the 1D probability distributions indicates that the parameters are generally well constrained. However, in addition to the degeneracy between the physics function normalization ($\mathcal{E}_{M_{\rm vir}}${}) and width ($\sigma_{M_{\rm vir}}${}) noted in \S\ref{sec:z0_results}, there are also clear and understandable degeneracies between the redshift evolution and $z{=}0$ value of each parameter (e.g. $M_{\rm peak}${} and $\alpha_{M_{\rm vir}}$). We note that there are minor differences between the $z{=}0${} stellar mass function{} utilized by \citet{Moster2013} to constrain their stellar--halo mass relation{} \citep{Li2009}, and the $z{=}0${} mass function which we employ in this work \citep{Bell2003}. However, we calibrate our model against both the stellar--halo mass relation{} and colour-split stellar mass functions at $z{=}0${} with equal weights. The MCMC fitting procedure then attempts to find the best compromise between these two (as well as all other) constraints. Since we find that there are no multimodal features in the marginalized posterior probability distributions (see Fig.~\ref{fig:evo_prob_join}), the parameter sets required to fit each constraint individually must be statistically compatible with each other. We therefore conclude that this slight inconsistency in our fitting procedure has minimal effect on our ability to demonstrate the success and utility of the model and on our results. The preferred values of $\alpha_{M_{\rm vir}}$ and $\beta_{M_{\rm vir}}$ are relatively small, indicating little need for evolution in either the peak position, $M_{\rm peak}${}$(z)$, or the width, $\sigma_{M_{\rm vir}}${}$(z)$, of the physics function. As a consequence, the values of both $M_{\rm peak}${} and $\sigma_{M_{\rm vir}}${} are similar to the non-evolving case (cf. Table~\ref{tab:params}). However, there is a strong evolution preferred for the normalization of the physics function, $\gamma_{M_{\rm vir}}$, such that it decreases rapidly with increasing redshift. In order to maintain the total $z{=}0$ stellar mass density, the value of $\mathcal{E}_{M_{\rm vir}}${}${=}0.90$ is therefore considerably higher than was the case in the non-evolving form of the physics function.
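The size of this preferred evolution can be illustrated with a minimal numerical sketch, assuming the evolving $M_{\rm vir}${} parameter values of Table~\ref{tab:params} (the names are again purely illustrative):
\begin{verbatim}
# Evolving M_vir model: z=0 parameter values and scalings
LOG10_M_PEAK, SIGMA, EFF = 11.6, 0.56, 0.90
ALPHA, BETA, GAMMA = 0.03, 0.25, -0.74

def evolved_params(z):
    # Power-law redshift scalings of the physics function parameters.
    return (LOG10_M_PEAK * (1.0 + z) ** ALPHA,  # log10(M_peak(z))
            SIGMA * (1.0 + z) ** BETA,          # sigma_Mvir(z)
            EFF * (1.0 + z) ** GAMMA)           # E_Mvir(z)

for z in (0, 1, 2, 3):
    print(z, round(evolved_params(z)[2], 2))  # 0.9, 0.54, 0.4, 0.32
\end{verbatim}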
A peak efficiency of $\mathcal{E}_{M_{\rm vir}}{=}0.9$ implies that 90\% of all freshly accreted baryonic material in haloes with $\log_{10}(M_{\rm peak}/M_{\sun}){=}11.6$ is converted into stars at $z{=}0$. However, at $z{=}1$, 2 and 3 the peak conversion efficiencies are considerably lower: 54, 40 and 32\%, respectively. We also similarly modify the $V_{\rm max}${} physics function, $F_{\rm phys}(V_{\rm max})${}: \begin{eqnarray} \log_{10}(V_{\rm peak}(z)) &=& \log_{10}(V_{\rm peak}) (1+z)^{\alpha_{V_{\rm max}}}\ ,\\ \sigma_{V_{\rm max}}(z) &=& \sigma_{V_{\rm max}} (1+z)^{\beta_{V_{\rm max}}}\ ,\\ \mathcal{E}_{V_{\rm max}}(z) &=& \mathcal{E}_{V_{\rm max}} (1+z)^{\gamma_{V_{\rm max}}}\ , \end{eqnarray} with the redshift scalings being calibrated to reproduce the same observations as the $M_{\rm vir}${} case above. The marginalized posterior probability distributions are presented in the right-hand panel of Fig.~\ref{fig:evo_prob_join}, with the preferred parameter values again presented in Table~\ref{tab:params}. As was found for the redshift-dependent $M_{\rm vir}${} model, the marginalized posterior distributions indicate that the free model parameters are well constrained and that there are no unexpected degeneracies between them (Fig.~\ref{fig:evo_prob_join}). If we consider the preferred values of the parameters themselves, we again find that the normalization of the physics function, $\mathcal{E}_{V_{\rm max}}${}$(z)$, shows the most pronounced evolution. A value of $\gamma_{V_{\rm max}}{=}-0.98$ indicates that the maximum star formation efficiency declines almost inversely with $(1{+}z)$. This strong evolution requires a value of $\mathcal{E}_{V_{\rm max}}${} at $z{=}0$ that is actually greater than 1, and hence, at this redshift, haloes with $\log_{10}(V_{\rm peak}/{\rm (km\,s^{-1})}){=}2.1$ must convert more material into stars than the total amount of freshly accreted baryons. This suggests that we may need to vary the universal baryon fraction as a function of halo mass (or maximum circular velocity), perhaps to mimic the effects of processes such as the recycling of ejected baryons during star formation \citep[e.g.][]{Papastergis2012}. \begin{figure} \includegraphics[width=\columnwidth]{./figures/fig11.pdf} \caption{\label{fig:evo_3panel_smf} The $z{\approx}0.9$, 1.8, and 3.3 stellar mass function{}s predicted by the evolving $M_{\rm vir}${} (dashed black line) and $V_{\rm max}${} (dashed orange line) models. Observational data from \citet{Perez-Gonzalez2008} are shown for comparison. The solid lines give the results of convolving the model stellar masses with a normally distributed random uncertainty of 0.3 or 0.45 dex (for redshifts less than/greater than 3, respectively) in order to mimic the systematic uncertainties associated with the observed masses. By using an appropriate redshift evolution of the physics function parameters, the model's ability to successfully recover the observed high-redshift stellar mass functions is improved.} \end{figure} \begin{figure*} \begin{minipage}{\textwidth} \begin{center} \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig12_left.pdf}} \quad \subfigure{\includegraphics[width=0.475\textwidth]{./figures/fig12_right.pdf}} \caption{\label{fig:evo_colorsplit_SMF} The red (dashed lines) and blue (solid lines) galaxy stellar mass function{}s produced by the evolving $M_{\rm vir}${} (left) and $V_{\rm max}${} (right) formation history models. Error bars indicate the observations of \citet{Bell2003}.
The free model parameters have been constrained to simultaneously reproduce these mass functions as well as the evolution of the stellar--halo mass relation{} (Fig.~\ref{fig:evo_SHMrelation}). The dark and light shaded regions show the associated 68 and 95\% confidence regions obtained from this calibration.} \end{center} \end{minipage} \end{figure*} In Fig.~\ref{fig:evo_SHMrelation}, we present the stellar--halo mass relations of the new, redshift-dependent, $M_{\rm vir}${} (black) and $V_{\rm max}${} (orange) models. The blue error bars again indicate the results of \citet{Moster2013}. By incorporating the redshift dependence we are now able to successfully reproduce the evolution of both the normalization and peak position of the stellar--halo mass relation required at $z{\ge}0$. The effects of this on the predicted high-redshift stellar mass functions of both the $M_{\rm vir}${} and $V_{\rm max}${} models can be seen in Fig.~\ref{fig:evo_3panel_smf}. As expected, we now find an improved agreement with the observations when compared to the original, non-evolving physics function results (cf. Fig.~\ref{fig:3panel_smf}). The typical 1$\sigma$ scatter in the evolving $M_{\rm vir}${} stellar--halo mass relation{} remains unchanged from the static case at approximately 0.15 dex at $z{=}0${}. However, the scatter in the evolving $V_{\rm max}${} model is reduced to approximately 0.19 dex (from 0.23 dex in the static case). In both models the scatter decreases as a function of redshift such that at $z{=}1$ and $2$ it is approximately 0.13 and 0.01 dex, respectively. For completeness we also present the $z{=}0${} colour-split stellar mass function{}s for both models in Fig.~\ref{fig:evo_colorsplit_SMF}. A reasonable agreement with the constraining observational data is still achieved. However, we now find that an underprediction in the number density of massive blue galaxies is present in both models, suggesting that our implemented evolutionary model may not provide enough late-time star formation in the most massive haloes. \section{Discussion} \label{sec:discussion} In \S\ref{sec:z0_results}, we demonstrated that our most basic, non-evolving form of the physics function is able to successfully reproduce the observed red and blue stellar mass functions of the local Universe (Fig.~\ref{fig:z0_smf}). This key result highlights the utility and validity of our basic methodology and model implementation. In addition, it reinforces the commonly held belief that the growth of galaxies is intrinsically linked to the growth of their host dark matter haloes \citep{White1978}. Although the level of agreement achieved with the observed $z{=}0$ colour-split stellar mass function{}s is generally very good, there are some discrepancies. For example, there is an underprediction in the number density of the most massive blue galaxies in the $M_{\rm vir}${} model (left-hand panel of Fig.~\ref{fig:z0_smf}), with a corresponding overprediction in the number of massive red galaxies. Our $z{>}0$ analysis suggests that this is at least partially due to an incorrect evolution of the stellar--halo mass relation{} with time (see Fig.~\ref{fig:no_evo_SHMrelation}). However, we also note that an excess of massive red galaxies is a common feature of traditional semi-analytic galaxy formation models which similarly tie the evolution of galaxies to the masses of their host dark matter haloes.
In such models, efficient feedback from AGN is typically responsible for truncating star formation in the most massive galaxies and hence causes the average stellar populations of these objects to become older and redder \citep{Bower2006,Croton2006,Mutch2011}. This is already mimicked within the framework of our formation history model through the turnover at the high $M_{\rm vir}${} (or $V_{\rm max}${}) end of the physics function. However, a more gradual cut-off may be required in the $M_{\rm vir}${} model case, in order to allow star formation to proceed for longer in the galaxies populating the most massive haloes. In this paper we have deliberately restricted ourselves to considering only a very simple form of the physics function. This has allowed us to take advantage of the resulting transparency when interpreting our findings. However, we stress that the model can easily be extended to include arbitrary levels of complexity. For example, we have chosen to use a log-normal distribution to define the form of the physics function. Although conceptually simple, the symmetric nature of this formalism implicitly assumes that the physical mechanisms responsible for quenching star formation in both low- and high-mass haloes scale identically with halo mass (Fig.~\ref{fig:cartoon}). This assumption has little physical justification, and in order to provide the best results it may be necessary to independently adjust the slope of the function at both low and high masses, and perhaps even as a function of redshift. In future work, we will address this issue by carrying out a full statistical analysis aimed at testing a number of different functional forms for both the physics and baryonic growth functions. Even within the reduced scope of this current work, we have learned a great deal from simply examining the high-redshift stellar mass function predictions of the formation history model. In particular, we have highlighted the need for the physics function to produce an evolution in the stellar--halo mass relation as a function of redshift in order to match the observed space density of massive galaxies at early times. Using $V_{\rm max}${} as the input property to the function introduces such an evolution, but in the wrong direction. Future improvements to the model could focus on finding a halo property that does evolve correctly with time and would thus be a more natural anchor of the physics function. This would avoid the need to artificially introduce an evolution to match the observations, as we have done here. Although the need for an evolving stellar--halo mass relation has been discussed in the literature, the precise form with which this evolution manifests itself is less clear. The results of subhalo abundance matching studies, such as that of \citet{Moster2013} (which we compare to in this work), are quite sensitive to the choice of input data sets and the technical aspects of the methodology. For example, \citet{Moster2013} find that the peak stellar--halo mass ratio increases from just 0.15\% at $z{=}4$ to 4\% at $z{=}0$, with a corresponding shift in position from a halo mass of $10^{12.5}\,{\rm M_{\sun}}$ to $10^{11.8}\,{\rm M_{\sun}}$. In contrast, an alternative study carried out by \citet{Behroozi2013b} finds very little change in either the normalization or peak location over a broad range in redshift. However, they do note a marked drop in the relation for the most massive haloes at $z{\la}2.5$.
This results in a qualitatively different prediction for the evolution of these massive haloes, such that their efficiency of converting baryons into stars is higher as a function of look-back time \citep[the opposite trend to that found by][]{Moster2013}. A potentially valuable use for the formation history model is to provide a general consistency check of subhalo abundance matching studies. The physics function could be adapted to exactly replicate the shape and evolution of the star formation efficiencies they predict \citep{Behroozi2013}, allowing their validity to be assessed when self-consistently applied to individual dark matter merger trees. The additional galaxy properties provided by the formation history model, such as star formation histories and colours, could be used to further compare and contrast the success of different abundance matching methodologies. For example, it has been suggested that galaxy mergers may result in a significant fraction of the in-falling satellite stellar mass being added to a diffuse intracluster light (ICL) component instead of to the newly formed galaxy \citep[][]{Monaco2006,Conroy2007}. The strength of this effect is expected to increase significantly with increasing halo mass and is included in the subhalo abundance matching study of \citet{Behroozi2013b} but not in that of \citet{Moster2013}. By simply adding a mass-dependent amount of stellar material to an ICL component during merger events, our simple formation history model could be easily adapted to explore such a scenario. We also note that the simplicity of our formation history model results in it being extremely fast and computationally inexpensive when compared to traditional semi-analytic models. This has allowed us to straightforwardly calibrate it against a number of observed relations using MCMC techniques. This procedure can also be trivially extended to provide statistically accurate (against select observations) mock catalogues for use with large surveys. Beyond what can be achieved using current halo occupation distribution (HOD) or subhalo abundance matching methods, catalogues produced using our model include both full growth histories and star formation rate information for each individual galaxy, with no need to add any artificial scatter to approximate variations in formation histories. In addition, the direct and clear dependence of the model on the halo properties of the input dark matter merger trees makes it an ideal tool for investigating a number of additional topics. Examples include comparing the effects of variations between different $N$-body simulations and halo finders on the physics of galaxy formation and evolution, investigating the predictions of simple monolithic collapse scenarios, contrasting various mass-dependent merger starburst models and exploring the ramifications of $N$-body simulations run with alternative theories of gravity. Finally, we note that a common criticism of semi-analytic models is the presence of complex degeneracies between large numbers of free parameters. The MCMC calibration procedure we employ highlights the complete absence of such degeneracies in our formation history model, further demonstrating its well behaved and understandable nature. \subsection{Potential model extensions} One area of the model presented in this work which may benefit from being extended is the treatment of mergers (cf. \S\ref{sec:generating_galaxy_pop}). In the current model, merger-driven starbursts occur immediately when an infalling satellite halo crosses the virial radius of its parent.
All of the newly formed stars are then added to the central galaxy of the parent halo. However, it is likely that there will be a significant time delay between the satellite crossing the virial radius of the parent and the actual merger between the satellite and central galaxy. In practice, due to the relatively large temporal spacing of our input dark matter merger trees (${\approx} 200{-}350\,{\rm Myr}$), we expect this slight inconsistency to have little effect on our results. However, if the formation history model is run on merger trees with a higher temporal resolution, this issue may become important. A future investigation of the clustering predictions of the formation history model, especially when split by galaxy colour, will allow us to fully assess the validity of these simplifications. Also, as discussed in \S\ref{sec:generating_galaxy_pop}, we assume that all freshly accreted baryonic material is available for star formation, regardless of whether or not it is already locked up in stars in the form of an infalling satellite galaxy. In practice this simply leads to an increased star formation efficiency for merger events. Testing an alternative model, in which the d$M_{\rm vir}${} term of the baryonic growth function includes only smooth accretion (i.e. excludes mass increases due to the accretion of satellite haloes) and star formation is allowed to proceed in satellite galaxies, indicates that merger-driven starbursts are an important feature of our model. Without this efficient star formation mechanism there is no combination of the available free parameters that allows the static model to reproduce the local colour-split stellar mass function. Another potential extension of the current model would be to track the cold gas content of each galaxy. This would allow star formation to be limited to using only that gas which is currently available and not already locked up in stars, thus providing more realistic instantaneous star formation rates for individual galaxies. Merger-driven starbursts could additionally be implemented as consuming some fraction of any available cold gas in the two progenitor galaxies. More advanced versions of this class of model have already been shown to be successful in reproducing the results of full hydrodynamic simulations \citep{Neistein2012}, and have been explored in other works using statistically generated mass accretion histories \citep[e.g.][]{Bouche2010}. However, it is important to recognize that our aim with the formation history model is to provide a simple, physically motivated, ``toy'' model. By adding the ability to track various reservoirs of material, or other similar complexities, we would arrive at what is essentially a simplified semi-analytic galaxy formation model, which is not the goal of this work. \section{Conclusions} \label{sec:conclusions} In this work we introduce a simple model for self-consistently connecting the growth of galaxies to the formation history of their host dark matter haloes. This is achieved by directly tying the time-averaged change in mass of a halo to the star formation rate of its galaxy via two simple functions: the ``baryonic growth function'' and the ``physics function'' (Eqns.~\ref{eqn:bgf},\ref{eqn:physicsfunc_mvir}). We utilize $N$-body dark matter merger trees to provide self-consistent growth histories of individual haloes that naturally include scatter due to varying formation histories.
This allows us to produce full star formation histories for individual objects, and thus provide predictions for secondary properties such as galaxy colour. While closely related to other models in terms of its basic methodology \citep{Bouche2010,Cattaneo2011}, our model has a number of important generalizations which enhance its utility. In particular, we implement a single, unified physics function which encapsulates the effects of all of the intertwined baryonic processes associated with galactic star formation and condenses them down into a simple mapping between star formation efficiency and dark matter halo properties. The qualitative form of this function is motivated by our general knowledge of galaxy evolution; however, in this work we make no attempt to directly tie it to individual physical processes or their particular scalings with halo properties. As well as introducing this new model, we demonstrate its ability to replicate important observed relations such as the galactic stellar mass function, and also illustrate some examples of its potential for investigating different theories of galaxy formation and evolution. Our main results can be summarized as follows. \begin{enumerate} \item Motivated by the observed suppression of star formation efficiency in both the most massive and least massive dark matter haloes, we begin by parametrizing the physics function as a simple, non-evolving, log-normal distribution with a single independent variable of either halo virial mass, $M_{\rm vir}$, or maximum circular velocity, $V_{\rm max}$ (Fig.~\ref{fig:cartoon}). \item With just three free parameters controlling the position, normalization and dispersion of the peak star formation efficiency, we show that the formation history model can successfully reproduce the observed red and blue stellar mass functions at redshift zero. Assuming a suitable choice of the parameters, this result is independent of the use of $M_{\rm vir}$ or $V_{\rm max}$ as the dependent variable of the physics function (Fig.~\ref{fig:z0_smf}). \item For the purposes of replicating the stellar mass functions across a wide range of redshifts, we find our static model to be inadequate. This is due to its inability to produce the correct evolution of the stellar--halo mass relation with time (Figs~\ref{fig:3panel_smf}~and~\ref{fig:no_evo_SHMrelation}). \item We therefore investigate the use of redshift as a second dependent variable of the physics function in order to control the position and normalization of the peak star formation efficiency with time. Using this simple adaptation alone, the formation history model is able to better reproduce the observed high-redshift stellar mass functions out to $z{=}3.5$ (Figs~\ref{fig:evo_SHMrelation}~and~\ref{fig:evo_3panel_smf}) whilst still maintaining a good reproduction of the $z{=}0${} colour-split stellar mass function{}. \item By statistically calibrating the free model parameters using MCMC techniques throughout this work, we are able to use the marginalized posterior likelihood distributions to demonstrate the well behaved and transparent nature of our simple model (Fig.~\ref{fig:z0_smf_probs}). \end{enumerate} In order to demonstrate its construction and utility we have presented one of the simplest forms of the formation history model.
However, a fundamental strength of its construction is that it can be easily extended to arbitrary levels of complexity in order to investigate a whole host of physical processes associated with galaxy formation and evolution, some general examples of which we have outlined in \S\ref{sec:discussion}. In future work we will investigate the predictions made when using alternative forms of the baryonic growth and physics functions. We will also extend the model to investigate the birth of super-massive black holes and the evolution of the quasar luminosity function. \section*{Acknowledgements} Both SJM and GBP are supported by the ARC Laureate Fellowship of S. Wyithe. SJM also acknowledges the support of a Swinburne University SUPRA postgraduate scholarship. DJC acknowledges receipt of a QEII Fellowship awarded by the Australian government. The authors would like to thank A. Knebe for useful discussions, as well as the referee, E. Neistein, for numerous useful comments which have helped to improve the content of this work. The Millennium Simulation used as input for the formation history model was carried out by the Virgo Supercomputing Consortium at the Computing Centre of the Max-Planck Society in Garching. Halo catalogues from the simulation are publicly available at http://www.mpa-garching.mpg.de/millennium/. \bibliographystyle{mn2e}
\section{Introduction}\label{intro} The study of extrasolar planetary systems has become a growing field of modern astronomy in the last ten to fifteen years. After the discovery of an exo-Jupiter in close orbit around 51 Peg by \citet{Mayor}, the number of detected exoplanets has greatly increased, thanks to improved astronomical methods in terms of accuracy and sensitivity. A large proportion of exoplanets have been discovered by means of indirect methods such as radial velocities (RV), but the direct detection -- or even direct imaging -- of planetary companions constitutes a further step in our understanding of the physics of these objects. The case of exo-earths around Solar-type stars is of particular interest. The detection of these low mass planets is particularly challenging because of the tiny effects they induce on their parent star. Direct observation of Earth-like planets is limited by the extreme contrast and the small angular separation between the two bodies. Probing the habitable zone of such solar systems would face flux contrasts of about 10$^{\rm 6}$--10$^{\rm 7}$ in the mid-infrared with angular separations below 100 mas \citep{Angel86}.\\ Space-based missions like Darwin in Europe \citep{Fridlund04} or TPF-I in the United States \citep{Beichman} aim to detect and characterize Earth-size planets using mid-infrared nulling interferometry. The technique is based on starlight cancellation by means of destructive interferences in the mid-infrared, i.e. from 5 $\mu$m to 20 $\mu$m \citep{Bracewell,Leger96,Angel97}. This spectral band presents the advantage of a star--companion contrast lower by three orders of magnitude with respect to the visible domain \citep{Bracewell2}.\\ Nulling interferometry faces several instrumental obstacles that degrade the interferometric null. These include fine path-length control, intensity balance and polarization matching between the incoming beams, the requirement of a broadband achromatic $\pi$ phase shifter \citep{Labeque04} and fine control of the wavefront errors. As a consequence, the experimental demonstration of a deep broadband null has been actively pursued during recent years, first in the visible range \citep{Serabyn99,Wallace00,Haguenauer06} then at 10 $\mu$m \citep{Wallace05,Gappinger05}, in order to validate critical technologies. Among the different instrumental issues mentioned above, the control of the wavefront corrugations is of major importance since the null is greatly degraded either by low-order errors (i.e. residual optical path difference (OPD), pointing errors, optical aberrations) or high-order errors (spatial high-frequency defects due to optics and coatings) \citep{Leger95}. A reduction of wavefront errors is achievable using singlemode waveguides \citep{Shaklan88}, a solution commonly used in optical and near-infrared interferometry with optical fibers or integrated optics. \citet{Mennesson} have also underlined theoretically the advantages of singlemode waveguides for mid-infrared nulling interferometry to relax the strong instrumental constraints set by a deep nulling ratio. In a previous paper \citep{LabadieAA}, we presented the work achieved in the context of research and development on mid-infrared singlemode guided optics for stellar interferometry. The original concept was based on the manufacturing of hollow metallic waveguides (HMW) and their characterization in the laboratory via polarization analysis of the transmitted flux.
That paper did not include results on nulling extinction ratios. As a further step, conductive waveguides, whose application to nulling interferometry has been theoretically investigated by \citet{Wehmeier}, have been used in the present study to explore their impact as modal filters in a monochromatic nulling experiment.\\ The present paper is organized as follows. First, we describe the goals of the experiment and the assumptions made. Then, we describe the experimental setup and the adopted protocols. The final section presents the results on dynamic and static measurements. \section{Strategy for the null measurement} \subsection{Dynamic and static null}\label{strategy1} The primary goal of this study is to obtain a direct comparison between the monochromatic extinction ratio that can be achieved with and without a singlemode HMW, and to demonstrate that the effect of the residual wavefront errors on the interferometric null can be minimized using such a component. To obtain a valid comparison of the two mentioned cases (with and without waveguide), some precautions need to be taken regarding the physical meaning of the measured quantities. If we consider a monochromatic interferometer using the co-axial recombination scheme and fed by an unresolved source, the interferometric intensity $I$ measured on the single detector is given by \begin{eqnarray} I&=&I_{1}+I_{2}+2\sqrt{I_{1}I_{2}}V\cos(\phi)\label{fringeseq} \end{eqnarray} \noindent $I_{1}$ and $I_{2}$ are the intensities in each arm and $\phi$ the phase difference due to OPD. $V$ is the visibility term, with 0$<$$V$$<$1, due to low-order and high-order errors of the recombined wavefronts. The monochromatic rejection ratio is extracted from the visibility term $V$ through \begin{eqnarray} \rho&=&\frac{1+V}{1-V}\label{nullterm} \end{eqnarray} \noindent In a real nulling experiment, $V$ is also affected by the spectral bandwidth of the emission line if a laser source is used. This can have some impact if a deep nulling ratio is expected, but in the present context this issue can be neglected as explained in Sect.~\ref{tuning}.\\ \\ Following the preliminary results of \citet{LabadiePhD}, we propose to perform the nulling measurement in two different ways: a {\it dynamic} mode and a {\it static} mode. In the first approach (i.e. {\it dynamic}), a large number of fringe patterns containing two periods is recorded with a good sampling of the constructive and destructive fringes. An {\it a posteriori} measurement of the average photometric values $<$$I_{1}$$>$ and $<$$I_{2}$$>$ is performed to correct for the flux unbalance. Then, the {\it statistical} value $V_{i}\cos(\phi)$=($I_{i}$-$<$$I_{\rm 1}$$>$-$<$$I_{\rm 2}$$>$)/(2$\sqrt{<I_{\rm 1}><I_{\rm 2}>}$) for the $i^{th}$ occurrence is computed. An average value $<$$V$$>$ of the visibility is computed using the well-known statistical mean estimator $\bar{\mu}$ given by (1/$n$)$\sum_{i=1}^n V_{i}$, where $n$ is the number of samples \citep{Protassov}. The error bar on the mean visibility is computed with the standard deviation estimator $\bar{\sigma}$ = $\sqrt{(1/(n-1))\sum_{i=1}^n (V_{i}-\bar{\mu})^2}$. The advantage of the dynamic method is that the error bar on the visibility is not obtained by computing the error propagation from $<$$I_{1}$$>$ and $<$$I_{2}$$>$ -- this would be the case if only one occurrence of the fringe pattern was recorded -- but from the variance $\sigma^{2}_{V}$ of the sample $V_{i}$, which includes the uncertainty on the photometric channel.
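As an illustration, the following minimal Python sketch implements the estimators described above. The variable names are ours and purely illustrative; we assume offset-corrected scans sampled finely enough that the fringe maximum gives $\cos(\phi){=}1$.

\begin{verbatim}
import numpy as np

def mean_visibility(fringes, I1, I2):
    # <I1>, <I2>: average photometric values used to correct
    # every fringe scan for the flux unbalance
    I1m, I2m = np.mean(I1), np.mean(I2)
    V = []
    for scan in fringes:
        v_cos_phi = (np.asarray(scan) - I1m - I2m) \
                    / (2.0 * np.sqrt(I1m * I2m))
        V.append(v_cos_phi.max())  # constructive fringe: cos(phi) = 1
    V = np.asarray(V)
    # mean and standard deviation estimators of the text
    return V.mean(), V.std(ddof=1)
\end{verbatim}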
The second approach (i.e. {\it static}) involves manually optimizing the destructive state by an iterative process and recording the transmitted signal over a significant lapse of time. The nulled signal is compared to the constructive output obtained with the same principle. Here, the intensity unbalance is minimized {\it before} nulling the signal, either by partially masking the brightest channel with a small screen translated into the beam, or by adjusting the tilt of this same channel in order to degrade the corresponding coupling efficiency (see Sect.~\ref{tiltimpact}). The advantage of the {\it static} method is that it can be used to obtain information on the temporal stability of the null. \subsection{Impact of the tilt on the photometric calibration}\label{tiltimpact} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{combination1.eps} \caption{Schematic view of modal filtering of the interferometric beams {\it before} recombination. By tilting the wavefront in one channel, the coupling efficiency is degraded. This impacts only the amplitude of the fundamental mode, not its phase. }\label{combination1} \end{figure} Let us suppose that the incoming beams are coupled into two identical singlemode waveguides {\it before} beam recombination, as occurs, for instance, in an integrated optics beam combiner. The ratio of power coupled into each waveguide is given by the overlap integral between the electric field of the beam focused on the waveguide input and the electric field of the fundamental mode. For the sake of simplicity, we consider here only the one-dimensional case. We also consider identical linear polarizations for the overlapped fields, so that the vector product simply becomes a scalar product. Let us model the fundamental mode by a field distribution $S(x)$ and the excitation field of the focused beam by $E(x)$, where $x$ is the linear coordinate in the focal plane. For the purpose of numerical simulations, $S(x)$ and $E(x)$ are often modeled by a Gaussian and a sine cardinal function, respectively \citep{Jeunhomme}. Under the previous conditions, the power guided by the fundamental mode of a singlemode waveguide is given by \begin{eqnarray} P(x_{\alpha}) & = &\frac{\left|\int_{\infty} E(x-x_{\alpha})\cdot S(x)\cdot dx\right|^{2}}{\int_{\infty} \left|S(x)\right|^{2}dx}\label{couplingfactor} \end{eqnarray} \noindent The term $x_{\alpha}$ = $\alpha$$\cdot$$f$ is the linear displacement in the focal plane of the focusing optics with respect to the geometric center of the waveguide. The value of $x_{\alpha}$ is linked to the relative tilt of the wavefronts $\alpha$. The integrals are computed over an infinite section transverse to the propagation axis. For two unbalanced channels $P_{\rm 1}$ and $P_{\rm 2}$, a tilt on the brightest beam induces a linear translation in the focal plane, which modifies the coupling efficiency. The term $P(x_{\alpha})$ decreases but the wavefront phase inside the waveguide remains unchanged due to the fundamental property of the singlemode waveguide. The principle is illustrated in Fig.~\ref{combination1}.\\
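The loss of coupled power with tilt can be estimated numerically from Eq.~\ref{couplingfactor}. The following Python sketch models $S(x)$ as a Gaussian and $E(x)$ as a sine cardinal, as mentioned above; the waist $w0$ and focal-spot scale $a$ are arbitrary illustrative values (not measured ones), and $E$ is power-normalized so that the result reads as a relative coupling efficiency.

\begin{verbatim}
import numpy as np

def coupled_power(x_alpha, w0=5e-6, a=5e-6):
    # one-dimensional overlap integral of Eq. (couplingfactor);
    # w0 and a are illustrative values in metres
    x = np.linspace(-100e-6, 100e-6, 8192)
    dx = x[1] - x[0]
    S = np.exp(-x**2 / w0**2)           # fundamental mode S(x)
    E = np.sinc((x - x_alpha) / a)      # focused field E(x - x_alpha)
    E = E / np.sqrt(np.sum(E**2) * dx)  # normalize the input power
    return abs(np.sum(E * S) * dx)**2 / (np.sum(S**2) * dx)
\end{verbatim}

Increasing $x_{\alpha}$, i.e. tilting the bright channel, lowers its coupled power while leaving the guided phase unchanged, which is how the photometric unbalance is reduced in practice.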
At the time of this study, we did not have identical singlemode waveguides to filter each beam separately. Thus, we implemented modal filtering {\it after} beam recombination, which has the advantage of correcting any additional phase error induced by the beam splitter. Also, even if no wavefront corrugation occurs after recombination, two slightly different properties of the waveguides (dimensions, metallic coating...) would introduce a differential effect on the filtered beams that would degrade the null.\\ The principle of modal filtering remains unchanged when it is implemented {\it before} or {\it after} beam combination. From Eq.~\ref{couplingfactor}, the integrated {\it amplitude} coupled to the fundamental mode can be written as \begin{eqnarray} A(x_{\alpha}) & = &\frac{\int_{\infty} E(x-x_{\alpha})\cdot S(x)\, dx}{\left(\int_{\infty}\left|S(x)\right|^{2}dx\right)^{1/2}} \end{eqnarray} \noindent where $A$ is a complex number taking into account the phase shift $\phi_{c}$ between $E$ and $S$. When modal filtering is implemented {\it after} beam combination, the power coupled to the fundamental mode from two $\pi$ phase-shifted input fields $E_{1}$ and $E_{2}$ is \begin{eqnarray} P_{0}(x_{\alpha}) & = &\frac{\left|\int_{\infty} \left[E_{1}(x)-E_{2}(x-x_{\alpha})\right]S(x)\, dx\right|^{2}}{\int_{\infty}\left|S(x)\right|^{2}dx}\label{before} \end{eqnarray} \noindent If Eq.~\ref{before} is rewritten as \begin{eqnarray} P_{0}(x_{\alpha}) & = & \left|A_{1}(0)-A_{2}(x_{\alpha})\right|^{2}\label{after} \end{eqnarray} \noindent then the equation corresponds to the case where the input fields $E_{1}$ and $E_{2}$ are coupled separately to the fundamental mode of the singlemode waveguide prior to recombination. This results from the fact that modal filtering is, just like beam recombination, a linear process with respect to field amplitudes. Thus the two operations commute. Such a property is used to minimize the intensity unbalance in the same way as shown in Fig.~\ref{combination1}. \section{Experiment description}\label{expdescription} \subsection{The laboratory setup}\label{setup} The different elements of the setup were purchased in 2004 to first mount the injection bench used for the study in \citet{LabadieAA}. The setup was then adapted for nulling measurements during 2005. The results presented in this paper are a continuation of the preliminary results obtained in \citet{LabadiePhD}.\\ Our experiment consists of implementing a pupil-plane combination scheme in a classical Michelson interferometer. The layout of the testbench, described in Fig.~\ref{layout}, is adapted from the injection setup presented in \citet{LabadieAA}. The chip containing the singlemode waveguide {\it WG} used as a modal filter is mounted on a three-axis positioner which allows precise placement of the sample input at the focal point of $L_{\rm 1}$. The hollow metallic waveguide is 1-mm long. Since in a conductive waveguide the electric field is totally confined within the metallic cavity (the field is null in the metallic walls), the fundamental mode has the same size as the geometrical aperture of the waveguide, i.e. $\sim$10 $\mu$m in our case \citep{LabadieAA}. As a consequence, fast optics with f/1 or smaller is used for $L_{\rm 1}$ and $L_{\rm 2}$ to couple -- and decouple -- light for waveguides with high numerical aperture. The infrared source is a CO$_{2}$ laser emitting at 10.6 $\mu$m which is co-aligned with a 0.632 $\mu$m He-Ne laser. The temperature controller of the source helps to lock the laser on a given emission line. The $P_{\rm 22}$ line lasing at 10.611 $\mu$m with a spectral bandwidth $\Delta \nu$ $\approx$ 500 MHz was used in this study. A set of calibrated densities (not represented in the schematic view of Fig.~\ref{layout}) is placed in the optical train to attenuate the infrared laser, avoiding any damage to the sample or the detection stage.
The beam is reshaped to a diameter of 25 mm thanks to the beam expander {\it BE} which has a magnification {\it m} $\approx$ 7.\\ \begin{figure}[t] \centering \includegraphics[width=8.5cm]{banc.eps} \caption{Layout of the monochromatic interferometric bench. $S_{\rm 1}$: infrared CO$_{\rm 2}$ laser source; $S_{\rm 2}$: visible HeNe alignment source; $BS_{\rm 1}$: ZnSe beamsplitter; $C$: chopper; $BE$: beam expander; $BS_{2}$: ZnSe beam combiner; $DL$: delay line; $M_{1}$: interferometer fixed mirror; $L_{\rm 1}$: $f$/1 aspheric injection lens; $L_{\rm 2}$+$L_{\rm 3}$: afocal imaging system; $WG$: waveguide sample; $D$: HgCdTe detector; $L.I.D$: lock-in detection; $PC$: computer for data processing. Dashed lines represent electric wires.}\label{layout} \end{figure} \noindent A zinc selenide (ZnSe) beam splitter ($BS_{2}$ in Fig.~\ref{layout}) is used in a double-pass scheme with the flat mirrors $M_{\rm 1}$ and $DL$ to separate and recombine the wavefronts. The 8' wedge between the two faces of the beam splitter prevents interference from multiple reflected beams. In addition, an anti-reflection coating is applied to the rear face of $BS_{2}$, making the front face the reference plane for beam splitting. The two mirrors $M_{\rm 1}$ and $DL$, 25 mm in diameter, are placed in tip-tilt mounts. In addition, $DL$ is translated by a motor with a piezo actuator to provide the delay line.\\ After $L_{\rm 1}$ couples the light into the waveguide, the output is re-imaged onto a 77 K HgCdTe single-pixel detector $D$ with the afocal system composed of the plano-convex lenses L$_{\rm 2}$ and L$_{\rm 3}$. The f/2 numerical aperture of $L_{3}$ produces a 50-$\mu$m point spread function (PSF) that completely fits into the 500-$\mu$m square chip of the detector. Because the electronics of the HgCdTe detector is insensitive to the continuous component of the signal, the laser is chopped at the specific frequency of 191 Hz to avoid any contamination from the 50 Hz harmonics of the AC mains supply. The chopper reference and the detected signals are processed through a classical lock-in amplifier to filter out the unmodulated background. Finally, an 18-bit analog-to-digital converter card in a PC records the extracted signal. The delay line scans about two fringes, which are sampled over 2048 points. Each point represents an increment of 12 nm every 90 ms, corresponding to a scan of 190 seconds. The piezo actuator is controlled to repeat the same OPD scan of the delay line. As shown in Fig.~\ref{fringe1}, a small shift of the dark fringe is observed due to an uncertainty in the absolute re-positioning of the delay line. However, since this remains below $\sim$500 nm, it has a negligible effect on the measurement.\\ The experiment takes place in an open-air, non-cryogenic environment. In addition, no active control of the OPD or of the relative tilt of the wavefronts is implemented so far. Thus the measurements are sensitive to air turbulence, mechanical vibrations and electronic drifts. \subsection{Tuning the interferometer}\label{tuning} \begin{figure}[b] \centering \includegraphics[width=5.5cm, angle=0]{rawsignal.ps} \caption{Raw signals as a function of motor steps involved in the dynamic measurement of the interferometric null. The black curve (crosses) is the interferometric signal. The blue (squares) and magenta (diamonds) curves are the photometric channels. The green curve (triangles) is the detector offset. One scan has 2048 points recorded over 190 s.
The linear step is 12 nm.}\label{rawdata} \end{figure} The interferometer is initially adjusted without any waveguide in the optical path. Using the tip-tilt mounts of $M_{1}$ and $DL$, the two channels are first superimposed in constructive mode in the image plane of $L_{3}$, using a mid-infrared camera for the coarse alignment. The camera is then replaced with the HgCdTe detector connected to the lock-in amplifier. The OPD is then adjusted to reach a destructive state, followed by a fine tuning of the wavefront tilt. This operation is performed under destructive rather than constructive interference conditions because a tiny variation of the transmitted signal due to the relative tilt of the wavefronts is easily detectable with an almost nulled signal. It is not necessary to have perfect destructive interference to perform this tuning: a phase shift close to $\pi$ between the wavefronts is sufficient according to Eq.~\ref{fringeseq}.\\ Once the deepest achievable destructive signal is obtained from the tip-tilt adjustment, about ten scans of each photometric channel are recorded by successively masking $M_{1}$ and $DL$. The setup is then returned to interferometric mode and about fifty scans of the fringe pattern are taken with the same geometrical OPD. The detector offset is recorded with the same procedure by simply turning off the laser sources. This measurement is made once before and once after the fringe acquisition. The plots of Fig.~\ref{rawdata} show a single occurrence of the different raw signals involved in the calibration of the null.\\ The same experimental procedure is followed when measuring the interferometric null using a singlemode waveguide in dynamic mode. The waveguide is simply introduced in the optical path using the three-axis positioner and its position is optimized by maximizing the transmitted flux. The lens $L_{3}$ is then translated by 1 mm -- the geometrical length of a conductive waveguide -- to optically conjugate the waveguide output plane with the detector plane.\\ \begin{figure*} \begin{minipage}{\textwidth} \centering \subfigure[]{\includegraphics[width=5.5cm, angle=0]{fringe1.ps}\label{fringe1}} \hspace{0.5cm} \subfigure[]{\includegraphics[width=5.7cm, angle=0]{nulldepth.ps}\label{nulldepth}} \hspace{0.5cm} \subfigure[]{\includegraphics[width=5.43cm, angle=0]{visibility.ps}\label{fringe3}} \end{minipage} \caption{(a): Destructive fringes corrected for intensity unbalance and detector offset for five different occurrences of the fringe pattern. The position of the null remains unchanged within $\sim$500 nm of the delay line. One motor step is 12 nm. (b): Null depth as a function of delay line position in logarithmic scale with a modal filter. The four curves have been shifted horizontally for better visibility. The curves show a null depth of a few times 10$^{-4}$ and a maximum null of 7.7$\times$10$^{-5}$ is observed for the magenta curve (solid line). (c): Plot of the visibility for ten scans in dynamic mode. The two points with error bars give the mean visibility with and without a singlemode HMW after beam recombination.}\label{triple} \end{figure*} When using geometric translation with a mirror as a phase shifter, the interferometric null is wavelength dependent. Although the laser source has a very long coherence length, it cannot be considered as infinite in the frame of a co-axial nulling experiment.
For a source with spectral width $\Delta \lambda$, the corresponding visibility loss is given by $1-{\rm sinc}(\pi\Delta z\Delta\lambda/\lambda^{2})$, where $\Delta z$ is the distance from the zero-path-difference point. In this experiment, the interferometer is tuned to zero OPD by measuring geometrically the position of the translating mirror with respect to the uncoated face of the beam splitter. The accuracy of this measurement is better than $\Delta z$ = 1 mm, which corresponds to a visibility loss $<$ 2.2$\times$10$^{-6}$ for the $\sim$0.187 nm spectral width of the $P_{\rm 22}$ emission line of the laser. \section{Experimental results}\label{results} The graph of Fig.~\ref{nulldepth} presents the null depth for four different dynamic fringe acquisitions of our sample with modal filtering. Each data set is corrected for the detector offset and for the photometric unbalance using the same quantities $\bar I_{\rm 1}$, $\bar I_{\rm 2}$ for all the sets (see Sect.~\ref{strategy1}). The values $\bar I_{\rm 1}$=1.70568$\pm$0.008 V and $\bar I_{\rm 2}$=1.68804$\pm$0.008 V were recorded for the two photometric channels, which results in a photometric unbalance of only $\sim$1\%$\pm$0.01\% before correction. The different curves have been artificially shifted by 150 motor steps for a better visibility of the plot. A given color represents one snapshot of the normalized fringe pattern after correction. The presented curves show a null depth of a few times 10$^{-4}$ on the left part of the graph between 1 and 1000 counts. In one case the null reaches 7.7$\times$10$^{-5}$ (magenta solid line), corresponding to a rejection ratio of 12,990:1. On the right part of the graph (i.e. after motor step 1000), one can observe that the null depth is slightly degraded for all four curves. A possible explanation is that the motor of the delay line presents a slight but systematic drift when leaving the zero-OPD position (around count i=100), which was used for the initial alignment of the interferometer (see Sect.~\ref{tuning}). Although this hypothesis needs to be experimentally confirmed, it appears as a possible limitation for a setup like ours which has no active control of mechanical drifts.\\ In the graph of Fig.~\ref{fringe3}, we plot the dynamic measurements of the interferometric visibility for ten scans, represented by the blue squares. The visibility $V$=1.0 is plotted as a green dashed line. The two single points represent the mean visibility and the corresponding error bar when modal filtering is implemented (black triangle) and when no filter is used in the optical path (red triangle). These two points are obtained through statistical measurements of the visibility in dynamic mode. Some points of the blue statistical series appear above unit visibility, which would mean that the rejection ratio is infinite. However, this only corresponds to a bias in the subtraction of the mean detector offset: at the instant when such a point is acquired, the instantaneous value of the offset was likely below the mean offset. The visibility curve in Fig.~\ref{fringe3} shows a correlation from one dynamic scan to the next. Its origin is at the moment unknown and this effect is included in the error bar of the mean visibility. The average visibility is found to be $V$=0.999486$\pm$0.0007, which translates to a mean extinction ratio $\rho^{-1}$=2.5$\times$10$^{-4}$.
We obtain good accuracy since the data are not affected by laser intensity fluctuations in the coherent combination (the power drifts affect the two channels of the interferometer equally). The error bar overlaps the visibility $V$=1. This indicates that the achievable null might be better than the previous value, although ultimately limited by the dynamic range of the detector and lock-in system, which reaches the 10$^{-6}$ level.\\ The second point -- placed arbitrarily at the fourth occurrence -- gives the average visibility and error bar of the dynamic measurement without a modal filter in the optical path and with the same alignment settings. The degradation can be clearly observed, with an average visibility of $V$=0.976$\pm$0.001. The corresponding extinction ratio is 1.2$\times$10$^{-2}$.\\ \\ \noindent The dynamic acquisition method is an interesting statistical approach to measuring the achievable extinction ratio, but because the destructive fringe is scanned, the method does not provide much information on the stability of the null with the current setup. Such information is obtained through a {\it static} measurement of the null. The principle is to search manually for the optimal destructive signal with the piezo motor and then to minimize as much as possible the intensity unbalance, either by adjusting the coupling of the strongest channel or by partially masking it with a small screen that can be translated into the beam (see Sect.~\ref{strategy1}). Prior to any null measurement, the time stability was investigated to understand the potential impact of external constraints (vibrations, drifts...).\\ \begin{figure}[t] \centering \includegraphics[width=7.5cm, angle=0]{stability.ps} \caption{Nulled output as a function of time. This plot shows that the time scale over which the null can be considered stable is 10 to 20 seconds. The strong spikes come from the sensitivity of the setup to vibrations and external constraints.}\label{stability} \end{figure} The plot of Fig.~\ref{stability} shows the raw uncalibrated null signal -- i.e. without subtraction of the detector offset -- recorded over 170 seconds. This plot does not correspond to the best-null case measured later on. The plot shows that the nulled signal remains stable for a maximum of 10 to 20 seconds at this null level. Several spikes due to nulling degradation appear after this time scale. These spikes are very likely caused by vibrations, from which the setup is not isolated at the moment. We also observe a long-term drift of the signal at the end of the plot, suggesting that some positioning element in the setup might suffer from a constraint at low frequencies. This result implies that a destructive state can be steadily maintained in the best case for only ten seconds or so.\\ A static measurement of the null was performed with the two methods of photometric equalization described above. For the equalization by screen translation, the results are shown in Table~\ref{finalnull}. \begin{table}[h] \centering \begin{tabular}{| l | c |} \hline Constructive signal & 620 $\pm$ 4 mV \\ Uncalibrated null & 0.042 $\pm$ 0.005 mV \\ Detector offset & 0.007 $\pm$ 0.003 mV \\ \hline Extinction ratio & 5.6$\times$10$^{-5}$ (17,820:1) \\ \hline \end{tabular}\\ \caption{Experimental data obtained during the null measurement in static mode. The extinction ratio is computed from the mean values of the voltages.}\label{finalnull} \end{table} \noindent The recorded null value was obtained over a period of 10 seconds with a good level of accuracy.
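For clarity, the quoted extinction ratio follows directly from the offset-corrected null signal normalized by the constructive signal, \begin{displaymath} \frac{(0.042-0.007)\,{\rm mV}}{620\,{\rm mV}} \simeq 5.6\times10^{-5}, \end{displaymath} in agreement with the value reported in Table~\ref{finalnull}.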
On the other hand, adjusting the coupling efficiency to equalize the photometry did not provide better results than those presented in Fig.~\ref{triple}.\\ \\ To what extent can the {\it dynamic} and {\it static} approaches be compared? The plot in Fig.~\ref{stability} shows a null stability over a time scale of 10 to 20 seconds, which is shorter than the duration of a single scan in {\it dynamic} mode (190 s). However, within these 190 seconds, the interference is deeply destructive for approximately 10 seconds (i.e. 100 motor steps, which corresponds to 9 s, see Fig.~\ref{rawdata}), while the constructive state is barely affected by the setup instabilities. The {\it static} approach aims at giving {\it one} occurrence (possibly the best one) of the {\it dynamic} mode, in order to confirm the potential of our modal filters. Since the {\it dynamic} mode cannot be guaranteed to fall on an optimal destructive state within one given scan, this aspect is reflected in the experimental dispersion of the corrected visibilities.\\ Because the {\it best case} nulling ratios obtained are comparable in the two approaches (7.7$\times$10$^{-5}$ in {\it dynamic} mode, 5.6$\times$10$^{-5}$ in {\it static} mode), it could be initially inferred that the two methods are equivalent. This could be true at the experimental level, but not at the level of the instrument system. The {\it static} approach would clearly be worthwhile in a vibration-free environment, although such an environment is usually difficult to obtain, since it would permit longer integration times on the dark fringe and would limit the number of detector read-outs and the bleeding effects due to switching between dark and bright fringes. On the contrary, the {\it dynamic} mode appears more favorable when external constraints affect the experiment because: 1. the instrument stability could be quantified statistically through the standard deviation of the measured visibilities; 2. the scans corresponding to a deep null could be isolated in a deterministic way.\\ In addition, we think that the importance of the stability issue favors the use of compact and stable integrated optics beam combiners. \section{Conclusions}\label{disc} The experiment presented in this paper has shown that the use of a singlemode conductive waveguide as a modal filter permits a significant improvement of the extinction ratio in a 10-$\mu$m nulling interferometer. The ratios were measured in a dynamic way by acquiring a large number of fringe patterns, which gave a statistical value of the null. An average ratio of $\rho^{-1}$=2.5$\times$10$^{-4}$ has been measured, with an error bar of $\pm$0.07\% on the visibility. A deeper null may have been possible in theory given the error bars, but our setup did not permit us to measure it in dynamic mode. A static measurement of the null has provided a single occurrence of 5.6$\times$10$^{-5}$. With the current setup, such a null can be maintained for approximately 10 to 20 seconds in the best case.\\ It is clear that in the frame of a mid-infrared nulling experiment, proper isolation of the setup from external vibrations, electronic drifts of the delay line and variations of the local temperature is necessary. These types of issues are at the moment a limiting factor.\\ This study aims to show that singlemode conductive waveguides can be efficient modal filters over a distance of 1 mm in the context of mid-infrared nulling interferometry.
Theoretical studies on the filtering length of conductive waveguides for infrared radiation suggest that the waveguides could be made even shorter, to compensate for high propagation losses \citep{Tiberini07}, without altering the filtering capabilities. \begin{acknowledgements} This work was supported by the \emph{European Space Agency} ($\it {ESA}$) under contract 16847/02/NL/SFe, with additional funding from the \emph{French Space National Agency} ($\it {CNES}$) and {\it Alcatel Alenia Space}. The authors thank Dr. T. Herbst for fruitful comments. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Recently there has been a considerable amount of research devoted to the following family of questions: suppose that square matrices $A$ and $B$ fulfil some relation ``approximately''. Can we then perturb $A$ and $B$ so that the resulting matrices $A'$ and $B'$ actually fulfil the relation in question? Let us make this more precise by reviewing some historical and more recent examples. We start with the most famous one. Paul Halmos \cite{Hal} posed the following problem, known ever since as the \emph{Halmos problem}: Let $\de >0$, and suppose that $A$ and $B$ are self-adjoint matrices of norm $1$. Can we find $\eps>0$ such that if the operator norm of $AB-BA$ is at most $\eps$ then there exist self-adjoint matrices $A'$ and $B'$ such that $A'B' = B'A'$ and such that the operator norms of $A'-A$ and of $B'-B$ are at most $\de$? An affirmative answer to this question was given by Huaxin Lin~\cite{HL} (see also \cite{FR} and~\cite{Has}). On the other hand, Voiculescu proved that for integers $d\ge 7$ there exist $d\times d$ unitary matrices $U_d$, $V_d$ such that \begin{itemize} \item $\| U_d V_d-V_d U_d\|=|1-e^{2\pi i/d}|$, and \item for any pair $A_d,B_d$ of commuting $d\times d$ matrices we have $$ \|U_d-A_d\|+\|V_d-B_d\|\ge \sqrt{2-|1-e^{2\pi i/d}|}-1. $$ \end{itemize} In other words, in the original Halmos problem, if we replace the assumption that $A$ and $B$ are self-adjoint with the assumption that $A$ and $B$ are unitary, then the answer is negative, even if we do not demand that the nearby commuting matrices $A'$ and $B'$ be unitary. Furthermore, counterexamples were found by Davidson~\cite{Dav} if we ask about three or more almost commuting self-adjoint matrices. A similar question had previously been asked by Rosenthal \cite{Ros}, where the ``closeness'' and ``almost commutativity'' of the matrices were defined using the normalised Hilbert-Schmidt norm in place of the operator norm. Affirmative answers to this version of the Halmos problem were given for arbitrarily large finite families of normal operators by various authors \cite{HW},\cite{FS},\cite{Gle}. More recently the analogous question was studied in~\cite{AP1} for permutations and the Hamming distance. Arzhantseva and Paunescu showed the following result, which was a direct motivation for the investigations presented in this article. For every $\de>0$ there exists $\eps>0$ such that if $A$ and $B$ are permutations such that the normalised Hamming distance between $AB$ and $BA$ is at most $\eps$ then we can find permutations $A'$ and $B'$ such that $A'B' = B'A'$ and the normalised Hamming distances between $A$ and $A'$, as well as between $B$ and $B'$, are both bounded by $\de$. The corresponding result is true also for an arbitrary finite number of permutations. In this paper we study the analogous question for the \emph{rank metric}. We refer to~\cite{AP2} and references therein for the background and motivation for studying the rank metric, and here we only state the definitions. The set of natural numbers is $\N := \{0,1,\ldots\}$ and we let $\Np := \{1,2,\ldots\}$. For $d \in \Np$ let $\Mat(d)$ be the set of all $d\times d$ square matrices with complex coefficients. Finally, for $A \in \Mat(d)$ we let $\rank(A) := \frac{\dim_\C (\im(A))}{d}$. This normalised rank defines a metric on $\Mat(d)$ in the usual way, i.e.~$\drank(A,B) := \rank(A-B)$.
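As a concrete illustration of these definitions, the following minimal Python sketch (our own, purely illustrative) evaluates $\drank$ numerically and checks that two matrices differing in a single entry are $1/d$-close in the rank metric:

\begin{verbatim}
import numpy as np

def d_rank(A, B):
    # normalised rank distance: rank(A - B) divided by d
    return np.linalg.matrix_rank(A - B) / A.shape[0]

d = 100
A = np.eye(d)
B = A.copy()
B[0, 0] = 2.0   # perturb a single entry
assert abs(d_rank(A, B) - 1.0 / d) < 1e-12
\end{verbatim}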
Our main aim in this note is to show the following theorem. \begin{theorem}\label{tmain} For every $\eps > 0$ and $n\in \Np$ there exists $\de > 0$ such that for all $d\in \Np$ we have the following. If $A_1, A_2,\dots, A_n \in \Mat(d)$ are matrices, each of which is either unitary or self-adjoint, and for all $1\le i,j \le n$ we have $\rank(A_iA_j-A_jA_i)\le \de$, then there exist commuting matrices $B_1, B_2, \dots, B_n$ such that for every $1\le i \le n$ we have $\rank(A_i-B_i)\le \eps$. \end{theorem} A more general statement will be presented in Theorem~\ref{t1}. \begin{remark} It is natural to ask whether the matrices $B_1,\ldots, B_n$ can be taken to be ``of the same type'' as the matrices $A_1,\ldots,A_n$, e.g. whether we can demand, say, the matrix $B_1$ to be unitary, provided that $A_1$ is unitary. We do not know the answer to this question. We think that Theorem~\ref{tmain} likely remains true when $A_1,\ldots, A_n$ are allowed to be arbitrary normal matrices. On the other hand, it would be interesting to find a counterexample when $A_1,\ldots, A_n$ are allowed to be arbitrary matrices. \end{remark} Becker, Lubotzky and Thom~\cite{blt} generalised the results from~\cite{AP1} to the context of finitely presented polycyclic groups, and showed that there are significant obstacles to generalising them further. We are able to prove some analogous results in the context of the rank metric. Let us make this precise now. Let $\Ga$ be a finitely presented group with presentation $$ \langle \ga_1,\ldots, \ga_g | P_1(\ga_1,\ldots, \ga_g), \ldots, P_r(\ga_1,\ldots, \ga_g)\rangle, $$ where the $P_i$ are non-commutative polynomials in $g$ variables. For a $k\times k$ matrix $B$ we denote by $\widehat B$ the operator on the vector space $\C^{\oplus\N}$ which acts as $B$ on the first $k$ basis vectors and is $0$ otherwise. We will say that $\Ga$ is \emph{stable with respect to the rank metric} if for every $\eps>0$ there exists $\de>0$ such that the following holds. For all $d\in \N$ we have that if $A_1,\ldots, A_g$ are unitary $d\times d$ matrices with $\rank(P_i(A_1,\ldots, A_g)-\Id_d) \le \de$, then there exist $k\in \N$ and $k\times k$ matrices $B_1,\ldots, B_g$ with $\dim\im(\widehat{A_i}-\widehat{B_i}) \le \eps\cdot d$ and such that $P_i(B_1,\ldots, B_g) = \Id_k$ for all $i=1,\ldots, r$. \begin{remarks} \begin{enumerate} \item Originally, we did not work with $\widehat{A_i}$ but rather with $A_i$ in the definition above. We thank N.~Ozawa for pointing out that it is more natural to take $\widehat{A_i}$. \item Theorem~\ref{tmain} can be rephrased as saying that the groups $\Z^k$, where $k=1,2,\ldots$, are stable with respect to the rank metric. We remark that there exist other natural notions of \emph{being stable with respect to the rank metric}: for example, we could demand the matrices $B_i$ to be unitary, or we could remove the assumption that the matrices $A_i$ are unitary. \end{enumerate} \end{remarks} Perhaps the most interesting question which we cannot tackle at present is inspired by the results of~\cite{blt}: are polycyclic groups stable with respect to the rank metric? However, by using some of the ideas from~\cite{blt} we can show the following result. Let $p$ be a prime number. Recall that the Abels' group $A_p$ (see ref...)
is the group of $4$-by-$4$ matrices of the form $$ \begin{pmatrix} 1 & \ast & \ast & \ast \\ & p^m & \ast & \ast \\ & & p^n & \ast \\ & & & 1 \end{pmatrix}, $$ where $m,n\in \Z$, and where the stars are arbitrary elements of the ring $\Z[\frac{1}{p}]$ of rational numbers which can be written with a power of $p$ as the denominator. \begin{theorem}\label{abels} For any prime number $p$ the Abels' group $A_p$ is not stable with respect to the rank metric. \end{theorem} \begin{remark}\begin{enumerate} \item It is not difficult to show that if a finitely presented amenable group is stable with respect to the rank metric then it is residually linear. Thus, mimicking the question posed in~\cite{AP1}, one could ask whether every finitely presented linear amenable group is stable with respect to the rank metric. Since Abels' group is a finitely presented, linear, $3$-step solvable group, Theorem~\ref{abels} gives a negative answer to this question. \item Our proof of Theorem~\ref{abels} is based on an argument from~\cite{blt} used to show that the Abels' groups are not stable with respect to the Hamming distance. In fact, Theorem~\ref{abels} is a generalisation of that particular result from~\cite{blt}. We will present the proof of Theorem~\ref{abels} in Section~\ref{sec-abels}. It is very self-contained and can also serve in a minor role as an alternative exposition of one of the results of~\cite{blt} (the advantage of our proof of Theorem~\ref{abels} compared with the exposition in~\cite{blt} is its somewhat smaller definitional overhead). \end{enumerate} \end{remark} \newcommand{\inter}{\operatorname{int}} \section{The strategy of the proof and the general statement of Theorem~\ref{tmain}} Let us very informally discuss the strategy of the proof of Theorem~\ref{tmain}. For simplicity let us assume that we are given two $d\times d$ matrices $A$ and $B$ which almost commute with respect to the rank metric. First, we need to find a large subspace $W\subset \C^d$ and a decomposition $W = \bigoplus_{i=1}^N B_i$, such that each space $B_i$ has the following two properties: \begin{enumerate} \item there exist $R_i\in \N$, an ideal $\fa_i\subset \C[X,Y]$, and a linear embedding $\phi_i \colon B_i \to \C[X,Y]/\fa_i$ whose image consists of all elements of degree at most $R_i$, and \item ``$B_i$ is almost invariant under the actions of $A$ and $B$''. \end{enumerate} Most of Section~\ref{sec-proof} is devoted to finding such a $W$, culminating in Lemma~\ref{lemma-final}. This allows us to replace the original $A$ and $B$ with direct sums of multiplication operators in commutative algebras, restricted to ``balls in the algebras'', i.e. to subspaces of polynomials with degree bounded by $R_i$. The reduction of the proof of Theorem~\ref{t1} to finding such a $W$ is described in Lemma~\ref{lemma-first-red}. The property that $A$ and $B$ are either self-adjoint or unitary is used in two ways. The first use is controlling the nilpotent elements in the resulting commutative algebras. This is done in Lemma~\ref{lemma-reg}. While controlling the nilpotent elements greatly simplifies the proof, the authors believe it is not essential. The second, more crucial, use is making sure that the subspace $W$ is large. Informally speaking, the assumption that $A$ and $B$ are either self-adjoint or unitary allows us to argue that if $W$ is small, then we can add some extra subspaces $B_i$ in the orthogonal complement of $W$ (see Lemma~\ref{lemma-bootstrap}).
In order to be able to carry out the Ornstein-Weiss trick in our setting, we make use of the effective Nullstellensatz (encapsulated in Theorem~\ref{cory-effective}) and the Macaulay theorem on growth in graded algebras (encapsulated in Corollary~\ref{corymac}). The final commutative algebra tool which we use is the standard Nullstellensatz (Proposition~\ref{alghom}). The effective Nullstellensatz (i) allows us to argue that the embeddings $\phi_i$ exist, i.e. to reduce ``the local situation to the commutative algebra'', and (ii) together with the assumption that $A$ and $B$ are either unitary or self-adjoint, it allows us to control the nilpotent elements in the resulting commutative algebras $\C[X,Y]/\fa_i$. It is used in Lemma~\ref{lemma-reg}. The Macaulay theorem (i) allows us to argue that the complement of $W$ is small, and (ii) it allows us to argue that the commuting perturbations of multiplication operators in commutative algebras which we find are indeed small rank perturbations. It is used in Lemmas~\ref{lemma-bootstrap} and~\ref{lemma-complicated}. \subsection*{Definitions and the general statement} Elements of $\Mat(d)$ will be called \emph{$d$-matrices}. Tuples of $d$-matrices will be called \emph{$d$-matrix tuples}, and will be denoted by calligraphic letters, e.g. $\cal A = (A_1,\ldots, A_n)$ and $\cal U = (U_1,\ldots, U_n)$. For $a\in \Np$, the symbol $[a]$ denotes the set $\{1,2,\dots,a\}$, and we let $[0]$ denote the empty set. We say that a matrix tuple $\cal A = (A_1,\ldots, A_n)$ is \emph{commuting} if for all $i, j \in [n]$ we have $A_{i}A_{j} - A_{j}A_{i} = 0$. More generally, for $\eps \ge 0$ we say that $\cal A$ is \emph{$\eps$-commuting} if $$ \max_{i,j\in [n]} \rank(A_{i}A_{j} - A_{j}A_{i}) \le \eps. $$ If $d\in\Np$ and $\cal A = (A_1,\ldots, A_n)$, $\cal B = (B_1,\ldots, B_n)$ are two $d$-matrix tuples, then we let $$ d_{\rank} (\cal A, \cal B) := \max_{i\in [n]} \rank(A_i-B_i). $$ Given a matrix $A$, we denote the adjoint of $A$ by $A^\ast$. We say that a $d$-matrix tuple $\cal M=(M_1,M_2,\dots, M_n)$ is \emph{$\sta$-closed} if for every $i\in [n]$ there exists $j\in [n]$ such that $M_i^\ast=M_j$. Our general result is as follows. \begin{theorem}\label{t1} For every $\eps\ge 0$ and $n\in\Np$ there exists $\de \ge 0$ such that if $$ \cal A=(A_1,\ldots, A_n) $$ is a $\sta$-closed $\de$-commuting matrix tuple then we can find a commuting matrix tuple $\cal B$ with $$ d_\rank(\cal A, \cal B) \le \eps. $$ \end{theorem} Let us argue how to deduce Theorem~\ref{tmain} from Theorem~\ref{t1}. First, we note that if we replace the expression \emph{a $\sta$-closed $\de$-commuting matrix tuple} in the statement of Theorem~\ref{t1} by \emph{a $\de$-commuting matrix tuple such that each of the matrices $A_1,\ldots, A_n$ is either self-adjoint or unitary}, then we obtain the statement of Theorem~\ref{tmain}. But if $(A_1,\ldots, A_n)$ is any matrix tuple, then $(A_1,\ldots, A_n, A_1^\ast,\ldots, A_n^\ast)$ is a $\sta$-closed matrix tuple.
As such, in order to deduce Theorem~\ref{tmain} from Theorem~\ref{t1} it is enough to prove the following proposition. \begin{proposition} For every $n\in \Np$, every $\de>0$ and every $d\in \Np$, we have that if $(A_1,\ldots, A_n)$ is a $\de$-commuting $d$-matrix tuple and each of the matrices $A_1,\ldots, A_n$ is either unitary or self-adjoint, then the $d$-matrix tuple $(A_1,\ldots, A_n,A_1^\ast,\ldots, A_n^\ast)$ is $\de$-commuting as well. \end{proposition} \begin{proof} Using induction, it is enough to show that if $A$ is a $d$-matrix and $B$ is either a unitary or a self-adjoint $d$-matrix with $\rank(AB-BA) \le \de$, then also $\rank(AB^\ast-B^\ast A)\le \de$. If $B$ is self-adjoint then there is nothing to prove. If $B$ is unitary then we will use the fact that $B^\ast = B^{-1}$. We let $$ W:= \ker (AB-BA), $$ and by assumption we have $\dim(W) \ge (1-\de)d$. For $v\in B(W)$ we can write $v= B(w)$ for some $w\in W$, hence we obtain that $$ B^{-1}A(v) = B^{-1}AB(w)= B^{-1}BA(w)= A(w). $$ On the other hand we can write $$ AB^{-1}(v) = AB^{-1}B(w) = A(w). $$ This shows that $B(W)\subset \ker(AB^{-1}-B^{-1}A)$, finishing the proof because $$ \dim(B(W)) = \dim(W)\ge (1-\de)d. $$ \end{proof} \begin{remark} With a little bit more effort we could also deal with matrix tuples all of whose elements are normal matrices with spectrum contained in the union of the real line and the unit circle. \end{remark} For the rest of the paper we fix a positive natural number $n$. From now on all matrix tuples will have length $n$. \section{Commutative algebra preliminaries} Let $\C$ be the field of complex numbers. The ring $\C[X_1,\ldots, X_n]$ will be denoted by $\C[X]$. Recall that an ideal $\fa \subset \Poly$ is \emph{radical} if for all $m\in \N_+$ and $f\in \Poly$ we have that $f^m \in \fa$ implies $f\in \fa$. By Hilbert's Nullstellensatz, a radical ideal $\fa$ satisfies $\fa = \bigcap \fm$, where the intersection is over all maximal ideals which contain $\fa$. Given an arbitrary ideal $\fa$ we denote by $\rad(\fa)$ the \emph{radical of $\fa$}, i.e.~the radical ideal defined as $\rad(\fa) : = \{ f\in \Poly: \exists m \in \Z_+ \text{ with } f^m \in \fa\}$. The next theorem follows from the effective Nullstellensatz of Grete Hermann \cite{MR1512302} and the Rabinowitsch trick (see e.g. \cite[Theorem 1 and Corollary afterwards]{MR916719}). \begin{theorem}\label{cory-effective} There exists an increasing function $K\colon \N \to \N$ with the following properties. Let $f, f_1,\ldots, f_k\in \Poly$ be polynomials of degree at most $R$, and let $\fa$ be the ideal generated by $f_1,\ldots, f_k$. \begin{enumerate} \item If $f \in \fa$ then there exist $h_1,\ldots, h_k\in \Poly$ such that $$ h_1f_1 + \ldots + h_k f_k = f $$ and $\deg(h_if_i) \le K(R)$. \item If $f\in \rad(\fa)$ then we can find $m\in \N$ and $g_1,\ldots, g_k\in \Poly$ such that $$ g_1f_1+\ldots +g_kf_k =f^m $$ and $\deg(g_if_i)\le K(R)$. \end{enumerate} \qed \end{theorem} For the next proposition we need to recall some definitions. A \emph{standard graded $\C$-algebra} is a $\C$-algebra $A$ together with a family of vector spaces $A_i$, $i\in \N$, such that \begin{enumerate} \item $A_0 = \C$, $A = \oplus_{i\in \N} A_i$, \item $A$ is generated as a $\C$-algebra by finitely many elements of $A_1$, \item for all $i,j\in \N$ we have $A_i A_j \subset A_{i+j}$.
\end{enumerate} A \emph{filtration} on a $\C$-algebra $A$ is an ascending family $F_0\subset F_1\subset \ldots $ of linear subspaces of $A$ such that \begin{enumerate} \item $F_0 = \C$, $A = \bigcup_{i\in \N} F_i$, \item for all $i,j\in \N$ we have $F_i F_j \subset F_{i+j}$. \end{enumerate} \newcommand{\gr}{\operatorname{gr}} Given an algebra $A$ with a filtration $F_i$, we can associate to it a graded algebra $\gr(A)$ as follows. As a $\C$-vector space we let $\gr(A) := F_0 \oplus \bigoplus_{i>0} F_i/F_{i-1}$. We define the multiplication on $\gr(A)$ first on the elements of the form $a+F_i$ and $b + F_j$, where $i,j\ge 0$, $a\in F_{i+1}$, $b\in F_{j+1}$, by the formula $(a + F_i) \cdot (b + F_j) := ab+ F_{i+j+1}$. In general we extend this multiplication to all of $\gr(A)$ by $\C$-linearity. \begin{remark} The reason why $\gr(A)$ is not always a \emph{standard} graded algebra is that it may happen not to be generated by the elements of $F_1/F_0$. This may be the case even if $A$ is generated by finitely many elements of $F_1$ as a $\C$-algebra. For example, let $A := \C[X_1]$, let $F_0=\C$, let $F_1$ be the vector space of polynomials of degree at most $1$, and finally for $i\ge 2$ let $F_i$ be the vector space of polynomials of degree at most $2i-1$. In this case we have $(X_1+ F_0)^2 = X_1^2 + F_1$, and therefore $(X_1+F_0)^3 =X_1^3 + F_2$, i.e. $(X_1+F_0)^3$ is equal to $0$ in $\gr(A)$. In fact, it is not difficult to construct examples where $\gr(A)$ fails to be finitely-generated, even when $A$ is generated by finitely many elements of $F_1$. \end{remark} We say that $F_i$ is a \emph{standard} filtration on $A$ if the associated graded algebra $\gr(A)$ is standard. The following is a consequence of Macaulay's theorem \cite{MR1576950}. We will use the exposition from \cite[Section 5]{2017arXiv170301761E}. \begin{proposition}\label{promac} Let $A$ be a $\C$-algebra with a standard filtration $F_i$. Then for every $i>0$ we have $$ \dim_\C(F_i/ F_{i-1}) \le \frac{\dim_\C(F_1)}{i} \dim_\C(F_{i-1}). $$ \end{proposition} \begin{proof} For a natural number $k$ and a real number $x$ we let ${x \choose k}$ denote the number $\frac{1}{k !} \cdot x(x-1)\cdot\ldots\cdot(x-k+1)$. Let us fix $i>0$. After applying \cite[Theorem 5.10]{2017arXiv170301761E} to the standard graded algebra $\gr(A)$ we obtain the following. Let $x$ be the unique real number such that $x \ge i-1$ and $$ \dim_\C (F_{i-1} ) = { x \choose i-1}. $$ Then we have that $$ \dim_\C (F_i) \le {x+1 \choose i}. $$ In particular, we also obtain that \begin{equation}\label{gog} \frac{ \dim(F_i)}{ \dim(F_{i-1})} \le \frac{{x+1 \choose i}}{{ x \choose i-1}} = \frac{x+1}{i}. \end{equation} On the other hand, we have $\dim(F_1) = {\dim(F_1)\choose 1}$, and the function $X \mapsto {X \choose k}$ is increasing for $X \ge k-1$ (see \cite[Lemma 5.6]{2017arXiv170301761E}). Thus if we apply \cite[Theorem 5.10]{2017arXiv170301761E} $i-2$ times starting with $\dim(F_1) = {\dim(F_1)\choose 1}$, then we obtain $$ \dim_\C (F_{i-1}) \le {\dim(F_1) +i-1 \choose i-1}, $$ implying that $x \le \dim(F_1) + i - 1$. Together with \eqref{gog}, this shows that $$ \frac{ \dim(F_i)}{ \dim(F_{i-1})} \le \frac{i+\dim(F_1)}{i}, $$ so the proposition follows since $$ \dim(F_i) = \dim(F_i/ F_{i-1}) + \dim(F_{i-1}). $$ \end{proof} \begin{definition}\label{fil} Given an ideal $\fa\subset \Poly$, we introduce a standard filtration $F^\fa_i$ on $\Poly /\fa$ by defining $F^\fa_i$ to be the space of all those elements of $\Poly/\fa$ which can be written as $f +\fa$ with $\deg(f) \le i$. \end{definition} Applying Proposition~\ref{promac} to the filtration $F^\fa_i$, we obtain the following corollary. \begin{cory}\label{corymac} Let $\fa\subset \Poly$ be an ideal. Then for any $i>0$ we have $$ \dim_\C(F^\fa_i/ F^\fa_{i-1}) \le \frac{n}{i} \dim_\C (F^\fa_{i-1}). $$\qed \end{cory}
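As a quick sanity check (ours): for $\fa = \{0\}$ the space $F^{\{0\}}_{i-1}$ consists of all polynomials of degree at most $i-1$, so that $\dim_\C (F^{\{0\}}_{i-1}) = \binom{n+i-1}{n}$, while the monomials of degree exactly $i$ give $$ \dim_\C(F^{\{0\}}_i/F^{\{0\}}_{i-1}) = \binom{n+i-1}{n-1} = \frac{n}{i}\binom{n+i-1}{n} = \frac{n}{i}\,\dim_\C(F^{\{0\}}_{i-1}). $$ Thus the bound of Corollary~\ref{corymac} is attained for every $i$, and the constant $\frac{n}{i}$ cannot be improved.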
We now proceed to derive some properties of multiplication operators restricted to the filtration $F^\fa_i$. We start with a simple consequence of Hilbert's Nullstellensatz. When for some $k\in \Np$ we consider the space $\C^k$ as a $\C$-algebra, it is always meant with the pointwise multiplication. \begin{proposition}\label{alghom} Let $\fa\subset \C[X]$ be an ideal and let $V\subset \Poly$ be a finite dimensional $\C$-linear subspace with the property that $V \cap \rad(\fa) = \{0\}$. Then there exists a surjective algebra homomorphism $\si \colon \Poly \to \C^{\dim(V)}$ which is injective on $V$ and such that $\fa \subset \ker(\si)$. \end{proposition} \begin{proof} We prove by induction on $\dim(V)$ the following statement: there exist distinct maximal ideals $\fm_1,\ldots, \fm_{\dim(V)}$ such that for all $i$ we have $\rad(\fa)\subset \fm_i$ and $$ V\cap \fm_1\cap \ldots\cap \fm_{\dim(V)} = \{0\}. $$ For the case $\dim(V) = 1$ let us first choose a non-zero element $v\in V$. Now since $v\notin \rad(\fa)$ and $\rad(\fa)$ is equal to an intersection of maximal ideals, we can find a maximal ideal $\fm_1$ such that $\rad(\fa) \subset \fm_1$ and $v\notin \fm_1$. Let us therefore assume that we know the inductive statement when $\dim(V) = k$ for some $k$, and let us fix $V$ such that $\dim(V) = k+1$. Let $W\subset V$ be a $k$-dimensional subspace. By the inductive assumption we can find $\fm_1,\ldots, \fm_{k}$ such that $W\cap\fm_1\cap \ldots\cap \fm_k =\{0\}$. Thus the intersection $V\cap\fm_1\cap \ldots\cap \fm_k$ is at most one-dimensional. It cannot be zero-dimensional, because the composition $V \into \Poly \to \Poly/ (\fm_1\cap \ldots\cap \fm_k)$ has a non-trivial kernel: the right-hand side is isomorphic to $\C^k$ by the Chinese Remainder Theorem, while $\dim(V) = k+1$. Thus the intersection $V\cap\fm_1\cap \ldots\cap \fm_k$ is one-dimensional. Let $v$ be a non-zero element of $V\cap\fm_1\cap \ldots\cap \fm_k$. Since $v\notin \rad(\fa)$ and $\rad(\fa)$ is equal to an intersection of maximal ideals, we can find a maximal ideal $\fm_{k+1}$ such that $\rad(\fa) \subset \fm_{k+1}$ and $v\notin \fm_{k+1}$. Thus $V\cap\fm_1\cap \ldots\cap \fm_k \cap \fm_{k+1} = \{0\}$, finishing the proof of the inductive claim. Now we can define $\si$ to be the quotient map $\Poly \to \Poly/(\fm_1\cap \ldots\cap \fm_{\dim(V)})$, the right-hand side being isomorphic to $\C^{\dim(V)}$ by the Chinese Remainder Theorem. This finishes the proof. \end{proof} \begin{definition}\label{mumaps} Given an ideal $\fa \subset \Poly$ we denote by $\mu^\fa_i\colon \Poly/\fa \to \Poly/\fa$ the linear map defined by $\mu^\fa_i(f+\fa) = (X_i\cdot f)+\fa$. \end{definition} \begin{corollary}\label{cory-alghom} Let $\fa\subset \C[X]$ be an ideal and let $R\in \N$ be such that $$ F^\fa_R\cap \left(\rad(\fa)/\fa\right) = \{0+\fa\}. $$ Then there exist simultaneously diagonalisable linear maps $$ M_1,\ldots, M_n\colon F^\fa_R \to F^\fa_R $$ such that for $v\in F^\fa_{R-1}$ and all $i=1,\ldots, n$ we have $M_i(v) = \mu^\fa_i(v)$.
\end{corollary} \begin{proof} Let $d := \dim(F^\fa_R)$, and let $f_1,\ldots, f_d\in \Poly$ be such that $f_i+\fa$ is a basis of $F^\fa_R$. Let $V\subset \Poly$ be the linear span of the elements $f_i$. Let us observe that $V\cap \rad(\fa) = \{0\}$. Indeed, if $f\in V\cap \rad(\fa)$ then $f+\fa \in F^\fa_R\cap (\rad(\fa)/\fa)$, and so by assumption $f \in \fa$; since the elements $f_i+\fa$ are linearly independent, this forces $f=0$. Hence by the previous proposition we can find a surjective algebra homomorphism $$ \si\colon \Poly/\fa \to \C^d $$ such that $\si$ is injective on $F^\fa_R$ (the homomorphism provided by Proposition~\ref{alghom} contains $\fa$ in its kernel, and hence factors through $\Poly/\fa$). Let $\tau\colon \C^d \to F^\fa_R$ be the unique linear isomorphism such that for $v\in F^\fa_R$ we have $\tau(\si(v)) = v$. Since $\si$ is surjective, it follows that for all $v\in \C^d$ we have $\si(\tau(v))=v$. For $v\in F^\fa_R$ let us define $$ M_i(v): = \tau(\si(X_i\cdot v)). $$ If $v\in F^\fa_{R-1}$ then $X_i\cdot v\in F^\fa_R$ and so $M_i(v) = \tau(\si(X_i\cdot v)) = X_i\cdot v = \mu^\fa_i(v)$. Thus in order to finish the proof we only need to check that the maps $M_i$ are simultaneously diagonalisable. Let $e_1,\ldots, e_d$ be the standard basis of $\C^d$. In particular $\tau(e_1),\ldots, \tau(e_d)$ is a basis of $F^\fa_R$, and we claim that for every $i\in \{1,\ldots, n\}$ the vectors $\tau(e_j)$, $j=1,\ldots, d$, are eigenvectors for $M_i$. Indeed, first we note that for every $i$ and $j$ we have that $\si(X_i)\cdot e_j$ is a multiple of $e_j$, because the multiplication in $\C^d$ is pointwise, and so we can define numbers $\la_{ij}\in \C$ by the formula $$ \si(X_i)\cdot e_j = \la_{ij} e_j. $$ Now we can write \begin{align*} M_i(\tau(e_j)) &= \tau\Big(\si(X_i\cdot \tau(e_j))\Big) = \tau\Big(\si(X_i) \cdot \si(\tau(e_j))\Big) \\ &= \tau(\si(X_i)\cdot e_j) = \tau(\la_{ij}e_j) = \la_{ij} \tau(e_j), \end{align*} finishing the proof. \end{proof} \section{Proof of Theorem~\ref{t1}}\label{sec-proof} We will first prove several lemmas. The first lemma, informally speaking, allows us to deduce Theorem~\ref{t1} provided that we can construct large subspaces by ``growing balls around points''. To make this precise we state a few definitions. Given two positive natural numbers $a,b$ we let $\map(a,b)$ be the set of all maps from $[a]$ to $[b]$, and furthermore we let $\maple a b := \bigcup_{ i=0}^a \map(i,b)$. Let $W$ be a $\C$-vector space and let $\cal M = (M_1,\ldots, M_n)$ be a tuple of endomorphisms of $W$. Given $R\in\N$ and $\al\in \map(R,n)$ we let $$ \cal M_\al := M_{\al(1)}\cdot \ldots \cdot M_{\al(R)}. $$ Note that the unique element of $\map(0,n)$ is the empty map. Our convention is that $\cal M_\emptyset$ is the identity map. Given $w\in W$ we let $B_{\cal M}(w,R)$ be the linear span of the vectors $\cal M_\al (w)$, where $\al\in \maple R n$. We will call $B_{\cal M} (w,R)$ the \emph{$R$-ballspace for $\cal M$ with root $w$}. If $\cal M$ is clear from the context, then we denote $B_{\cal M}( w,R)$ simply by $B(w, R)$. Recall from Definition~\ref{mumaps} that given an ideal $\fa \subset \Poly$ and $i\in [n]$, we denote by $\mu^\fa_i\colon \Poly/\fa \to \Poly/\fa$ the linear map defined by $\mu^\fa_i(f +\fa) = (X_i\cdot f) +\fa$. Note that the $R$-ballspace for $\tuple {\mu^\fa}n$ with root $1+\fa \in \Poly/\fa$ is equal to $F^\fa_R$. In general we will say that $B_{\cal M}(w,R)$ is \emph{regular} if there exist an ideal $\fa \subset \Poly$ and a linear isomorphism $\phi\colon B_{\cal M}(w,R) \to F^\fa_R$ such that the following two conditions hold. \begin{enumerate} \item For every $v \in B_{\cal M}(w,R-1)$ and $i\in [n]$ we have $\phi(M_i(v)) =\mu^\fa_i(\phi(v))$.
\item If for some $f\in \Poly$ and $m \in \Np$ we have $f +\fa \in F^\fa_R$ and $f^m\in \fa$ then $f\in \fa$. In other words, we have $F^\fa_R \cap (\rad(\fa)/\fa) = \{0+\fa\}$. \end{enumerate} Let $d\in \N$ and let $\tuple Un$ be a $d$-matrix tuple. Given $R\in \N$ and a subspace $W\subset \C^d$, we say that $W$ is an \emph{$R$-multi-ballspace} if there exist $w_1,\ldots, w_k\in W$ and natural numbers $R_1,\ldots, R_k$ with $R_j\ge R$ such that \begin{enumerate} \item the ballspaces $B(w_j,R_j)$ are regular, and \item $W$ is equal to the direct sum $\bigoplus_{j=1}^k B(w_j, R_j)$. \end{enumerate} The \emph{roots} of such $W$ are the points $\{w_1,\ldots, w_k\}$. If all elements of a matrix tuple $\cal A$ can be diagonalised simultaneously, then $\cal A$ will be called \emph{simultaneously diagonalisable}. Clearly, if $\cal A$ is a simultaneously diagonalisable tuple then it is also a commuting tuple. \begin{lemma}\label{lemma-first-red} For every $\eps>0$ there exist $R\in \N$ and $\de>0$ such that the following holds. Suppose that $d\in \N$, let $\cal M$ be a $d$-matrix tuple, and let $W\subset \C^d$ be an $R$-multi-ballspace with $\dim(W) \ge (1-\de) \cdot d$. Then there exists a simultaneously diagonalisable $d$-matrix tuple $\cal A$ such that $$ \drank(\cal M , \cal A) \le \eps. $$ \end{lemma} \begin{proof} Let $R$ be such that $\frac{n}{R}< \frac{\eps}{2}$ and let $\de$ be such that $\de<\frac{\eps}{2}$. Let $\cal M = \tuple Mn$ be a $d$-matrix tuple, let $w_1,\ldots, w_k\in \C^d$ and let $R_1,\ldots, R_k\in \N$ be such that $R_j\ge R$ and such that the ballspaces $B(w_j,R_j)$ are regular and $W = \bigoplus_{j=1}^k B(w_j, R_j)$. We need to find a commuting simultaneously diagonalisable tuple $\cal A = \tuple An$ such that $\drank(\cal M, \cal A) \le \eps$. For every $j=1,\ldots, k$, let $\phi_j\colon B(w_j,R_j) \to F^{\fa_j}_{R_j}$ be the linear isomorphism witnessing the regularity of the ballspace $B(w_j,R_j)$. By Corollary~\ref{cory-alghom}, for every $j=1,\ldots, k$ we can find maps $M_{ij}\colon F^{\fa_j}_{R_j}\to F^{\fa_j}_{R_j}$, where $i=1,\ldots, n$, such that the maps $M_{1j}, \ldots, M_{nj}$ pairwise commute, are simultaneously diagonalisable, and for $v\in B(w_j, R_j-1)$ we have $M_i(v) = \phi_j^{-1}\cdot M_{ij}\cdot \phi_j(v)$. Let us fix a projection $\pi \colon \C^d \to \bigoplus_{j=1}^k B(w_j,R_j)$, and for $i=1,\ldots, n$ let $$ A_{i} := \left(\bigoplus_j\phi_j^{-1}\cdot M_{ij}\cdot \phi_j \right) \circ \pi. $$ It is clear that the maps $A_1,\ldots, A_n$ are simultaneously diagonalisable. Also, for every $v\in \bigoplus_{j=1}^k B(w_j,R_j-1)$ we have $A_i(v) = M_i(v)$, so $$ \drank(\cal M, \cal A) \le \frac{1}{d} \left(\dim\ker(\pi) + \sum_{j=1}^k (\dim B(w_j,R_j) - \dim B(w_{j},R_{j}-1))\right). $$ Since the ballspaces $B(w_j,R_j)$ are regular, by Corollary~\ref{corymac} we have $$ \dim B(w_j,R_j) - \dim B(w_{j},R_{j}-1) \le \frac{n}{R_j} \cdot \dim B(w_j,R_j-1) \le \frac{n}{R} \cdot \dim B(w_j,R_j). $$ Hence we see that $$ \sum_{j=1}^k (\dim B(w_j,R_j) - \dim B(w_{j},R_{j}-1)) \le \frac{n}{R} \sum_{j=1}^k \dim B(w_j,R_j) < \frac{\eps}{2} \cdot d. $$ Thus altogether we have $$ \drank(\cal M , \cal A) < \frac{\eps}{2} + \de < \eps, $$ finishing the proof. \end{proof} Given $r,d\in \N$, a $d$-matrix tuple $\cal M = \tuple Mn$, and $v \in \C^d$, we say that $\cal M$ is \emph{$r$-commutative} on $v$ if for any $i\le r$, any $\al \in \map(i,n)$ and any permutation $\si\colon [i] \to [i]$ we have $$ \cal M_\al(v) = \cal M_{\al\circ \si}(v).
$$ If $W\subset\C^d$ then we say that $\cal M$ is \emph{$r$-commutative on $W$} if for every $v\in W$ we have that $\cal M$ is $r$-commutative on $v$. \begin{lemma} \label{lemma-pair-reduction} For every $R\in \N$ and $\eps>0$ there exists $\eta>0$ such that if $d\in \N$ and $\cal U$ is an $\eta$-commuting $d$-matrix tuple, then there exists a subspace $W\subset \C^d$ such that $\cal U$ is $R$-commutative on $W$ and $\dim(W) \ge d(1-\eps)$. \end{lemma} \begin{proof} We proceed by induction on $R$. When $R=2$, we can set $\eta := \frac{\eps}{n^2}$ and $$ W := \bigcap_{i,j\in [n]} \ker([U_i,U_j]). $$ Indeed, if $\cal U$ is an $\eta$-commuting tuple then by definition for $i,j\in [n]$ we have $\dim\ker([U_i,U_j]) \ge (1-\eta)d$, and so we have $$ \dim \left( \bigcap_{i,j\in [n]} \ker([U_i,U_j])\right) \ge (1-n^2\eta)d \ge (1-\eps)d; $$ clearly $\cal U$ is $2$-commutative on $W$. Thus let us assume that we have shown the inductive statement for some $R\ge 2$ and let us prove it for $R+1$. Let us fix $\eps>0$ and let $\eta$ be given by the inductive assumption for $\frac{\eps}{n+1}$. Thus given an $\eta$-commuting tuple $\cal U$ we obtain a subspace $W\subset \C^d$ such that $\dim(W) \ge d(1-\frac{\eps}{n+1})$ and $\cal U$ is $R$-commutative on $W$, i.e. for any $w\in W$, any $\al\in \map( k, n)$ with $k\le R$ and any permutation $\si \colon [k] \to [k]$ we have $$ \cal U_\al (w) = \cal U_{\al\circ \si} (w). $$ Let us define $V := W \cap \bigcap_{i=1}^n U_i^{-1}(W)$. Clearly $\dim V \ge d(1-\eps)$. Now let $\be \in \map (R+1, n)$, let $\tau\colon [R+1]\to [R+1]$ be a permutation, and let $v\in V$. Let $i\in [n]$ be such that for some $2\le j_1 \le R+1$ we have $\be(j_1) =i$ and for some $2 \le j_2 \le R+1$ we have $\be(\tau(j_2)) = i$. We can find such an $i$ because $R+1\ge 3$. Since in particular $v\in W$, we can find $\ga\in \map(R,n)$ and a permutation $\rho\colon [R]\to [R]$ such that $$ \cal U_{\be}(v) = \cal U_{\ga}\circ U_i (v) $$ and $$ \cal U_{\be\circ \tau}(v) = \cal U_{\ga\circ \rho}\circ U_i (v). $$ Since $U_i(v) \in W$, we have $$ \cal U_{\ga\circ \rho}\circ U_i (v) =\cal U_{\ga}\circ U_i (v), $$ which finishes the proof. \end{proof} \begin{definition}\label{def-pair} If $r\in \N$, $\eps>0$, and $A$ and $B$ are subspaces of $\C^d$, then we say that $(A,B)$ is an \emph{$(r,\eps)$-pair} for the $d$-matrix tuple $\cal M=(M_1,M_2,\dots,M_n)$ if \begin{enumerate} \item $A\subset B$ and $\dim(B/A) \le \eps\cdot d$, and \item for every $\al \in \maple rn$ and $v\in A$ we have $\cal M_\al (v) \in B$. \end{enumerate} \end{definition} \begin{lemma}\label{lemma-complicated} Let $\eps>0$, let $r,R,d\in \N$ with $n\le r<R$, and let $\cal M = \tuple Mn$ be a $\sta$-closed $d$-matrix tuple. Let $(A_1,B_1)$ be an $(R,\eps)$-pair for $\cal M$ and let $W = \bigoplus_{j=1}^k B(w_j, R_j)$ be an $R$-multi-ballspace for $\cal M$ contained in $B_1$. Furthermore let us assume that the ballspaces $B(w_j, R_j+r)$ are regular for all $j$. Finally, let $B_2$ be the orthogonal complement of $W$ in $B_1$. Then there exists a subspace $A_2 \subset A_1\cap B_2$ such that $(A_2, B_2)$ is an $(r, \eps+ \frac{n}{R}2^r)$-pair. \end{lemma} \begin{proof} Let $V\subset \C^d$ be the space spanned by the ballspaces $B(w_j, R_j+r)$, $j=1,\ldots, k$. Let $V^\perp\subset \C^d$ be the space orthogonal to $V$ and let $A_2 := A_1\cap V^\perp$. Let us show that $\dim(B_2/A_2) \le d(\eps + \frac{n}{R}2^r)$.
By basic linear algebra, it is easy to check that $\dim(B_2/A_2)$ is bounded from above by \begin{equation}\label{apa} \dim(B_1/A_1) + \dim(V) - \dim(W). \end{equation} We can bound \eqref{apa} by $$ \dim(B_1/A_1) + \sum_{j=1}^k (\dim B(w_j,R_j+r) - \dim B(w_j,R_j)). $$ By Corollary~\ref{corymac}, the quantity above is at most $$ \eps\cdot d + \sum_{j=1}^k \dim B(w_j,R_j)\left(\left( 1+ \frac{n}{R}\right)^r -1\right) \le \eps\cdot d + d \cdot \frac{n}{R} \cdot 2^r, $$ where we use the inequality $(1+x)^r \le 1+(2^r-1)x$, valid for $x\in [0,1]$ and $r\ge 1$. Therefore we obtain that $\dim(B_2/A_2) \le d(\eps + \frac{n}{R}2^r)$. Thus to finish the proof we only need to show that for $x\in A_2$ and $\al \in \map(q,n)$ with $q\le r<R$ we have $\cal M_\al(x) \in B_2$. Note first that $\cal M_\al(x)\in B_1$, because $x\in A_1$ and $(A_1,B_1)$ is an $(R,\eps)$-pair. Hence we only need to show that if $x \in A_1$ is orthogonal to each $B(w_j, R_j+r)$ then $\cal M_\al (x)$ is orthogonal to each $B(w_j,R_j)$. Indeed, let $w\in B(w_j,R_j)$. Since $\cal M$ is $\sta$-closed, we have $\cal M_\al^\ast(w) \in B(w_j,R_j+r)$ and hence $$ \langle \cal M_\al (x), w \rangle = \langle x, \cal M_\al^\ast(w) \rangle = 0, $$ finishing the proof. \end{proof} Let $\C\langle Y_1,\ldots, Y_n\rangle$ be the ring of polynomials in $n$ non-commuting variables, and let $\pi \colon \C\langle Y_1,\ldots, Y_n\rangle \to \Poly$ be the algebra homomorphism such that $\pi(Y_i) = X_i$. Let $c\colon \Poly \to \C\langle Y_1,\ldots, Y_n\rangle$ be the unique $\C$-linear map such that for any $\al \in \map(q,n)$ with $\al(1)\le \al(2)\le \ldots \le \al(q)$ we have $c\circ\pi( Y_{\al(1)}\ldots Y_{\al(q)}) = Y_{\al(1)}\ldots Y_{\al(q)}$. In other words, the map $c$ allows us to treat commutative polynomials as non-commutative ones, by fixing an order on the variables. Given a matrix tuple $\cal M = (M_1,\ldots, M_n)$ and $f\in \Poly$, we define $f(\cal M)$ to be the matrix $c(f)(M_1,\ldots, M_n)$. Let us define a ${}^\ast$-operation on $\C\langle Y_1,\ldots Y_n\rangle$ in the following way. For any $\al \in \map(q,n)$ and $a\in \C$ we define $$ \left( a \cdot Y_{\al(1)}\cdot \ldots \cdot Y_{\al(q)} \right)^\ast := \ov{a} \cdot Y_{\al(q)}\cdot \ldots \cdot Y_{\al(1)}, $$ and we extend this definition to arbitrary elements of $\C\langle Y_1,\ldots Y_n\rangle$ by linearity. For $f\in \Poly$ we define $f^\ast(\cal M) := (c(f)^\ast)(M_1,\ldots, M_n)$. The following simple observation will be used without further reference. \begin{lemma} For any matrix tuple $\cal M$ and any $f\in \Poly$ we have that the matrices $f(\cal M)$ and $f^\ast(\cal M^\ast)$ are adjoint to each other. \qed \end{lemma} We will also need the following lemma. \begin{lemma}\label{lemma_simple2} Let $f\in \Poly$ and let $k\in \N$. Let $\cal M$ be a $\sta$-closed $d$-matrix tuple and let $x\in \C^d$ be such that $\cal M$ is $(2k\cdot \deg(f))$-commutative at $x$. Then $$ \left[f^\ast(\cal M^\ast) f(\cal M)\right]^k (x) = \left[f^\ast(\cal M^\ast)\right]^k\left[f(\cal M)\right]^k(x). $$ \end{lemma} \begin{proof} For $f\in \Poly$ let us define $\bar f$ to be the polynomial which arises from $f$ by conjugating the coefficients. Note that, since $\cal M$ is $\sta$-closed, the adjoints $M_i^\ast$ are themselves members of the tuple $\cal M$, and so the $(2k\cdot\deg(f))$-commutativity at $x$ applies to words involving them. Using this, the left-hand side is equal to $$ \left[\bar f(\cal M^\ast) f(\cal M)\right]^k(x), $$ and the right-hand side is equal to $$ \left[\bar f (\cal M^\ast)\right]^k\left[f(\cal M)\right]^k(x). $$ These two expressions are equal, again because $\cal M$ is $(2k\cdot \deg(f))$-commutative at $x$. \end{proof}
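The simple observation above, that $f(\cal M)$ and $f^\ast(\cal M^\ast)$ are adjoint to each other, admits a quick numerical illustration; the following throwaway check (ours, not part of the argument, with the polynomial $f = (2+i)X_1X_2^2$ chosen arbitrarily) may help the reader unpack the definitions of $c$ and of the ${}^\ast$-operation.
\begin{verbatim}
# A quick sanity check (ours) that f(M) and f*(M*) are adjoint to
# each other, for f = (2+1j) X1 X2^2, i.e. c(f) = (2+1j) Y1 Y2 Y2
# and c(f)* = (2-1j) Y2 Y2 Y1.
import numpy as np

rng = np.random.default_rng(0)
d = 5
M1 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M2 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

f_M = (2 + 1j) * M1 @ M2 @ M2                       # f(M)
fstar_Mstar = (2 - 1j) * M2.conj().T @ M2.conj().T @ M1.conj().T
assert np.allclose(f_M.conj().T, fstar_Mstar)       # adjoints agree
\end{verbatim}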
Recall that $K(R)$ is the function defined in Theorem~\ref{cory-effective}. \begin{lemma}\label{lemma-reg} Let $d\in \Np$ and let $\cal M = (M_1,\ldots, M_n)$ be a $\sta$-closed $d$-matrix tuple. Let $R\ge 0$ and let $v\in \C^d$ be such that $\cal M$ is $(2\cdot K(R))$-commutative at $v$. Then the ballspace $B(v, R)$ is regular. \end{lemma} \begin{proof} Given $\al \in \map(q,n)$, we define $X_\al \in \Poly$ to be the monomial $$ X_\al := X_{\al(1)}\ldots X_{\al(q)}. $$ Let $P\subset \Poly$ be defined as follows. We let $f\in P$ if and only if $\deg(f) \le R$ and $f(\cal M)(v) =0$. Let $\fa$ be the ideal generated by $P$. Let us define a map $\phi \colon B(v,R) \to \Poly/\fa$ on the spanning vectors by $$ \phi( \cal M_\al (v)) := X_{\al}+\fa, $$ extended by linearity. Let us check that $\phi$ is well-defined. For this let us assume that $$ \sum_{\al \in \maple R n} s_\al \cal M_\al(v) = \sum_{\al \in \maple R n } t_\al \cal M_\al(v), $$ where $s_\al, t_\al \in \C$. But then $$ \sum_{\al \in \maple R n } (s_\al - t_\al) \cal M_\al(v) = 0, $$ and therefore $\sum_{\al \in \maple R n } (s_\al-t_\al) X_\al \in P$. In particular we get that $$ \sum_{\al \in \maple R n} s_\al X_\al + \fa = \sum_{\al \in \maple R n } t_\al X_\al + \fa, $$ which shows that $\phi$ is well-defined. Now let us see that $\phi$ is injective. Indeed suppose that $$ \phi \left( \sum_{\al} s_\al \cal M_\al(v) \right) = 0, $$ where $\al$ runs through the elements of $\maple R n$, and $s_\al \in \C$. Then $\sum_\al s_\al X_\al \in \fa$, and so we can find $f_i \in P$ and $h_i \in \Poly$ with $\deg(h_if_i) \le K(R)$ such that $$ \sum_{i=1}^k h_if_i = \sum_\al s_\al X_\al. $$ But since $\cal M$ is $K(R)$-commutative at $v$, we have $$ \sum_{i=1}^k h_i(\cal M) f_i(\cal M)(v) = \sum_\al s_\al \cal M_\al (v). $$ The left-hand side is equal to $0$ since $f_i\in P$, and so we see that $\sum_{\al} s_\al \cal M_\al (v) = 0$. This finishes the proof of injectivity of $\phi$. Since clearly the image of $\phi$ is equal to $F^\fa_R$, it remains to prove that $$ F^\fa_R \cap (\rad(\fa)/\fa) = \{0+\fa\}. $$ By Theorem~\ref{cory-effective}, if $f \in \Poly$ is such that $\deg(f) \le R$ and $f\in \rad(\fa)$ then we can find $m \in\N$, elements $f_i \in P$ and $g_i\in \Poly$ with $\deg(g_if_i) \le K(R)$, such that $$ f^m = \sum_{i=1}^k g_i f_i. $$ Since $\cal M$ is $K(R)$-commutative at $v$, we have $$ 0 = \sum_{i=1}^k g_i(\cal M) f_i(\cal M)(v) = f(\cal M)^m(v), $$ and so, applying $f^\ast(\cal M^\ast)^m$ and using $2K(R)$-commutativity together with Lemma~\ref{lemma_simple2}, we also have $$ 0 = f^\ast(\cal M^\ast) ^m f(\cal M)^m(v) = \left(f^\ast(\cal M^\ast)f(\cal M)\right)^m (v). $$ In particular we can define $t$ to be the smallest positive integer such that $$ \left(f^\ast(\cal M^\ast)f(\cal M)\right)^t (v) = 0. $$ We will show that $t=1$. Note that $f^\ast(\cal M^\ast)f(\cal M)$ is self-adjoint, being of the form $T^\ast T$ with $T = f(\cal M)$. By way of contradiction, let us consider two cases: first let us assume that $t$ is even and equal to $2l$ for some $l\ge 1$. By Lemma~\ref{lemma_simple2}, we have \begin{equation}\label{eq-adjoint} 0 = f^\ast(\cal M^\ast)^{2l} f(\cal M)^{2l}(v) = (f^\ast(\cal M^\ast)f(\cal M))^{2l}(v). \end{equation} Therefore, we also have \begin{align*} 0 &= \langle (f^\ast(\cal M^\ast) f(\cal M))^{2l}(v) , v\rangle \\ &= \langle (f^\ast(\cal M^\ast) f(\cal M))^{l}(v), (f^\ast(\cal M^\ast) f(\cal M))^{l}(v) \rangle, \end{align*} which shows that $(f^\ast(\cal M^\ast) f(\cal M))^{l}(v) = 0$. Since $l < t$, this contradicts the minimality of $t$.
In the second case let us assume that $t$ is odd and equal to $2l+1$ for some $l\ge 1$. We proceed in a similar fashion. By Lemma~\ref{lemma_simple2} we have that $\left( f^\ast(\cal M^\ast) f(\cal M)\right)^{2l+1}(v) = 0$. Hence, we also have \begin{align*} 0 &= \langle \left( f^\ast(\cal M^\ast) f(\cal M)\right)^{2l+1}(v), f^\ast(\cal M^\ast) f(\cal M)(v)\rangle \\ &= \langle \left( f^\ast(\cal M^\ast) f(\cal M)\right)^{l+1}(v), \left( f^\ast(\cal M^\ast) f(\cal M)\right)^{l+1}(v) \rangle, \end{align*} and since $l+1< t$, we obtain a contradiction exactly as in the first case. Thus all in all we have shown that $t=1$, i.e. $f^\ast(\cal M^\ast)f(\cal M) (v) = 0$. Since $f^\ast(\cal M^\ast)$ and $f(\cal M)$ are adjoint to each other, this gives $\langle f(\cal M)(v), f(\cal M)(v)\rangle = \langle f^\ast(\cal M^\ast)f(\cal M)(v), v\rangle = 0$, i.e. $f(\cal M)(v) = 0$. This shows that $f\in P\subset \fa$, finishing the proof. \end{proof} \begin{lemma}\label{lemma-bootstrap} Let $R\in \N$, $d\in \N$, let $\cal M$ be a $\sta$-closed $d$-matrix tuple, let $A\subset \C^d$ be a subspace, and let us assume that $\cal M$ is $2K(2R)$-commutative on $A$. Then there exist $k\in \N$ and $w_1,\ldots, w_k\in A$ such that the $R$-ballspaces $B_{\cal M}(w_i,R)$ are regular, pairwise orthogonal, and we have that \begin{equation}\label{todo6} \sum_{i=1}^k \dim B_{\cal M} (w_i, R) \ge \frac{1}{e^n}\cdot \dim(A), \end{equation} where $e=2.71\ldots$ \end{lemma} \begin{proof} Note that by Lemma~\ref{lemma-reg} all $R$-ballspaces and $2R$-ballspaces with roots in $A$ are regular. Let $Q$ be the set of all finite tuples $(w_1,\ldots, w_k)$ of elements of $A$ with the property that the ballspaces $B_{\cal M}(w_1,R), \ldots, B_{\cal M}( w_k,R)$ are pairwise orthogonal to each other. Let $(w_1,\ldots, w_k)\in Q$ be a tuple for which the number $$ \sum_{i=1}^k \dim(B_{\cal M}(w_i,R)) $$ is maximal. It is enough to show that $\sum_{i=1}^k \dim(B_{\cal M}(w_i,R)) \ge \frac{1}{e^n}\cdot \dim A$. Consider the vector space $V$ spanned by the ballspaces $B_{\cal M}(w_i,2R)$, $i=1,\ldots,k$. These ballspaces are regular, and so by Corollary~\ref{corymac} we have that $$ \dim B_{\cal M}(w_i, 2R) \le \left(1 +\frac{n}{2R}\right)\left(1 +\frac{n}{2R-1}\right) \ldots \left(1 + \frac{n}{R+1}\right) \dim B_{\cal M}(w_i, R), $$ which easily implies that $$ \dim B_{\cal M}(w_i, 2R) \le e^n \dim B_{\cal M} (w_i, R). $$ This shows that \begin{equation}\label{done43} \dim(V) \le e^n \dim\left(\bigoplus_{i=1}^k B_{\cal M} (w_i, R)\right). \end{equation} Let us observe that if $x\in A$ is orthogonal to $V$ then $B_{\cal M}(x,R)$ is orthogonal to the space $\bigoplus_{i=1}^k B_{\cal M} (w_i, R)$. Indeed, since $\cal M$ is $\sta$-closed, for any $\al \in \maple R n$ and any $w\in B_{\cal M}(w_i,R)$ we have that $\cal M_\al^\ast(w)\in B_{\cal M}(w_i,2R)$. It follows that $$ \langle \cal M_\al(x),w\rangle = \langle x , \cal M_\al^\ast (w) \rangle = 0. $$ But by the maximality of $(w_1,\ldots, w_k)$, the above shows that no non-zero point of $A$ is orthogonal to $V$ (otherwise we could append such a point to the tuple, increasing the sum of dimensions by at least $1$); in other words, $A\cap V^\perp = \{0\}$. In particular, we have $\dim(V) \ge \dim A$, and hence by \eqref{done43} we have $\dim A \le e^n \dim\left(\bigoplus_{i=1}^k B_{\cal M} (w_i, R)\right)$, finishing the proof. \end{proof} The final lemma which we need for the proof of Theorem~\ref{t1} is an ``Ornstein-Weiss type'' lemma.
\begin{lemma}\label{lemma-final} For every $\de>0$ and $r\in \N$ there exists $\eta>0$ such that if $d \in \N$ and $\cal M$ is an $\eta$-commuting $\sta$-closed $d$-matrix tuple, then there exists an $r$-multi-ballspace $W\subset \C^d$ for $\cal M$ such that $\dim(W) \ge (1-\de)\cdot d$. \end{lemma} \begin{proof} Let us fix $\de>0$ and $r\in \N$; without loss of generality we may assume $r\ge n$. For brevity let us write $q := 1-\frac{1}{e^n}$. Let us first fix $k\in \N$ such that $(k+1)q^k < \de$, and then let us choose $\eps >0$ and natural numbers $r_0> r_1>\ldots > r_k = r$ such that for $i=0,\ldots, k-1$ we have $$ \eps +\frac{n}{r_i} 2^{r_{i+1}} < q^k. $$ By Lemma~\ref{lemma-pair-reduction}, we can fix $\eta$ to be such that if $\cal M$ is $\eta$-commuting then there exists a subspace $S\subset \C^d$ such that $\dim(S) \ge (1-\eps)d$ and $\cal M$ is $2K(2(r_0+r_1))$-commutative on $S$. We will prove by induction on $i$ the following statement: for every $i=1,\ldots, k$ there exist $g(i)\in \N$, roots $w_1,\ldots, w_{g(i)}\in S$ and radii $R_1,\ldots, R_{g(i)}$ with $r_i \le R_j \le r_0$ for all $j$, such that the ballspaces $B_{\cal M} (w_j, R_j)$ are regular, pairwise orthogonal, and $$ \sum_{j=1}^{g(i)} \dim B_{\cal M} (w_j,R_j) \ge d\left(1- iq^i - q^k\right). $$ For $i=k$ this yields an $r$-multi-ballspace of dimension at least $d(1-(k+1)q^k) > (1-\de)d$, which is enough to finish the proof because of the choice of $k$. For $i=1$ the inductive claim follows by applying Lemma~\ref{lemma-bootstrap} to $A=S$ with radius $r_0$: we obtain pairwise orthogonal regular ballspaces with $$ \sum_j \dim B_{\cal M}(w_j, r_0) \ge \frac{1}{e^n}\dim(S) \ge \frac{(1-\eps)d}{e^n} \ge d\left(\frac{1}{e^n} - q^k\right) = d(1 - q - q^k), $$ since $\eps < q^k$. Suppose that the inductive claim holds for some $i\in \{1,\ldots,k-1\}$ and let us prove it for $i+1$. Let $W_i = \oplus_{j=1}^{g(i)} B_{\cal M} (w_j,R_j)$ and let $W_i^\perp$ be the orthogonal complement of $W_i$ in $\C^d$. Since $\cal M$ is $2K(2(r_0+r_{i+1}))$-commutative on $S$, we have that all the ballspaces $$ B_{\cal M} (w_j, R_j + r_{i+1}) $$ are regular, and so we can apply Lemma~\ref{lemma-complicated} to the $(r_i,\eps)$-pair $(S, \C^d)$ and the $r_i$-multi-ballspace $W_i$. As a result we obtain a subspace $S_i \subset S\cap W_i^\perp$ such that $(S_i,W_i^\perp)$ is an $(r_{i+1}, \eps + \frac{n}{r_i}2^{r_{i+1}})$-pair. Now, by Lemma~\ref{lemma-bootstrap} we obtain $g(i+1) \in \N$ and roots $w_{g(i)+1}, w_{g(i)+2},\ldots, w_{g(i+1)} \in S_i$, such that the ballspaces $B_{\cal M}(w_{g(i)+l}, r_{i+1})$, $l=1,\ldots, g(i+1)-g(i)$, are regular, pairwise orthogonal, and $$ \sum_{l=1}^{g(i+1)-g(i)} \dim B_{\cal M}(w_{g(i)+l}, r_{i+1}) \ge \frac{1}{e^n}\dim(S_i). $$ Note also that these new ballspaces are contained in $W_i^\perp$, because $(S_i,W_i^\perp)$ is an $(r_{i+1},\cdot)$-pair, and so they are orthogonal to the ballspaces $B_{\cal M}(w_j,R_j)$, $j\le g(i)$. Since $(S_i,W_i^\perp)$ is an $(r_{i+1}, \eps + \frac{n}{r_i}2^{r_{i+1}})$-pair, we have $$ \dim S_i \ge \dim W_i^\perp - d\left(\eps + \frac{n}{r_i}2^{r_{i+1}}\right) \ge \dim W_i^\perp - dq^k, $$ and so $$ \sum_{l=1}^{g(i+1)-g(i)} \dim B_{\cal M}(w_{g(i)+l}, r_{i+1}) \ge \frac{1}{e^n} \dim W_i^\perp - \frac{d q^k}{e^n}. $$ Therefore we have also \begin{align*} \sum_{j=1}^{g(i+1)} \dim B_{\cal M} (w_j,R_j) &\ge \dim W_i + \frac{1}{e^n} \dim W_i^\perp - \frac{dq^k}{e^n} = d - q\dim W_i^\perp - \frac{dq^k}{e^n}. \end{align*} By the inductive assumption, we have $\dim W_i^\perp \le d(iq^i + q^k)$, so altogether we have \begin{align*} \sum_{j=1}^{g(i+1)} \dim B_{\cal M} (w_j,R_j) &\ge d\left(1 - iq^{i+1} - q^{k+1} - \frac{q^k}{e^n}\right) \\ &= d\left(1 - iq^{i+1} - q^k\right) \ge d\left(1 -(i+1)q^{i+1} - q^k\right), \end{align*} where we used $q + \frac{1}{e^n} = 1$; this is the inductive statement we wanted to show. Hence the lemma follows. \end{proof}
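To get a feeling for the constants involved, note that the $k$ chosen at the start of the proof grows only slowly as $\de$ shrinks; the following throwaway computation (ours) finds the smallest admissible $k$ for given $n$ and $\de$.
\begin{verbatim}
# Smallest k with (k+1) * (1 - e^{-n})^k < delta, as in the proof
# of the lemma above (a throwaway computation, ours).
import math

def choose_k(n, delta):
    q = 1.0 - math.exp(-n)
    k = 1
    while (k + 1) * q**k >= delta:
        k += 1
    return k

print(choose_k(2, 0.01))   # prints a modest value of k
\end{verbatim}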
We now have everything in place to prove Theorem~\ref{t1}. \begin{proof}[Proof of Theorem~\ref{t1}] Let us fix $\eps>0$. By Lemma~\ref{lemma-first-red}, we can fix $R\in \N$ and $\eta>0$ such that if $\cal A$ is a $d$-matrix tuple for some $d\in \N$, and $W\subset \C^d$ is an $R$-multi-ballspace for $\cal A$ with $\dim(W) \ge (1-\eta)d$, then we can find a commuting $d$-matrix tuple $\cal B$ such that $$ d_\rank(\cal A, \cal B) \le \eps. $$ On the other hand, by Lemma~\ref{lemma-final} we can find $\de>0$ such that if $d\in \N$ and $\cal A$ is a $\sta$-closed $\de$-commuting tuple then there exists an $R$-multi-ballspace $W\subset \C^d$ for $\cal A$ such that $$ \dim(W) \ge (1-\eta)d. $$ This finishes the proof. \end{proof} \section{Abels' group is not stable with respect to the rank metric}\label{sec-abels} We finish the article with the following proof. \begin{proof}[Proof of Theorem~\ref{abels}] The centre $Z(A_p)$ of $A_p$ is the group of matrices of the form $$ \begin{pmatrix} 1 & 0 & 0 & \ast \\ & 1 & 0 & 0 \\ & & 1 & 0 \\ & & & 1 \end{pmatrix}, $$ isomorphic to $\Z[\frac{1}{p}]$. Consider the central subgroup $H$ consisting of the elements of the form $$ \begin{pmatrix} 1 & 0 & 0 & x \\ & 1 & 0 & 0 \\ & & 1 & 0 \\ & & & 1 \end{pmatrix}, $$ where $x\in \Z$. Let $\De$ be the quotient group $A_p/H$ and let $\pi\colon A_p \to \De$ be the quotient map. Let $\ga_1,\ldots, \ga_g$ be generators of $A_p$. Let $F_1,F_2,\ldots\subset \De$ be a sequence of F{\o}lner sets in $\De$. Let $S\subset \De$ be the set $\{\pi(\ga_1),\ldots, \pi(\ga_g)\}$. For $i\in \Np$ let $\inter F_i$ be the subset of those $f\in F_i$ such that for all $s\in S$ we have $sf\in F_i$. For $i\in \Np$ and $j=1,\ldots,g$ let $A_i^j$ be a permutation of $F_i$ which is equal to $\pi(\ga_j)$ on $\inter(F_i)$ (there is in general no unique such permutation). In what follows we will think of the $A_i^j$ as permutation matrices; in particular, they are unitary matrices. Since $(F_i)$ is a F{\o}lner sequence, we have, for any $k\in \{1,\ldots, r\}$, that $$ \rank(P_k(A_i^1,\ldots, A_i^g)-\Id_{|F_i|}) \xrightarrow[i\to \infty]{} 0, $$ since the left-hand side is bounded from above by $1-\frac{|\inter F_i|}{|F_i|}$. By way of contradiction, let us assume that $A_p$ is stable with respect to the rank metric. It follows that we can find $g$ sequences of matrices $B_i^1,\ldots, B_i^g$ with $$ \max_{j=1,\ldots,g}\ \frac{1}{|F_i|}\dim\im(\widehat{A_i^j}-\widehat{B_i^j}) \xrightarrow[i\to \infty]{} 0 $$ and such that $P_k(B_i^1,\ldots, B_i^g) = \Id$ for all $k=1,\ldots, r$. In particular for each $i=1,2,\ldots$ we get a representation $\rho_i\colon A_p \to GL(n_i,\C)$, defined by $\rho_i(\ga_j) := B_i^j$, for suitable $n_i \in \N$. Now let $t$ be a generator of $H$. Since $t$ is a central element, we have that each eigenspace of $\rho_i(t)$ is preserved under the action of $A_p$. Let $V_i\subset \C^{n_i}$ be the eigenspace of $\rho_i(t)$ corresponding to the eigenvalue $1$, i.e.~the set of all $v\in \C^{n_i}$ such that $\rho_i(t)(v) = v$. We thus obtain representations $\bar \rho_i$ of $\De = A_p/H$ on the spaces $V_i$. Now let $K\subset Z(A_p)$ be the subgroup of $Z(A_p)$ of elements of the form $\frac{n}{p}$, where $n\in \Z$, and let $\bar K$ be the image of $K$ in $Z(A_p)/H$. Since $\bar K$ is finite, we may assume that $i$ is big enough so that $\rank(\bar \rho_i(\ga)-\Id) \ge \frac{1}{2}$ for all $\ga\in\bar K \setminus \{e\}$. But for every element $\eta\in Z(A_p)\setminus H$ there exists $n\in \Np$ such that $n\cdot \eta \in K\setminus H$. It follows that for every $\ga\in Z(A_p)\setminus H$ we have that $\rho_i(\ga)$ does not act as the identity on $V_i$. This shows that $\bar\rho_i$ is injective on $Z(A_p)/H$.
But $\De$ is finitely generated, and hence $\bar\rho_i(\De)$ is a finitely generated linear group, which by Malcev's theorem~\cite{mal} is residually finite; in particular, all of its subgroups are residually finite. But the abelian group $Z(A_p)/H\subset \De$ is not residually finite (see~\cite{blt} for a short argument), and by the above it embeds into $\bar\rho_i(\De)$ via $\bar\rho_i$. This is a contradiction, which finishes the proof. \end{proof}
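As a quick symbolic sanity check (ours, using \texttt{sympy}) that the matrices displayed at the beginning of the proof are indeed central: the following verifies that such a matrix commutes with a generic element of $A_p$ of the shape given in the introduction (it checks only this commutation, not the full computation of the centre).
\begin{verbatim}
# Sanity check (ours): a matrix with a single top-right entry
# commutes with a generic element of A_p, so it is central.
from sympy import symbols, Matrix

a, b, c, d, e, f, x = symbols('a b c d e f x')
m, n = symbols('m n', integer=True)
p = symbols('p', positive=True)

g = Matrix([[1, a,    b,    c],
            [0, p**m, d,    e],
            [0, 0,    p**n, f],
            [0, 0,    0,    1]])
z = Matrix([[1, 0, 0, x],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]])
assert g * z - z * g == Matrix.zeros(4)
\end{verbatim}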
\section{Introduction} There have been two different traditional views on the formation history of the Milky Way. The first model was introduced by Eggen, Lynden-Bell \& Sandage (1962) to explain the kinematics of metal poor halo field stars in the solar neighbourhood. According to their view the Galaxy formed in a monolithic way, by the free fall collapse of a relatively uniform, star-forming cloud. After the system became rotationally supported, further star formation took place in a metal-enriched disk, thereby producing a correlation between kinematics and metallicity: the well-known disk-halo transition. In later studies Searle \& Zinn (1978) noted the lack of an abundance gradient and a substantial spread in ages in the outer halo globular cluster system. This led them to propose an alternative picture in which our Galaxy's stellar halo formed in a more chaotic way through merging of several protogalactic clouds. (See Freeman 1987 for a complete review). This second model resembles more closely the view of the current cosmological theories of structure formation in the Universe. These theories postulate that structure grows through the amplification by the gravitational forces of initially small density fluctuations (Peebles 1970; White 1976; Peebles 1980, 1993). In all currently popular versions small objects are the first to collapse; they then merge forming progressively larger systems giving rise to the complex structure of galaxies and galaxy clusters we observe today. This hierarchical scenario is currently the only well-studied model which places galaxy formation in its proper cosmological context (see White 1996 for a comprehensive review). Numerical simulations of large-scale structure formation show a remarkable similarity to observational surveys (e.g. Jenkins et al. 1997, and references therein; and Efstathiou 1996 for a review). For galaxy formation, the combination of numerical and semi-analytic modelling has proved to be very powerful, despite the necessarily schematic representation of a number of processes affecting the formation of a galaxy (Katz 1992; Kauffmann, White \& Guiderdoni 1993; Cole et al. 1994; Navarro \& White 1994; Steinmetz \& Muller 1995; Kauffmann 1996; Mo, Mao \& White 1998; Somerville \& Primack 1999; Steinmetz \& Navarro 1999). This general framework, where structure forms bottom-up, provides the background for our work. We are motivated, however, not only by this theoretical modelling, but also by the increasing number of observations which suggest substructure in the halo of the Galaxy (Eggen 1962; Rodgers, Harding \& Sadler 1981; Rodgers \& Paltoglou 1984; Ratnatunga \& Freeman 1985; Sommer-Larsen \& Christensen 1987; Doinidis \& Beers 1989; Arnold \& Gilmore 1992; Preston, Beers \& Shectman 1994; Majewski, Munn \& Hawley 1994; Majewski, Munn \& Hawley 1996). Detections of lumpiness in the velocity distribution of halo stars are becoming increasingly convincing, and the recent discovery of the Sagittarius dwarf satellite galaxy (Ibata, Gilmore \& Irwin 1994) is a dramatic confirmation that accretion and merging continue to affect the Galaxy. There have been a number of recent studies of the accretion and disruption of satellite galaxies (Quinn, Hernquist \& Fullagar 1993; Oh, Lin \& Aarseth 1995; Johnston, Spergel \& Hernquist 1995; Vel\'azquez \& White 1995, 1999; Sellwood, Nelson \& Tremaine 1998). 
Much of this work has been limited to objects which remain mostly in the outer parts of the Galaxy, which may be well represented by a spherical potential plus a small perturbation due to the disk (Johnston, Hernquist \& Bolte 1996; Kroupa 1997; Klessen \& Kroupa 1998). In this situation simple analytic descriptions of the disruption process, of the properties of the debris, etc. are possible (Johnston 1998). However, it is questionable whether such descriptions can be applied to most of the regions probed by past or current surveys of the halo, which are quite local: in this case the influence of the disk cannot be disregarded or treated as a small perturbation. Since formation models for the Galaxy should address the broader cosmological setting, we are naturally led to ask what should be the signatures of the different accretion events that our Galaxy may have suffered through its lifetime. Should this merging history be observable in star counts, kinematic or abundance surveys of the Galaxy? How prominent should such substructures be? How long do they survive, or equivalently, how well-mixed today are the stars which made up these progenitors? What can we say about the properties of the accreted satellites from observations of the present stellar distribution? Our own Galaxy has a very important role in constraining galaxy formation models, because we have access to 6-D information which is available for no other system. Observable structure which could strongly constrain the history of the formation of galaxies is just at hand. This paper will try to answer some of the questions just posed. We focus on the growth of the stellar halo of the Galaxy by disruption of satellite galaxies. We have run numerical simulations of this process, and have studied the properties of the debris after many orbits, long after the disruption has taken place. We analyse how the debris phase-mixes by following the growth of its entropy and the variations of the volume it fills in coordinate space. We also study the evolution of its kinematical properties. In order to model the characteristic properties of the disrupted system, such as its size, density and velocity dispersion, we develop a simple analytic prescription based on a linearized Lagrangian treatment of its evolution in action-angle variables. We apply our results to derive the observable properties of an accreted halo in the solar neighbourhood. We also analyse the clump of halo stars detected near the NGP by Majewski et al. (1994), and obtain an order of magnitude estimate for the initial properties of the progenitor system. Our paper is organized as follows. Section 2 presents our numerical simulations. In Section 3 we analyse the characteristics of the debris in these models, and in Section 4 we develop an analytic formalism to understand their properties. We apply this formalism to describe the characteristics of an accreted halo in this same section. In Section 5 we compare our modelling with the observations of Majewski et al. (1994). We leave for the last section the discussion of the results, their validity, and the potential of our approach for understanding the formation of our Galaxy. \section{The simulations} To study the disruption of a satellite galaxy of the Milky Way, we carry out N-body simulations in which the Galaxy is represented by a fixed, rigid potential and the satellite by a collection of particles. The self-gravity of the satellite is modelled by a monopole term as in White (1983) and Zaritsky \& White (1988). 
\subsection{Model} The Galactic potential is represented by two components: a disk described by a Miyamoto-Nagai (1975) potential, \begin{equation} \label{eq:disk} \Phi_{\rm disk} = - \frac{G M_{\rm disk}}{\sqrt{R^2 + (a + \sqrt{z^2 + b^2})^2}}, \end{equation} where $M_{\rm disk} = 10^{11}\, {\rm M_{\odot}}$, $a = 6.5\, {\rm kpc}$, $b = 0.26 \,{\rm kpc}$, and a dark halo with a logarithmic potential, \begin{equation} \label{eq:halo} \Phi_{\rm halo} = v^2_{\rm halo} \ln (r^2 + d^2), \end{equation} with $d = 12 \,{\rm kpc}$ and $v_{\rm halo} = 131.5 \, {\rm km\, s^{-1}}$. This choice of the parameters gives a circular velocity at the solar radius of \mbox{$210 \,{\rm km\, s^{-1}}$}, and of \mbox{$200\, {\rm km \, s^{-1}}$} at $\sim 100$ kpc. We have taken two different initial phase-space density distributions for our satellites: $i$) two spherically symmetric Gaussian distributions in configuration and velocity space of 1 kpc (5 kpc) width and $5 - 25$ ${\rm km \, s^{-1}}$ (20 ${\rm km \, s^{-1}}$) velocity dispersion, corresponding to masses of $\sim 5.9 \times 10^7 - 1.5 \times 10^9 \, {\rm M}_{\odot}$ ($4.7 \times 10^9 \, {\rm M}_{\odot}$); and $ii$) a Plummer (1911) profile \begin{equation} \label{eq:density_sat} \rho(r) = \rho_0 \left(1 + \frac{r^2}{r_0^2}\right)^{-5/2}, \end{equation} with $\rho_0 = 3 M/4 \pi r_0^3$, $M$ being the initial mass of the satellite and $r_0$ its scale length. In this second case, the distribution of initial velocities is generated in a self-consistent way with the density profile. For the characteristic parameters we chose $M = 10^{7} - 10^9 \, {\rm M_{\odot}}$ and $r_0 = 0.53 - 3.0 \, {\rm kpc}$, giving a one-dimensional internal velocity dispersion $\sigma_{1D} = 2.9 - 11.3 \, {\rm km \, s^{-1}}$. The force on particle $i$ due to the self-gravity of the satellite is represented by \begin{equation} \label{eq:self_grav} {\mathbf F}({\mathbf x}_{i}) = - \frac{G M_{\rm in}}{(r_{i}^2 + \epsilon^2)^{3/2}} {\mathbf r}_i, \end{equation} where ${\mathbf r}_i = {\mathbf x}_i - {\mathbf x}_c$, $r_i = |{\mathbf r}_i|$, and $M_{\rm in}$ is the mass of the satellite inside $r_i$; here ${\mathbf x}_c$ is the position of the expansion centre, defined by a test particle with the same orbital properties as those of the satellite. The value of the softening $\epsilon$ is $ 0.25 \,r_0$. The approximation for the self-gravity of the satellite may not be very accurate during the disruption process, where tidal forces are strong and elongations in the bound parts of the satellite are expected. However, because we are interested in what happens after many perigalactic passages, well after the satellite has been tidally destroyed, our conclusions are unaffected by the details of the disruption process. In total we ran sixteen different simulations, six of which we analyse and describe in full detail in Section 3. Some of the remaining simulations are used in Section 4 for comparison with the analytic predictions and the rest are briefly mentioned in the discussion. The characteristic properties of our six principal simulations are summarized in Table~1. They differ only in their orbital parameters and all initially have a Plummer profile and a mass of $10^7 \, {\rm M}_{\odot}$. We have imposed the restriction that the orbits pass close to the solar circle in order to be able to compare the results of the experiments with the known properties of the local stellar halo. In all cases the satellite was represented by $10^5$ particles of equal mass.
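For orientation, the accelerations derived from equations (\ref{eq:disk}) and (\ref{eq:halo}) take a simple closed form. The following is a minimal sketch (ours, not the actual simulation code), assuming units of kpc, ${\rm km\,s^{-1}}$ and ${\rm M_\odot}$, and a solar radius of 8.5 kpc:
\begin{verbatim}
# Minimal sketch (ours) of the rigid Galactic acceleration used to
# integrate the satellite particles; units: kpc, km/s, Msun.
import numpy as np

G = 4.30e-6                      # kpc (km/s)^2 / Msun
M_DISK, A, B = 1e11, 6.5, 0.26   # Miyamoto-Nagai disk parameters
V_HALO, D = 131.5, 12.0          # logarithmic halo parameters

def acceleration(pos):
    """Acceleration -grad(Phi) at pos = (x, y, z), in (km/s)^2/kpc."""
    x, y, z = pos
    R2 = x * x + y * y
    zb = np.sqrt(z * z + B * B)
    s3 = (R2 + (A + zb) ** 2) ** 1.5
    a_disk = -G * M_DISK / s3 * np.array([x, y, z * (A + zb) / zb])
    r2 = R2 + z * z
    a_halo = -2.0 * V_HALO**2 / (r2 + D * D) * np.array([x, y, z])
    return a_disk + a_halo

# circular speed v_c = sqrt(-R * a_R) at the solar radius:
R0 = 8.5
print(np.sqrt(-R0 * acceleration((R0, 0.0, 0.0))[0]))
\end{verbatim}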
\begin{figure} \label{fig1} \center{\psfig{figure=figure1.eps,height=18.5cm,width=7.48cm}} \caption[]{Projections of the orbits of the satellite on different orthogonal planes, where XY coincides with the plane of the Galaxy. All distances are in $\rm kpc$.} \end{figure} \begin{table} \caption{Orbital parameters for the different experiments.} \begin{tabular}{ccccc} \hline Experiment & pericentre & apocentre & $z_{\rm max}$ & period \\ & (kpc) & (kpc) & (kpc) &(Gyr) \\ \hline 1 & 10.9 & 51.5 & 25.0 & 0.69 \\ 2 & 13.5 & 93.1 & 69.1 & 1.23 \\ 3 & 5.0 & 51.5 & 5.1 & 0.64 \\ 4 & 9.2 & 96.5 & 12.0 & 1.24 \\ 5 & 0.5 & 45.5 & 30.1 & 0.56 \\ 6 & 6.0 & 37.0 & 24.8 & 0.48 \\ \hline \end{tabular} \end{table} In Figure~1 we show projections of orbits 1--6 in three orthogonal planes, where XY always coincides with the plane of the Galaxy. Notice that the plane of motion of a test particle on these orbits changes orientation substantially, showing that the non-sphericity induced by the disk significantly affects the motion of the satellite. While orbiting the Galaxy, the satellite loses all of its mass. As expected, the most dramatic effects take place during pericentric passages. The satellites do not survive very long, being disrupted completely after three passages. This means that, in our experiments, for any relatively low density satellite on an orbit which plunges deeply into the Galaxy with a period of 1 Gyr or less, the disruption itself occupies only a relatively small part of the available evolution time. \section{Properties of the debris: Simulations} \subsection{Entropy as a measure of the phase-mixing} The state of a collisionless system is completely specified by its distribution function $f({\mathbf x},{\mathbf v}, t)$. In making actual measurements, it is often more useful to work with the coarse-grained distribution function $\langle \, \!f\!\, \rangle $, which is the average of $f$ over small cells in phase-space. An interesting property of the coarse-grained distribution function is that it can yield information about the degree of mixing of the system (Tremaine, H\'enon \& Lynden-Bell 1986; Binney \& Tremaine 1987). In statistical mechanics the entropy is defined as \begin{equation} \label{eq:def_entropy} S = -\int d^3x ~d^3v ~ f({\mathbf x},{\mathbf v},t) \ln f({\mathbf x},{\mathbf v},t). \end{equation}\noindent Since the coarse-grained distribution function decreases as the system evolves towards a well-mixed state, an entropy calculated using $\langle \, \!f\!\, \rangle $ will increase, whereas one calculated using $f$ will remain constant, a consequence of the collisionless Boltzmann equation: $Df/Dt = 0$. We therefore quantify the mixing state of the debris by calculating its coarse-grained entropy as a function of time. We represent the coarse-grained distribution function by taking a partition in the 6-dimensional phase-space and counting how many particles fall in each 6-D box. Naturally the size chosen for the partition and the discreteness of the simulations will affect the result. We can quantify the expected discreteness noise in the following way. \begin{figure*} \label{fig2} \flushleft{\psfig{figure=figure2.eps,angle=90,height=10cm,width=16.4cm}} \caption[]{Evolution of the entropy of the system for the different experiments, as a function of time in (a), and scaled with the mixing time-scale in (b). The error in the scaled entropy is of the order of 0.06.} \end{figure*} The uncertainty in the entropy can be attributed to fluctuations in the number counts, which we can estimate as Poissonian, $\propto \sqrt{N_i}$ in each occupied cell. Therefore, the uncertainty in the entropy in each cell is \[ \Delta S_i \approx \frac{\Delta N_i}{N} \left(1 + \ln \frac{N_i}{N}\right) \approx \frac{\sqrt{N_i}}{N} \ln N \] for $N \gg 1$. The total uncertainty is thus \begin{equation} \label{eq:error_entr} \Delta S \approx \frac{\ln N}{\sqrt{N}}, \end{equation} which, for experiments with $10^5$ particles, is $0.04$.
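In practice the estimate reduces to a histogram in six dimensions. A minimal sketch (ours, not the actual analysis code; the cell sizes \texttt{dx} and \texttt{dv} are illustrative) is:
\begin{verbatim}
# Coarse-grained entropy from cell counts in 6-D phase-space
# (a sketch, ours; dx in kpc and dv in km/s are illustrative).
import numpy as np

def coarse_grained_entropy(pos, vel, dx=2.0, dv=20.0):
    """pos, vel: (N, 3) arrays.  Returns S up to an additive
    constant set by the cell volume."""
    cells = np.concatenate([np.floor(pos / dx),
                            np.floor(vel / dv)], axis=1)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    p = counts / counts.sum()          # estimate of <f> per cell
    return -np.sum(p * np.log(p))

# expected Poisson noise for N = 10^5 particles:
N = 100_000
print(np.log(N) / np.sqrt(N))          # ~0.04
\end{verbatim}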
In order to have a normalized measure of the mixing properties of the debris, we also computed the entropy of points equidistant in time along the corresponding orbit. After a very long integration, the orbit will fill the available region in phase-space, whose shape and size are determined by its integrals of motion. In this way, by comparing the entropy calculated for the debris with the `entropy of the orbit', we have a measure of how well mixed the debris is. We plot this `normalized' entropy in \mbox{Figure~2(a)} as a function of time. Note that the orbits which have the shortest periods show the most advanced state of mixing, but that this is not complete after a Hubble time. The degree of mixing basically depends on the range of orbital frequencies in the satellite; the mixing timescale goes essentially as $(\Delta \nu) ^{-1}$ (Merritt 1999). This means, for example, that a small satellite will disperse much more slowly than a larger one on the same orbit. On the other hand, a satellite set close to a resonance will mix on a much longer time scale. One can also imagine that, if there are fewer isolating integrals than degrees of freedom so that chaos might develop, a satellite located initially in a chaotic region will have a large spread $\Delta \nu$ because of the extreme sensitivity to the initial conditions. Therefore the mixing timescale (no longer a {\it phase}-mixing timescale) will be very short, since neighbouring orbits diverge exponentially rather than as power laws. If indeed the mixing rate is set by the spread in the orbital frequencies $\nu$ of the satellite, by normalising the time variable with this timescale we should be able to derive a unique curve for the entropy evolution, $S = S_{\rm max} f(t/T_{\rm mix})$. In what follows we shall assume that the behaviour of the system is regular, as seems to be the case for our experiments. Let us recall that any regular motion can be expressed as a Fourier series in three basic frequencies (Binney \& Spergel 1984; Carpintero \& Aguilar 1998). The motion is therefore a linear superposition of waves of the basic frequencies with different amplitudes. Terms in this expansion which have the largest amplitude will be the dominant terms and may be used to define three independent (basic) frequencies. By performing a spectral dynamics analysis as outlined by Carpintero \& Aguilar (1998) for ten randomly selected particles in our satellites in each experiment, we compute the frequencies associated with the largest amplitude terms in the $x$- (or $y$, since the problem is axisymmetric) and $z$-motions, and their dispersion around the mean.
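Schematically, the dominant frequency of each coordinate can be read off from the discrete Fourier transform of the sampled trajectory. The following sketch (ours) is a crude stand-in for the full spectral-dynamics analysis; \texttt{orbit\_x} is a placeholder for a sampled coordinate:
\begin{verbatim}
# Dominant frequency of a sampled coordinate (a sketch, ours; a
# crude stand-in for the spectral dynamics of Carpintero & Aguilar).
import numpy as np

def dominant_frequency(x, dt):
    """x: coordinate sampled every dt Gyr; frequency in 1/Gyr."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[np.argmax(spec[1:]) + 1]   # skip the zero mode

# e.g., over a sample of particles (orbit_x is a placeholder):
# nus = [dominant_frequency(orbit_x(part), dt) for part in sample]
# the dispersion of nus enters the definition of T_mix below.
\end{verbatim}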
We then define \begin{equation} T_{\rm mix}^{-1} = \min\{\sigma(\nu_{x}^{(1)}),\sigma(\nu_{x}^{(2)}), \sigma(\nu_{z}^{(1)}),\sigma(\nu_{z}^{(2)})\}. \end{equation} The curves obtained by scaling time with $T_{\rm mix}$ are shown in \mbox{Figure~2(b)} and they can be well fitted with the function \begin{equation} \label{eq:entr_fit} \frac{S}{S_{\rm max}} = 0.78 - 0.69 \,\exp\left(-27.03 \,\frac{t}{T_{\rm mix}}\right). \end{equation} The good fit and small dispersion confirm that mixing is governed primarily by the spread in frequency. \subsection{Configuration space properties} To analyse the spatial properties of the debris several Gyr after disruption, we have plotted smoothed isodensity surfaces and calculated different characteristic densities. In Figures~3 and 4 we show the density surface at approximately $10^{-6}$ times the initial density of the satellite. This encompasses most of the satellite's mass. \begin{figure} \label{fig3} \center{\psfig{figure=figure3.eps,height=11.7cm}} \caption[]{Isodensity surface of $10^{-6} \rho_0$ after 14 Gyr, seen from the Galactic plane, for the different experiments.} \end{figure} \begin{figure} \label{fig4} \center{\psfig{figure=figure4.eps,height=11.7cm}} \caption[]{Isodensity surface of $10^{-6} \rho_0$ after 14 Gyr, seen from the Galactic pole, for the different experiments.} \end{figure} This density surface hardly changes over the last 2 Gyr for experiments 3, 5 and 6, showing that the system has reached a stage where it fills most of its available 3-D coordinate space. The shape of this isodensity surface also gives a measure of how advanced the disruption is. The form of the accessible \mbox{3-D} configuration volume is basically a torus, defined by the apocentre, pericentre and the inclination of the orbit. In Figures~3 and 4 we clearly see that shape for experiment 6. Experiments 3 and 5 are in an intermediate state and still need to fill part of their tori. In the opposite limit, experiment 2 has filled only a small fraction of its available volume. All this is consistent with what was found using the entropy in the previous subsection. The characteristic extent of the debris is much larger than the initial size of the satellite. Moreover, debris with these properties may well span a very large solid angle on the sky, and so be poorly described as a stream in coordinate space. This is the principal difference between our own experiments and those in which the Galaxy is represented by a spherical potential. In the latter, the plane of motion of the satellite has a fixed orientation, and therefore all the particles have to remain fairly close to this plane, naturally giving a stream-like configuration. Late accretion events in the outer halo of the Galaxy will plausibly have this characteristic, as shown in Johnston et al. (1996) and Johnston (1998). However, similar behaviour should not be expected in the solar neighbourhood, or even as far as \mbox{10--15 kpc} from the Galactic centre, since at such radii no strong correlations are left in the spatial distribution of satellite particles. Any method which attempts to find moving groups purely by counting stars will probably fail in this regime. In Table~2, we present a summary of characteristic densities at different times, calculated by counting particles within spheres of $0.5 \, {\rm kpc}$ radius. The maximum density is achieved at the pericentre of the orbit, though most of the mass is distributed closer to the apocentre.
In all cases the maximum density is between three and four orders of magnitude lower than the initial density of the satellite, and the mean density of the debris is between four and five orders of magnitude lower. These values give another estimate of the degree of mixing of the debris. Note that, in accordance with the entropy computation, experiment 6 has the smallest characteristic densities, meaning that it has reached a rather evolved state, whereas experiment 2 has high densities in comparison to the rest. The maximum density in all of the experiments is comparable to (similar to, or up to an order of magnitude lower than) the local density of the Milky Way's stellar halo, though the sizes of the regions where this density is reached become fairly small, a few ${\rm kpc}^3$, as the evolution proceeds. \begin{table} \caption{Characteristic densities for the different experiments.} \begin{tabular}{cccc} \hline Experiment & time & $\rho_{\rm mean}$ & $\rho_{\rm max}$ \\ & (Gyr) & ($10^{2}\,{\rm M_{\odot}~kpc^{-3}}$) & ($10^{2}\,{\rm M_{\odot}~kpc^{-3}}$) \\ \hline \hline 1 & 5.0 & 67.0 & 886.2 \\ & 10.0 & 14.6 & 223.5 \\ & 12.5 & 7.0 & 152.8 \\ & 15.0 & 6.8 & 181.4 \\ \hline 2 & 5.0 & 84.7 & 857.5 \\ & 10.0 & 26.5 & 376.2 \\ & 12.5 & 11.5 & 202.4 \\ & 15.0 & 9.7 & 288.4 \\ \hline 3 & 5.0 & 41.5 & 437.4 \\ & 10.0 & 8.9 & 72.6 \\ & 12.5 & 8.7 & 181.4 \\ & 15.0 & 6.9 & 177.6 \\ \hline 4 & 5.0 & 40.8 & 446.9 \\ & 10.0 & 5.9 & 99.3 \\ & 12.5 & 5.7 & 171.9 \\ & 15.0 & 5.1 & 156.6 \\ \hline 5 & 5.0 & 36.4 & 996.9 \\ & 10.0 & 10.9 & 210.1 \\ & 12.5 & 6.1 & 183.3 \\ & 15.0 & 5.7 & 213.9 \\ \hline 6 & 5.0 & 13.8 & 403.0 \\ & 10.0 & 4.3 & 82.1 \\ & 12.5 & 4.3 & 95.5 \\ & 15.0 & 3.4 & 63.0 \\ \hline \end{tabular} \end{table} \subsection{Velocity space properties} Let us now focus on the characteristics of the debris in velocity space. We divided the 3-D coordinate space into boxes and analysed the kinematical properties of the particles inside each box. Figure~5 shows an example. The scatter diagrams indicate that there is a strong correlation between the different components of the velocity vector inside any given box. Notice also the large velocity range in each component when close to the Galactic centre. This shows that the debris can appear kinematically hot. As we shall see, this results from a combination of multiple streams within a given box (clearly visible in Figure~5) and of strong gradients along each stream. At a given point on a particular stream the dispersions are usually very small. \begin{figure*} \label{fig5} \center{\psfig{figure=figure5.eps,height=12cm,width=11.6cm}} \caption[]{Scatter plots of the different velocity components for stars in boxes of $\sim$ 3 kpc on a side at different locations for experiment 6 at 13.5 Gyr. Similar characteristics are observed in all our experiments.} \end{figure*} \section{Properties of the debris: Analytical approach} In this section we will develop an analytic formalism to understand and describe the spatial and kinematical properties of the stream. Let us recall that, because the disruption of the satellite occurs very early in its history, the stars that were once part of it behave as test particles in a rigid potential for most of the evolution. One of the distinguishing properties of this ensemble of particles is that it initially had a very high density in phase-space and, by virtue of Liouville's theorem, this remains true at all times. At late times, however, this is no longer reflected by a strong concentration in configuration space.
This evolution can be understood in terms of a mapping from the initial configuration to the final configuration, which we will describe by using the adiabatic invariants, namely the actions. \subsection{Action-Angle variables and Liouville's theorem} \label{sec:general} Let $H = H({\mathbf{q}},{\mathbf{p}})$ be the (time-independent) Hamiltonian of the problem and $({\mathbf{q}},{\mathbf{p}})$ a set of canonical coordinates. We wish to transform the initial set $({\mathbf{q}},{\mathbf{p}})$ to one in which the evolution of the system is simpler, for example, where all the momenta $P_i$ are constant. To meet this last condition, it is sufficient to require that the new Hamiltonian be independent of the new coordinates $Q_i$: $H = H({\bf P}) = E$. The equations of motion then become \[ \dot{Q_i} = \nu_i, \qquad \dot{P_i} = 0,\] \noindent with solutions \[ Q_i = Q_i^0 + \nu_i t, \qquad P_i = P_i^0. \] The generating function that produces this transformation is known as Hamilton's characteristic function $W({\mathbf{q}},{\mathbf{P}})$, and satisfies the Hamilton-Jacobi partial differential equation: \[ H\left(q, \frac{\partial W}{\partial q}\right) = E. \] The solution to this equation involves $N$ constants of integration $\alpha_i$ (including $E$) for a system with $N$ degrees of freedom, i.e. a $2N$-dimensional phase-space. Therefore, the new momenta {\bf P} may be chosen as functions of these $N$ constants of integration. A particularly simple situation occurs if the potential is separable in the original coordinate set $({\mathbf{q}},{\mathbf{p}})$. The characteristic function may then be expressed as $W = \sum_i W_i(q_i,\alpha_1 \ldots \alpha_N)$, and the Hamilton-Jacobi equation breaks up into a system of $N$ independent equations of the form: \[ H_i\left(q_i, \frac{\partial W_i}{\partial q_i}, \alpha_1 \ldots \alpha_N\right) = \alpha_i, \] each of which involves only one coordinate and the partial derivative of $W_i$ with respect to that coordinate. The transformation relations between the original and new sets of variables are \[ p_i = \frac{\partial W}{\partial q_i}, \qquad Q_i = \frac{\partial W}{\partial P_i}, \] and each component of the characteristic function is given by \begin{equation} \label{eq:W} W_i(q_i,\alpha_1 \ldots \alpha_N) = \int dq_i' \, p_i(q_i',\alpha_1 \ldots \alpha_N). \end{equation} (For more details see, e.g., Goldstein 1953.) The action-angle variables are a set of coordinates in which the evolution of a system of particles takes a particularly simple form. They are especially useful in problems where the motion is periodic. The actions are functions of the constants $\alpha_i$ and are defined for a set of coordinates $({\mathbf{q}},{\mathbf{p}})$ as \begin{equation} \label{eq:defJphi} J_i= \frac{1}{2 \pi} \oint dq_i\, p_i, \end{equation} and their conjugate coordinates, the angles, are \begin{equation} \phi_i= \frac{\partial W}{\partial J_i}. \end{equation} The evolution of the dynamical system thus becomes: \begin{eqnarray} \label{eq:evol} \phi_i\!\!\!& = &\!\!\! \phi_i^0 + \Omega_i({\mathbf{J}})\, t, \nonumber\\ J_i \!\!\!& = &\!\!\! J_i^0 = {\rm constant}.
\end{eqnarray} \subsubsection{The evolution of the distribution function} Let us assume that the initial distribution function of the ensemble of particles is a multivariate Gaussian in configuration and velocity space \[ f({\bf x}, {\bf v},t^0) = f_0 \exp{\left[-\sum_{i=1}^3 \frac{(x_i- \bar{x}_i^0)^2} {2 \sigma_x^2} \right]} \exp{\left[-\sum_{j=1}^3\frac{(v_j-\bar{v}_j^0)^2}{2 \sigma_v^2} \right]}, \] which we can also express using matrices as \begin{equation} \label{eq:arg_ini} f({\bf x}, {\bf v},t^0) = f_0 \exp{\left[-\frac{1}{2} {{\bf \Delta}_\varpi^0}^{\dagger} {\bf \sigma}_\varpi^0 {\bf \Delta}_\varpi^0 \right]}. \end{equation} Here $t^0$ denotes the initial time. ${\bf \Delta}_\varpi^0$ is a 6-dimensional vector, with three spatial and three velocity components, and ${{\bf \Delta}_\varpi^0}^\dagger$ is obtained by transposing ${\bf \Delta}_\varpi^0$. Explicitly, ${\Delta_\varpi^0}_i = x_i - \bar{x}_i^0$ for $i=1..3$ and ${\Delta_\varpi^0}_{i} = v_j - \bar{v}_j^0$ for $i=j+3=4..6$ in a Cartesian coordinate system. The matrix ${\bf \sigma}_\varpi^0$ is diagonal, with ${\sigma_\varpi^0}_{ii} = 1/\sigma_x^2$ for $i=1..3$, and ${\sigma_\varpi^0}_{ii} = 1/ \sigma_v^2$ for $i=4..6$. As we shall see, the matrix formulation is particularly useful for studying the evolution of the distribution of particles of the system. At the initial time, we perform a coordinate change from Cartesian to action-angle variables. Since the particles are initially strongly clustered in phase-space, a linearized transformation can be used to obtain the distribution function of the whole system in the (${\bf \phi},\,{\bf J}$) variables. We express this coordinate transformation as \begin{equation} {\bf \Delta}_\varpi^0 = {\bf T}^0 {\bf \Delta}_{w}^0, \qquad \mbox{with} \qquad T_{ij}^0 = \frac{\partial \varpi_i}{\partial w_j} \bigg\vert_{\bar{{\bf x}}^0, {\bar{\bf v}}^0}, \end{equation} where ${\bf \varpi} = (\bf{x},\,\bf{v})$, $w = (\bf{\phi},\,\bf{J})$ and the elements of the matrix ${\bf T}^0$ are evaluated at the central point of the system, around which the expansion is performed. By substituting this in Eq.~(\ref{eq:arg_ini}), and by defining $ {\bf \sigma}_w^0 = {{\bf T}^0}^{\dagger} {\bf \sigma}_\varpi^0 {\bf T}^0$, the distribution function in action-angle coordinates becomes \begin{equation} \label{eq:arg_ini_aa} f({\bf \phi}, {\bf J},t^0) = f_0 \exp{ \left[-\frac{1}{2}{{\bf \Delta}_w^0}^{\dagger}{\bf \sigma}_w^0 {\bf \Delta}_w^0\right]}, \end{equation} that is, it is also a multivariate Gaussian, but with dispersions now given by $\sigma_w^0$. The deviation of any individual orbit from the mean orbit, defined by the centre of mass or the central particle of the system, $\Delta_{w_i} = w_i - \bar{w_i}(t)$, may in turn be expressed in terms of the initial action-angle variables as \begin{equation} \label{eq:j} J_i - \bar{J}_i = J_i^0 - \bar{J}_i^0, \end{equation} and \begin{equation} \label{eq:omega'} \phi_i - \bar{\phi}_i(t) = \phi_i^0 - \bar{\phi}_i^0 + \frac{\partial \Omega_i}{\partial J_k}\bigg\vert_{\bar{\bf J}} (J_k - \bar{J}_k)\, t, \end{equation} where we expanded the difference in the frequencies to first order in $J_k - \bar{J}_k$. Eqs.~(\ref{eq:j}) and (\ref{eq:omega'}) can also be written as \begin{equation} {\bf \Delta}_w(t) = {\bf \Theta}^{-1}(t) {\bf \Delta}_w^0, \end{equation} where ${\bf \Theta}(t)$ is the block matrix: \begin{equation} \label{eq:matrix_th} {\bf \Theta}(t) = \left[\begin{array}{cc} {\bf{\cal I}_3} & - {\bf \Omega'} t \\ {\bf 0} & {\bf{\cal I}_3} \end{array}\right].
\end{equation} ${\bf{\cal I}_3}$ here is the identity matrix in 3-D, and ${\bf \Omega'}$ represents a $3\times3$ matrix whose elements are $\partial \Omega_i/\partial J_j$. The distribution function in action-angle space in the neighbourhood of the central particle at any point of its orbit $(\bar{{\bf \phi}}(t), \bar{\bf J})$ is then \begin{equation} \label{eq:df-aat} f({\mathbf \phi}, {\bf J},t) = f_0 \exp{ \left[-\frac{1}{2}{\bf \Delta}_w^{\dagger}(t) {\bf \sigma}_w(t) {\bf \Delta}_w(t)\right]}, \end{equation} with ${\bf \Delta}_w(t) = ({\bf \phi} - \bar{\bf \phi}(t), {\bf J} - \bar{\bf J})$ and \begin{equation} {\bf \sigma}_w(t)= {{\bf\Theta}(t)}^{\dagger} {\bf \sigma}_w^0{\bf \Theta}(t), \end{equation} or in terms of the original coordinates $ {\bf \sigma}_w(t)= ({\bf T}^0 {\bf \Theta}(t))^{\dagger} {\bf \sigma}_\varpi^0 ({\bf T}^0 {\bf \Theta}(t)). $ \vspace{0.3cm} {\it Example: 1-D Case.} To understand more clearly what the distribution function in Eq.~(\ref{eq:df-aat}) tells us about the evolution of the system, we consider the 1-D case. The initial distribution function becomes: \[f(\phi, J,t^0)= f_0 \exp\left[-\frac{(\phi-\bar{\phi}^0)^2} {2 \sigma_\phi^2} - \frac{(J-\bar{J})^2}{2 \sigma_J^2} - (\phi-\bar{\phi}^0)(J-\bar{J}) C_{\phi J}\right], \] where $C_{\phi J}$ denotes the initial correlation\footnote{$C_{\phi J}$ is not the correlation coefficient, usually denoted as $\rho$. They are related through $\rho = \frac{- C_{\phi J} \sigma_\phi^2 \sigma_J^2} {1 - C_{\phi J}\sigma_\phi^2 \sigma_J^2}$.} between $\phi$ and $J$. After considering the time evolution of the system (as in Eq.~(\ref{eq:omega'})) we find \[ f(\phi, J,t) = f_0 \exp \left[-\frac{(\phi-\bar{\phi}(t))^2} {2 \sigma_\phi^2} - (J-\bar{J})^2 \left(\frac{1}{2 \sigma_J^2} + \frac{{\Omega'}^2 t^2} {2 \sigma_\phi^2} - C_{\phi J}\,\Omega' t\right) - (\phi-\bar{\phi}(t))(J-\bar{J}) \left(C_{\phi J} - \frac{{\Omega'} t}{\sigma_\phi^2}\right)\right],\] where $\Omega' = d \Omega/d J$. This means that the dispersion in the $J$-direction effectively decreases in time and the magnitude of the covariance between $\phi$ and $J$ increases with time. The system becomes an elongated ellipsoid in phase-space as time passes, as a consequence of the conservation of the local phase-space density. This evolution is illustrated in Figure~6. \begin{figure*} \label{fig:aa} \center{\psfig{figure=figure6.eps,height=8cm,width=15.5cm}} \caption{1-D graphical interpretation of Liouville's theorem and the evolution of the system in phase space. The system is initially a Gaussian in action-angle space, with no correlations between $\phi$ and $J$. As time passes, the system evolves into an ellipsoidal configuration, with principal axes that are no longer aligned with the action or the angle directions. After some time, the system wraps around in the angles, giving rise to phase-mixing: at the same phase we observe more than one stream, each with a small variance in the action due to the conservation of the area in phase-space.} \end{figure*} \subsubsection{The distribution function in observable coordinates} To compute the characteristic scales of a system that evolved from an initial clumpy configuration, such as satellite debris, we have to relate the dispersions in action-angle variables to dispersions in a set of observable coordinates. The transformation from the action-angle coordinate system to the observable $({\bf x}, {\bf v})$ has to be performed locally, since we generally cannot express in a simple way the global relation between the two sets of variables.
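Before constructing this local transformation, we note that the 1-D evolution of Figure~6 is easy to verify numerically. The following toy sketch (Python with NumPy; the linear frequency profile $\Omega(J) = J$ is an arbitrary illustrative choice, not a model used elsewhere in this paper) evolves an initially uncorrelated Gaussian cloud in $(\phi, J)$ and confirms that the action dispersion is conserved while the angle spread and the $\phi$--$J$ covariance grow linearly with $t$:
\begin{verbatim}
# Toy illustration of the 1-D phase-mixing of Figure 6 (not the paper's
# code): phi_i(t) = phi_i^0 + Omega(J_i) t, with the arbitrary choice
# Omega(J) = J so that Omega' = 1.
import numpy as np

rng = np.random.default_rng(1)
phi0 = rng.normal(0.0, 0.1, 100000)   # initial angles
J = rng.normal(1.0, 0.05, 100000)     # actions, conserved
for t in (0.0, 10.0, 100.0):
    phi = phi0 + J * t
    c = np.cov(phi, J)
    print(t, c[0, 0], c[1, 1], c[0, 1])
# var(phi) grows as (sigma_J t)^2, var(J) stays constant, and the
# covariance grows as sigma_J^2 t, in line with the expressions above.
\end{verbatim}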
Because the system has expanded so much along some directions in phase-space, the transformation from (${\bf \phi}$, ${\bf J}$) to $({\bf x}, {\bf v})$ has to be done point to point along the orbit. This transformation is given by the inverse of ${\bf T}$ at time $t$: \begin{equation} T^{-1}_{ij} = \frac{\partial w_i}{\partial \varpi_j} \bigg\vert_{{\bf x}, {\bf v}}, \end{equation}\noindent where the derivatives are now evaluated at the particular point of the orbit around which we wish to describe the system in $({\bf x}, {\bf v})$ coordinates. In particular, if the expansion is performed around $(\bar{\bf \phi}(t), \bar{\bf J})$ then \begin{equation} {\bf \Delta}_{w}(t) = {\bf T}^{-1} {\bf \Delta}_\varpi(t), \end{equation} and the distribution function may be expressed in the region around $\bar{\bf \varpi} = (\bar{\bf x}, \bar{\bf v})$ as \begin{equation} \label{eq:df_qpt'} f({\bf x}, {\bf v},t) = f_0 \exp{\left[-\frac{1}{2} {\bf \Delta}_\varpi(t)^{\dagger} {\bf \sigma}_{\bf \varpi}(t) {\bf \Delta}_\varpi(t)\right]}, \end{equation} with \begin{equation} {{\bf \Delta}_\varpi}_i(t) = \left\{ \begin{array}{cc} x_i - \bar{x}_i(t), & i=1..3, \\ v_j - \bar{v}_j(t), & i = j+3 =4..6, \end{array}\right. \end{equation} and \begin{equation} \label{eq:sigma_final} {\bf \sigma}_{\bf \varpi}(t) = ({\bf T}^0 {\bf \Theta}(t) {\bf T}^{-1})^{\dagger} {\bf \sigma}_{\bf \varpi}^0 ({\bf T}^0 {\bf \Theta}(t) {\bf T}^{-1}). \end{equation} We find once more that, locally, the distribution function is a multivariate Gaussian, where the variances and covariances depend on their initial values, on the time evolution of the system and on the position along the orbit where the system centre is located at time $t$. If we wish to describe the properties of a group of particles located at a point ${\bf \tilde w}$ different from the central particle (i.e. the expansion centre does not coincide with the satellite centre at time $t$), a slightly different approach must be followed. The region of interest is then ${\bf \Delta}_{w}(t) = w' - \bar w(t) = (w' - {\tilde w}) - (\bar{w}(t) - {\tilde w}) = {\bf \Delta}'_{w} - {\bf \tilde{D}}(t)$, with ${\bf \tilde{D}}(t) = \bar{w}(t) - {\tilde w}$. We substitute this into Eq.~(\ref{eq:df-aat}) and write \begin{equation} \label{eq:df-aat_n} f({\bf \phi}, {\bf J},t) = f_0 \exp{ \left[-\frac{1}{2} \left({\bf \Delta}'_w - \tilde{\bf D}(t)\right)^{\dagger} {\bf \sigma}_w(t) \left( {\bf \Delta}'_w - \tilde{\bf D}(t)\right) \right]}, \end{equation} or equivalently \begin{equation} \label{eq:df-aat_n1} f({\bf \phi}, {\bf J},t) = f_0'(t) \exp \left[-\frac{1}{2}{{\bf \Delta}'_w}^{\dagger} {\bf \sigma}_w(t) {\bf \Delta}'_w + \frac{1}{2} {{\bf \Delta}'_w}^{\dagger} {\bf \sigma}_w(t) \tilde{\bf D}(t) + \frac{1}{2} \tilde{\bf D}(t)^\dagger {\bf \sigma}_w(t) {\bf \Delta}'_w \right], \end{equation} where $f_0'(t) = f_0 \exp{[-1/2 \,\tilde{\bf D}(t)^{\dagger} {\bf \sigma}_w(t)\tilde{\bf D}(t)]}$. We may now express \mbox{${\bf \Delta}'_{w} = {\bf T'}^{-1} {\bf \Delta}'_\varpi$}, since the transformation is local again.
The distribution function becomes \begin{equation} \label{eq:df-qpt_1} f({\bf x'}, {\bf v'},t) = \tilde{f}_0(t) \exp{ \left[-\frac{1}{2}({\bf \Delta}'_\varpi - {\bf \delta}(t))^{\dagger} \sigma_{\bf \varpi'}(t) ({\bf \Delta}'_\varpi - {\bf \delta}(t)) \right]}, \end{equation} with \begin{equation} {\bf \delta}(t) = {\bf T}'\tilde{\bf D}(t), \qquad \qquad {\bf \sigma}_{\bf \varpi'}(t) = ({\bf T}'^{-1})^{\dagger}{\bf \sigma}_w(t) {\bf T}'^{-1}, \end{equation} and $\tilde{f}_0(t) = f_0'(t)\exp{[-1/2\,({\bf T}^{-1}{\bf \delta}(t))^\dagger {\bf \sigma}_w(t){\bf T}^{-1}{\bf \delta}(t)]}$. This means that the local distribution function is a Gaussian centered around ${\bf x_{m}} = {\bf {\tilde x}} + {\bf \delta}(t)$, which in general will not be very different from ${\bf \tilde x}$, with variances given by the elements of ${\bf \sigma}_{\bf \varpi'}(t)$. Thus the same type of behaviour as derived for the region around the system centre also holds far from it. The formalism developed here is completely general, but the actions will not always be easy to compute. As we mentioned briefly at the beginning of this section, this depends mainly on whether the potential is separable in some set of coordinates. We focus on the spherical case and a simple axisymmetric potential in the next section to show how this procedure can be used to describe the characteristic scales of the debris. We refer the reader to the Appendix for details of the computation. \subsection{Spherical Potential} \subsubsection{Analytic predictions} For a spherical potential $\Phi(r)$, the Hamiltonian is separable in spherical coordinates and depends on the actions $J_\varphi$ and $J_\theta$ only through the combination $J_\varphi + J_\theta = L$. This means that the problem can be reduced to 2-D, and so we may choose a system of coordinates which coincides with the plane of motion of the satellite centre. The position of a particle is given by its angular ($\psi$) and radial ($r$) coordinates on that plane. Thus \begin{eqnarray} \label{eq:jr} L \!\!\!& = &\!\!\! J_\psi = p_\psi, \nonumber \\ J_r \!\!\!& = &\!\!\! \frac{1}{\pi} \int_{r_1}^{r_2} dr \frac{1}{r} \sqrt{2 (E - \Phi(r))\, r^2 - L^2}, \end{eqnarray} where $L$ is the total angular momentum of the particle, $E$ its energy and $r_1$ and $r_2$ are the turning points in the radial direction of motion. The frequencies of motion and their derivatives, needed to compute the matrix ${\bf \Theta}(t)$ and to obtain the time evolution of the distribution function, can be obtained by differentiating the implicit function $g = g(E, L, J_r) \equiv 0$ defined by Eq.~(\ref{eq:jr}). Let us assume that the variance matrix\footnote{Strictly speaking, ${\bf \sigma}$ is the inverse of the covariance matrix. However, we will loosely refer to ${\bf \sigma}$ as the variance matrix.} in action-angle variables is diagonal at $t=0$. This simplifies the algebraic computations and, since we are only trying to calculate late-time behaviour, this assumption does not have a major influence on our results. As shown in the previous section, the evolution of the system in action-angles is obtained through ${\bf \sigma}_w(t) = {\bf \Theta}(t)^\dagger {\bf \sigma^0}_w {\bf \Theta}(t)$. We find the properties of the debris in configuration and velocity space by transforming the action-angle coordinates $w = (\bf{\phi}, \bf{J})$ locally to the separable $\bf{\omega} = (\bf{x}, \bf{p})$, and then by transforming from $\bf{\omega} = (\bf{x}, \bf{p})$ to $\varpi = (\bf{x}, \bf{v})$.
That is, ${\bf \sigma_{\varpi}}(t) = {\bf T'}^\dagger {\bf \sigma}_w(t){\bf T'}$, with $T' = T_{w \rightarrow {\bf \omega}} T_{p \rightarrow v}$. The diagonalization of the variance matrix $\sigma_{\varpi}(t)$ yields the values of the dispersions along the principal axes and their orientation. It can be shown that {\em two of the eigenvalues increase with time}, whereas the other {\em two decrease with time}. This is directly related to what happens in action-angle variables: as we have shown for the 1-D case, the system becomes considerably elongated along an axis which, after a very long time, is parallel to the angle direction. For 2-D (\mbox{3-D}), the evolution in action-angles can also be divided into two (three) independent motions (whether or not the Hamiltonian is separable), so that along each of these directions this same effect can be observed. The directions of expansion and contraction are linear combinations of the four axes $({ \breve{\epsilon}_\psi}, \breve{\epsilon}_r, \breve{\epsilon}_{v_\psi}, \breve{\epsilon}_{v_r})$ and, generally, none is purely spatial or a pure velocity direction. To understand the properties of the debris in observable coordinates, we will examine what happens around a particular point in configuration space. This is equivalent to studying the velocity part of the variance matrix: $\sigma_{\varpi}(v)$. For example, by diagonalising the matrix $\sigma_{\varpi}(v)$ we obtain the principal axes of the velocity ellipsoid at the point $\bar{\bf x}$. Its eigenvalues are the roots of $\det[{\bf \sigma}_{\varpi}(v) - \lambda {\bf{\cal I}}] = 0$. For $t \gg t_{\rm orb}$, \begin{eqnarray} \lambda_1 \lambda_2 \!\!\!& = &\!\!\! t^4 \, (\Omega'_{11} \Omega'_{33}-{\Omega'_{13}}^2)^2 r^2 \frac{p_r^2}{\Omega_r^2} \sigma_{11} \sigma_{33}, \nonumber \\ \lambda_1 + \lambda_2 \!\!\!& = &\!\!\! t^2 r^2 \left[ \sigma_{11} \left(\Omega'_{11} - \frac{\Omega'_{13}}{\Omega_r} \left(\Omega_\psi - \frac{L}{r^2}\right) \right)^2 + \sigma_{33} \left(\Omega'_{13} - \frac{\Omega'_{33}}{\Omega_r} \left(\Omega_\psi - \frac{L}{r^2}\right)\right)^2 \right] \nonumber \\ &+&t^2 \left[ \sigma_{11}{\Omega'_{13}}^2 + \sigma_{33}{\Omega'_{33}}^2\right]\frac{p_r^2}{\Omega_r^2}, \nonumber \end{eqnarray} where the subindices 1 and 3 represent $\psi$ and $r$ respectively, and $\sigma_{ii} = 1/\sigma_{\phi_i}^2$, the initial variance in the angles. Since $\sigma(v_i) = \sqrt{1/\lambda_i}$, both directions in velocity space have, on average, decreasing dispersions. So far we have not described how the debris spreads in the direction transverse to the plane of motion: $\breve{\epsilon}_\vartheta$ and $\breve{\epsilon}_{v_\vartheta}$. This is because we reduced the problem to 2-D in configuration space. However, the problem is not really 2-dimensional, since the system has a finite width in the direction transverse to the plane of motion. Now that we have understood the dynamics of the reduced problem, the generalization to \mbox{3-D} is straightforward. If the variance matrix is initially diagonal in action-angle variables, then the dispersions along $\phi_\vartheta$ and $J_\vartheta$ do not change, because the frequency of motion in the transverse direction is zero. Thus the velocity dispersion and width of the stream also remain unchanged in the direction perpendicular to the orbital plane.
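Before moving on, we note that the radial-action integral of Eq.~(\ref{eq:jr}), and the frequencies obtained from it, are straightforward to evaluate numerically. The sketch below (Python with SciPy; the logarithmic potential and its parameters are illustrative assumptions, not necessarily those of Eq.~(\ref{eq:halo})) locates the radial turning points and performs the quadrature; $\Omega_r$ then follows by finite-differencing, e.g. $\Omega_r = (\partial J_r/\partial E)^{-1}$ at fixed $L$.
\begin{verbatim}
# Sketch (not the paper's code) of the radial action, Eq. (jr), for a
# spherical logarithmic potential with illustrative parameters.  It
# assumes E and L correspond to a bound orbit with two turning points.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

v_h, d = 123.0, 12.0                    # km/s and kpc (assumed values)
def Phi(r):
    return v_h**2 * np.log(r**2 + d**2)

def pr2(r, E, L):                       # radial momentum squared
    return 2.0 * (E - Phi(r)) - (L / r)**2

def turning_points(E, L, rmin=1e-2, rmax=1e3, n=4000):
    r = np.geomspace(rmin, rmax, n)
    s = pr2(r, E, L)
    i = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    return [brentq(pr2, r[j], r[j + 1], args=(E, L)) for j in i[:2]]

def J_r(E, L):                          # Eq. (jr)
    r1, r2 = turning_points(E, L)
    f = lambda r: np.sqrt(max(pr2(r, E, L), 0.0))
    return quad(f, r1, r2)[0] / np.pi
\end{verbatim}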
By integrating Eq.~(\ref{eq:df_qpt'}) with respect to the velocities, we compute the density at the point $\bar{\bf x}$ \begin{equation} \rho({\bf \bar{x}},t) = \int_{\Delta v_r}\int_{\Delta v_\varphi} \int_{\Delta v_\theta} dv_\theta \,dv_\varphi \,dv_r \, f({\bf \bar{x}}, {\bf v},t). \end{equation} For $t \gg t_{\rm orb}$, \begin{equation} \label{eq:rho_sph} \rho({\bf \bar{x}},t)= \frac{(2 \pi)^{3/2} f_0 \sigma_{\phi_3}} {|\Omega'_{11} \Omega'_{33}-{\Omega'_{13}}^2|} \left[\sqrt{\left(\frac{1}{\sigma_{\phi_1}^2}+ \frac{1}{\sigma_{\phi_2}^2}\right) \left(\frac{1}{\sigma_{J_1}^2} + \frac{1}{\sigma_{J_2}^2}\right)}\right]^{-1} \frac{\Omega_r L}{r^2 \sin\theta |p_r p_\theta|} \frac{1}{t^2}, \end{equation} where $\sigma_X$ is the initial dispersion in the quantity $X$. This equation shows that the density at the central point of the system decreases, on the average, as $1/t^2$. It tends to be larger near pericentre since it depends on radius as $1/r^2$; moreover it diverges at the turning points of the orbit. Even though the system evolves smoothly in action-angle variables, when this behaviour is projected onto observable space, singularities arise associated with the coordinate transformation. In action-angle variables the motion is unbounded, whereas in configuration space the particle finds itself at a `wall' near the turning points. This divergence shows up in the elements of the transformation matrix $T_{w \rightarrow \varpi}$ (Eq.~(\ref{eq_aptransf_el})), some of which tend to zero, while others diverge keeping the matrix non-singular. Because of the secular evolution of the dispersions, the intensity of the spikes will decrease with time. They are generally stronger at the pericentre of the orbit than at the apocentre, because of the $1/r^2$ dependence of the density. A direct consequence of the secular evolution is that the characteristic sizes of the system, the width and length of the stream, will increase linearly with time, reflecting the conservation of the full 6-D phase-space density. At the turning points one of these scales becomes extremely small. In Figure~7 we plot the predicted behaviour of the dispersions along the principal axes of the velocity ellipsoid as a function of time. We have chosen for the initial conditions a spherically symmetric Gaussian in configuration and velocity space. We follow the evolution of the variance matrix and, in particular, of the velocity dispersions along the three principal axes at the positions of the central particle. In all panels we can clearly see the periodic behaviour associated with the orbital phase of the central particle, superposed on the secular behaviour related to the general expansion of the system along the two directions in the orbital plane. The dispersion in the third panel is on average constant: it is in the direction perpendicular to the plane of motion. Its periodic behaviour is due to the fact that we did not start with a diagonal matrix in action-angles. The initial transformation from $({\bf x},{\bf v})$ to $({\bf \phi}, {\bf J})$ produces cross terms between all three directions. As the system evolves, and we project again onto configuration space, our 6-D ellipsoid rotates continually, producing a contribution in the direction perpendicular to the orbital plane which varies with the frequencies $\Omega_r$ and $\Omega_\theta$. 
By fitting $\sigma(v)/\sigma_0(v) = a/(1 + t/t_0)$, we find for the velocity dispersion in the first panel $a = 1.5$ and $t_0 = 0.6$ Gyr, whereas for the dispersion in the second panel $a = 2.6$ and $t_0 = 0.1$ Gyr. \begin{figure*} \label{f:vel} \flushleft{\psfig{figure=figure7.eps,height=12.0cm,width=15.8cm}} \caption[]{Time evolution of the velocity dispersions along the principal axes, computed as outlined in Section~4.2, for the logarithmic spherical potential of Eq.~(\ref{eq:halo}). Two of the dispersions decrease with time as $1/(1 + t/t_0)$ (dotted curve), whereas the third one is constant on the average. The periodic variations are due to the combination of the radial and angular oscillations, as described in the text. The last panel shows the product of the three dispersions, which is proportional to the density (full curve). The radial oscillation is shown (dotted curve) so that the occurrence of density spikes can be compared with the location of the turning points of the orbit.} \end{figure*} In the last panel we show the behaviour of the product of the three dispersions, which is proportional to the density (see Eq.~(\ref{eq:rho_sigma})). Note that, since two of the velocity dispersions have decreased by approximately a factor of ten, the density has done so by a factor of a hundred. Note also the decrease in the amplitude of the spikes and the good correlation of these with the turning points of the orbit. \subsubsection{Comparison to the simulations} In order to assess the limitations of our approach, we will compare our predictions with simulations of satellites with and without self-gravity. We first consider what happens to a satellite with no self-gravity moving in a spherical logarithmic potential. We take two different sets of initial properties for the satellites: $1 \,{\rm kpc}$ width and $\sigma_{1D} = 5 \,{\rm km\,s^{-1}}$, corresponding to an initial mass of $\sim 5.9 \times 10^7 \,{\rm M}_{\odot}$; and $5 \,{\rm kpc}$ width and $\sigma_{1D} = 20\, {\rm km\,s^{-1}}$, corresponding to $M \sim 4.7 \times 10^9 \,{\rm M}_{\odot}$. Both begin as spherically symmetric Gaussians in coordinate and velocity space. We launch them on the same orbit so that we can directly study the effects of the change in size. What observers measure are not the velocity dispersions or densities of a stream at a particular point, but mean values given by a set of stars in a finite region. We can estimate the effects of this smoothing by comparing our analytic predictions with the simulations. In the upper panel of Figure~8 we show the time evolution of the density (normalized to its initial value) for the small satellite. The full line represents our prediction and the stars correspond to the simulation. We simply follow the central particle of the system as a function of time, and count the number of particles contained in a cube of 1 kpc on a side surrounding it. Triangles represent the number density from an 8 times larger volume (2 kpc on a side). The agreement between the predictions and the estimated values from the simulations is very good. The representation of a continuous field with a finite number of particles introduces some noise which, together with the smoothing, is responsible for the residual disagreement. Note, however, how well the simulated density spikes agree with those predicted at the orbital turning points. The overall agreement is slightly better for the small cube than for the large one.
This is due to the smoothing, which inflates some of the dispersions as a result of velocity gradients along the stream. In the lower panel of Figure~8 we show a similar comparison for the large satellite. In general the prediction also does very well here. Note that for the small boxes, at late times we only have simulation points at the spikes (i.e. when the density is strongly enhanced). This is because the satellite initially has a larger velocity dispersion and therefore spreads out more rapidly along its orbit. \begin{figure*} \label{f:dens} \flushleft{\psfig{figure=figure8.eps,angle=90,height=12.0cm,width=16.4cm}} \caption[]{Time evolution of the density for a satellite moving in a spherical potential (Eq.~(\ref{eq:halo})), with orbital parameters similar to those of Experiment~6 in Table~1. The full line represents our prediction, normalized to the initial density. In the upper panel we plot the density behaviour for the $\sim 5.9 \times 10^7 \, {\rm M}_{\odot}$ satellite (see main text), whereas the lower panel corresponds to the $\sim 4.7 \times 10^9 \, {\rm M}_{\odot}$ satellite. The stars indicate the number of particles that fall in a volume of 1 kpc on a side around the central particle of the system, and the triangles represent the number of particles in a cubic volume of twice the side, both normalized to the initial value. The spike-like behaviour occurs at the turning points of the orbit (see main text -- Eq.~(\ref{eq:rho_sph})).} \end{figure*} We tested the effect of including self-gravity in the small satellite simulation, and found no significant qualitative or quantitative difference in the behaviour. \subsection{Axisymmetric case} As an illustrative example of the main characteristics of the axisymmetric problem, let us consider the class of Eddington potentials $\Phi(r,\theta) = \Phi_1(r) + \eta(\beta \cos{\theta})/r^2$ (Lynden-Bell 1962, 1994), which are separable in spherical coordinates. The third integral for this type of potential is $I_3 = \frac{1}{2} L^2 + \eta(\beta \cos{\theta})$. The actions are computed from: \begin{eqnarray} J_\varphi \!\!\!& = &\!\!\! L_z, \\ J_\theta \!\!\!& = &\!\!\! \frac{1}{2 \pi}\oint d\theta \, \sqrt{2 (I_3 - \eta(\theta)) - \frac{J_\varphi^2} {\sin^2{\theta}}}, \\ J_r \!\!\!& = &\!\!\! \frac{1}{2 \pi} \oint dr \, \sqrt{2 (E - \Phi_1(r)) - \frac{2 I_3}{r^2}}. \end{eqnarray} Since the frequencies of motion are all different and non-zero, the system has the freedom to spread along three directions in phase-space. The conservation of the local phase-space density will force the dispersions along the remaining three directions to decrease in time. Following an analysis similar to that for the spherical case, we derive for the density at the central point ${\bf \bar{x}}(t)$ of the system at time $t$ \begin{equation} \label{eq:rho_ax} \rho({\bf{\bar x}},t) = \frac{(2 \pi)^{3/2} f_0} {\sqrt{\det{\bf \sigma}_{\bf \phi}^{0}}} \frac{1}{|\det{\bf \Omega'}|} \frac{\partial I_3}{\partial J_\theta} \frac{\Omega_r} {r^2 \sin\theta |p_r p_\theta|}\frac{1}{t^3}, \end{equation} where ${\bf \sigma}_{\bf \phi}^{0}$ is the angle submatrix of the initial variance matrix in action-angle variables. Therefore the density at the central point of the system decreases as $t^{-3}$, because of the extra degree of freedom that the breaking of spherical symmetry introduces (see Appendix B), and so after a Hubble time the density decreases by approximately a factor of a thousand.
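The action integrals above can be evaluated in the same way as in the spherical case. A minimal sketch (Python with SciPy; it assumes the quadratic form $\eta(\beta\cos\theta) = \beta^2\cos^2\theta$ used in the example of the next paragraph, and illustrative orbit constants) computes $J_\theta$ by exploiting the symmetry of the $\theta$-loop about the Galactic plane:
\begin{verbatim}
# Sketch (not the paper's code) of the theta-action for an Eddington
# potential with eta(beta cos(theta)) = beta^2 cos^2(theta).  The
# closed theta-loop is symmetric about the plane, so the loop integral
# is four times the integral from the turning point theta_1 to pi/2.
# It assumes the orbit exists, i.e. 2 I3 > J_phi^2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

beta = 950.0                              # kpc km/s (assumed value)
def ptheta2(th, I3, Jphi):                # theta-momentum squared
    return 2.0 * (I3 - beta**2 * np.cos(th)**2) - (Jphi / np.sin(th))**2

def J_theta(I3, Jphi):
    th1 = brentq(ptheta2, 1e-6, 0.5 * np.pi, args=(I3, Jphi))
    f = lambda th: np.sqrt(max(ptheta2(th, I3, Jphi), 0.0))
    return 2.0 * quad(f, th1, 0.5 * np.pi)[0] / np.pi  # (1/2pi) x loop
\end{verbatim}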
In Figure~9 we plot the time evolution of the components of the velocity ellipsoid for a system on an orbit with the same initial conditions as for the spherical case, in the potential \begin{equation} \label{eq:axis_pot} \Phi(r, \theta) = v_{\rm h}^2 \log{(r^2 + d^2)} + \frac{\beta^2 \cos^2\theta}{r^2}, \end{equation} where $v_{\rm h} = 123$ ${\rm km \, s^{-1}}$, $d = 12$ kpc and $\beta = 950$ kpc ${\rm km \, s^{-1}}$. This choice of parameters produces a reasonably flat rotation curve and a potential which is physical (giving a positive density field) outside \mbox{7 kpc}. All velocity dispersions now decrease as $1/t$. \begin{figure*} \label{f:velax} \flushleft{\psfig{figure=figure9.eps,height=12.0cm,width=15.8cm}} \caption[]{Time evolution of the velocity dispersions along the principal axes, computed as outlined in Sections~4.2 and 4.3, for the simple axisymmetric potential of Eq.~(\ref{eq:axis_pot}). Now all the dispersions decrease with time as $1/t$ (dotted curve). The periodic time behaviour is due to the combination of the radial and angular oscillations, as described in the text. The last panel shows the product of the three dispersions, which is proportional to the density. The radial and $\theta$-oscillations are also plotted to indicate the position of the turning points.} \end{figure*} The analytic formalism developed here can be applied to any separable potential in a straightforward manner, using the definitions and results of Sec.~\ref{sec:general}. This includes, of course, the set of St\"ackel potentials, which may be useful in representing the Milky Way (Batsleer \& Dejonghe 1994), or any axisymmetric elliptical galaxy (de Zeeuw 1985, Dejonghe \& de Zeeuw 1987). The only difference is that the matrix ${\bf T}$ of the transformation from the usual coordinates (${\bf x}$,${\bf v}$) to the action-angle variables should first be multiplied by the matrix of the mapping from (${\bf x}$,${\bf v}$) to the ellipsoidal coordinates $(\lambda,\mu,\varphi, p_\lambda,p_\mu,p_\varphi)$, since this is the system in which the problem is separable. We discuss some of the properties of St\"ackel potentials and derive, for a particular model of our Galaxy, the explicit form of the density in Appendix C. Even if the potential is not separable, our general results on the evolution of the system remain valid provided most orbits remain regular. In the general case, the frequencies and their derivatives with respect to the actions will have to be computed through a spectral dynamics analysis similar to that used in Section 3.1 (Carpintero \& Aguilar 1998). \subsection{What happens if there is phase-mixing} The procedure outlined above assumes that only one stream of debris from the satellite is present in any volume which is analysed. When phase-mixing becomes important, we may find more than one kinematically cold stream near a given point. The velocity dispersions of the debris in such a region would then appear much larger than predicted naively using our formalism. We can make a rough estimate of the velocity dispersions in this case as well, using the following simple argument. If the system is (close to) completely phase-mixed, then the coarse-grained distribution function that describes it will be uniform in the angles and therefore will only depend on the adiabatic invariants, i.e. $f({\bf x}, {\bf v}) = f({\bf J}({\bf x}, {\bf v}))$. Since these are conserved, the moments of the coarse-grained distribution function will be given by the moments of the initial distribution function.
Therefore $f(\bf{J})$ is completely determined by the initial properties of the system in the space of adiabatic invariants. If the initial distribution function is Gaussian in action-angles, then $f(\bf{J})$ will be Gaussian with mean and dispersion given by their values at $t=t^0$. As an example, let us analyse the velocity dispersion in the $\varphi$-direction in a particular region in which there is a multistream structure: \[ \sigma^2(v_\varphi) = \frac{\int d^3x~ d^3 v ~(v_\varphi - \bar{v}_\varphi)^2 ~f(\bf{J}(\bf{x}, \bf{v}))}{\int d^3x ~d^3 v~ f(\bf{J}(\bf{x}, \bf{v}))} = \frac{\displaystyle\int d^3x~ d^3J~ \displaystyle\left(\frac{{J}_\varphi}{R} - \frac{\bar{J}_\varphi}{\bar{R}}\right)^2 f(\bf{J})}{\int d^3x~ d^3J~f(\bf{J})}, \] where we used that $v_\varphi = {J}_\varphi/R$. By expanding to first order we find \begin{equation} \label{eq:sLz} \sigma^2(v_\varphi) = \sigma^2(J_\varphi)/\bar{R}^2 + \Delta_x^2 \bar{J}_\varphi^2/\bar{R}^4. \end{equation}\noindent Here we replaced $\sigma(R)$ by $\Delta_x$ (the size of the region in question), which is justified by our previous result that the spatial dimensions of streams grow with time, and neglected the correlation between $J_\varphi$ and $R$. The first term in Eq.~(\ref{eq:sLz}) estimates the dispersion between streams, while the second estimates the contribution from the velocity gradient along an individual stream. For the experiments of Table~1, the values of the dispersions range from 50 to 150 ${\rm km \, s^{-1}}$. These dispersions increase in proportion to those of the initial satellite. \subsubsection{The filling factor} We can use the results of our previous section to quantify the probability of finding more than one stream at a given position in space. This probability is measured by the filling factor. We define this by comparing the mass-weighted spatial density of individual streams with a mean density estimated by dividing the mass of the satellite by the total volume occupied by its orbit. The first density can be calculated formally through an integral over the initial satellite: \[ \langle \, \rho(t)\, \rangle = \frac{1}{M} \int dm({\bf x,v}) ~\rho({\bf x,v})(t) = \frac{1}{M} \int d^3x~d^3v f({\bf x,v}, t^0)~\rho({\bf x,v})(t), \] where $\rho({\bf x,v})(t)$ is the density at time $t$ of the individual stream in the neighbourhood of the particle which was initially at $({\bf x,v})$. The filling factor is then \[F(t) = \frac{M}{V_{o}}\frac{1}{\langle \, \rho(t)\, \rangle},\] where $V_{o}$ is the volume filled by the satellite's orbit. An estimate of the filling factor can be obtained by approximating $\langle \, \rho(t)\, \rangle$ by $\rho(\bar{\bf x},t)/(2 \sqrt{2})$ taken from Eqs.~(\ref{eq:rho_sph}), (\ref{eq:rho_ax}) or (\ref{eq:rho_staeckel}) for spherical, axisymmetric Eddington or St\"ackel potentials, respectively. The factor $1/(2 \sqrt{2})$ is the ratio of the central to mass-weighted mean density for a Gaussian satellite. We approximate $V_o = 4\pi\, r_{\rm apo}^3 \cos \theta_{\rm f}/3 $, where $r_{\rm apo}$ and $\theta_{\rm f}$ correspond to the orbit of the satellite centre. Since we are interested in deriving an estimate of the filling factor in the solar neighbourhood, we focus on the St\"ackel potential described in Appendix C, which produces a flat rotation curve resembling that of the Milky Way.
Thus \begin{equation} F(t) =\frac{6 \sqrt{2} M \sqrt{\det{\bf \sigma_\phi^0}}} {2 (2 \pi)^{5/2} \, f_0} \frac{\langle \, R\, \rangle \langle \, |\nu - \lambda|v_\lambda v_\nu\, \rangle }{r_{\rm apo}^3 \cos{\theta_{\rm f}}} \frac{|\det{\bf \Omega'}|} {\displaystyle\left|\Omega_\nu \frac{\partial I_3}{\partial J_\lambda} - \Omega_\lambda \frac{\partial I_3}{\partial J_\nu}\right|} t^3, \end{equation} where $\lambda$, $\nu$ are spheroidal coordinates (for which the potential is separable), $J_\lambda$ and $J_\nu$ are the corresponding actions, $\Omega_\lambda$ and $\Omega_\nu$ the frequencies, and $I_3$ is the third integral of motion. If we approximate $\langle \, v_\lambda v_\nu\, \rangle \sim v_{\rm circ}^2/4$ and replace $f_0 = M/(2 \pi \sigma(x) \sigma(v))^3$, then \begin{equation} \label{eq:fil_fac_gral} F(t) \sim C_{\rm orbit} C_{\rm IC} \left(\frac{\sigma(x)}{r_{\rm apo}}\right)^2 \, \frac{\sigma(v)}{v_{\rm circ}} \,\left(\Omega_\lambda \,t\right)^3, \end{equation} where \begin{equation} C_{\rm orbit} = \frac{3 \sqrt{\pi} ~\langle \, |\nu - \lambda|\, \rangle ~ \langle \, R\, \rangle ~v_{\rm circ}^5 ~|\det{\bf \Omega'}|} { 2 \cos{\theta_{\rm f}}\displaystyle\left|\Omega_\nu \frac{\partial I_3} {\partial J_\lambda} - \Omega_\lambda \frac{\partial I_3}{\partial J_\nu}\right| \Omega_\lambda^3}, \end{equation} depends on the orbital parameters of the satellite, and \begin{equation} C_{\rm IC} = \frac{h_\lambda h_\nu}{\displaystyle\left|\Omega_\nu \frac{\partial I_3}{\partial J_\lambda} - \Omega_\lambda \frac{\partial I_3}{\partial J_\nu}\right|} \frac{\lambda-\nu}{P^3 Q^3} \frac{R}{r_{\rm apo} v_{\rm circ}^2}\Bigg\vert_{{\bf \bar{x}^0}, {\bf \bar{v}^0}}, \end{equation} with \[ h_\tau = 2 p_\tau \frac{\partial p_\tau}{\partial \tau}, \qquad \tau = \lambda, \nu,\] is a function of its initial position on the orbit. (See Appendix~C for further details and definitions.) This last expression holds if the satellite is initially close to a turning point of its orbit. For example, a satellite of 10 ${\rm km \, s^{-1}}$ velocity dispersion and $0.4$ kpc size on an orbit with an apocentric distance of 13 kpc, a maximum height above the plane of 5 kpc and an orbital period of $\sim$ 0.2 Gyr, gives an average of $0.4$ streams of stars at each point in the inner halo after 10 Gyr. A satellite of 25 ${\rm km \, s^{-1}}$ dispersion and 1 kpc size on the same orbit would produce $5.9$ streams on average after the same time. Let us compare this last prediction with a simulation for the same satellite and the same initial conditions in the Galactic potential described in Section~2. In Figure~10 we plot the behaviour of the filling factor from the simulation, computed as \[ F(t) = \frac{N}{V_o} \frac{1}{n(t)}, \] where $N$ is the total number of particles and $n(t) = N^{-1} \sum_i \rho_i$, with $\rho_i$ the density of the stream where particle $i$ is located, which we calculate by dividing space up into 2 kpc boxes and counting the number of particles of each stream in each box. Note that the filling factor increases as $t^3$ at late times, as we expect for any axisymmetric potential. Our prediction is in good agreement with the simulations, showing also that it is robust against small changes in the form of the Galactic potential. \begin{figure} \center{\psfig{figure=figure10.eps,height=9.0cm}} \caption[]{Time evolution of the filling factor for a satellite with an initial velocity dispersion of 25 ${\rm km \, s^{-1}}$ and size of 1 kpc, moving in the Galactic potential described in Section~2.
Its orbital parameters resemble those of halo stars in the solar neighbourhood. The dashed curve indicates a $\gamma_0 + \gamma_1 t^3$ fit for late times.} \end{figure} \subsubsection{Properties of an accreted halo in the solar neighbourhood} To compare with the stellar halo, it is more useful to derive the dependence of the filling factor on the initial luminosity of a satellite. We shall assume that the progenitor satellites are similar to present-day dwarf ellipticals, and satisfy both a Faber-Jackson relation: \begin{equation} \label{eq:faber_jackson} \log \frac{L}{\rm L_\odot} - 3.53 \log \frac{\sigma(v)}{\rm km \, s^{-1}} \sim 2.35, \end{equation} for $H_0 \sim 50 \,{\rm km \,s^{-1} Mpc^{-1}}$, and a scaling relation between the effective radius ($R_e \sim \sigma(x)$) and the velocity dispersion $\sigma(v)$: \begin{equation} \label{eq:effrad_sigmavel} \log \frac{\sigma(v)}{\rm km \, s^{-1}} - 1.15 \log \frac{R_e}{\rm kpc} \sim 1.64, \end{equation} both as given by Guzm\'an, Lucey \& Bower (1993) for the Coma cluster. Expressed in terms of the luminosity of the progenitor, the filling factor then becomes \begin{equation} \label{eq:filfac_L} F(t) \sim C_{\rm orbit}\, C_{\rm IC} \left(\frac{L}{L_n}\right)^{0.776}\, (\Omega_\lambda \, t)^3, \end{equation} where $L_n$ is a normalization constant that depends on the orbit and on the properties of the parent galaxy as: \begin{equation} L_n = 3.75 \times 10^{11} {\rm L_\odot} \, \left(\frac{r_{\rm apo}}{10~ {\rm kpc}}\right)^{2.58} \left(\frac{v_{\rm circ}}{200~ {\rm km\, s^{-1}}}\right)^{1.29}. \end{equation} If the whole stellar halo had been built from disrupted satellites, we can derive the number of streams expected in the solar neighbourhood by adding their filling factors, using the appropriate orbital parameters in Eq.~(\ref{eq:fil_fac_gral}) or Eq.~(\ref{eq:filfac_L}): $F_\odot(t) = N_{\rm sat} F(t)$. For a sample of giant stars located within 1 kpc from the Sun, with photometric distances and radial velocities measured from the ground (Carney \& Latham 1986; Beers \& Sommer-Larsen 1995; Chiba \& Yoshii 1998), and proper motions measured by HIPPARCOS, we estimate $C_{\rm orbit} \times C_{\rm IC} \sim 1.29 \times 10^{-3}$. The median pericentric (apocentric) distance is $3.7$ ($11.6$) kpc, and the median $\Omega_\lambda$ is 26.6 Gyr$^{-1}$ (equivalent to a period of $\sim 0.24$ Gyr). Thus, using Eq.~(\ref{eq:fil_fac_gral}), \[ F_\odot(t) \sim 0.9 N_{\rm sat} \left(\frac{\sigma(x)}{\rm kpc}\right)^2 \, \frac{\sigma(v)}{\rm km~s^{-1}} \, \left(\frac{t}{10 \,{\rm Gyr}}\right)^3. \] If we now assume that the progenitor systems are similar to present-day dwarf ellipticals, then using Eq.~(\ref{eq:filfac_L}) we find for the whole $10^9~{\rm L}_\odot$ stellar halo \begin{equation} F_\odot(t) \sim \left(\frac{t}{10 \,{\rm Gyr}}\right)^3 \times \left\{\begin{array}{ll} 5.1 \times 10^{2}, & \mbox{for 100 satellites of } 10^7 ~{\rm L}_\odot, \\ 3.0 \times 10^{2}, & \mbox{for 10 satellites of } 10^8 ~{\rm L}_\odot. \end{array}\right. \end{equation} For $t \sim 10$ Gyr, the {\em number of streams} expected in the solar neighbourhood is therefore in the range \begin{equation} F_{\odot} \sim 300 - 500. \end{equation} Fuchs \& Jahrei\ss\ (1998) have obtained a lower limit for the local mass density of spheroid dwarfs of $1 \times 10^{5} \, {\rm M}_{\odot} {\rm kpc}^{-3}$.
We may use this estimate to derive the mass content in subdwarfs of an individual stream in a volume of 1 kpc$^3$ centered on the Sun: \begin{equation} F_{M}(t) \sim \frac{M_{\rm local~halo} ~({\rm in~ 1~ kpc}^3)}{F(t)}. \end{equation} Thus, with our previous estimate of the filling factor, \begin{equation} F_{M}(t) \sim \left(\frac{10 ~{\rm Gyr}}{t}\right)^3 \times \left\{\begin{array}{ll} 1.9 \times 10^2 \,{\rm M}_\odot, & {\rm for}\, 10^7 ~{\rm L}_\odot \,{\rm sat,} \\ 3.3 \times 10^2 \,{\rm M}_\odot, & {\rm for}\, 10^8 ~{\rm L}_\odot \,{\rm sat}. \end{array}\right. \end{equation} Therefore, after 10 Gyr, each stream contains $F_M \sim (200 - 350) \,{\rm M}_\odot$ in subdwarf stars, depending on the orbital parameters of the progenitors and their initial masses. Since the halo stars in the solar neighbourhood have one-dimensional dispersions $\sigma_{\rm obs}(v) \sim 100 - 150$ ${\rm km \, s^{-1}}$, in order to distinguish kinematically whether their distribution is really the superposition of $\sim 300 - 500$ individual streams of velocity dispersion $\sigma_{\rm st}(v)$ we might require that \begin{equation} \sigma_{\rm st}^{3}(v) < \frac{1}{27}\frac{\sigma_{\rm obs}^3(v)}{F_{\odot}}, \end{equation} where the factor $1/27$ would ensure a $\sim 3 \sigma$ distinction between streams. Using our previous estimate of $F_{\odot}$, this condition becomes \mbox{$\sigma_{\rm st}(v) < \sigma_{\rm obs}(v)/(20 - 24)$}, and thus \mbox{$\sigma_{\rm st}(v) < 5$ ${\rm km \, s^{-1}}$}. Currently, the observational errors in the measured velocities of halo stars are of order 20 ${\rm km \, s^{-1}}$, and thus there is little hope of distinguishing at the present day all the individual streams which may make up the stellar halo of our Galaxy. Since intrinsic velocity dispersions for streams originating from $10^7 - 10^8 {\rm L}_\odot$ objects are of the order of $3 - 5$ ${\rm km \, s^{-1}}$ after 10 Gyr, it should be possible to distinguish such streams with the astrometric missions SIM and GAIA, if they reach their planned accuracy of a few ${\rm km \, s^{-1}}$. Even with an accuracy of 15 ${\rm km \, s^{-1}}$ per velocity component, streams are predicted to be marginally separated. The clumpy nature of the distribution should thus be easily distinguishable in samples of a few thousand stars. One way of identifying streams which are debris from the same original object is through clustering in action or integrals of motion space (Helmi, Zhao \& de Zeeuw 1998). \section{An observational application} Majewski et al. (1994) discovered a clump of nine halo stars in a proper motion survey (Majewski 1992) near the North Galactic Pole (NGP), which appeared separated from the main distribution of stars in the field. They measured proper motions, photometric parallaxes, $F$ magnitudes and $(J-F)$ colours for all nine stars, and radial velocities for six of them. For these six stars we find mean velocities $\bar{v}_\varphi = -152 \pm 23 \, {\rm km\,s^{-1}}$, $\bar{v}_R = -260 \pm 18 \, {\rm km\,s^{-1}}$ and $\bar{v}_z = -76 \pm 18\, {\rm km\,s^{-1}}$, and velocity dispersions $\sigma(v_{\varphi}) = 99 \pm 33\, {\rm km\,s^{-1}}$, $\sigma(v_R) = 100 \pm 24 \,{\rm km\,s^{-1}}$ and $\sigma(v_z) = 35 \pm 24 \,{\rm km\,s^{-1}}$. If the dispersions are computed along the principal axes, we find $\sigma(v_1) = 29 \pm 20 \,{\rm km\,s^{-1}}$, $\sigma(v_2) = 68 \pm 94 \,{\rm km\,s^{-1}}$, $\sigma(v_3) = 125 \pm 5 \,{\rm km\,s^{-1}}$.
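For reference, the principal-axis dispersions quoted above follow from a standard eigen-decomposition of the sample velocity covariance matrix. A minimal sketch (Python with NumPy; the $(N, 3)$ array \verb|v| is a hypothetical placeholder for the measured velocities of the group):
\begin{verbatim}
# Sketch of the principal-axis decomposition used above (not the
# paper's code).  v is a hypothetical (N, 3) array of the measured
# (v_R, v_phi, v_z) in km/s.
import numpy as np

def principal_dispersions(v):
    cov = np.cov(v, rowvar=False)         # 3x3 velocity covariance
    eigval, eigvec = np.linalg.eigh(cov)  # eigh, since cov is symmetric
    return np.sqrt(eigval), eigvec        # dispersions and axes

# Computing L_z = R * v_phi per star then separates the two subgroups
# discussed below.
\end{verbatim}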
Since the mean velocities are significantly different from zero, the group of stars cannot be close to any turning point of their orbit. The only way to understand the large observed dispersions, in particular of $\sigma(v_3)$, if the stars come from a single disrupted satellite, is for the group to consist of more than one stream of stars. We believe that this may actually be the case. By computing the angular momenta of the stars, we find that they cluster into two clearly distinguishable subgroups: $\bar{L}_z^{(1)} = -784$ and $\sigma^{(1)} (L_z) = 299$, and $\bar{L}_z^{(2)} = -2180$ and $\sigma^{(2)} (L_z) = 313$, in kpc ${\rm km \, s^{-1}}$. If we accept the existence of two streams as a premise, we may compute the velocity dispersions in each of them. We find for the stream with 4 stars \[ \sigma^{(1)}(v_1) = 25 \pm 25 , \, \, \sigma^{(1)}(v_2) = 43 \pm 62 , \, \, \sigma^{(1)}(v_3)= 100 \pm 45, \] while for the stream with 2 stars \[ \sigma^{(2)}(v_1) = 3 \pm 4 , \, \, \sigma^{(2)}(v_2) = 25 \pm 21 , \, \, \sigma^{(2)}(v_3) = 89 \pm 64, \] all in ${\rm km \, s^{-1}}$. These results are consistent at the $2\sigma$ level with very small 3-D velocity dispersions, as expected if indeed these are streams from a disrupted satellite. With this interpretation of the kinematics of this group, we can estimate the mass of the progenitor and its initial size and velocity dispersion. Galaxies today obey scaling laws of the Faber-Jackson or Tully-Fisher type. If we assume that the original satellite was similar to present-day dwarf ellipticals, then we may use Eq.~(\ref{eq:effrad_sigmavel}) to derive a relation between the initial dispersion in the $z$-component of the angular momentum and the initial velocity dispersion of the progenitor \begin{equation} \sigma_i^2(L_z) = \sigma_i^2(v) R_{\rm apo}^2 + 0.0375^2 \frac{L_z^2}{R_{\rm apo}^2} \sigma_i^{1.74}(v), \end{equation} where $R_{\rm apo}$ is the apocentric distance of its orbit. Under the assumption that $L_z$ is conserved, we can derive $\sigma_i(v)$ by inserting into the previous equation the observed values of $L_z$ and $\sigma(L_z)$, and an estimate of $R_{\rm apo}$. We obtain the latter by orbit integration in a Galaxy model which includes a disk, bulge and halo, and find $R_{\rm apo} \sim 12 \,{\rm kpc}$. Our estimate for the initial velocity dispersion of the progenitor is then \begin{equation} \sigma_i(v) \sim 48 \,{\rm km \,s}^{-1}, \end{equation} which in Eqs.~(\ref{eq:faber_jackson}) and (\ref{eq:effrad_sigmavel}) yields for its initial luminosity and size \begin{equation} L \sim 2 \times 10^8 ~{\rm L}_{\odot}, \qquad R \sim 1 \,{\rm kpc}. \end{equation} We estimate that the relative error-bars in these quantities are of order 50\%, if measurement errors and a 50\% uncertainty in the apocentric distance are included. In summary, if indeed these stars come from a single disrupted object, we must accept that the first six stars that were detected (Majewski et al. 1994) are part of at least two independent streams. This seems reasonable, since two streams can indeed be distinguished, and the velocity dispersions in each stream are very small. Moreover, a disrupted object with the properties just derived (luminosity, initial size and velocity dispersion) would fill its available volume rapidly, producing a large number of streams.
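Solving the relation between $\sigma_i(L_z)$ and $\sigma_i(v)$ for the progenitor dispersion is a one-dimensional root-finding problem. A minimal sketch (Python with SciPy; the observed quantities below are placeholder values inserted purely to show the mechanics, not the exact numbers behind our estimate):
\begin{verbatim}
# Sketch (not the paper's code) of inverting the sigma_i(L_z) relation
# above for sigma_i(v) by root finding.  The observed quantities are
# illustrative placeholders.
from scipy.optimize import brentq

R_apo = 12.0                      # kpc, from the orbit integration
L_z, sigma_Lz = -1500.0, 700.0    # kpc km/s (placeholder values)

def residual(sig_v):
    return (sig_v**2 * R_apo**2
            + 0.0375**2 * (L_z / R_apo)**2 * sig_v**1.74
            - sigma_Lz**2)

sigma_v = brentq(residual, 1.0, 200.0)  # bracket chosen by inspection
\end{verbatim}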
In view of our explanation, a number of stars from the same disrupted object but with positive $z$-velocities should also be present in the same region, since phase-mixing allows streams to be observed with opposite motion in the $R$ and/or $z$ directions. Candidates for such additional debris should have similar $v_\varphi$, since $L_z$ is conserved during phase-mixing. By simple inspection of Figure~1(a) in Majewski et al. (1994), other stars can indeed be found, with similar $v_\varphi$ but opposite $v_R$ and $v_z$.

\section{Discussion and Conclusions}

We have studied the disruption of satellite galaxies in a disk + halo potential and characterised the signatures left by such events in a galaxy like our own. We developed an analytic description based on Liouville's theorem and on the very simple evolution of the system in action-angle variables. This is applicable to any accretion event if self-gravity is not very important and as long as the overall potential is static or only adiabatically changing. Satellites with masses up to several times $10^9 \, {\rm M}_{\odot}$ are likely to satisfy this adiabatic condition if the mass of the Galaxy is larger than several times $10^{10} \, {\rm M}_{\odot}$ at the time of infall and if there are no other strong perturbations. Even though we have not studied how the system gets to its starting point, it seems quite plausible that in this regime dynamical friction will bring the satellites to the inner regions of the Galaxy in a few Gyr, where they will be disrupted very rapidly. Their orbital properties may be similar to those found in CDM simulations of the infall of structure onto clusters, where objects are mostly on fairly radial orbits (Tormen, Diaferio \& Syer 1998); this is consistent with the dynamics of solar neighbourhood halo stars. Their masses range from the low values estimated observationally for dwarf spheroidals to the much larger values expected for the building blocks in hierarchical theories of galaxy formation.

We summarize our conclusions as follows. After 10 Gyr we find no strong correlations in the spatial distribution of a satellite's stars, since for orbits relevant to the bulk of the stellar halo this is sufficient time for the stars to fill most of their available configuration volume. This is consistent with the fact that no stream-like density structures have so far been observed in the solar neighbourhood. In contrast, strong correlations are present in velocity space. The conservation of phase-space density results in velocity dispersions at each point along a stream that decrease as $1/t$. On top of the secular behaviour, periodic oscillations are also expected: at the turning points of the orbit the velocity dispersions, and thus the mean density of the stream, can be considerably enhanced.

Some applications of this density enhancement deserve further study. For example, the present properties of the Sagittarius dwarf galaxy seem difficult to explain, since numerical simulations show that it could have been disrupted very rapidly given its current orbit (Johnston, Spergel \& Hernquist 1995; Vel\'azquez \& White 1995). This puzzle has led to some unconventional suggestions to explain its survival, like a massive and dense dark matter halo (Ibata \& Lewis 1998) or a recent collision with the Magellanic Clouds (Zhao 1998). However, since the densest part of Sagittarius seems to be near its pericentre, it could be located sufficiently close to a `caustic' to be interpreted as a transient enhancement.
Sagittarius could simply be a galaxy disrupted several Gyr ago (cf. Kroupa 1997).

If the whole stellar halo of our Galaxy was built by merging of $N_{\rm sat}$ similar smaller systems of characteristic size $\sigma(x)$ and velocity dispersion $\sigma(v)$, then after 10 billion years we expect the stellar distribution in the solar neighbourhood to be made up of $F_\odot$ streams, where \[ F_\odot \sim 0.9 N_{\rm sat} \left(\frac{\sigma(x)}{\rm kpc}\right)^2 \, \frac{\sigma(v)}{\rm km~s^{-1}}.\] For satellites which obey the same scaling relations as the dwarf elliptical galaxies, this means 300 to 500 streams. Individually, these streams should have extremely small velocity dispersions, and inside a 1 kpc$^3$ volume centered on the Sun each should contain a few hundred stars. Since the local halo velocity ellipsoid has dispersions of the order of 100 ${\rm km \, s^{-1}}$, 3-D velocities with errors smaller than 5 ${\rm km \, s^{-1}}$ are needed to separate unambiguously the individual streams. This is better by a factor of four than most current measurements, which would, however, be good enough to give a clear detection of the expected clumpiness in samples of a few thousand stars. The combination of a strongly mixed population with relatively large velocity errors yields an apparently smooth and Gaussian distribution in smaller samples. Since the intrinsic dispersion for a stream from an LMC-type progenitor is of the order of $3 - 5$ ${\rm km \, s^{-1}}$ after a Hubble time, one should aim for velocity uncertainties below 3 ${\rm km \, s^{-1}}$. With the next generation of astrometric satellites (in particular GAIA, e.g. Gilmore et al. 1998), we should be able to distinguish almost all streams in the solar neighbourhood originating from disrupted satellites.

Our analytic approach is based on Liouville's Theorem and the very simple evolution of the system in action-angle variables. Although the latter is likely to fail in the full merging regime, the conservation of local phase-space density will still hold. It will be interesting to see how this conservation law influences the final phase-space distribution in the merger of more massive disk-like systems. These are plausible progenitors for the bulge of our Galaxy in hierarchical models.

\section*{Acknowledgments}

A.H. wishes to thank MPA for its hospitality, HongSheng Zhao for many very useful discussions, Tim de Zeeuw for comments on earlier versions of this manuscript, and Daniel Carpintero for kindly providing the software for the spectral analysis used in Section~3.1. EARA has provided financial support for A.H.'s visits to MPA.
\section{Introduction}

The analysis of the oscillation spectrum provides an unrivaled method for probing the stellar internal structure, because the frequencies of these oscillations depend on the sound speed inside the star, which in turn depends on the density, temperature, gas motion, and other properties of the stellar interior.

High-precision spectrographs have acquired data yielding a rapidly growing list of solar-like oscillation detections in main-sequence and giant stars (see e.g., Bedding \& Kjeldsen \cite{bk07}, Carrier et al. \cite{cel08}). In a few years, we have moved from ambiguous detections to firm measurements. Among these, only a few are related to red giants, e.g., \object{$\xi$~Hya}, Frandsen et al. (\cite{frandsen}), \object{$\epsilon$~Oph}, De Ridder et al. (\cite{joris1}), and \object{$\eta$~Ser}, Barban et al. (\cite{barban}). The reason is that longer and almost uninterrupted time series are needed to characterize the oscillations in red giants, whose oscillation periods are longer than those of main-sequence stars, and long observing runs are difficult to obtain using high-accuracy spectrographs. The CoRoT (COnvection ROtation and planetary Transits) satellite (Baglin \cite{bag}) is perfect for this purpose because it can provide these data for a large number of stars simultaneously. The CoRoT satellite continuously collects white-light high-precision photometric observations for 10 bright stars in the so-called {\it seismofield}, as well as 3-color photometry for thousands of relatively faint stars in the so-called {\it exofield}. The primary motivation for acquiring this second set of data is to detect planetary transits, but the data are also well suited to asteroseismic investigations. De Ridder et al. (\cite{joris2}) unambiguously detected long-lifetime non-radial oscillations in red giant stars in the exofield data of CoRoT, which is an important breakthrough for asteroseismology. Indeed, observations from either the ground or other satellites have been unable to confirm the existence of non-radial modes and determine a clear value of the mode lifetime. Hekker et al. (\cite{hekker}) presented a more detailed classification of the red giants observed by CoRoT.

\object{HR 7349} (HD 181907) is a bright equatorial G8 giant star ($V$ = 5.82) that is an excellent target for asteroseismology. This star was selected as a secondary target during the first long run of the CoRoT mission. In this paper, we thus report on photometric CoRoT observations of \object{HR~7349} resulting in the detection and identification of p-mode oscillations. The non-asteroseismic observations are presented in Sect.~2, the CoRoT data and frequency analysis in Sects.~3 and 4, and the conclusions are given in Sect.~5.

\section{Fundamental parameters}

\subsection{Effective temperature and chemical composition}

We used the line analysis code MOOG, Kurucz models, and a high-resolution FEROS spectrum obtained in June 2007 to carry out an LTE abundance study of HR~7349. The effective temperature and surface gravity were estimated from the excitation and ionization equilibrium of a set of iron lines taken from the line list of Hekker \& Mel\'endez (\cite{hekker1}). We obtain $T_{\rm eff}$=4790$\pm$80 K and [Fe/H]=--0.08$\pm$0.10 dex, while the abundance pattern of the other elements with respect to Fe is solar within the errors. The full results of the abundance analysis will be reported elsewhere (Morel et al., in preparation).
We also determined a photometric temperature given by the relation of Alonso et al. (\cite{alonso}) using the dereddened color index ($B$-$V$) (see Sect.~\ref{lum}) and found 4704$\pm$110\,K, which agrees with the spectroscopic value. We finally adopt a weighted-mean temperature of 4760$\pm$65\,K.

\subsection{Luminosity}
\label{lum}

Even for such a bright star, the interstellar extinction in the direction of the Galactic center is not negligible. From the \textsc{Hipparcos} parallax $\Pi=9.64 \pm 0.34$\,mas (van Leeuwen \cite{vanleeuwen}) and the value of ($B$-$V$)\,=\,1.093 in the \textsc{Hipparcos} catalog, an absorption of A$_V$\,=\,0.185\,mag is derived for the region of the star (Arenou et al. \cite{arenou}), which corresponds to E$_{B-V}$\,=\,0.052. Combining the magnitude $V = 5.809 \pm 0.004$ (Geneva photometry, Burki et al. \cite{bu08}), the \textsc{Hipparcos} parallax, the solar absolute bolometric magnitude $M_{\mathrm{bol},\,\odot}=4.746$ (Lejeune et al. \cite{le98}), and the mean bolometric correction $BC = -0.40 \pm 0.04$\,mag ($BC = -0.42 \pm 0.06$\,mag according to the calibration of Flower, \cite{flower}, and $BC = -0.38 \pm 0.05$\,mag by Alonso et al., \cite{alonso}), we find a luminosity for HR~7349 of $L=69 \pm 6$\,$L_{\odot}$.

\subsection{Rotational velocity}

We determined the rotational velocity of the star by means of a spectrum taken with the spectrograph \textsc{Coralie} installed on the 1.2-m Swiss telescope, ESO La Silla, Chile. According to the calibration of Santos et al. (\cite{santos}), we determined v\,$\sin i$\,=\,1.0\,$\pm$\,1.0\,km\,s$^{-1}$. For this small value of the projected rotational velocity, we do not expect to see any rotationally split modes in the power spectrum.

\subsection{Large spacing estimation}
\label{lse}

An estimate of the mass of HR~7349 may be obtained by matching evolutionary tracks to the $L$-$T_{\rm eff}$ error box in the HR diagram. However, in the red giant part of the HR diagram, this determination is not robust at all. Assuming a mass between 0.8 and 3\,M$_{\odot}$ and scaling from the solar case (Kjeldsen \& Bedding \cite{kjebed}), a large frequency spacing of 2.8-5.5\,$\mu$Hz is expected.

\section{CoRoT Observations}
\label{co}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg1.eps}} \caption{The total CoRoT light curve (top) and a zoom (bottom) of HR~7349. This light curve is detrended with a polynomial fit (order 8), which only affects frequencies below 6\,$\mu$Hz. A periodicity of about 8.5 hours can be seen in the zoom, corresponding to oscillation modes close to 30\,$\mu$Hz.} \label{light}% \end{figure}

\object{HR 7349} was observed with the CoRoT satellite for 5 consecutive months. CoRoT was launched on 2006 December 27 from the Ba\"{i}konur cosmodrome on a Soyuz Fregat II-1b launcher. The raw photometric data acquired with CoRoT were reduced by the CoRoT team. A detailed description of how photometric data are extracted for the seismology field was presented in Baglin (\cite{bag}). A summary can be found in Appourchaux et al. (\cite{appourchaux}). In the seismofield, CoRoT obtains one measurement every 32 seconds. The observations lasted for 156.64 days, from May 11th to October 15th 2007. The light curve shows near-continuous coverage over the 5 months, with only a small number of gaps due mainly to the passage of CoRoT across the South Atlantic Anomaly.
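Before analysing the light curve further, the luminosity, radius, and expected large-spacing range quoted in Sect.~2 can be cross-checked with a few lines of arithmetic. The sketch below assumes the commonly adopted solar reference values $\Delta\nu_\odot \simeq 134.9\,\mu$Hz and $T_{\rm eff,\odot} = 5777$\,K; it is illustrative only, and reproduces the quoted 2.8-5.5\,$\mu$Hz range to within rounding.
\begin{verbatim}
# Cross-check of the Sect. 2 parameters (illustrative sketch).
import math

# Weighted-mean effective temperature from 4790+/-80 K and 4704+/-110 K
w1, w2 = 1.0/80**2, 1.0/110**2
Teff = (4790*w1 + 4704*w2) / (w1 + w2)   # ~4760 K
sigma_Teff = (w1 + w2)**-0.5             # ~65 K

# Luminosity from V = 5.809, A_V = 0.185, parallax 9.64 mas, BC = -0.40
d_pc = 1000.0 / 9.64                     # distance in pc
M_V = 5.809 - 0.185 - 5.0*math.log10(d_pc/10.0)
M_bol = M_V - 0.40
L = 10.0**((4.746 - M_bol)/2.5)          # ~69 L_sun

# Radius from Stefan-Boltzmann, then the large-spacing scaling
# Delta_nu = Delta_nu_sun * sqrt((M/M_sun)/(R/R_sun)^3)
R = math.sqrt(L) * (5777.0/Teff)**2      # ~12 R_sun
for M in (0.8, 3.0):
    print(M, 134.9*math.sqrt(M/R**3))    # ~2.8 and ~5.4 uHz
\end{verbatim}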
These short gaps in the light curve were filled by suitable interpolation (Baglin \cite{bag}), without any influence on the mode extraction because it only affects the amplitude of frequencies far above the oscillation range of our target (see Fig.~\ref{figboth}). The duty cycle for HR~7349 before interpolation was 90~\%. For the frequency analysis (see Sect.~\ref{fa}), the light curve was detrended with a polynomial fit to remove the effect of the aging of the CCDs (see Auvergne et al. \cite{auvergne}). This detrending has no effect on the amplitude or frequency of the oscillation modes, since it only affects the power spectrum at frequencies lower than 6\,$\mu$Hz.

The light curve shows variations on a timescale of 8-9 hours with peak-to-peak amplitudes of 1-3\,mmag (see Fig.~\ref{light}). This signal is a superposition of tens of individual modes with similar periods (see Sect.~\ref{fa}).

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg2.eps}} \caption{Power spectra of the original data (grey) and interpolated data (black). The range of the oscillation is zoomed in the inset. The interpolation drastically reduces the amplitude of aliases, in particular the one at 23\,$\mu$Hz.} \label{figboth}% \end{figure}

\section{Frequency analysis}
\label{fa}

\subsection{Noise determination}
\label{nb}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg3.eps}} \caption{Power density spectrum of the photometric time series of HR~7349 and a multi-component function (black line) fitted to the heavily smoothed power density spectrum. The function is the superposition of three power-law components (dashed lines), white noise (horizontal dashed line) and a power excess hump approximated by a Gaussian function.} \label{logfit}% \end{figure}

We computed the power spectra of the CoRoT light curve both with gaps and with interpolated points (see Sect.~\ref{co}). The resulting power spectra are quasi-identical, the interpolation not affecting the oscillations but suppressing the aliases (the most important of which lies at 23\,$\mu$Hz). We thus analyze the interpolated time series, which has negligible alias amplitudes. The time base of the observations gives a formal resolution of 0.07\,$\mu$Hz. The power (density) spectrum of the time series, shown in Figs.~\ref{logfit} and~\ref{power}, exhibits a series of peaks between 20 and 40\,$\mu$Hz, exactly where solar-like oscillations are expected for this star. The power density spectrum is independent of the observing window: this is achieved by multiplying the power by the effective length of the observing run (we have to divide by the resolution for equidistant data), which is calculated to be the reciprocal of the area beneath the spectral window in power (Kjeldsen et al. \cite{kjel}). We note that to obtain the same normalisation as in Baudin et al. (\cite{baudin}), we multiply the power by the effective length of the observation divided by four.

Typically for such a power spectrum, the noise has two components:
\begin{itemize}
\item At high frequencies it is flat, indicative of the Poisson statistics of photon noise.
\item Towards the lowest frequencies, the power scales inversely with frequency, as expected for instrumental instabilities and noise of stellar origin like granulation.
\end{itemize}
For the Sun, it is common practice to model the background signal with power laws to allow accurate measurements of solar oscillation frequencies and amplitudes (Harvey \cite{harvey}, Andersen et al. \cite{andersen}, Aigrain et al. \cite{aigrain}).
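As an illustration of this practice (the exact model we adopt is given in the next paragraph), such a background of power-law components plus white noise and a Gaussian oscillation hump can be fit as sketched below. The arrays \texttt{freq} and \texttt{psd} are assumed to hold a smoothed power density spectrum; this is an illustrative sketch, not our actual reduction pipeline.
\begin{verbatim}
# Sketch of a background fit: three power-law (Harvey-like) components
# with slope fixed to 4, plus white noise and a Gaussian power excess.
# `freq` (uHz) and `psd` (ppm^2/uHz) are assumed inputs; the B_i are
# expressed here in units of 1/uHz.
import numpy as np
from scipy.optimize import curve_fit

def background(nu, A1, B1, A2, B2, A3, B3, Pn, Pg, nu_max, sigma):
    model = Pn + Pg*np.exp(-(nu_max - nu)**2/(2.0*sigma**2))
    for A, B in ((A1, B1), (A2, B2), (A3, B3)):
        model += A/(1.0 + (B*nu)**4)
    return model

# freq, psd = ...  (load the smoothed power density spectrum here)
# popt, pcov = curve_fit(background, freq, psd, p0=initial_guesses)
\end{verbatim}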
To study this ``noise'', we compute the power density spectrum shown in Fig.~\ref{logfit} and fit a smoothed version of this spectrum with a sum of N power laws
\begin{equation}
P (\nu) = \sum_{i=1}^N P_i = \sum_{i=1}^N \frac{A_i}{1+(B_i \ \nu)^{C_i}} ,
\end{equation}
where the number of components N depends on the frequency coverage, $\nu$ is the frequency, $A_i$ is the amplitude of the $i$-th component, $B_i$ is its characteristic timescale, and $C_i$ is the slope of the power law. For a given component, the power remains approximately constant on timescales longer than $B_i$, and drops off for shorter timescales. Each power law corresponds to a separate class of physical phenomena, occurring on a different characteristic timescale, and corresponding to different physical structures on the surface of the star. In our case, we fixed the slope to 4, which is a typical value for the Sun (Aigrain et al. \cite{aigrain}, Michel et al. \cite{michel}). Moreover, this value allows us to fit our power density spectrum well. To model the power density spectrum, we added a white noise $P_n$ and a power excess hump produced by the oscillations, which was approximated by a Gaussian function
\begin{equation}
P (\nu) = \sum_{i=1}^N \frac{A_i}{1+(B_i \ \nu)^{C_i}} + P_n + P_g \ e^{-(\nu_{max}-\nu)^2/(2 \sigma^2)} \ .
\end{equation}
The number of components N is determined iteratively: we first performed a single-component fit, and additional components were then added until they no longer improved the fit. In our case, we limited the number of components to three. We note that for this star, part of the low-frequency variation is caused by the aging of the CCD. The timescales for the different noise sources ($B_i$) are 3.6 $\times$ 10$^6$, 6.9 $\times$ 10$^4$, and 1.1 $\times$ 10$^4$ s. The noise at high frequencies $P_n$ is only 0.17 ppm$^2/\mu$Hz and the oscillations are centered on 28.2\,$\mu$Hz.

\subsection{Search for a comb-like pattern}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg4.eps}} \caption{Power spectrum of the CoRoT observations of HR~7349. Only a polynomial fit was removed from the original data.} \label{power}% \end{figure}

In solar-like stars, p-mode oscillations are expected to produce a characteristic comb-like structure in the power spectrum, with mode frequencies $\nu_{n,\ell}$ reasonably well approximated by the asymptotic relation (Tassoul \cite{tassoul80}):
\begin{eqnarray}
\label{eq1}
\nu_{n,\ell} & \approx & \Delta\nu(n+\frac{\ell}{2}+\epsilon)-\ell(\ell+1) D_{0}\;.
\end{eqnarray}
Here $D_0$ (which equals $\frac{1}{6} \delta\nu_{02}$ if the asymptotic relation holds exactly) and $\epsilon$ are sensitive to the sound speed near the core and in the surface layers, respectively. The quantum numbers $n$ and $\ell$ correspond to the radial order and the angular degree of the modes, and $\Delta\nu$ and $\delta\nu_{02}$ are the large and small spacings. We note that a giant star such as \object{HR~7349} is expected to show substantial deviations from the regular comb-like structure described above (Christensen-Dalsgaard \cite{chris}). This is because some mode frequencies, except for $\ell=0$, may be shifted from their usual regular spacing by avoided crossings with gravity modes in the stellar core (also called `mode bumping') (see e.g., Christensen-Dalsgaard et al. \cite{cd95} and Fernandes \& Monteiro \cite{fm03}). We must keep the possibility of these mixed modes in mind when attempting to identify oscillation modes in the power spectrum.
Moreover, the ratio of mode lifetime to oscillation period is usually far smaller for red giants (because of their longer periods) than for solar-like stars, which can complicate the mode detection (see e.g., Stello et al. \cite{stello}, Tarrant et al. \cite{tarrant}).

The first step is to measure the large spacing that should appear at least between radial modes. The power spectrum is autocorrelated to search for periodicity. Each peak of this autocorrelation (see Fig.~\ref{auto}) corresponds to a structure present in the power spectrum. One of the three strong groups of peaks at about 1.7, 3.5, and 5.2\,$\mu$Hz should correspond to the large spacing. By visually inspecting the power spectrum, the value of about 3.5\,$\mu$Hz is adopted as the large separation; the other two are spacings between the $\ell$\,=\,0 and $\ell$\,=\,1 modes. This large separation value is in good agreement with the scaled value from the solar case (see Sect.~\ref{lse}).

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg5.eps}} \caption{Autocorrelation of the slightly smoothed power spectrum. The large spacing, as well as the separations between $\ell$\,=\,0 and $\ell$\,=\,1 modes, are clearly present.} \label{auto}% \end{figure}

\subsection{Extraction of mode parameters}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg6.eps}} \caption{Power spectra of the whole dataset (black) and of the two half-long subsets (grey). The frequencies of the oscillation peaks change from one subset to the other, which is a sign of a finite lifetime. The initial guesses for the identification of modes are indicated by shaded regions: these regions are regularly spaced in agreement with the large separation deduced from the autocorrelation. Every second region has a simpler and narrower structure, corresponding to our identification of $\ell$\,=\,1 modes; the others are more complex and wide and correspond to $\ell$\,=\,0 and~2 modes.} \label{fit2}% \end{figure}

\begin{figure*} \resizebox{\hsize}{!}{\includegraphics{12749fg7.eps}} \caption{{\bf Top:} Lorentzian fit (thick red line) of the observed power spectrum assuming the same lifetime for all the oscillation modes. {\bf Bottom:} Ratio of the observed spectrum over the Lorentzian fit (in amplitude). As expected, this ratio does not show any correlation with the fit.} \label{fit}% \end{figure*}

The power spectra in Figs.~\ref{fit2} and~\ref{fit} clearly exhibit a regularity that allows us to identify $\ell$\,=\,0 to $\ell$\,=\,2 modes. Each mode consists of several peaks, which is the clear signature of a finite lifetime shorter than the observing time span. In order to determine the mode frequencies, as well as the amplitudes and lifetimes of the modes, we fitted the power spectrum using a maximum likelihood estimation (MLE) method. MLE has been applied widely in the helioseismic community (e.g., see Schou \cite{schou}, Appourchaux et al. \cite{appour1}, and Chaplin et al. \cite{chaplin}). Our program uses the IDL routines developed by T. Appourchaux (\cite{appour1}). This method has already been used with success for the red giant $\epsilon$~Oph (Barban et al. \cite{barban2}). The modelled power spectrum for a series of M oscillation modes, P($\nu_k$), is
\begin{equation}
P ( \nu_k ) = \sum_{n=1}^M \left( H_n \frac{1}{1+\left(\frac{2 ( \nu_k - \nu_n)}{\Gamma_n}\right)^2} \right) + B,
\end{equation}
where $H_n$ is the height of the Lorentzian profile, $\nu_n$ is the oscillation mode frequency, $\Gamma_n$ is the mode line-width, and $B$ is the background noise.
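A minimal sketch of such an MLE fit for a single mode is given below; the actual analysis uses the IDL routines of T. Appourchaux, so the Python fragment is purely illustrative, with \texttt{freq}, \texttt{psd} and the local background \texttt{B} assumed as inputs.
\begin{verbatim}
# Sketch of the MLE fit of a single Lorentzian mode profile under
# chi^2 2-d.o.f. spectral statistics; freq, psd, B are assumed inputs.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, freq, psd, B):
    H, nu0, Gamma = params
    model = H/(1.0 + (2.0*(freq - nu0)/Gamma)**2) + B
    return np.sum(np.log(model) + psd/model)

# guess = (8.0e3, 27.6, 0.25)   # height, frequency (uHz), width (uHz)
# res = minimize(neg_log_likelihood, guess, args=(freq, psd, B),
#                method="Nelder-Mead")
\end{verbatim}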
The fit was performed on the non-over-sampled power spectrum to minimize the interdependency of the points. The quantity that was minimized is
\begin{equation}
L = \sum_{k=1}^K \left( \ln P(\nu_k) + \frac{P_{obs}(\nu_k)}{P(\nu_k)} \right) ,
\end{equation}
where $K$ is the number of bins, i.e., the number of Fourier frequencies.

Only a few peaks belong to a given mode, indicating a lifetime longer than 10 days. It is thus also difficult to derive the correct Lorentzian shape for each mode. Therefore some parameters were fixed to avoid incorrect parameter determinations:
\begin{itemize}
\item The noise was determined independently of the MLE method (see Sect.~\ref{nb}).
\item Since no difference between the shapes of modes of different degree $\ell$ is detected and the value of v\,$\sin i$ is extremely small, resulting in an expected rotational splitting smaller than 0.03\,$\mu$Hz, we assumed the splitting to be zero.
\item When fitting all modes without fixing the width of the Lorentzian envelope, we clearly saw that the fit was not robust enough to provide an accurate determination of all mode parameters. The method was thus first to find a mean value for the Lorentzian width and to fix this mean value for all modes. The mean width of the Lorentzian is obtained by individually fitting all modes and by taking their mean, rejecting the values that obviously correspond to a poor fit. The determined mean is 0.25\,$\pm$\,0.06\,$\mu$Hz, which corresponds to a mode lifetime of 14.7$^{+4.7}_{-2.9}$\,d. The use of sub-series shows the stochastic nature of the mode excitation, caused by the different fine structure of peaks in each spectrum, and gives an indication of their width. We note that we also checked that no significant width difference was found for modes with different degrees $\ell$ (by comparing their mode-width means).
\end{itemize}

All modes were then fitted with the fixed parameters deduced above. We note that the initial guess values are indicated by shaded regions in Fig.~\ref{fit2}. These regions are already in good agreement with a regular spacing between modes. Every second region has a structure that is more simple and narrow, corresponding to our identification of $\ell$\,=\,1 modes; the others are more complex and wide. In this last case, we needed to fit two modes per region to reproduce the power spectrum (which correspond to $\ell$\,=\,0 and~2 modes). At 38\,$\mu$Hz, the signal-to-noise ratio is far smaller and it becomes more difficult to differentiate between modes: for the sake of homogeneity, we also decided to fit two modes in this region. The frequencies of these last two modes will, however, be more uncertain. The results are listed in Table~\ref{tab1}. The formal uncertainties associated with MLE are well understood, as explained by Libbrecht (\cite{lib}) and Toutain \& Appourchaux (\cite{toutain}).

The echelle diagram with the nineteen identified modes is shown in Fig.~\ref{dech}. At higher and lower frequency (above 40\,$\mu$Hz and below 19\,$\mu$Hz), the amplitude of the modes is either too small or the noise too high to unambiguously identify additional modes. The ratio of the observed power spectrum to the fit is shown in Fig.~\ref{fit}: it appears to be pure noise and has a mean value of 1. Moreover, no correlation was found between the amplitude of this ratio and the amplitude of the fit. We note that the non-radial modes are as well aligned as radial or non-mixed modes.
However, the separation between $\ell$\,=\,1 and $\ell$\,=\,0 modes is not fully compatible with the asymptotic relation. The $\ell$\,=\,1 modes are indeed too far to the right in the echelle diagram. The large and small separations are shown in Fig.~\ref{diagastero}. The mean large separation has a value of $\Delta\nu$\,=\,3.47\,$\pm$\,0.12\,$\mu$Hz, and the values for different degrees $\ell$, $\Delta\nu_{\ell}$, are: $\Delta\nu_0$\,=\,3.45\,$\pm$\,0.12\,$\mu$Hz, $\Delta\nu_1$\,=\,3.46\,$\pm$\,0.07\,$\mu$Hz, and $\Delta\nu_2$\,=\,3.50\,$\pm$\,0.19\,$\mu$Hz. We can identify a small oscillation of the large spacing with frequency, which is a clear signature of the second helium ionization zone (see e.g., Monteiro \& Thompson \cite{monteiro}). The small separation has a mean value of $\delta\nu_{02}$\,=\,0.65\,$\pm$\,0.10\,$\mu$Hz and seems to decrease with frequency. The small value of the frequency difference between the $\ell$\,=\,0 and $\ell$\,=\,2 modes makes their frequency determination more uncertain than that of the $\ell$\,=\,1 modes, which are not affected by neighbouring modes.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg8.eps}} \caption{Echelle diagram of identified modes with a large separation of $\Delta\nu_0$\,=\,3.45\,$\mu$Hz. The modes $\ell$\,=\,0 ($\bullet$), $\ell$\,=\,1 ($\blacktriangle$), and $\ell$\,=\,2 ($\circ$) follow ridges, but the $\ell$\,=\,1 modes are situated too far to the right to follow the asymptotic relation.} \label{dech}% \end{figure}

\begin{table} \caption[]{Frequencies and amplitudes of the identified modes. The uncertainties in the last given digit of the frequencies are noted in parentheses. Mode heights are given in units of $10^3$\,ppm$^2/\mu$Hz. For the S/N estimates (see Barban et al. \cite{barban2}), the signal is taken to be the height of the fitted Lorentzian profile and the noise is determined according to the procedure described in Sect.~\ref{nb}.}
\begin{center}
\begin{tabular}{ccccc}
\hline \hline
Degree & Frequency & Mode height & S/N & Amplitude \\
$\ell$ & $\mu$Hz & $10^3$\,ppm$^2/\mu$Hz & & ppm \\
\hline
0 & 20.94 (7)& 7.8& 10.8&79\,$\pm$\,15\\
0 & 24.20 (7)& 10.0& 17.6&89\,$\pm$\,15\\
0 & 27.65 (6)& 11.3& 24.8&94\,$\pm$\,16\\
0 & 31.08 (6)& 9.0& 23.7&84\,$\pm$\,14\\
0 & 34.59 (7)& 4.6& 14.0&60\,$\pm$\,11\\
0 & 38.17 (9)& 1.6& 5.6&35\,$\pm$\,8\\
1 & 19.16 (7)& 5.3& 6.2&65\,$\pm$\,14\\
1 & 22.59 (7)& 5.5& 8.6&66\,$\pm$\,13\\
1 & 25.94 (5)& 16.8& 33.1&115\,$\pm$\,19\\
1 & 29.45 (5)& 16.0& 38.8&112\,$\pm$\,18\\
1 & 32.88 (5)& 9.1& 26.0&85\,$\pm$\,14\\
1 & 36.41 (6)& 4.4& 14.6&59\,$\pm$\,11\\
1 & 39.93 (6)& 2.3& 8.8&43\,$\pm$\,9\\
2 & 20.19 (8)& 6.5& 8.4&71\,$\pm$\,14\\
2 & 23.58 (7)& 6.5& 10.9&71\,$\pm$\,13\\
2 & 26.88 (7)& 6.5& 13.7&72\,$\pm$\,13\\
2 & 30.48 (6)& 9.5& 24.1&86\,$\pm$\,15\\
2 & 33.91 (8)& 2.3& 7.0&43\,$\pm$\,9\\
2 & 37.68 (8)& 2.1& 7.3&41\,$\pm$\,8\\
\hline \hline
\end{tabular}\\
\end{center}
\label{tab1}
\end{table}

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg9.eps}} \caption{{\bf Top:} Small spacing versus frequency. {\bf Bottom:} Large spacing versus frequency. The variations of the large separation with frequency show a clear oscillation. The symbols used are the same as for Fig.~\ref{dech}.} \label{diagastero}% \end{figure}

\subsection{Oscillation amplitudes}

The fit of the Lorentzian profiles to the power spectrum infers the height of all oscillation modes. Since the modes are resolved and because of the normalization of the power spectrum, the {\sc rms} amplitude is measured to be (see Baudin et al.
\cite{baudin})
\begin{equation}
A = \sqrt{H \pi \Gamma} ,
\end{equation}
where $H$ and $\Gamma$ are the height and width (FWHM) of the Lorentzian function, respectively, in the power density spectrum. The amplitudes are in the range of 35 -- 115\,ppm (see Fig.~\ref{ampli}). The error bars are derived from the Hessian matrix, and the correlation between the height and width has been taken into account (Toutain \& Appourchaux \cite{toutain}, Appourchaux, private communication). We note that the amplitudes of the $\ell$\,=\,1 modes are the highest.

As noticed by Kjeldsen et al. (\cite{kjel}), measurements made on different stars with different instruments using different techniques, in different spectral lines or bandpasses, have different sensitivity to the oscillations. It is thus important to derive a bolometric amplitude that is independent of the instrument used. We computed the maximum bolometric amplitude of the $\ell$\,=\,0 modes, because their visibility coefficients do not depend on the inclination of the star. According to Michel et al. (\cite{michel}), who derived the CoRoT response for radial modes, the radial-mode amplitudes of HR~7349 must be divided by 1.16 to obtain the bolometric amplitudes. We find that A$_{\rm bol, \ell=0, max}$\,=\,81\,ppm, which corresponds to 32 times the solar value (Michel et al. \cite{michel}). The scaling laws for both the large separation and the frequency of maximum amplitude (Kjeldsen \& Bedding, \cite{kjebed}), coupled with the non-asteroseismic constraints, yield a mass for HR~7349 of about 1.2\,M$_{\odot}$. The derived amplitude is in good agreement with a scaling function $( L / M )^{s} / \sqrt{T_{\rm eff}/T_{\rm eff,\odot}}$, with $s$ close to 0.8, which lies between the values given by Samadi et al. (\cite{samadi}; s=0.7) and Kjeldsen \& Bedding (\cite{kjebed}; s=1).

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{12749fg10.eps}} \caption{Amplitude of the oscillation modes versus frequency. The symbols used are the same as for Fig.~\ref{dech}.} \label{ampli}% \end{figure}

\section{Conclusion}

The red giant star HR~7349 has been observed for about 156 days by the CoRoT satellite. These observations have yielded a clear detection of p-mode oscillations. As already mentioned by De Ridder et al. (\cite{joris2}), non-radial modes are observed in red giants. Nineteen identifiable modes of degree $\ell$\,=\,0 to $\ell$\,=\,2 appear in the power spectrum between 18 and 42\,$\mu$Hz, with average large and small spacings of 3.47 and 0.65\,$\mu$Hz, respectively, and a maximum bolometric amplitude of 81\,ppm. We note that the amplitude of the $\ell$\,=\,1 modes is even larger than that of the radial modes. All modes of the same degree are aligned in the echelle diagram, which is a sign of modes that follow the asymptotic relation. However, the separation between $\ell$\,=\,1 and $\ell$\,=\,0 modes is not fully compatible with this asymptotic relation. All frequency patterns, from very complex to regular, are theoretically expected for red giants (Dupret et al. \cite{dupret}): our observations correspond to a red giant for which the radiative damping of non-radial modes is large and only radial modes and non-radial modes completely trapped in the envelope can be observed. By fitting Lorentzian profiles to the power spectrum, it has also been possible to unambiguously derive, for the first time for a red giant, a mean line-width of 0.25\,$\mu$Hz, corresponding to a mode lifetime of 14.7\,days.
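The quoted lifetime and bolometric amplitude can be checked directly from the mean line-width and the strongest $\ell$\,=\,0 mode of Table~\ref{tab1}; the sketch below is a numerical cross-check only, not part of the analysis pipeline.
\begin{verbatim}
# Lifetime from the mean line-width, and amplitude of the strongest
# l=0 mode (height 11.3 x 10^3 ppm^2/uHz at 27.65 uHz, Table 1).
import math

Gamma_Hz = 0.25e-6                       # mean line-width in Hz
tau = 1.0/(math.pi*Gamma_Hz)/86400.0     # mode lifetime in days
print(tau)                               # ~14.7 d

H = 11.3e3                               # mode height in ppm^2/uHz
Gamma = 0.25                             # line-width in uHz
A = math.sqrt(H*math.pi*Gamma)           # rms amplitude, ~94 ppm
print(A, A/1.16)                         # bolometric amplitude, ~81 ppm
\end{verbatim}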
This lifetime is in agreement with the scaling law $T_{\rm eff}^{-4}$ suggested by Chaplin et al. (\cite{chaplin9}), although it is a little too long. This relation, however, has yet to be verified for a larger number of red giants with different physical properties. The theoretical study of this red giant, including asteroseismic and non-asteroseismic constraints, will be the subject of a second paper.

\begin{acknowledgements} FC is a postdoctoral fellow of the Fund for Scientific Research, Flanders (FWO). AM is a postdoctoral researcher of the 'Fonds de la recherche scientifique' FNRS, Belgium. TK is supported by the Canadian Space Agency and the Austrian Science Fund (FWF). The research leading to these results has received funding from the Research Council of K.U.Leuven under grant agreement GOA/2008/04, from the Belgian PRODEX Office under contract C90309: CoRoT Data Exploitation, and from the FWO-Vlaanderen under grant O6260. We thank T. Appourchaux for helpful comments. \end{acknowledgements}
\section{Introduction}
\label{intro}

Wireless sensor networks (WSN) consist of many nodes with sensing, computation, and communication capabilities, sharing a common wireless communication channel. In a typical WSN configuration, a large number of nodes measure possibly correlated data and transmit to a single collector node. This network problem is referred to as the ``sensor reachback problem'' in \cite{servetto-barros}. In many applications, nodes are energy-limited and the physical distance between each sensing node and the common destination makes the transmission difficult or (energy-wise) expensive. We investigate such a scenario, where communicating nodes cooperate with each other and act as relays in order to transport their own data along with the data from the other sensing nodes.

Wireless channels differ from their wired-line counterpart in two fundamental aspects. On one hand, the wireless channel is a broadcast (shared) medium and the signal from any transmitter is received by potentially many receivers. This is called the \emph{broadcast} constraint. On the other hand, any receiver observes the superposition (linear combination) of the signals from possibly many transmitters. This is called the \emph{interference} constraint. The simultaneous presence of these two constraints makes a general wireless network quite difficult to analyze. The multiuser Gaussian channel that models a relay network, unfortunately, has so far escaped a sharp general characterization, even in the simplest case of a Gaussian relay network with a single source, a single destination and a single relay \cite{meulen}. The capacities of the Gaussian relay channel and of certain discrete relay channels are evaluated in \cite{cover1}, where a lower bound to the capacity of the general relay channel is also presented. In \cite{Gastpar}, the capacity is determined for a Gaussian relay network when the number of relays is asymptotically large.

In \cite{salman1}, a simpler deterministic channel is proposed. While this channel model is significantly simpler to analyze, it is able to capture the key aspects of the broadcast and interference constraints. For this model, referred to as the linear finite-field deterministic model, \cite{salman1} determines the capacity for a general relay network with one source and one destination, as well as the multicast capacity with one source, multiple destinations and common information only.

Our contribution in this paper builds heavily on the results and techniques of \cite{salman1} and can be regarded almost as a trivial extension thereof. Nevertheless, to the best of our knowledge and somehow surprisingly, this simple extension has not been reported before. We consider the ``sensor reachback problem'' \cite{servetto-barros} for a linear finite-field deterministic network with arbitrary topology, a single destination node and independent information at the source nodes. We show that the capacity region for this network is given by the cut-set bound and takes on a very simple and appealing closed-form expression. Also, for a specific source correlation model, we find necessary and sufficient conditions for the transmissibility of the sources. This result closely resembles Theorem 1 of \cite{servetto-barros}, with the following main differences: on one hand, the result of \cite{servetto-barros} is more general, since it applies to general correlated discrete sources observed at the sensor nodes and general noisy channels.
On the other hand, our result applies to networks with broadcast and interference constraints, while the result of \cite{servetto-barros} requires ``orthogonal'' channels, i.e., with neither broadcast nor interference constraints.

We expect that the achievability technique for the Gaussian (noisy) relay network proposed in \cite{salman1} can be generalized to the case of multiple independent sources and a single destination as examined in this paper, so that a scheme that achieves a bounded and fixed gap to the capacity region in the Gaussian case can be found. Also, we believe that a fixed-gap rate-distortion achievable region can be found using independent quantization and Slepian-Wolf binning for the case of correlated Gaussian sources with mean-squared distortion and Gaussian noisy channels, at least for some specific source correlation model (see \cite{Maddah-tse}), especially matched to the discrete correlated source model considered here. This, however, seems to be a far more involved result since, even in the standard case of Gaussian/quadratic separated lossy encoding (which corresponds to the case where the communication network reduces to a set of orthogonal links from the sensor nodes to the destination), a general fixed-gap characterization of the rate-distortion region is missing \cite{Maddah-tse}. In this work we limit ourselves to the linear finite-field deterministic model and we leave the fixed-gap achievability for the Gaussian case to future work.

\section{Review of the deterministic linear finite-field model}
\label{review}

In this section we briefly review the deterministic channel model proposed in \cite{salman1} and used in this work. The received signal at each node is a deterministic function of the transmitted signals. This model focuses on the signal interactions rather than on the channel noise. In a Gaussian (real) network, a single link from node $i$ to node $j$ with SNR ${\sf snr}_{i,j}$ has capacity $C_{i,j} = \frac{1}{2} \log(1 + {\sf snr}_{i,j}) \approx \log \sqrt{{\sf snr}_{i,j}}$. Therefore, approximately, $n_{i,j} = \left \lceil \log \sqrt{{\sf snr}_{i,j}} \right \rceil$ bits per channel use can be sent reliably. In \cite{salman1} (see also references therein), the Gaussian channel is replaced by a finite-field deterministic model that reflects the above behavior. Namely, the transmitted signal amplitude is represented through its binary\footnote{The generalization to $p$-ary expansion is trivial. Here we focus on the binary expansion as in \cite{salman1}.} expansion $X = \sum_{\ell=1}^\infty B_\ell 2^{-\ell}$ where $B_\ell \in \mbox{\bb F}_2$. At the receiver, all the input bits such that $\sqrt{{\sf snr}_{i,j}} 2^{-\ell} > 1$ (i.e., received ``above the noise level'') are perfectly decoded, while all those such that $\sqrt{{\sf snr}_{i,j}} 2^{-\ell} \leq 1$ (i.e., received ``below the noise level'') are completely lost. It follows that only the most significant bits (MSBs) can be reliably decoded, such that the capacity of the deterministic channel is given exactly by $n_{i,j}$ and is achieved by letting $B_1, \ldots, B_{n_{i,j}}$ be i.i.d. Bernoulli-$1/2$.
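A small sketch of this bit-level picture may help: with $q$-bit vectors, a link of strength $n$ simply delivers the $n$ most significant bits of the input, which is exactly the action of the down-shift matrix introduced below. The values are hypothetical and the snippet is illustrative only.
\begin{verbatim}
# Sketch of the finite-field deterministic link: represent the input
# as a vector of q bits (MSB first); a link with capacity n delivers
# the n most significant bits and drops the rest (down-shift by q-n).
import numpy as np

q, n = 5, 3
x = np.array([1, 0, 1, 1, 0], dtype=np.uint8)   # input bits, MSB first

S = np.eye(q, k=-1, dtype=np.uint8)             # down-shift matrix
Sqn = np.linalg.matrix_power(S, q - n) % 2      # S^(q-n)
y = Sqn.dot(x) % 2                              # received word
print(y)  # -> [0 0 1 0 1]: the top n=3 bits of x, shifted down
\end{verbatim}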
A linear finite-field deterministic relay network is defined as a directed acyclic graph ${\cal G} = \{{\cal V}, {\cal E}\}$ such that the received signal at any node $j \in {\cal V}$ is given by
\begin{equation}
\label{mac-channel}
{\bf y}_j = \sum_{i \in {\cal V} : (i,j) \in {\cal E}} {\bf S}^{q - n_{i,j}} {\bf x}_i
\end{equation}
where ${\bf y}_j, {\bf x}_i \in \mbox{\bb F}_2^q$, sums and products are defined over the vector space $\mbox{\bb F}_2^q$, and where
\[ {\bf S} = \left [ \begin{array}{ccccc} 0 & 0 & 0 &\cdots & 0 \\ 1 & 0 & 0 &\cdots & 0 \\ 0 & 1 & 0 &\cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{array} \right ] \]
is a ``down-shift'' matrix. Notice that $n_{i,j} \leq q$ indicates the deterministic channel capacity of the link $(i,j)$ as described before. Without loss of generality, the integer $q$ can be set equal to the maximum of all $\{n_{i,j} : (i,j) \in {\cal E}\}$. The broadcast constraint is captured by the fact that the input ${\bf x}_i$ of each node $i$ is common to all channels $(i,j) \in {\cal E}$.

In the case of a single source (denoted by $s$) and a single destination (denoted by $d$), Theorem 4.3 of \cite{salman1} yields the capacity of linear finite-field deterministic relay networks in the form
\begin{equation}
\label{salman-capacity}
C = \min_{({\cal S}, {\cal S}^c) \in \Lambda_d} \; {\rm rank} \left \{ {\bf G}_{{\cal S},{\cal S}^c} \right \}
\end{equation}
where $\Lambda_d$ is the set of cuts ${\cal S} \subset {\cal V}$, ${\cal S}^c = {\cal V} - {\cal S}$, such that $s \in {\cal S}$ and $d \in {\cal S}^c$, and where ${\bf G}_{{\cal S},{\cal S}^c}$ is the transfer matrix of the cut $({\cal S},{\cal S}^c)$, formally defined as follows. Let ${\cal N}(i)$ denote the set of nodes $j$ for which $(i,j) \in {\cal E}$ (this is the ``fan-out'' of node $i$) and let ${\cal P}(j)$ denote the set of nodes $i$ for which $(i,j) \in {\cal E}$ (this is the ``fan-in'' of node $j$). The transfer matrix ${\bf G}_{{\cal S},{\cal S}^c}$ is defined as the matrix of the linear transformation between the transmitted vectors (channel inputs) of the nodes $\beta_{\rm in}({\cal S})$ and the received vectors (channel outputs) of the nodes $\beta_{\rm out}({\cal S})$, where the inner and outer boundaries $\beta_{\rm in}({\cal S})$ and $\beta_{\rm out}({\cal S})$ of ${\cal S}$ are defined as \cite{kramer-note}
\[ \beta_{\rm in}({\cal S}) = \{ i \in {\cal S} : {\cal N}(i) \cap {\cal S}^c \neq \emptyset \} \]
and
\[ \beta_{\rm out}({\cal S}) = \{ j \in {\cal S}^c : {\cal P}(j) \cap {\cal S} \neq \emptyset \}. \]
In words: $\beta_{\rm in}({\cal S})$ is the set of nodes of ${\cal S}$ with a direct link to nodes in ${\cal S}^c$, and $\beta_{\rm out}({\cal S})$ is the set of nodes in ${\cal S}^c$ with a direct link from nodes in ${\cal S}$.

Going through the proof of Theorem 4.3 in \cite{salman1}, we notice that the ``down-shift'' structure of the individual channels is irrelevant. In fact, this structure is useful in making the connection between the linear finite-field model and the corresponding Gaussian case. As a matter of fact, if the channel matrices ${\bf S}^{q - n_{i,j}}$ in the above model are replaced by general matrices ${\bf S}_{i,j} \in \mbox{\bb F}_2^{q \times q}$, the result (\ref{salman-capacity}) still holds.
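To make the cut values in (\ref{salman-capacity}) concrete, the following sketch computes ranks over $\mbox{\bb F}_2$ for down-shift transfer matrices. Note in particular how the interference constraint shows up: two links of strengths $n_1, n_2$ superposing at a single receiver give a cut transfer matrix $[{\bf S}^{q-n_1} \,|\, {\bf S}^{q-n_2}]$ of rank $\max(n_1,n_2)$, not $n_1+n_2$. The code is an illustration, not the construction used in \cite{salman1}.
\begin{verbatim}
# Cut values as ranks over F_2, via Gaussian elimination mod 2.
import numpy as np

def gf2_rank(M):
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size == 0:
            continue
        r = rank + pivots[0]
        M[[rank, r]] = M[[r, rank]]      # move pivot row up
        rows = np.nonzero(M[:, col])[0]
        rows = rows[rows != rank]
        M[rows] ^= M[rank]               # eliminate the column
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

q, n1, n2 = 4, 3, 2
S = np.eye(q, k=-1, dtype=np.uint8)
A = np.linalg.matrix_power(S, q - n1) % 2
B = np.linalg.matrix_power(S, q - n2) % 2
print(gf2_rank(np.hstack([A, B])))   # -> 3 = max(n1, n2), one receiver
print(gf2_rank(A) + gf2_rank(B))     # -> 5 = n1 + n2, separate receivers
\end{verbatim}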
\section{Main result}
\label{main}

In a linear finite-field deterministic network defined as above, let ${\cal V} = \{1,\ldots, N, d\}$, where node $d$ denotes the common destination and all other nodes $\{1, \ldots, N\}$ have independent information to send to node $d$. For any integer $T = 1, 2, \ldots$ we let ${\cal W}_i = \{1, \ldots, \lceil 2^{TR_i} \rceil \}$ denote the message set of node $i = 1,\ldots, N$. A $(T, R_1, \ldots, R_N)$ code for the network is defined by a sequence of {\em strictly causal} encoding functions $f_i^{[t]} : {\cal W}_i \times \mbox{\bb F}_2^{q(t - 1)} \rightarrow \mbox{\bb F}_2^q$, for $t = 1, \ldots, T$ and $i = 1, \ldots, N$, such that the transmitted signal of node $i$ at (discrete) time $t$ is given by ${\bf x}_i[t] = f_i^{[t]}(w_i, {\bf y}_i[1], \ldots, {\bf y}_i[t-1])$, and by a decoding function $g : \mbox{\bb F}_2^{Tq} \rightarrow {\cal W}_1 \times \cdots \times {\cal W}_N$, such that the set of decoded messages is given by $(\widehat{w}_1, \ldots, \widehat{w}_N) = g({\bf y}_d[1], \ldots, {\bf y}_d[T])$. The average probability of error of such a code is defined as $P_n(e) = \mbox{\bb P}((W_1, \ldots, W_N) \neq (\widehat{W}_1, \ldots, \widehat{W}_N))$, where the random variables $W_i$ are independent and uniformly distributed on the corresponding message sets ${\cal W}_i$. The rate $N$-tuple $(R_1, \ldots, R_N)$ is {\em achievable} if there exists a sequence of $(T, R_1, \ldots, R_N)$-codes with $P_n(e) \rightarrow 0$ as $T \rightarrow \infty$. The capacity region ${\cal C}$ of the network is the closure of the set of all achievable rates. With these definitions, we have:

\begin{thm} \label{thm1} The capacity region ${\cal C}$ of a linear finite-field deterministic network $({\cal V}, {\cal E})$ with independent information at the nodes $\{1,\ldots, N\}$ and a single destination $d$ is given by
\begin{equation}
\label{cut-set-general}
\sum_{i \in {\cal S}} R_{i} \leq {\rm rank}\left \{ {\bf G}_{{\cal S}, {\cal S}^c} \right \}, \;\;\; \forall \; {\cal S} \subseteq \{1,\ldots, N\}.
\end{equation}
\end{thm}

\begin{proof} The converse of (\ref{cut-set-general}) follows directly from the general cut-set bound and from the fact that, for the linear deterministic network model, uniform i.i.d. inputs maximize all cut-set values at once \cite{cover_book,salman1,kramer-note}.

For the direct part, we build an augmented network by introducing a virtual source node 0 and by expanding the channel output alphabet of each node $i = \{1, \ldots, N\}$. Let $\{n_{0,i} : i = 1,\ldots, N\}$ be arbitrary non-negative integers. The channel output alphabet of node $i$ in the augmented network is given by $\mbox{\bb F}_2^{q + n_{0,i}}$. The virtual source node 0 has $n_0 = \sum_{i=1}^N n_{0,i}$ input bits, partitioned into $N$ disjoint sets ${\cal U}_i$ of cardinality $n_{0,i}$ for $i = 1,\ldots, N$, respectively, such that the bits of subset ${\cal U}_i$ are sent directly to node $i$ and are received at the top $n_{0,i}$ MSB positions of the expanded channel output alphabet. Fig.~\ref{fig:diamond_aug} shows an example of such a network augmentation for a ``diamond'' network \cite{salman1}.
\begin{figure}[!htb] \centering \includegraphics[width=3.0 in]{diamond_aug.eps} \caption{A diamond network with a source node $1$, two relay nodes $2$ and $3$ and a common destination $d$ is augmented by adding node $0$ and virtual links to nodes $1$, $2$ and $3$.} \label{fig:diamond_aug} \end{figure}

After introducing the virtual source node, the augmented linear finite-field deterministic network belongs to the class studied in \cite{salman1}, with the minor difference that the channel linear transformations are not necessarily limited to ``down-shifts''. Nevertheless, as we observed before, Theorem 4.3 of \cite{salman1} still applies. Letting $R_0$ denote the rate from the virtual source node 0 to the destination node $d$, we have that all rates $R_0$ satisfying
\begin{equation}
\label{suca}
R_0 \leq \min_{(\Omega_0 , \Omega_0^c) \in \Lambda_d} \;\; {\rm rank} \left \{ {\bf G}_{\Omega_0,\Omega_0^c} \right \}
\end{equation}
are achievable, where $\Lambda_d$ is the set of all cuts $(\Omega_0, \Omega_0^c)$ of the augmented network such that $0 \in \Omega_0$ and $d \in \Omega_0^c$. For any such set $\Omega_0$ we have that $\Omega_0 = {\cal S} \cup \{0\}$, for some ${\cal S} \subseteq \{1, \ldots, N\}$. Consequently, we have that $\Omega_0^c = {\cal S}^c$, where ${\cal S}, {\cal S}^c$ are subsets as defined in the statement of Theorem \ref{thm1}. Since the links from 0 to any node $i \in \{1,\ldots, N\}$ are orthogonal by construction (not subject to any broadcast or interference constraint), we have that ${\bf G}_{\Omega_0,\Omega_0^c}$ has a block-diagonal form, where one block is given by ${\bf G}_{{\cal S}, {\cal S}^c}$ (the links of the original network corresponding to the cut $(\Omega_0,\Omega_0^c)$ via the correspondence $\Omega_0 \leftrightarrow {\cal S}$ defined above) and the other blocks, denoted by ${\bf G}_{0,j}$ for all $j \in {\cal S}^c$, have rank $n_{0,j}$, respectively. By construction, there is no direct link between $0$ and $d$, so, without loss of generality, we can assume $n_{0,d} = 0$. The general form of $ {\bf G}_{\Omega_0,\Omega_0^c}$ is
\[ {\bf G}_{\Omega_0,\Omega_0^c} = \left [ \begin{array}{cccc} {\bf G}_{{\cal S},{\cal S}^c} & 0 & \cdots & 0 \\ 0 & {\bf G}_{0,i_1} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & {\bf G}_{0,i_{|{\cal S}^c|}} \end{array} \right ] \]
where we have indicated ${\cal S}^c = \{i_1, \ldots, i_{|{\cal S}^c|}\}$. Therefore, we have
\begin{equation}
\label{suca1}
{\rm rank} \left \{ {\bf G}_{\Omega_0,\Omega_0^c} \right \} = {\rm rank} \left \{ {\bf G}_{{\cal S},{\cal S}^c} \right \} + \sum_{j \in {\cal S}^c} n_{0,j} .
\end{equation}
In particular, the cut $\Omega_0 = \{0\}$ yields
\begin{equation}
\label{suca2}
R_0 \leq \sum_{j=1}^N n_{0,j} .
\end{equation}
By letting this inequality hold with equality, and by substituting it into all the other inequalities, we obtain the set of inequalities
\begin{equation}
\label{suca3}
\sum_{i \in {\cal S}} n_{0,i} \leq {\rm rank} \left \{ {\bf G}_{{\cal S},{\cal S}^c} \right \}, \;\;\;\; \forall \;\; {\cal S} \subseteq \{1, \ldots, N\},
\end{equation}
where we used the fact that $\sum_{j=1}^N n_{0,j} - \sum_{j \in {\cal S}^c} n_{0,j} = \sum_{i\in {\cal S}} n_{0,i}$. Consider now the ensemble of augmented networks for which there exist integers $\{n_{0,i} : i = 1, \ldots,N \}$ that satisfy (\ref{suca3}). For such networks, the rate $R_0 = \sum_{j=1}^N n_{0,j}$ is achievable (by \cite{salman1}) and therefore the individual rates $R_i = n_{0,i}$ are achievable by the argument above.
Finally, the closure of the convex hull of all individual rate vectors ${\bf R} = (n_{0,1}, \ldots, n_{0,N})$ of such networks is achievable by time-sharing. It is easy to see that this convex hull is provided by the inequalities (\ref{cut-set-general}).\footnote{Indeed, the inequalities (\ref{cut-set-general}) represent the convex relaxation of the integer constraints (\ref{suca3}).} \end{proof}

\section{A specific example: diamond network}
\label{diamond}

In this section we work out a simple example and provide an explicit achievability strategy. Consider the ``diamond'' network shown in Fig. \ref{fig:diamond_aug}, with nodes $\{1,2,3,d\}$ and links of capacity $n_{1,2}, n_{1,3}, n_{2,d}$ and $n_{3,d}$. In this case, Theorem \ref{thm1} yields the capacity region ${\cal C}$ given by
\begin{eqnarray}
\label{eqn:diamond-cap}
R_1 + R_2 + R_3 & \leq & \max\{ n_{2,d}, n_{3,d}\} \\
R_1 + R_2 & \leq & n_{2,d} + n_{1,3} \\
R_1 + R_3 & \leq & n_{3,d} + n_{1,2} \\
R_1 & \leq & \max\{ n_{1,2}, n_{1,3}\} \\
R_2 & \leq & n_{2,d} \\
R_3 & \leq & n_{3,d}
\end{eqnarray}
Next, we provide simple coding strategies that achieve all relevant vertices of ${\cal C}$. Any point ${\bf R} \in {\cal C}$ can be obtained by suitable time-sharing of the vertex-achieving strategies. There are 24 possible orderings of the individual link capacities $n_{1,2}, n_{1,3}, n_{2,d}$ and $n_{3,d}$. Due to symmetry, the regions for the case $n_{3,d} > n_{2,d}$ are the mirror images of the regions for the case $n_{2,d} > n_{3,d}$. Therefore, we shall consider only the cases where $n_{2,d} \geq n_{3,d}$. The remaining 12 cases have to be discussed individually. For example, let us focus on the case $n_{3,d} \leq n_{1,2} \leq n_{1,3} \leq n_{2,d}$. An example of the network for the choice of the link capacities $n_{3,d}=1, n_{1,2}=2, n_{1,3}=3, n_{2,d}=4$ is given in Fig. \ref{diamond-1234}. Fig. \ref{region1} shows qualitatively the shape of the capacity region in the three possible sub-cases of the link-capacity ordering $n_{3,d} \leq n_{1,2} \leq n_{1,3} \leq n_{2,d}$: case 1) for $n_{1,2} + n_{3,d} < n_{1,3}$; case 2) for $n_{1,3} \leq n_{1,2} + n_{3,d} < n_{2,d}$; and case 3) for $n_{1,2} + n_{3,d} \geq n_{2,d}$. In all cases, the achievability of the vertices B and C of the region of Fig. \ref{region1} is trivial, since these correspond to vertices of the multi-access channel with nodes 2 and 3 as transmitters and node $d$ as receiver.

{\bf Case 1).} Vertex A has coordinates $(R_1=n_{1,2}, R_2=n_{2,d} - n_{1,2} - n_{3,d} , R_3= n_{3,d})$ and can be achieved by letting node 1 send $n_{1,2}$ bits to node 2. Node 2 decodes and forwards these bits after multiplexing its own $n_{2,d} - n_{1,2} - n_{3,d} > 0$ bits in the MSB positions, such that node 3 can send $n_{3,d}$ bits without interference from node 2. Vertex D has coordinates $(R_1=n_{1,2} + n_{3,d}, R_2=n_{2,d} - n_{1,2} - n_{3,d} , R_3= 0)$ and can be achieved by letting node 1 send $n_{1,2} + n_{3,d}$ bits. These can all be decoded by node 3, which then forwards the bottom (least-significant) $n_{3,d}$ bits of node 1 to node $d$. Node 2 decodes the top (most-significant) $n_{1,2}$ bits from node 1, and forwards them after multiplexing its own bits.

{\bf Case 2).} Vertices A, D and E have coordinates $(R_1=n_{1,2}, R_2=n_{2,d} - n_{1,2}-n_{3,d}, R_3= n_{3,d})$, $(R_1=n_{1,3}, R_2=n_{2,d} - n_{1,3}, R_3=0)$ and $(R_1=n_{1,3}, R_2=n_{2,d} - n_{1,2}-n_{3,d}, R_3= n_{1,2}+n_{3,d}-n_{1,3})$, respectively. Vertex A can be achieved in the same way as in Case 1).
Vertex D can be achieved by letting node 1 send $n_{1,3}$ bits to node 3. Node 3 decodes and forwards the bottom $n_{3,d}$ of them. Since in this case $n_{1,2} \geq n_{1,3} - n_{3,d}$, node 2 can decode the top $n_{1,3} - n_{3,d}$ bits of node 1, and forwards them to node $d$ after multiplexing its own $n_{2,d} - n_{1,3}$ bits, using its $n_{2,d} - n_{3,d}$ MSBs. Vertex E can be achieved by letting node 1 transmit $n_{1,3}$ bits, the top $n_{1,2}$ of which are received by node 2. Node 3 forwards the bottom $n_{1,3} - n_{1,2}$ bits of node 1, and multiplexes its own $n_{3,d} + n_{1,2} - n_{1,3}$ bits. Node 2 forwards the top $n_{1,2}$ bits from node 1, multiplexing its own $n_{2,d} - n_{1,2} - n_{3,d}$ bits and transmitting over its $n_{2,d} - n_{3,d}$ MSBs.

{\bf Case 3).} Vertices A, D and E have coordinates $(R_1=n_{2,d}-n_{3,d}, R_2=0, R_3= n_{3,d})$, $(R_1=n_{1,3},R_2=n_{2,d} - n_{1,3}, R_3=0)$ and $(R_1=n_{1,3}, R_2=0, R_3=n_{2,d}-n_{1,3})$, respectively. Vertex A can be achieved by letting node 1 send $n_{2,d} - n_{3,d}$ bits to node 2. Since $n_{2,d} - n_{3,d} \leq n_{1,2}$, these can be decoded and forwarded to node $d$ in the MSB positions. Node 3 simply sends $n_{3,d}$ bits to node $d$ without interfering with node 2. Vertex D is achieved by letting node 1 send $n_{1,3}$ bits. The top $n_{1,3} - n_{3,d}$ of these are decoded by node 2 and forwarded together with $n_{2,d} - n_{1,3}$ of its own bits. The bottom $n_{3,d}$ bits of node 1 are decoded and forwarded by node 3. Finally, vertex E is achieved by letting node 1 send $n_{1,3}$ bits. The bottom $n_{3,d} - n_{2,d} + n_{1,3}$ of these are forwarded by node 3, after multiplexing its own $n_{2,d} - n_{1,3}$ bits. Since $n_{2,d} - n_{3,d} \leq n_{1,2}$, node 2 can decode the top $n_{2,d} - n_{3,d}$ bits from node 1 and forward them to node $d$ using its MSB positions.

The other cases follow similarly, and the whole capacity region is achieved by decode-and-forward.

\begin{figure}[!htb] \centering \includegraphics[width=8cm,height=6cm]{diamond-1234.eps} \caption{The configuration of the diamond network in the example (Case (1) in Fig.~\ref{region1}).} \label{diamond-1234} \end{figure}

\begin{figure}[!htb] \centering \includegraphics[width=8cm]{region1.eps} \caption{The capacity region of the diamond network in the example.} \label{region1} \end{figure}

\section{Transmissibility for correlated sources}
\label{correlated}

Consider the case of a sensor network where the nodes $\{1, \ldots, N\}$ observe samples from a spatially-correlated, i.i.d. in time, discrete vector source ${\bf U} = (U_1, \ldots, U_N)$ (see the source model in \cite{servetto-barros}). The goal is to reproduce the source blocks ${\bf u}[1], \ldots, {\bf u}[T]$ at the common destination node $d$. If the source blocks can be recovered at the destination with vanishing probability of error as $T \rightarrow \infty$, the vector source is said to be {\em transmissible}. In the case of a network of orthogonal links with capacities $C_{i,j}$, this problem was solved in \cite{servetto-barros} and yields the necessary and sufficient transmissibility condition\footnote{The notation $U_{\cal S} = \{U_i : i \in {\cal S}\}$ is standard.}
\begin{equation}
\label{servetto-result}
H(U_{\cal S}|U_{{\cal S}^c}) \leq \sum_{(i,j) \in {\cal S}\times {\cal S}^c} C_{i,j}, \;\;\; \forall \; {\cal S} \subseteq \{1, \ldots, N\}.
\end{equation} From the system design viewpoint, the above result yields the optimality of the ``separation'' approach consisting of the concatenation of Slepian-Wolf coding for the source with routing and single-user channel coding for the network \cite{servetto-barros}. With the same assumptions and the linear finite-field deterministic network defined before, we consider a specific model for the vector source as defined in \cite{Maddah-tse}. Let $n_0$ be a non-negative integer, and let ${\bf V} \in \mbox{\bb F}_2^{n_0}$ be a random vector of uniform i.i.d. bits. For all $i = 1, \ldots, N$, let ${\cal U}_i \subseteq \{1, \ldots, n_0\}$ and define $U_i \in \mbox{\bb F}_2^{|{\cal U}_i|}$ as the restriction of ${\bf V}$ to the components $\{V_\ell : \ell \in {\cal U}_i\}$ of ${\bf V}$. Then, the correlation model for the source $(U_1,\ldots, U_N)$ is reduced to the following ``common bits'' case: sources $U_i$ and $U_j$ have the common part $\{V_\ell : \ell \in {\cal U}_i \cap {\cal U}_j\}$, while the bits $V_\ell$ in ${\cal U}_i - {\cal U}_j$ and in ${\cal U}_j - {\cal U}_i$ are mutually independent. It follows that $H(U_i|U_j) = |{\cal U}_i| - |{\cal U}_i \cap {\cal U}_j|$. This source model is ``matched'' to a correlated source defined over the reals in the following intuitive sense. Consider $N = 2$ and let $U_1$ and $U_2$ denote the binary quantization indices resulting from quantizing two correlated random variables $A_1 \in \mbox{\bb R}$ and $A_2 \in \mbox{\bb R}$ using ``embedded'' scalar uniform quantizers with $n$ bits, such that their first $m$ MSBs are identical and their last $n-m$ least significant bits (LSBs) are mutually independent. If $A_1, A_2$ are marginally uniform and symmetric, $U_1$ and $U_2$ are {\em exactly} obtained by defining ${\bf V}$ as above, with $n_0 = 2n - m$ independent bits, and letting $U_1$ include the $m$ MSBs and the first set of $n - m$ LSBs of ${\bf V}$, and $U_2$ include the same $m$ MSBs and the second set of $n - m$ LSBs of ${\bf V}$. This model trivially generalizes to the case of $N$ correlated sources and is related to the Gaussian sources with ``tree'' dependency considered in \cite{Maddah-tse}. For the source model defined above we have the following simple result: \begin{thm} \label{thm2} The vector source ${\bf U} = (U_1,\ldots, U_N)$ is transmissible over the linear finite-field deterministic network $({\cal V}, {\cal E})$ if and only if \begin{equation} \label{servetto-linear-ff} H(U_{\cal S}|U_{{\cal S}^c}) \leq {\rm rank}\left \{ {\bf G}_{{\cal S}, {\cal S}^c} \right \}, \;\;\; \forall \; {\cal S} \subseteq \{1,\ldots, N\}. \end{equation} \end{thm} \begin{proof} Again, we consider an augmented network with a single source node denoted by $0$, with $n_0$ output bits that we denote by ${\bf V}$. As before, subsets ${\cal U}_i$ of cardinalities $n_{0,i}$ of these bits are sent to nodes $i$, respectively. However, differently from before, we choose the subsets ${\cal U}_i$ to overlap in accordance with the vector source model. For the augmented network, the rate $R_0$ from the virtual source to the destination $d$ must satisfy (\ref{suca}). In particular, choosing $\Omega_0 = \{0\}$ we get $R_0 \leq n_0$.
Generalizing the proof of Theorem \ref{thm1} to the case of overlapping sets $\{{\cal U}_i\}$, we find that for any cut $(\Omega_0, \Omega_0^c)$ of the augmented network such that $\Omega_0 = {\cal S} \cup \{0\}$ and $\Omega_0^c = {\cal S}^c$, with ${\cal S} \subseteq \{1, \ldots, N\}$, we have \[ {\rm rank} \left \{ {\bf G}_{\Omega_0,\Omega_0^c} \right \} = {\rm rank} \left \{ {\bf G}_{{\cal S},{\cal S}^c} \right \} + {\rm rank} \left \{ {\bf G}_{0,{\cal S}^c} \right \} \] where ${\bf G}_{0,{\cal S}^c}$ is the linear transformation between the inputs ${\bf V}$ and the (augmented) channel outputs of the nodes $j \in {\cal S}^c$. By construction, the matrix ${\bf G}_{0,{\cal S}^c}$ is formed by linearly independent columns for all bits $V_\ell$ with $\ell \in \bigcup_{j \in {\cal S}^c} {\cal U}_j$. Therefore, \[ {\rm rank} \left \{ {\bf G}_{0,{\cal S}^c} \right \} = \left |\bigcup_{j \in {\cal S}^c} {\cal U}_j \right | = H(U_{{\cal S}^c}) . \] Since ${\bf V}$ is uniform i.i.d., we have $R_0 = n_0 = H({\bf V}) = H({\bf U})$. Substituting these equalities into the set of inequalities (\ref{suca}) and using the chain rule of entropy $H({\bf U}) = H(U_{\cal S}|U_{{\cal S}^c}) + H(U_{{\cal S}^c})$, we obtain that the conditions (\ref{servetto-linear-ff}) are sufficient for transmissibility. On the other hand, if a source as defined in our model were transmissible, then the set of conditions (\ref{servetto-linear-ff}) must hold, since otherwise the rate $R_0$ of the corresponding single-source single-destination augmented network would violate (\ref{suca}). Hence, necessity also holds. \end{proof} \section{Conclusions \label{conclusions}} In this work we have characterized the capacity region of a linear finite-field deterministic network with independent information at all nodes and a single destination node. In our setup, all nodes may relay information from other nodes as well as inject their own information into the network. This may serve as a simplified model for a large WSN where sensing nodes cooperate with each other to send the collective data towards a single collector node. For a specific model of discrete binary source correlation at the nodes, we have also found necessary and sufficient conditions for the source transmissibility. Albeit restrictive, this correlation model may be useful (e.g., see \cite{Maddah-tse}) as a simple discrete ``equivalent'' (up to some bounded mean-square distortion penalty) for spatially-correlated real sources whose components are observed and encoded separately at the network nodes. Motivated by these results, it is natural to investigate the performance of achievability schemes based on techniques such as those in \cite{salman1} (for independent information) and on separated quantization and Slepian-Wolf binning (for lossy transmission of correlated sources), in order to achieve the capacity region or the distortion region of actual WSNs within a bounded performance gap. \bibliographystyle{IEEEtran}
\section{Introduction} Recently, the high-energy, fixed angle behavior of string scattering amplitudes \cite{GM, Gross, GrossManes} was intensively reinvestigated \cite{ChanLee1,ChanLee2, CHL,CHLTY,PRL,Decay,Compact,susy,CC} for string states at arbitrary mass levels. The motivation was to uncover the long-sought hidden stringy spacetime symmetry. A saddle-point method was developed to calculate the general formula for tree-level high-energy open string scattering amplitudes of four arbitrary string states. Remarkably, it was found that there is only one independent component of the amplitudes at each fixed mass level, and the ratios among high energy scattering amplitudes of different string states at each mass level can be obtained. However, it was soon realized \cite{Closed} that the saddle-point method was applicable to the $(t,u)$ channel only, but not to the $(s,t)$ channel. It was also pointed out that, through the observation of the KLT formula \cite{KLT}, this difficulty is associated with the lack of a saddle-point in the integration regime for the closed string calculation. To calculate the complete high energy closed string scattering amplitudes in the fixed angle regime \cite{Closed}, one had to rely on a calculation based on the method of decoupling of zero-norm states \cite{ZNS1,ZNS3,ZNS2} in the spectrum. With this new input, an infinite number of linear relations among high energy scattering amplitudes of different string states can be derived, and the complete ratios among high energy closed string scattering amplitudes at each fixed mass level can be determined. One can now calculate only the high energy amplitude corresponding to the highest spin state at each mass level in the spectrum, and the complete closed string scattering amplitudes can then be obtained. In this paper, we will use another method to calculate the closed string ratios in the fixed angle regime mentioned above. We will calculate the complete closed string scattering amplitudes in the Regge regime, which have not been considered in the literature so far. It turned out that both the saddle-point method and the method of decoupling of zero-norm states adopted in the calculation in the fixed angle regime do not apply to the Regge regime. However, a direct calculation is manageable. The calculation will be based on the KLT formula and the open string $(s,t)$ channel scattering amplitudes in the Regge regime calculated previously \cite{KLY}. By using a set of Stirling number identities developed in combinatoric number theory \cite{MK}, one can then extract the ratios in the fixed angle regime from the Regge closed string scattering amplitudes. \section{Fixed angle Scattering} We begin with a brief review of high energy string scatterings in the fixed angle regime, \begin{equation} s,-t\rightarrow\infty,t/s\approx-\sin^{2}\frac{\theta}{2}=\text{fixed (but }\theta\neq0\text{)}\label{1} \end{equation} where $s,t$ and $u$ are the Mandelstam variables and $\theta$ is the CM scattering angle.
It was shown \cite{CHLTY,PRL} that for the 26D open bosonic string the only states that survive the high-energy limit at mass level $M_{2}^{2}=2(n-1)$ are of the form \begin{equation} \left\vert n,2m,q\right\rangle \equiv(\alpha_{-1}^{T})^{n-2m-2q}(\alpha_{-1}^{L})^{2m}(\alpha_{-2}^{L})^{q}|0,k\rangle,\label{2} \end{equation} where the polarizations of the 2nd particle with momentum $k_{2}$ on the scattering plane were defined to be $e^{P}=\frac{1}{M_{2}}(E_{2},\mathrm{k}_{2},0)=\frac{k_{2}}{M_{2}}$ as the momentum polarization, $e^{L}=\frac{1}{M_{2}}(\mathrm{k}_{2},E_{2},0)$ the longitudinal polarization and $e^{T}=(0,0,1)$ the transverse polarization. Note that $e^{P}$ approaches $e^{L}$ in the fixed angle regime. For simplicity, we choose $k_{1}$, $k_{3}$ and $k_{4}$ to be tachyons. It turned out that the $(t,u)$ channel of the scattering amplitudes can be calculated by using the saddle-point method, and the final results are \cite{CHLTY,PRL,Closed} \begin{equation} \frac{A^{(n,2m,q)}(t,u)}{A^{(n,0,0)}(t,u)}=\left( -\frac{1}{M_{2}}\right) ^{2m+q}\left( \frac{1}{2}\right) ^{m+q}(2m-1)!!\label{3} \end{equation} with \begin{align} A^{(n,0,0)}(t,u) & \simeq\sqrt{\pi}(-1)^{n-1}2^{-n}E^{-1-2n}(-2E^{3}\sin\theta)^{n}(\sin\frac{\theta}{2})^{-3}(\cos\frac{\theta}{2})^{5-2n}\nonumber\\ & \times\exp(-\frac{t\ln t+u\ln u-(t+u)\ln(t+u)}{2}).\label{4} \end{align} To calculate the high energy, fixed angle closed string scattering amplitudes, one encounters the well-known difficulty of the lack of a saddle-point in the integration regime. In fact, it was demonstrated \cite{Closed} by three pieces of evidence that the standard saddle-point calculation for high energy closed string scattering amplitudes was not reliable. It was also pointed out \cite{Closed} that this difficulty is associated with the lack of a saddle-point in the integration regime for the calculation of $(s,t)$ channel high energy open string scattering amplitudes. This can be seen from a formula by Kawai, Lewellen and Tye (KLT), which expresses the relation between tree amplitudes of closed and open strings $(\alpha_{\text{closed}}^{\prime}=4\alpha_{\text{open}}^{\prime}=2)$ \cite{KLT} \begin{equation} A_{\text{closed}}^{\left( 4\right) }\left( s,t,u\right) =\sin\left( \pi k_{2}\cdot k_{3}\right) A_{\text{open}}^{\left( 4\right) }\left( s,t\right) \bar{A}_{\text{open}}^{\left( 4\right) }\left( t,u\right) .\label{5} \end{equation} Note that Eq.(\ref{5}) is valid for all energies. On the other hand, a direct calculation instead of the saddle-point method was not successful either. This is mainly because the true leading order amplitudes for states with $m\neq0$ drop from energy order $E^{4m}$ to $E^{2m}$ \cite{ChanLee1,ChanLee2,CHL}, and one needs to calculate the complicated subleading order contraction terms. For this reason, the complete forms of the fixed angle closed string and $(s,t)$ channel open string scattering amplitudes were not calculable.
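For reference, the ratios in Eq.(\ref{3}) are elementary to tabulate. The following minimal Python sketch (an illustration of ours, with the mass level $n=3$ chosen arbitrarily) lists them for all admissible $(2m,q)$ at that level:
\begin{verbatim}
# Tabulate the fixed-angle ratios of Eq. (3),
#   A^{(n,2m,q)}/A^{(n,0,0)} = (-1/M2)^{2m+q} (1/2)^{m+q} (2m-1)!!,
# at mass level M2^2 = 2(n-1); n = 3 is an illustrative choice.
from math import prod, sqrt

def double_factorial(k):            # (2m-1)!!, with (-1)!! = 1
    return prod(range(1, k + 1, 2))

n = 3
M2 = sqrt(2 * (n - 1))
for m in range(n // 2 + 1):
    for q in range((n - 2 * m) // 2 + 1):   # need n - 2m - 2q >= 0
        ratio = ((-1 / M2) ** (2 * m + q) * 0.5 ** (m + q)
                 * double_factorial(2 * m - 1))
        print(f"(n,2m,q) = ({n},{2*m},{q}):  ratio = {ratio:+.4f}")
\end{verbatim}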
However, a simple case of the $(s,t)$ channel scattering amplitude, which is calculable for all energies, is that with $k_{2}$ the highest spin state $V_{2}=\alpha_{-1}^{\mu_{1}}\alpha_{-1}^{\mu_{2}}\cdots\alpha_{-1}^{\mu_{n}}|0,k\rangle$ at mass level $M_{2}^{2}=2(n-1)$ and three tachyons $k_{1,3,4}$ \cite{CHL} \begin{equation} A_{n}^{\mu_{1}\mu_{2}\cdot\cdot\mu_{n}}(s,t)=\overset{n}{\underset{l=0}{\sum}}(-)^{l}\left( _{l}^{n}\right) B(-\frac{s}{2}-1+l,-\frac{t}{2}-1+n-l)k_{1}^{(\mu_{1}}..k_{1}^{\mu_{n-l}}k_{3}^{\mu_{n-l+1}}..k_{3}^{\mu_{n})}.\label{6} \end{equation} The high energy limit of Eq.(\ref{6}) can then be calculated to be \cite{Closed} \begin{equation} A^{(n,0,0)}(s,t)=(-)^{n}\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right) }A^{(n,0,0)}(t,u).\label{7} \end{equation} The factor $\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right) }$, which was missing in the literature \cite{GM,Veneziano}, has important physical interpretations. The presence of the poles gives an infinite number of resonances in the string spectrum, and the zeros give the coherence of string scattering. These poles and zeros survive in the high energy limit and cannot be dropped. Presumably, this factor triggers the failure of the saddle-point calculation mentioned above. To calculate the complete high energy closed string scattering amplitudes, one had to rely on calculation based on the method of decoupling of zero-norm states, or stringy Ward identities, in the spectrum. With this new input, an infinite number of linear relations among high energy scattering amplitudes of different string states can be derived, and the complete ratios among high energy closed string scattering amplitudes at each fixed mass level can be shown to be the tensor product of two sets of $(t,u)$ channel open string ratios in Eq.(\ref{3}). The complete high energy closed string and $(s,t)$ channel open string scattering amplitudes can then be obtained by Eqs.(\ref{5}) and (\ref{7}). An explicit calculation for the lowest mass level case was presented in \cite{Closed}. Another independent method to obtain the closed string ratios is to calculate high energy string scattering amplitudes in the Regge regime, which we will discuss in the next section. \section{Regge Scattering} Another high energy regime of string scattering amplitudes, which contains complementary information about the theory, is the fixed momentum transfer or Regge regime, that is, the kinematic regime \begin{equation} s\rightarrow\infty,\sqrt{-t}=\text{fixed (but }\sqrt{-t}\neq\infty). \label{8} \end{equation} It was found \cite{KLY} that the high energy scattering amplitudes at each fixed mass level in this regime are much more numerous than those of the fixed angle regime calculated previously. On the other hand, it seems that both the saddle-point method and the method of decoupling of zero-norm states adopted in the calculation in the fixed angle regime do not apply to the Regge regime. However, the calculation is still manageable, and the general formula for the high energy $(s,t)$ channel open string scattering amplitudes at each fixed mass level can be written down explicitly. It was shown that the most general high energy open string states in the Regge regime at each fixed mass level $n=\sum_{l}l\,k_{l}+\sum_{m}m\,q_{m}$ are \begin{equation} \left\vert k_{l},q_{m}\right\rangle =\prod_{l>0}(\alpha_{-l}^{T})^{k_{l}}\prod_{m>0}(\alpha_{-m}^{L})^{q_{m}}|0,k\rangle.
\label{9} \end{equation} For our purpose here, however, we will only calculate scattering amplitudes corresponding to the vertex in Eq.(\ref{2}). The relevant kinematics are \begin{equation} e^{P}\cdot k_{1}\simeq-\frac{s}{2M_{2}},\text{ \ }e^{P}\cdot k_{3}\simeq-\frac{\tilde{t}}{2M_{2}}=-\frac{t-M_{2}^{2}-M_{3}^{2}}{2M_{2}}; \label{10} \end{equation} \begin{equation} e^{L}\cdot k_{1}\simeq-\frac{s}{2M_{2}},\text{ \ }e^{L}\cdot k_{3}\simeq-\frac{\tilde{t}^{\prime}}{2M_{2}}=-\frac{t+M_{2}^{2}-M_{3}^{2}}{2M_{2}}; \label{11} \end{equation} and \begin{equation} e^{T}\cdot k_{1}=0\text{, \ \ }e^{T}\cdot k_{3}\simeq-\sqrt{-{t}}. \label{12} \end{equation} The Regge scattering amplitude for the $(s,t)$ channel was calculated to be \cite{KLY} \begin{align} R^{(n,2m,q)}(s,t) & =B\left( -1-\frac{s}{2},-1-\frac{t}{2}\right) \sqrt{-t}^{n-2m-2q}\left( \frac{1}{2M_{2}}\right) ^{2m+q}\nonumber\\ & \cdot2^{2m}(\tilde{t}^{\prime})^{q}U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{\tilde{t}^{\prime}}{2}\right) . \label{13} \end{align} In Eq.(\ref{13}), $U$ is the Kummer function of the second kind, defined by \begin{equation} U(a,c,x)=\frac{\pi}{\sin\pi c}\left[ \frac{M(a,c,x)}{(a-c)!(c-1)!}-\frac{x^{1-c}M(a+1-c,2-c,x)}{(a-1)!(1-c)!}\right] \text{ \ }(c\neq2,3,4...) \label{14} \end{equation} where $M(a,c,x)=\sum_{j=0}^{\infty}\frac{(a)_{j}}{(c)_{j}}\frac{x^{j}}{j!}$ is the Kummer function of the first kind. Note that the second argument of the Kummer function, $c=\frac{t}{2}+2-2m$, is not a constant as in the usual case. We now proceed to calculate the Regge $(t,u)$ channel scattering amplitude. The high energy limit of the amplitude can be written as \begin{align} R^{(n,2m,q)}(t,u) & =\int_{1}^{\infty}dx\,x^{k_{1}\cdot k_{2}}(1-x)^{k_{2}\cdot k_{3}}\left[ \frac{e^{T}\cdot k_{3}}{1-x}\right] ^{n-2m-2q}\nonumber\\ \cdot & \left[ \frac{e^{L}\cdot k_{1}}{-x}+\frac{e^{L}\cdot k_{3}}{1-x}\right] ^{2m}\left[ \frac{e^{L}\cdot k_{1}}{x^{2}}+\frac{e^{L}\cdot k_{3}}{(1-x)^{2}}\right] ^{q}\nonumber\\ & \simeq(\sqrt{-{t}})^{n-2m-2q}\left( \frac{\tilde{t}^{\prime}}{2M_{2}}\right) ^{2m+q}\sum_{j=0}^{2m}{\binom{2m}{j}}\left( -\right) ^{j}\left( \frac{s}{\tilde{t}^{\prime}}\right) ^{j}\nonumber\\ & \cdot\int_{1}^{\infty}dx\,x^{k_{1}\cdot k_{2}-j}(1-x)^{k_{2}\cdot k_{3}+j-n}.\label{15} \end{align} We can make the change of variable $y=\frac{x-1}{x}$ to transform the integral of Eq.(\ref{15}) into \begin{align} R^{(n,2m,q)}(t,u) & =(\sqrt{-{t}})^{n-2m-2q}\left( \frac{\tilde{t}^{\prime}}{2M_{2}}\right) ^{2m+q}(-)^{k_{2}\cdot k_{3}-n}\nonumber\\ & \cdot\sum_{j=0}^{2m}{\binom{2m}{j}}\left( \frac{s}{\tilde{t}^{\prime}}\right) ^{j}\int_{0}^{1}dy\,y^{k_{2}\cdot k_{3}+j-n}(1-y)^{n-k_{1}\cdot k_{2}-k_{2}\cdot k_{3}-2}\nonumber\\ & =(\sqrt{-{t}})^{n-2m-2q}\left( \frac{\tilde{t}^{\prime}}{2M_{2}}\right) ^{2m+q}(-)^{k_{2}\cdot k_{3}-n}\nonumber\\ & \cdot\sum_{j=0}^{2m}{\binom{2m}{j}}\left( \frac{s}{\tilde{t}^{\prime}}\right) ^{j}B(k_{2}\cdot k_{3}+j-n+1,n-k_{1}\cdot k_{2}-k_{2}\cdot k_{3}-1).\label{16} \end{align} In the Regge limit, the beta function can be approximated by \begin{align} & B(k_{2}\cdot k_{3}+j-n+1,n-k_{1}\cdot k_{2}-k_{2}\cdot k_{3}-1)\nonumber\\ & =B(-1-\frac{t}{2}+j,-1-\frac{u}{2})\nonumber\\ & \simeq B(-1-\frac{t}{2},-1-\frac{u}{2})(-1-\frac{t}{2})_{j}(\frac{s}{2})^{-j}\label{17} \end{align} where $(a)_{j}=a(a+1)(a+2)...(a+j-1)$ is the Pochhammer symbol.
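This approximation is straightforward to test numerically. The following sketch (an illustrative check of ours, with sample values of $n$, $t$ and $j$, and with $u$ fixed by the relation $s+t+u=2n-8$ used below) shows the ratio of the exact and approximate expressions in Eq.(\ref{17}) approaching unity as $s$ grows:
\begin{verbatim}
# Numerical check (illustrative only) of Eq. (17): for large s,
# B(-1-t/2+j, -1-u/2) ~ B(-1-t/2, -1-u/2) * (-1-t/2)_j * (s/2)^{-j},
# with u fixed by the mass-shell relation s + t + u = 2n - 8.
from scipy.special import beta, poch

n, t, j = 2, -5.0, 3                  # sample values chosen by us
for s in (1e3, 1e5, 1e7):
    u = 2 * n - 8 - s - t
    exact = beta(-1 - t / 2 + j, -1 - u / 2)
    approx = (beta(-1 - t / 2, -1 - u / 2)
              * poch(-1 - t / 2, j) * (s / 2) ** (-j))
    print(s, exact / approx)          # ratio -> 1 in the Regge limit
\end{verbatim}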
In the above calculation, we have used $s+t+u=2n-8.$ Finally, the $(t,u)$ channel amplitude can be written as \begin{align} R^{(n,2m,q)}(t,u) & =(-)^{k_{2}\cdot k_{3}-n}B(-1-\frac{t}{2},-1-\frac{u}{2})(\sqrt{-{t}})^{n-2m-2q}\left( \frac{\tilde{t}^{\prime}}{2M_{2}}\right) ^{2m+q}\nonumber\\ & \cdot2^{2m}(\tilde{t}^{\prime})^{q}U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{\tilde{t}^{\prime}}{2}\right) .\label{18} \end{align} We can now explicitly write down the general formula for the high energy closed string scattering amplitude corresponding to the closed string state \begin{equation} \left\vert n;2m,2m^{^{\prime}};q,q^{^{\prime}}\right\rangle \equiv(\alpha_{-1}^{T})^{\frac{n}{2}-2m-2q}(\alpha_{-1}^{L})^{2m}(\alpha_{-2}^{L})^{q}\otimes(\tilde{\alpha}_{-1}^{T})^{\frac{n}{2}-2m^{^{\prime}}-2q^{^{\prime}}}(\tilde{\alpha}_{-1}^{L})^{2m^{^{\prime}}}(\tilde{\alpha}_{-2}^{L})^{q^{^{\prime}}}|0,k\rangle.\label{19} \end{equation} By using Eqs.(\ref{5}), (\ref{13}) and (\ref{18}), the amplitude is \begin{align} R_{\text{closed}}^{\left( n;2m,2m^{^{\prime}};q,q^{^{\prime}}\right) }\left( s,t,u\right) & =(-)^{k_{2}\cdot k_{3}-n}\sin\left( \pi k_{2}\cdot k_{3}\right) B(-1-\frac{s}{2},-1-\frac{t}{2})B(-1-\frac{t}{2},-1-\frac{u}{2})\nonumber\\ & \cdot(\sqrt{-{t}})^{n-2(m+m^{^{\prime}})-2(q+q^{^{\prime}})}\left( \frac{\tilde{t}^{\prime}}{2M_{2}}\right) ^{2(m+m^{^{\prime}})+q+q^{^{\prime}}}2^{2(m+m^{^{\prime}})}(\tilde{t}^{\prime})^{q+q^{^{\prime}}}\nonumber\\ & \cdot U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{\tilde{t}^{\prime}}{2}\right) U\left( -2m^{^{\prime}}\,,\,\frac{t}{2}+2-2m^{^{\prime}}\,,\,\frac{\tilde{t}^{\prime}}{2}\right) .\label{20} \end{align} The Regge scattering amplitudes at each fixed mass level are no longer proportional to each other. The ratios are $t$-dependent functions and can be calculated to be \begin{align} \frac{R^{(n,2m,q)}(s,t)}{R^{(n,0,0)}(s,t)} & =(-1)^{m}\left( -\frac{1}{2M_{2}}\right) ^{2m+q}(\tilde{t}^{\prime}-2N)^{-m-q}(\tilde{t}^{\prime})^{2m+q}\nonumber\\ \cdot & \sum_{j=0}^{2m}(-2m)_{j}\left( -1+n-\frac{\tilde{t}^{\prime}}{2}\right) _{j}\frac{(-2/\tilde{t}^{\prime})^{j}}{j!}+\mathit{O}\left\{ \left( \frac{1}{t}\right) ^{m+1}\right\} .\label{21} \end{align} An interesting observation \cite{KLY} is that the coefficients of the leading power of $\tilde{t}^{\prime}$ in Eq. (\ref{21}) can be identified with the ratios in Eq.(\ref{3}). To ensure this identification, we need the following identity \begin{align} & \sum_{j=0}^{2m}(-2m)_{j}\left( -1+n-\frac{\tilde{t}^{\prime}}{2}\right) _{j}\frac{(-2/\tilde{t}^{\prime})^{j}}{j!}\nonumber\\ & =0(-\tilde{t}^{\prime})^{0}+0(-\tilde{t}^{\prime})^{-1}+...+0(-\tilde{t}^{\prime})^{-m+1}+\frac{(2m)!}{m!}(-\tilde{t}^{\prime})^{-m}+\mathit{O}\left\{ \left( \frac{1}{\tilde{t}^{\prime}}\right) ^{m+1}\right\} .\label{22} \end{align} Note that $n$ affects only the sub-leading terms in $\mathit{O}\left\{ \left( \frac{1}{\tilde{t}^{\prime}}\right) ^{m+1}\right\} .$ Eq.(\ref{21}) was proved exactly \cite{KLY} for $n=0,1$ by using Stirling number identities developed in combinatoric number theory \cite{MK}. For the general integer $n$ case, only the identity corresponding to the term $\frac{(2m)!}{m!}(-\tilde{t}^{\prime})^{-m}$ was rigorously proved \cite{HLTY}, but not the other ``0 identities''. We conjecture that Eq. (\ref{22}) is valid for any \textit{real} number $n.$ We have numerically shown the validity of Eq.
(\ref{22}) for the value of $m$ up to $m=10.$ Here we give only the results for $m=3$ and $4$: \begin{align} & \sum_{j=0}^{6}(-2m)_{j}\left( -1+n-\frac{\tilde{t}^{\prime}}{2}\right) _{j}\frac{(-2/\tilde{t}^{\prime})^{j}}{j!}\nonumber\\ & =\frac{120}{(-\tilde{t}^{\prime})^{3}}+\frac{720a^{2}+2640a+2080}{(-\tilde{t}^{\prime})^{4}}+\frac{480a^{4}+4160a^{3}+12000a^{2}+12928a+3840}{(-\tilde{t}^{\prime})^{5}}\nonumber\\ & +\frac{64a^{6}+960a^{5}+5440a^{4}+14400a^{3}+17536a^{2}+7680a}{(-\tilde{t}^{\prime})^{6}},\label{23} \end{align} \begin{align} & \sum_{j=0}^{8}(-2m)_{j}\left( -1+n-\frac{\tilde{t}^{\prime}}{2}\right) _{j}\frac{(-2/\tilde{t}^{\prime})^{j}}{j!}\nonumber\\ & =\frac{1680}{(-\tilde{t}^{\prime})^{4}}+\frac{13440a^{2}+67200a+76160}{(-\tilde{t}^{\prime})^{5}}\nonumber\\ & +\frac{13440a^{4}+152320a^{3}+595840a^{2}+930048a+467712}{(-\tilde{t}^{\prime})^{6}}\nonumber\\ & +\frac{3584a^{6}+68096a^{5}+501760a^{4}+1802752a^{3}+3236352a^{2}+2608128a+645120}{(-\tilde{t}^{\prime})^{7}}\nonumber\\ & +\frac{256a^{8}+7168a^{7}+82432a^{6}+501760a^{5}+1732864a^{4}+3361792a^{3}+3345408a^{2}+1290240a}{(-\tilde{t}^{\prime})^{8}}\label{24} \end{align} where $a=-1+n.$ We can see that $a$ shows up only in the sub-leading order terms, as expected. From the form of Eq.(\ref{20}), we conclude that the high energy closed string ratios in the fixed angle regime can be extracted from Kummer functions and are calculated to be \begin{align} \frac{A_{\text{closed}}^{\left( n;2m,2m^{^{\prime}};q,q^{^{\prime}}\right) }\left( s,t,u\right) }{A_{\text{closed}}^{\left( n;0,0;0,0\right) }\left( s,t,u\right) } & =\left( -\frac{1}{M_{2}}\right) ^{2(m+m^{^{\prime}})+q+q^{^{\prime}}}\left( \frac{1}{2}\right) ^{q+q^{^{\prime}}}\nonumber\\ & \lim_{t\rightarrow\infty}(-t)^{-m-m^{^{\prime}}}U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{t}{2}\right) U\left( -2m^{^{\prime}}\,,\,\frac{t}{2}+2-2m^{^{\prime}}\,,\,\frac{t}{2}\right) \nonumber\\ = & \left( -\frac{1}{M_{2}}\right) ^{2(m+m^{^{\prime}})+q+q^{^{\prime}}}\left( \frac{1}{2}\right) ^{m+m^{^{\prime}}+q+q^{^{\prime}}}(2m-1)!!(2m^{^{\prime}}-1)!!.\label{25} \end{align} This provides an alternative method to calculate the high energy closed string ratios, other than the method of decoupling of zero-norm states adopted previously. In addition to rederiving the ratios calculated previously, one can express the ratios in terms of Kummer functions through the Regge calculation presented in this paper. This may turn out to be important for the understanding of the algebraic structure of stringy symmetry. In conclusion, a direct calculation of the general formula for high energy closed string scattering amplitudes is feasible in the Regge regime, with the result given in Eq.(\ref{20}), but not in the fixed angle regime. The ratios among high energy closed string scattering amplitudes for each fixed mass level in the fixed angle regime, which were calculated previously by the method of decoupling of zero-norm states, can be alternatively deduced from the general formula of high energy closed string scattering amplitudes in the Regge regime. The result that the ratios can be expressed in terms of Kummer functions in the Regge calculation presented in this paper may help to understand the algebraic structure of stringy symmetry. \section{Acknowledgments} We thank Rong-Shing Chang, Song He, Yoshihiro Mitsuka and Keijiro Takahashi for helpful discussions.
This work is supported in part by the National Science Council, the 50 billions project of the Ministry of Education, and the National Center for Theoretical Science, Taiwan.
\section{Introduction} Coherent population trapping (CPT) \cite{alzetta1997induced, scully1997quantum, fleischhauer2005electromagnetically, bergmann1998coherent} is a quantum mechanical phenomenon in driven three-level $\Lambda$ systems used to make a specific material transparent to certain frequencies. Under appropriate driving conditions, the dynamics of the $\Lambda$ system gets ``trapped'' into the Hilbert subspace of the two ground levels, in a coherent superposition which can no longer absorb the light. Such a superposition is known as a ``dark state,'' because it is no longer coupled to the excited state and fluorescent light emission is then suppressed. With current advances in quantum control, applications of CPT have attracted growing interest outside the field of optics. In the context of dissipative quantum state preparation \cite{hilser2012all,ticozzi2012hamiltonian,yale2013all,pingault2014all,chu2015all,zhou2017dark}, this concept is used to stabilize arbitrary linear superpositions of two ground states by driving the $\Lambda$ system into the (unique) dark state, with the amplitudes of the superposition being determined by the ratio between the two Rabi frequencies and the relative phase between the two laser fields. Notably, CPT plays an important role in protocols for all-optical manipulations in nitrogen-vacancy (NV) centers in diamond \cite{santori2006coherent0,santori2006coherent,golter2013nuclear,jamonneau2016coherent}. More recently, CPT has found application in real-time quantum sensing, by allowing the effective magnetic field in a medium to be estimated via the rate of photon counts under CPT conditions \cite{WangCPTsensing}. Currently, standard theoretical analyses of CPT only account for decoherence due to the quantum vacuum \cite{scully1997quantum,qi2009electromagnetically, whitley1976double}. However, this need not be the only source of noise in many realistic settings of interest. Even assuming that any operational source of noise (e.g., control amplitude or frequency fluctuations) may be experimentally minimized, it is important to expand the treatment to include noise arising directly from the hosting medium in which the $\Lambda$ system is implemented. While we can argue for a noise model that is specific to each medium (environment) on physical grounds, the resulting functional forms will typically still have unknown (e.g., decay) noise parameters that need to be estimated from experimentally accessible quantities. In this work, after describing the physical setting in Sec.\ref{setting}, we theoretically analyze the CPT dynamics of a general $\Lambda$ system under the simultaneous presence of vacuum noise and noise due to a classical stochastic environment (Sec.\ref{NCPT}). Our approach is based on deriving an appropriate time-convolutionless (TCL) master equation (ME) \cite{breuer2002theory}. Based on our analysis, we first show (Sec.\ref{applications1}) a correspondence between the height of the dip in the CPT photoluminescence spectrum and the unknown decay parameter of the classical environment, thereby enabling an estimation of this parameter from observed spectra. In Sec.\ref{applications2}, we further apply this result to quantify the fidelity loss that the noise induces in CPT-based dissipative state initialization, as considered in \cite{yale2013all}. Thus, in our analysis, CPT serves two different but complementary purposes: decay parameter estimation and dissipative state preparation.
While our theoretical approach may be applied to an arbitrary $\Lambda$ system in principle, we use the NV center \cite{tamarat2008spin,chu2015quantum, childress2013diamond} as a realistic illustrative setting for our analysis. NV centers are highly studied solid-state systems due to both their long qubit coherence times (ranging from $10^{-6}$s to $10^{-3}$s depending on the isotopic purity of the diamond sample \cite{maurer2012room,doherty2013nitrogen,kennedy2002single}) and their dynamic accessibility for initialization and read-out using optical pulses. Furthermore, they are scalable solid-state systems \cite{bernien2013heralded}, which makes them a good candidate for various quantum technology applications. \section{Physical setting} \label{setting} \begin{figure*}[!t] \centering \includegraphics[width=0.89\textwidth]{Fig1s.pdf} \vspace*{-3mm} \caption{(Color online) (a) Schematic of an NV center in the diamond lattice. Both the $^{13}$C nuclear spin bath and the quantum vacuum are explicitly shown. (b) Qualitative diagram of the energy levels and couplings of the selected $\Lambda$ system to two coherent light sources (solid green and dashed red) and the nuclear spin bath.} \label{Figure1} \end{figure*} The NV center is embedded in diamond, which is composed mostly of $^{12}$C isotopes (see Fig.(\ref{Figure1}a)). However, a small portion of the carbon atoms (about $1.1\%$ in common samples) are $^{13}$C. Since the latter is the only isotope with a non-zero nuclear spin ($I={1}/{2}$), it is the one that couples to the electronic spin degrees of freedom of the NV center. Driven by two coherent light sources, the NV center realizes an effective $\Lambda$ system \cite{whitley1976double,santori2006coherent, qi2009electromagnetically,golter2013nuclear,mishra2014three,jamonneau2016coherent}. In an ideal scenario, no photons are emitted when CPT is achieved. However, in the presence of noise, the expected value of the excited state population in CPT becomes larger than zero, and additional photons are emitted \cite{golter2013nuclear}. Throughout our analysis, we assume the quantization axis to be along the NV center axis. It is well known that the NV center satisfies the $C_{3v}$ point-group symmetry; hence we shall use the corresponding group-theoretical notation. For the excited state of the $\Lambda$ system \cite{manson2006nitrogen, maze2011properties, chu2015quantum, doherty2011negatively}, we choose the state $A_{2}$, motivated by the fact that it does not couple strongly to the non-radiative singlet states \cite{hincks2018statistical}. Combined with the two $m_{s}=\pm 1$ ground states $\{^{3\!}A_{2(-)}, ^{3\!}A_{2(+)}\}$, this gives us a nearly perfectly closed $\Lambda$ system, which has already been demonstrated experimentally \cite{togan2010quantum, golter2013nuclear, golter2014optically}. We denote the states $A_{2}$, $^{3\!}A_{2(+)}$, $^{3\!}A_{2(-)}$ by $|0\rangle, |1\rangle,$ and $|2\rangle$, respectively (see Fig. \ref{Figure1}(b)). We also make the tensor product between the orbital and spin degrees of freedom of the electronic structure explicit, by letting \begin{align} |0\rangle &\equiv |A_{2}\rangle=|E_{-}\rangle \otimes |\!+1\rangle+|E_{+}\rangle \otimes |\!-1\rangle , \nonumber \\ |1\rangle &\equiv|^{3\!}A_{2(+)}\rangle=|E_{0}\rangle \otimes |\!+1\rangle , \label{basis} \\ |2\rangle &\equiv|^{3\!}A_{2(-)}\rangle=|E_{0}\rangle \otimes |\!-1\rangle .
\nonumber \end{align} Here, $|E_{0}\rangle$ and $|E_{\pm}\rangle$ are the orbital angular momentum eigenstates of the NV-center electron system \cite{maze2011properties}, labelled by eigenvalues $0$ and $\pm 1$ of $L^{(e)}_{z}$. The $|\!\pm 1\rangle$ states denote the two spin angular momentum eigenstates, labelled by the eigenvalues of $S^{(e)}_{z}$. The Hamiltonian of the driven $\Lambda$ system is given by $H_{\Lambda}(t)\equiv H_{\Lambda}^0+H_{\text{drive}}(t)$ where, by assuming units $\hbar=1$, the two contributions take the form \begin{eqnarray} H_{\Lambda}^0&=&\omega_{0}|0\rangle \langle 0|+\omega_{1}|1\rangle \langle 1|+\omega_{2}|2\rangle \langle 2| , \notag\\ H_{\text{drive}}(t)&= & ({\Omega_{1}}/{2})\, e^{-i(\omega_{L1}t+\phi_{1})}|1\rangle \langle 0| \label{eq:Ham0} \\ &+& ({\Omega_{2}}/{2})\, e^{-i(\omega_{L2}t+\phi_{2})}|2 \rangle \langle 0| + \text{H.c.}, \notag \end{eqnarray} where $\Omega_{1}$ and $\Omega_{2}$ are the Rabi frequencies, and ($\omega_{L1}, \phi_{1}$) and ($\omega_{L2}, \phi_{2}$) the frequencies and phases of the two coherent light sources, respectively. As mentioned in the introduction, we shall work under the assumption that any control errors arising in the implementation of $H_{\text{drive}}(t)$ may be neglected in comparison with environmental noise. We model the effect of the $^{13}$C nuclear-spin bath \cite{zhao2012decoherence} on the $\Lambda$ system as a fluctuating classical magnetic field. The value of this field at the position of the NV center at time $t$ is denoted by $b(t)$. Specifically, we assume the spin-bath noise to be zero-mean, stationary, and sufficiently weak to be treated perturbatively (see Sec.\ref{sub:tcl}). In particular, the lowest-order (two-point) correlation function is determined by \begin{equation} C(t_1,t_2)\equiv {\mathbb E}\{b(t_1)b(t_2)\} = C(|t_1-t_2|), \label{eq:cor} \end{equation} where ${\mathbb E}$ denotes the ensemble average over realizations of the classical stochastic process $\{b(t)\}$. In the presence of this stochastic bath, the Hamiltonian of the driven $\Lambda$ system is then given by $H(t)\equiv H_{\Lambda}(t)+H_{c}(t)$, where \begin{equation} H_{c}(t)\equiv H_c[b(t)]= -\mu_{B}(L^{(e)}_{z}+2S^{(e)}_{z})\, b(t), \label{eq:Hn} \end{equation} is the semi-classical interaction Hamiltonian describing the coupling of the electronic system to the field, with $\mu_{B}=\frac{e\hbar}{2m_{e}c}$ being the Bohr magneton. We write such a Hamiltonian in the $\Lambda$-system basis of Eqs. \eqref{basis} as \[ H_{c}(t)=\sum_{i,j=0,1,2}\!\langle i|H_{c}(t)|j\rangle |i\rangle \langle j| . \] Using the fact that the expectation values of the operators $L^{(e)}_{z}$ and $S^{(e)}_{z}$ are given by $(0, 0), (0, 1),$ and $(0, -1)$ for the states $|0\rangle, |1\rangle,$ and $|2\rangle$, respectively, along with the orthonormality of the states $|E_{0}\rangle$ and $|E_{\pm}\rangle$, we arrive at \begin{equation} H_{c}(t)=-\gamma_{e}b(t)\,|1\rangle \langle 1|+\gamma_{e}b(t)\,|2\rangle \langle 2| , \end{equation} where $\gamma_{e}=\frac{e\hbar}{m_{e}c}$ is the gyromagnetic ratio of the electron. Physically, $\gamma_{e}b(t)$ is the time-dependent frequency fluctuation of the $\Lambda$-system ground states; see Fig.\ref{Figure1}(b).
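For concreteness, the following minimal sketch (with arbitrary illustrative parameter values chosen by us, not those of any experiment) assembles $H(t)=H_{\Lambda}^0+H_{\text{drive}}(t)+H_{c}(t)$ as a $3\times 3$ matrix in the $\{|0\rangle,|1\rangle,|2\rangle\}$ basis:
\begin{verbatim}
# Minimal sketch (illustrative values, hbar = 1): the driven Lambda-system
# Hamiltonian of Eqs. (eq:Ham0) and (eq:Hn) in the {|0>,|1>,|2>} basis.
import numpy as np

w0, w1, w2 = 100.0, 0.0, 1.0          # level frequencies (arbitrary units)
O1, O2 = 46.0, 46.0                   # Rabi frequencies
wL1, wL2 = w0 - w1, w0 - w2           # resonant driving (zero detunings)
phi1, phi2 = 0.0, 0.0
gamma_e = 1.0                         # gyromagnetic ratio (units absorbed)

def H(t, b):
    """Total Hamiltonian at time t, for field value b = b(t)."""
    H0 = np.diag([w0, w1, w2]).astype(complex)
    Hd = np.zeros((3, 3), dtype=complex)
    Hd[1, 0] = 0.5 * O1 * np.exp(-1j * (wL1 * t + phi1))
    Hd[2, 0] = 0.5 * O2 * np.exp(-1j * (wL2 * t + phi2))
    Hd += Hd.conj().T                 # + H.c.
    Hc = np.diag([0.0, -gamma_e * b, gamma_e * b]).astype(complex)
    return H0 + Hd + Hc
\end{verbatim}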
\section{Noisy Coherent Population Trapping} \label{NCPT} \subsection{Master equation for ideal CPT dynamics} As mentioned, CPT is an equilibration phenomenon in a driven three-level $\Lambda$ system where, irrespective of the initial state \cite{jyotsna1995coherent,ticozzi2012hamiltonian}, the dynamics gets restricted to the two-ground-state manifold. Physically, this is a quantum-mechanical consequence of the destructive interference between the two transition probability amplitudes from the individual ground states to the same excited state in the $\Lambda$ system. In order to set the stage for the noisy setting, we briefly review the derivation of a quantitative model within a ME formalism. In the presence of spontaneous decay alone, the system and the bath are described by the total Hamiltonian \begin{equation} H^0_{\text{tot}}(t) \equiv H_\Lambda^0+H_{\text{drive}}(t)+H_{\text{vac}}+H_{\Lambda\text{-vac}}, \label{Htot0} \end{equation} where $H_{\text{vac}}$ is the Hamiltonian of the electromagnetic vacuum and $H_{\Lambda\text{-vac}}$ is the interaction Hamiltonian between the $\Lambda$ system and the vacuum, in the standard dipole approximation. We write the ME in the interaction picture with respect to $H_{\Lambda}^0+H_{\text{vac}}$ and denote an operator $X$ in this representation by $\tilde{X}$. The Liouville-von Neumann equation describing the evolution of the driven $\Lambda$ system and the vacuum is then given by \begin{equation*} \dot{\tilde{\sigma}}(t)=-i[\tilde{H}_{\text{drive}}(t)+\tilde{H}_{\Lambda\text{-vac}}(t), \tilde{\sigma}(t)] , \end{equation*} where $\tilde{\sigma}(t)$ is the density matrix of the total system. Assuming that the joint initial state $\sigma(0)=\rho(0)\otimes \rho_{\text{vac}}$ is a product state, and treating the coupling to the vacuum in the standard Born-Markov approximation \cite{breuer2002theory}, the resulting reduced dynamics is given by a Lindblad ME of the form \begin{equation} \frac{d}{dt}{\text{Tr}}_{\text{vac}}\tilde{\sigma}(t)=\dot{\tilde{\rho}}(t)=-i[\tilde{H}_{\text{drive}}(t), \tilde{\rho}(t)]+R_q[\tilde{\rho}(t)] , \label{eqn:A1} \end{equation} where the Hamiltonian may be explicitly computed from Eq. \eqref{eq:Ham0} and \begin{equation} R_q[\tilde{\rho}]\equiv -\frac{1}{2}\sum _{i=1,2} \,(L^{\dagger}_{i}L_{i}\tilde{\rho}+\tilde{\rho}L^{\dagger}_{i}L_{i}-2L_{i}\tilde{\rho}L^{\dagger}_{i}) \label{eq:Rq} \end{equation} is the dissipator accounting for the quantum Markovian environment. The two Lindblad operators are given by \begin{equation} L_{i}=\sqrt{{\Gamma}/{2}} \, |i\rangle\langle 0| , \quad \text{for} \hspace{0.5cm} i=\left\{1,2 \right\}, \label{eq:lind} \end{equation} where $\Gamma=\Gamma_{01}\approx \Gamma_{02}$ is the decay rate from the excited state to each of the ground states (explicitly, $\Gamma_{0i}=(\omega_{0}-\omega_{i})^{3}d^{2}_{0i}/3\pi \varepsilon_{0}\hbar c^{3}$, $i=1,2$, where $d_{0i}$ is a matrix element from the dipole coupling matrix \cite{breuer2002theory}). Dropping, for simplicity, the tilde notation for interaction-picture operators, we obtain a coupled set of differential equations for the density matrix elements. In particular, in the relevant case where the detunings of the two lasers are $\delta_{1}\equiv \delta$ and $\delta_{2}=0$, and $\phi_1 = \phi_2 =0$, we recover the known expressions (see, for instance, \cite{arimondo1996v}): \begin{align*} \dot{\rho}_{00} &=-\Gamma \rho_{00}+i{\Omega_{1}}/{2}\rho_{01}+i{\Omega_{2}}/{2}\rho_{02}+c.c.
,\\ \dot{\rho}_{11} &={\Gamma}/{2} \rho_{00}-i{\Omega_{1}}/{2}\rho_{01}+c.c.,\\ \dot{\rho}_{22} &={\Gamma}/{2} \rho_{00}-i{\Omega_{2}}/{2}\rho_{02}+c.c.,\\ \dot{\rho}_{01} &=-{\Gamma}/{2} \rho_{01}+i{\Omega_{1}}/{2}(\rho_{00}-\rho_{11})-i{\Omega_{2}}/{2}\rho_{21},\\ \dot{\rho}_{02} &=(-{\Gamma}/{2}+i\delta) \rho_{02}+i{\Omega_{2}}/{2}(\rho_{00}-\rho_{22})-i{\Omega_{1}}/{2}\rho_{12},\\ \dot{\rho}_{12} &=i\delta \rho_{12}-i{\Omega_{2}}/{2}\rho_{02}+i{\Omega_{1}}/{2}\rho_{10}. \end{align*} The above set of coupled differential equations can be compactly represented in matrix form as \begin{equation} \dot{\vec{\rho}}=\hat{A}\vec{\rho},\qquad \vec{\rho}\equiv(\rho_{00}, \rho_{01}, \ldots, \rho_{22}), \label{eq:rho} \end{equation} in terms of the vectorized density matrix. It is well known that for a Lindblad ME as in Eq. \eqref{eqn:A1}, a steady-state solution always exists, and it is globally attractive if and only if it is unique \cite{Sophie}. Numerically, we have explicitly verified that $\det{\hat{A}}=0$ for arbitrary values of the parameters $\Gamma, \Omega_{1}, \Omega_{2}$, and $\delta$. In particular, this implies that the experimentally tunable parameter $\delta$ can be varied freely and there will be a steady state $\vec{\rho}^{\,\text{eq}}_{\delta}$, determined by $\hat{A}\vec{\rho}^{\,\text{eq}}_{\delta}=0$. Unsurprisingly, the steady state corresponding to $\delta=0$ is the dark state $$|d \rangle\equiv\frac{\Omega_{2}}{\Omega}|1\rangle-\frac{\Omega_{1}}{\Omega} |2\rangle, \quad \Omega\equiv \sqrt{\Omega^{2}_{1}+\Omega^{2}_{2}},$$ because the CPT condition of having a zero two-photon detuning $\delta_{12}=\delta_{1}-\delta_{2}$ is equivalent to having $\delta=0$. By invoking the sufficient conditions for uniqueness provided in \cite{Sophie}, one may verify that the Lindblad dynamics has the unique steady state $\vec{\rho}^{\,\text{eq}}_{0}=|d\rangle\langle d|$ independent of $\Gamma$, provided that $\Gamma \ne 0$, in which case this steady state is reached from an arbitrary initial preparation. \subsection{Master equation for noisy CPT dynamics} \label{sub:tcl} Stochastic bath models have been extensively discussed in the literature \cite{kubo1963stochastic, van1992stochastic, saeki1988stochastic, breuer2002theory}. In the present setting, to integrate the coupling to a quantum (vacuum) environment and to the classical nuclear spin-bath environment into a single equation for the reduced dynamics of the $\Lambda$ system, we start from the full stochastic Hamiltonian \begin{equation*} H_{\text{tot}}(t)= H_{\text{tot}}^0(t) + H_{c}[b(t)], \end{equation*} where the noiseless Hamiltonian and the noise term are given by Eqs.\eqref{Htot0} and \eqref{eq:Hn}, respectively. Moving to the interaction picture with respect to the total free Hamiltonian $H^0_{\Lambda}+H_{\text{vac}}$, and denoting the full density matrix for a single realization of the stochastic process $\left\{b(t)\right\}_{t}$ by $\sigma(t;\left\{b(t)\right\})$, the formal solution of the Liouville-von Neumann equation reads \[ \sigma(t;\left\{b(t)\right\})=\mathcal{T}\exp{\left\{ \int_{0}^{t}ds \hat{L}(s) \right\}}\sigma(0), \] where $\hat{L}$ is the Liouvillian superoperator corresponding to the interaction part of $H_{\text{tot}}(t)$ in the interaction picture and $\mathcal{T}$ denotes time ordering. 
Let us define the projection superoperator $\hat{P}$ by requiring that $\hat{P}\sigma\equiv ({\text{Tr}}_{B}\sigma) \otimes \rho_{B}$, for arbitrary $\sigma$ and fixed $\rho_B$ (the latter is usually taken to be the stationary Gibbs state of the quantum bath) \cite{breuer2002theory}. Assuming as before that $\sigma(0)=\rho(0)\otimes \rho_{B}$, we get $\hat{P}\sigma(0)=\sigma(0)$. From here on, we shall use van Kampen's cumulant notation (that is, $\langle \hat{\chi} \rangle =\hat{P}\hat{\chi}\hat{P}$). We apply the projection operator to both sides of the previous equation and use the property $\hat{P}^{2}=\hat{P}$ to get \[ \rho(t;\left\{b(t)\right\})\otimes\rho_{B}=\langle \mathcal{T}\exp{\left\{ \int_{0}^{t}ds \hat{L}(s) \right\}}\rangle \hat{P}\sigma(0). \] Unlike in the ideal case, notice that we cannot immediately differentiate this equation to obtain a ME, because the Liouvillian now depends on a stochastic process $b(t)$, which is everywhere continuous but nowhere differentiable. We average both sides of this equation with respect to the classical noise first and then follow steps that are well known, due to van Kampen \cite{van1974cumulant}. Starting from the ensemble-averaged equation \begin{equation} {\mathbb E}(\hat{P}\sigma(t)) = {\mathbb E}\left\{ \langle {\cal T}\exp\left\{ \int_{0}^{t}ds \hat{L}(s) \right\}\rangle \right\} \hat{P}\sigma(0), \label{eqn:averaged_liouville} \end{equation} we expand the right-hand side to \begin{equation*} (I+E_{0}(\hat{1})+E_{0}(\hat{1}, \hat{2})+\ldots)\hat{P}\sigma(0), \label{eqn:ensemble_eqn} \end{equation*} where $ E_{0}(\hat{1}, \hat{2}, \ldots, \hat{n})$ is the $n$-th term in the Dyson expansion, given by \begin{equation*} \int^{t}_{0}dt_{n}\int^{t_{n}}_{0}dt_{n-1}\ldots\int^{t_{2}}_{0}dt_{1}{\mathbb E} \left\{ \langle \hat{L}(t_{n})\ldots\hat{L}(t_{2})\hat{L}(t_{1}) \rangle \right\}. \end{equation*} On the other hand, differentiating Eq. \eqref{eqn:averaged_liouville} yields \begin{equation} \frac{d}{dt}{\mathbb E}(\hat{P}\sigma(t;\left\{b(t)\right\})) = (E_{1}(\hat{1})+E_{1}(\hat{1}, \hat{2})+\ldots)\hat{P}\sigma(0), \label{eqn:derivative_of_ensemble_eqn} \end{equation} where $ E_{1}(\hat{1}, \hat{2}, \ldots, \hat{n})$ denotes the time derivative of $ E_{0}(\hat{1}, \hat{2}, \ldots, \hat{n})$. We can rewrite the right-hand side of Eq. (\ref{eqn:derivative_of_ensemble_eqn}) in terms of ${\mathbb E}(\hat{P}\sigma(t)) \equiv \sigma_{\text{av}}(t)$ by solving Eq. (\ref{eqn:averaged_liouville}) with respect to $\hat{P}\sigma(0)$ as \begin{equation*} \hat{P}\sigma(0)=\left\{1+E_{0}(\hat{1})+E_{0}(\hat{1},\hat{2})+\ldots \right\}^{-1}\sigma_{\text{av}}(t). \end{equation*} Upon substituting the result back into Eq. (\ref{eqn:derivative_of_ensemble_eqn}), we obtain the TCL ME, \( \dot{\sigma}_{\text{av}}(t) =\hat{\kappa}(t) \sigma_{\text{av}}(t)\). The TCL generator $\hat{\kappa}(t)$ is determined in terms of the (van Kampen) cumulants of the Liouvillian superoperator as \cite{breuer2002theory} \begin{equation*} \left\{E_{1}(\hat{1})+E_{1}(\hat{1}, \hat{2})+\ldots\right\} \left\{1+E_{0}(\hat{1})+E_{0}(\hat{1},\hat{2})+\ldots \right\}^{-1}, \end{equation*} and can be expanded in orders of the interaction coefficients, $\hat{\kappa}(t)=\sum_{n}\hat{\kappa}_{n}(t)$. We emphasize that $\hat{\kappa}(t)$ depends on \emph{both} the stochastic process $b(t)$ and the coupling coefficient to the vacuum field $\Gamma$. For sufficiently \emph{weak coupling} to both the classical and quantum baths, we truncate the TCL generator expansion at the second order.
Accordingly, we have $\hat{\kappa}_{1}(t) = E_{1}(\hat{1})$ and $\hat{\kappa}_{2}(t) = E_{1}(\hat{1}, \hat{2})-E_{1}(\hat{1})E_{0}(\hat{1}).$ Provided that ${\text{Tr}}_{B}[\rho_{B}H_{\Lambda\text{-vac}}(t)]=0$, we can take $E_{1}(\hat{1})=0$ \cite{breuer2002theory} (since that leads to $\hat{P}\hat{L}(t) \hat{P}=0$). This leaves $\hat{\kappa}_{2}(t)$ as the only non-zero contribution, resulting in the second-order TCL ME \begin{equation*} \dot{\sigma}_{\text{av}}(t)= \int_{0}^{t} \!ds \, {\mathbb E} \{ \hat{P}\hat{L}(t)\hat{L}(s)\hat{P} \}\sigma_{\text{av}}(t) . \end{equation*} Finally, by tracing over the quantum bath, we obtain the desired TCL ME for the $\Lambda$ system alone, namely, \begin{equation*} \dot{\rho}_{\text{av}}(t) = -\int_{0}^{t}\!ds \,{\text{Tr}}_{B} {\mathbb E}\left\{[\tilde{H}_{\text{int}}(t), [\tilde{H}_{\text{int}}(s), \rho_{\text{av}}(t)\otimes \rho_{B}]]\right\}, \end{equation*} where $\rho_{\text{av}}(t)\equiv {\text{Tr}}_{B}\sigma_{\text{av}}(t)$ is the reduced density matrix of the $\Lambda$ system. Hereafter, we omit the subscript ``av'' for notational convenience. Tailoring the above derivation more specifically to our system, the relevant interaction-picture Hamiltonian is \( \tilde{H}_{\text{int}}(t)=\tilde{H}_{\text{drive}}(t)+\tilde{H}_{c}[b(t)]+\tilde{H}_{\Lambda\text{-vac}}(t), \) with \[ \tilde{H}_{c}[b(t)]=e^{ iH^0_{\Lambda}t}H_{c}(t)e^{ -iH^0_{\Lambda}t}=H_{c}[b(t)]=\gamma_{e}b(t)Z_{12} , \] and $Z_{12}\equiv |1\rangle \langle 1|-|2\rangle \langle 2|$. We thus easily arrive at \begin{equation} \dot{\rho}(t)=-i[H_{\text{drive}}(t), \rho(t)]+R_{q}[\rho(t)]+R_{c}[\rho(t)], \label{eqn:rate} \end{equation} where $R_q$ is the Lindblad dissipator already specified in Eqs.\eqref{eq:Rq}-\eqref{eq:lind}, whereas \begin{align*} R_{c}[\rho (t)] & = -\gamma^{2}_{e}\alpha(t) \left[ Z_{12}^2 \rho (t)+\rho(t) Z_{12}^2 - 2 Z_{12} \rho (t) Z_{12} \right] \end{align*} is the time-local dissipator accounting for the additional spin-bath noise. Here, the time-dependent strength parameter is given by \begin{equation} \alpha(t) \equiv \int_{0}^{t}\!\! ds\,C(t-s) = 2\int_0^\infty \!d\omega \, S(\omega) \frac{\sin \omega t}{\omega}, \label{eqn:alpha} \end{equation} in terms of the noise correlation function $C(t)$ of Eq. \eqref{eq:cor} and the corresponding noise spectral density, determined by the Fourier transform \[ S(\omega)=\frac{1}{2\pi}\int_{-\infty}^{+\infty} \!\! d\tau \, C(\tau) e^{-i\omega \tau}.\] Physically, the noise parameter $\alpha(t)$ in the ME is the only term that carries information about the history of the stochastic magnetic field, consistent with the fact that no Markovian assumption is involved in the TCL ME. \begin{figure}[b] \centering \includegraphics[width=7cm]{newPlotFeb24.pdf} \vspace*{-2mm} \caption{(Color online) Excited-state population as a function of time for fast ($\tau_{c}=0.01 \mu$s, hence $\tau_{2}=100 \mu$s) vs. slow ($\tau_{c}=5 \mu$s, hence $\tau_{2}=0.2\mu$s) stochastic baths. Parameter values are as follows: $\Omega_{1}=\Omega_{2}=46$MHz, $\Gamma=7$MHz, and $\delta=0$. Fast oscillations have been removed for clarity.} \label{fig:dynamics} \end{figure} Similar to Eq.\eqref{eq:rho}, Eq.\eqref{eqn:rate} can still be cast as a linear system of coupled differential equations, \begin{equation} \dot{\vec{\rho}}=\hat{A^{\prime}}(t)\vec{\rho}, \qquad \vec{\rho}\equiv(\rho_{00}, \rho_{01}, \ldots, \rho_{22}), \label{ME2} \end{equation} in terms of a new superoperator matrix $\hat{A^{\prime}}(t)$.
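As a minimal illustration (our own sketch, not the code used to generate the figures), Eq.\eqref{eqn:rate} can be integrated directly for an exponentially decaying correlation function $C(t)=c_0^{2}e^{-t/\tau_{c}}$, for which $\gamma_{e}^{2}\alpha(t)=(\gamma_{e}c_{0})^{2}\tau_{c}(1-e^{-t/\tau_{c}})$; parameter values roughly mimic Fig.\ref{fig:dynamics}:
\begin{verbatim}
# Integrate the TCL ME, Eq. (eqn:rate), in the interaction picture with
# delta = 0 and phi_1 = phi_2 = 0. Illustrative sketch only.
import numpy as np
from scipy.integrate import solve_ivp

O1 = O2 = 46.0                        # Rabi frequencies (MHz)
Gam = 7.0                             # vacuum decay rate (MHz)
ge_c0, tau_c = 1.0, 0.01              # gamma_e*c0 = 1 MHz, fast bath (us)

Hd = np.zeros((3, 3), dtype=complex)
Hd[1, 0], Hd[2, 0] = O1 / 2, O2 / 2
Hd += Hd.conj().T
L1 = np.sqrt(Gam / 2) * np.outer([0, 1, 0], [1, 0, 0]).astype(complex)
L2 = np.sqrt(Gam / 2) * np.outer([0, 0, 1], [1, 0, 0]).astype(complex)
Z12 = np.diag([0.0, 1.0, -1.0]).astype(complex)

def rhs(t, y):
    rho = y.reshape(3, 3)
    a = ge_c0 ** 2 * tau_c * (1 - np.exp(-t / tau_c))  # gamma_e^2 alpha(t)
    d = -1j * (Hd @ rho - rho @ Hd)
    for L in (L1, L2):                                 # vacuum dissipator
        d += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho
                                           + rho @ L.conj().T @ L)
    d -= a * (Z12 @ Z12 @ rho + rho @ Z12 @ Z12 - 2 * Z12 @ rho @ Z12)
    return d.ravel()

rho0 = np.diag([0.0, 1.0, 0.0]).astype(complex)        # start in |1>
sol = solve_ivp(rhs, (0.0, 5.0), rho0.ravel(), max_step=0.001)
print("rho_00(t=5us) =", sol.y[:, -1].reshape(3, 3)[0, 0].real)
\end{verbatim}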
However, the dynamical system is now \emph{time-varying} in general, due to the time dependence encoded in $\alpha(t)$, which in turn stems from the colored spectrum. Characterizing the steady states and their stability becomes a significantly less straightforward problem \cite{LTV}, which is beyond our present scope. As an illustration of the influence that bath properties may have on the transient dynamics, we showcase in Fig.(\ref{fig:dynamics}) the dynamics of the excited state population obtained by solving Eq.\eqref{ME2} for an exponentially decaying correlation function, $C(t)=c_0^{2}\exp{(-t/\tau_{c})}$, with $\gamma_{e}c_0=1$MHz \cite{de2010universal}. We contrast a fast ($\tau_{c}=0.01\mu$s) vs. a slow ($\tau_{c}=5\mu$s) bath, showing how this results in appreciably different profiles over a given observation window. Since our main focus is CPT, which is an equilibrium phenomenon, the steady state will be seen in the long-time (effectively Markovian) limit, whereby \[ \alpha(t)= \int^{t}_{0}\!\!d\tau \,C(\tau) \approx \int^{\infty}_{0}\!\!d\tau \,C(\tau)\equiv \alpha = \pi S(0) , \quad t \gg \tau_c. \] This integral is often encountered when calculating the decoherence time for a two-level system in the presence of Gaussian dephasing \cite{breuer2002theory}, as $T^{-1}_{2}=\gamma^{2}_{e}\alpha$. More generally, $\alpha$ may be related to the fastest decoherence timescale, which is the timescale over which $\rho (t)$ changes appreciably due to the coupling to the bath \cite{mozgunov2020completely}, denoted by $\tau_{2}\equiv (\gamma^{2}_{e}\alpha)^{-1}$ for our three-level system. \section{Applications} In this section, we illustrate how the theoretical description of CPT dynamics developed so far may be applied to two problems of independent interest. \subsection{Parametric noise estimation} \label{applications1} \begin{figure}[t] \centering \includegraphics[width=8.6cm]{CPTfigsnew.pdf} \vspace*{-3mm} \caption{(Color online) (a) Excited-state population of the $\Lambda$ system versus the two-photon detuning of the two lasers; the second laser detuning is taken to be $\delta_{2}=0$. The depth of the CPT dip depends on the value of the noise parameter characterizing the classical bath. (b) Excited-state population as a function of $\tau_{2}={1}/({\gamma^{2}_{e}\alpha})$ at the CPT dip ($\delta=0$). The values for the experimental parameters are motivated by \cite{golter2014optically} and taken to be $\Gamma=7\,$MHz, $\Omega_{1}=\Omega_{2}=46\,$MHz. } \label{figure2} \end{figure} First, by determining the steady-state solution of the ME Eq.(\ref{eqn:rate}), the equilibrium excited-state population may be studied as a function of relevant parameters, in particular, the detuning. In Fig.(\ref{figure2}a), representative results are shown for CPT dynamics with and without the presence of the spin-bath noise. Notably, in the noisy case, the excited-state population no longer vanishes; rather, the characteristic CPT dip is lifted to a finite height above zero. The depth of this dip depends on the value of the noise parameter $\alpha$. This is caused by the fluctuating magnetic field randomly shifting the two ground states of the $\Lambda$ system and spoiling the destructive interference condition necessary for CPT. Interestingly, a similar steady-state behavior of driven $\Lambda$ systems was reported in \cite{blaauboer1997steady} in the presence of an incoherent optical pumping between only one of the ground states and the excited state.
Likewise, including the decoherence of the ground states would also lead to a similar effect \cite{xu2008coherent}. These effects will play a negligible role (if any) in the CPT setting we consider. On the one hand, no optical pumping is present in our scheme. On the other hand, the effect reported in \cite{xu2008coherent} (where the ground-state decoherence time ranges from $\mu$s to ms depending on the $^{13}$C-to-$^{12}$C ratio) is negligible due to the relatively fast equilibration time of the $\Lambda$ system in the NV center \cite{lekavicius2017transfer}. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{AllFidPlots.pdf} \vspace*{-3mm} \caption{(Color online) (a) The fidelity of the initialized state as a function of the Rabi ratio for a fixed $\Omega_{2}=10$MHz and noise parameter $\tau_{2}=300\,\mu$s. The dip is associated with maximum coupling between the dark and bright states. (b) The fidelity of the initialized state as a function of $\Omega_{2}$ for a fixed Rabi ratio and noise parameter $\tau_{2}=300\,\mu$s. (c) The fidelity of the initialized state as a function of noise parameter $\tau_{2}$ for a fixed $\Omega_{2}=10$MHz and Rabi ratio. Unit fidelity implies that the initialization is at the target dark state. The vacuum decay rate in all three plots is fixed at $\Gamma=1$MHz. } \label{figure3} \end{figure*} In Fig.(\ref{figure2}b), we present the dependence of the height of the CPT dip on the noise parameter for the special case of an exponentially decaying correlation function as considered before. Fig.(\ref{figure2}b) allows us to infer the value of $\tau_{2}$ (equivalently, $\alpha$) given the CPT simulation data in Fig.(\ref{figure2}a) and any {\em a priori} model for $C(\tau)$. This ability to determine $\alpha$ from an experimentally accessible quantity (the noisy CPT photon count) without resorting to multiple experimental set-ups may be especially advantageous in practice. \subsection{Qubit state preparation} \label{applications2} State initialization of a $\Lambda$ system in an arbitrary superposition of its ground states has been studied \cite{yale2013all}, with application in optically controlled solid-state spin-based quantum devices. This is accomplished by properly tuning the Rabi frequencies $\Omega_{1}, \Omega_{2}$, as well as the phase difference $\phi_{1}-\phi_{2}$ between the two driving fields, and using CPT to initialize the system in the dark state \begin{equation} |d\rangle =\cos{\theta}|1\rangle-e^{i\phi}\sin{\theta}|2\rangle , \label{eqn:dark_state} \end{equation} where $\tan{\theta}=\Omega_{1}/\Omega_{2}$ and $\phi=\phi_{1}-\phi_{2}$. The dark state is reached when the laser frequencies $\omega_{L1}$, $\omega_{L2}$ are completely in tune with the transition frequencies of the $\Lambda$ system. Due to the spin-bath noise, the steady-state solution of the ME Eq.(\ref{eqn:rate}) will not exactly be a dark state; hence we look for its fidelity with respect to $|d\rangle$. This fidelity is a function of the experimental parameters $\Omega_{1}, \Omega_{2}$ and the noise parameter $\tau_{2}$: \begin{equation} f(\Omega_{1},\Omega_{2}, \tau_{2})\equiv F(\rho^{\text{eq}}, |d\rangle \langle d|)=\langle d|\rho^{\text{eq}}|d\rangle. \end{equation} We assume for simplicity that $\phi=0$ (recall that the noiseless limit corresponds to $\tau_{2} \rightarrow \infty$).
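A hedged numerical sketch of this fidelity (our own illustration, not the authors' production code; the long-time Markovian limit is assumed, so that $\gamma_{e}^{2}\alpha=1/\tau_{2}$ is constant) is the following:
\begin{verbatim}
# Steady-state fidelity f = <d|rho_eq|d> in the long-time Markovian limit
# (gamma_e^2 alpha = 1/tau_2 constant), delta = 0, phi = 0.
import numpy as np

def steady_fidelity(O1, O2, Gam, tau2):
    I = np.eye(3, dtype=complex)
    Hd = np.zeros((3, 3), dtype=complex)
    Hd[1, 0], Hd[2, 0] = O1 / 2, O2 / 2
    Hd += Hd.conj().T
    Ls = [np.sqrt(Gam / 2) * np.outer(e, [1, 0, 0]).astype(complex)
          for e in ([0, 1, 0], [0, 0, 1])]
    Z = np.diag([0.0, 1.0, -1.0]).astype(complex)
    # Row-major vectorization: vec(A rho B) = kron(A, B.T) vec(rho)
    Lv = -1j * (np.kron(Hd, I) - np.kron(I, Hd.T))
    for L in Ls:
        LdL = L.conj().T @ L
        Lv += np.kron(L, L.conj()) - 0.5 * (np.kron(LdL, I)
                                            + np.kron(I, LdL.T))
    Z2 = Z @ Z
    Lv -= (1 / tau2) * (np.kron(Z2, I) + np.kron(I, Z2)
                        - 2 * np.kron(Z, Z))
    # Steady state: null vector of Lv, rescaled to unit trace
    rho = np.linalg.svd(Lv)[2][-1].conj().reshape(3, 3)
    rho /= np.trace(rho)
    Om = np.hypot(O1, O2)
    d = np.array([0.0, O2 / Om, -O1 / Om])   # dark state, Eq. (dark_state)
    return float(np.real(d @ rho @ d))

print(steady_fidelity(O1=10.0, O2=10.0, Gam=1.0, tau2=300.0))
\end{verbatim}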
Since the Rabi ratio $\Omega_{1}/\Omega_{2}$ determines the dark state in Eq.(\ref{eqn:dark_state}) when $\phi=0$, we write the fidelity in the more convenient form \begin{equation} f(\Omega_{1},\Omega_{2}, \tau_{2})=g({\Omega_{1}}/{\Omega_{2}},\Omega_{2}, \tau_{2}) , \label{eqn: fidelity} \end{equation} where $g$ is a function that depends on $\Omega_{1}$ only through the Rabi ratio $\Omega_{1}/\Omega_{2}$. Fig.(\ref{figure3}a) shows the range of possible target dark states with different Rabi ratios when the parameters $\Omega_{2}$ and $\tau_{2}$ are fixed. A certain threshold on the fidelity (e.g., $F>0.98$) should be demanded for a state preparation to be considered successful. The fidelity plot has a dip around the Rabi ratio $\Omega_{1}/\Omega_{2}=1$, corresponding to the dark state $|d\rangle = (|1\rangle -|2\rangle)/\sqrt{2}$. This is explicitly shown in the Appendix. Briefly, the dark state (which is decoupled from the bright state when no spin bath is present) becomes coupled to the bright state $|b\rangle$ via the fluctuating magnetic field $b(t)$. The coupling constant is given by $(\sin{2\theta})\gamma_{e}$, which is maximized when $\theta=\pi/4$, corresponding to $\Omega_{1}/\Omega_{2}=1$. Since $|b\rangle$ is coupled to the excited state, this reduces the fidelity of the target state. Next, we consider a fixed target dark state (i.e., a fixed Rabi ratio in Eq.(\ref{eqn: fidelity})). Fig.(\ref{figure3}b) shows the fidelity as a function of $\Omega_{2}$ for a fixed Rabi ratio and noise parameter. We see that the fidelity quickly saturates with increasing $\Omega_{2}$. This is important because lasers in practice have a finite spectral width around the desired frequency $\omega$. In the presence of other excited states, this might lead to undesired excitation that takes the electrons out of the $\Lambda$ system (the probability of which is proportional to the matter-field coupling, i.e., the Rabi frequency). Hence, Fig.(\ref{figure3}b) shows that one should pick the smallest $\Omega_{2}$ for which the fidelity of the target state saturates. For the specific $\Lambda$ system under consideration, the undesired excitation to the next allowed level in the NV center (which is $A_{1}$) can be ignored, because the energy difference between the excited states $A_{1}$ and $A_{2}$ is about $3\,$GHz, whereas in practice the driving laser frequencies can be brought to the desired transition energies within an uncertainty of a few tens of MHz. Finally, Fig.(\ref{figure3}c) shows the fidelity as a function of the noise parameter $\tau_{2}$ for fixed Rabi frequencies $\Omega_{1},\Omega_{2}$. For values of $\tau_{2}<400\,\mu$s (within the given values of the other parameters), the CPT method of state initialization \cite{yale2013all} fails to achieve sufficient fidelity for certain target states, e.g., $F<0.98$ for $\tau_{2}=300\,\mu$s when initializing at $\Omega_{1}/\Omega_{2}=1$. Therefore, the decision to use the CPT method for state initialization should be informed by the value of the noise parameter $\tau_{2}$, which determines the attainable state fidelity and which can be extracted as in Figs.(\ref{figure2}a, \ref{figure2}b). \section{Conclusion} We have analyzed the CPT phenomenon in the presence of a weakly coupled classical noise environment, in addition to quantum vacuum noise. We have derived a TCL ME for the reduced dynamics of the driven $\Lambda$ system and showed that the equilibrium state has a non-zero excited-state population.
We find a one-to-one correspondence between the height of the CPT dip and the value of the unknown noise parameter, allowing for an experimental determination of the noise. Combined with other experimental techniques for determining environment correlation times, the CPT-based parameter estimation can be used to infer further unknown parameters of the noise model. We apply this knowledge to the problem of dissipative qubit state initialization and show that the target states prepared this way vary in fidelity. Future efforts will be directed towards employing this noise parameter estimation method to monitor an environment with non-stationary noise. Such monitoring would provide corrective information on the state of a nearby qubit, to which quantum control can be applied in a feedback loop to maintain high fidelity. \section*{Acknowledgement} A.D., N.M., P.B., and N.B. would all like to dedicate this paper to the memory of their advisor, Jonathan P. Dowling; may he rest in peace. We would like to acknowledge Lorenza Viola and Leigh M. Norris for their guidance and support throughout the project as well as their contribution to the theoretical development of the paper. We would also like to thank Hailin Wang, Shu-Hao Wu, and Ethan Turner for useful discussions. A.D. would like to thank Lorenza Viola for her hospitality during his visit to Dartmouth College. A.D. would also like to thank Mark M. Wilde, Lior Cohen, and Hwang Lee for suggestions and comments. This work was supported by the U.S. Army Research Office through the U.S. MURI Grant No. W911NF-18-1-0218. \begin{appendix} \section{Stochastic Hamiltonian in the dark and bright state basis} \label{appendix} Here we derive the Hamiltonian of a driven $\Lambda$ system in the dark-bright-common (dbc) basis \cite{shakhmuratov2004dark}. First, we write the $\Lambda$ system Hamiltonian $H(t)=H^0_{\Lambda}+H_{\text{drive}}(t)+H_{c}[b(t)]$ in the presence of the stochastic bath as \begin{equation*} \begin{split} H(t) &=\sum_{n=0,1,2}\omega_{n}P_{nn} -\gamma_{e}b(t)(P_{11}-P_{22}) \\&+\frac{\Omega_{1}}{2}(P_{10}e^{-i(\omega_{L1}t+\phi_{1})} +P_{01}e^{i(\omega_{L1}t +\phi_{1})}) \\&+\frac{\Omega_{2}}{2}(P_{20}e^{-i(\omega_{L2}t+\phi_{2})}+P_{02}e^{i(\omega_{L2}t+\phi_{2})}), \end{split} \end{equation*} where $P_{ij}=|i\rangle \langle j|$ are one-dimensional projectors.
Next, we transform the wavefunction using the unitary \begin{equation*} U_{\Lambda}(t)=e^{i\sum_{n=0,1,2}\omega_{n}P_{nn}t } \hspace{0.25cm} \Rightarrow \hspace{0.25cm} |\Phi(t)\rangle =U_{\Lambda}(t)|\Psi(t)\rangle , \end{equation*} so that the new wavefunction satisfies \begin{equation*} \begin{split} i\partial_{t}|\Phi(t)\rangle &=i(\partial_{t}U_{\Lambda}(t))|\Psi(t)\rangle+U_{\Lambda}(t)(i\partial_{t}|\Psi(t)\rangle) \\ &=H_{\text{eff}}(t)|\Phi(t)\rangle , \end{split} \end{equation*} with the effective Hamiltonian given by \begin{multline} H_{\text{eff}}(t) =-\gamma_{e}b(t)(\hat{P}_{11}-\hat{P}_{22})+\frac{\Omega_{1}}{2}\hat{P}_{10}e^{-i\phi_{1}} \\+\frac{\Omega_{2}}{2}\hat{P}_{20} e^{-i\phi_{2}}+\text{H.c.} \label{eqn:effective_hamiltonian} \end{multline} Notice that we used the rotating basis $|\hat{1}(t)\rangle =e^{-i(\omega_{10}-\omega_{L1})t}|1\rangle $, $|\hat{2}(t)\rangle =e^{-i(\omega_{20}-\omega_{L2})t}|2\rangle $, and $|\hat{0}\rangle=|0\rangle$ to define the new projectors $\hat{P}_{ij}$ (also note that $\hat{P}_{ii}=P_{ii}$).\\ The next step is to move to the dbc-basis, that is, \begin{align*} |c\rangle &=|\hat{0}\rangle ,\\ |d\rangle &=e^{i\phi_{2}}\cos{\theta}|\hat{1}\rangle-e^{i\phi_{1}}\sin{\theta}|\hat{2}\rangle ,\\ |b\rangle &=e^{-i\phi_{1}}\sin{\theta} |\hat{1}\rangle+e^{-i\phi_{2}}\cos{\theta}|\hat{2}\rangle , \end{align*} where, as in the main text, $\tan{\theta}=\Omega_{1}/\Omega_{2}$.\\ We now show that, in the absence of the classical noise $b(t)$, the $\Lambda$ system can be thought of as a single decoupled state $|d\rangle$ and an effective driven two-level system given by the other two states ($|b\rangle$ and $|c\rangle$), with an effective Rabi frequency of $\Omega=\sqrt{\Omega^{2}_{1}+\Omega^{2}_{2}}$. To start, we write \begin{align*} |\hat{1}\rangle &=e^{-i\phi_{2}}\cos{\theta}|d\rangle+e^{i\phi_{1}}\sin{\theta}|b\rangle ,\\ |\hat{2}\rangle &=-e^{-i\phi_{1}}\sin{\theta} |d\rangle+e^{i\phi_{2}}\cos{\theta}|b\rangle , \end{align*} which gives \begin{align*} \hat{P}_{10} &=e^{-i\phi_{2}}\cos{\theta}P_{dc}+e^{i\phi_{1}}\sin{\theta}P_{bc}, \\ \hat{P}_{20} &=-e^{-i\phi_{1}}\sin{\theta} P_{dc}+e^{i\phi_{2}}\cos{\theta}P_{bc}. \end{align*} Substituting into Eq.(\ref{eqn:effective_hamiltonian}), we find $\Omega(P_{bc}+P_{cb})/2$ for the drive contribution, and the dark state is decoupled from the other two states, as claimed. After including the stochastic contribution of Eq.(\ref{eqn:effective_hamiltonian}) and using the relationships \begin{equation} \begin{split} P_{11} &=\cos^{2}{\theta}P_{dd}+\sin^{2}{\theta}P_{bb}\\& \qquad +\sin{\theta}\cos{\theta}(e^{-i(\phi_{1}+\phi_{2})}P_{db}+\text{H.c.}), \\ P_{22} &=\sin^{2}{\theta}P_{dd}+\cos^{2}{\theta}P_{bb}\\ & \qquad-\sin{\theta}\cos{\theta}(e^{-i(\phi_{1}+\phi_{2})}P_{db}+\text{H.c.}), \end{split} \end{equation} we find \begin{eqnarray*} P_{11}-P_{22} &= & \cos{2\theta}(P_{dd}-P_{bb}) \\ &+& \sin{2\theta}(e^{-i(\phi_{1}+\phi_{2})}P_{db} +e^{i(\phi_{1}+\phi_{2})}P_{bd}). \end{eqnarray*} Therefore, the effective Hamiltonian finally reads \begin{align*} H_{\text{eff}}(t) &= -\gamma_{e}\cos{2\theta}b(t)(P_{dd}-P_{bb})\\ &\qquad -\gamma_{e}\sin{2\theta}b(t)(e^{-i(\phi_{1}+\phi_{2})}P_{db} +e^{i(\phi_{1}+\phi_{2})}P_{bd}) \\&\qquad \qquad +\frac{\Omega}{2}(P_{bc}+P_{cb}), \end{align*} where the first term describes the coupling of the dark and bright states to the stochastic magnetic field $b(t)$ with a strength that depends on the ratio of the two Rabi frequencies via $\cos{2\theta}$. 
The second term describes a coupling between the dark and bright states mediated by the stochastic magnetic field, with a strength that is likewise determined by the ratio of the two Rabi frequencies, via $\sin{2\theta}$. Finally, the term in the last line is the well-known coupling between the bright and common states (in which the dark state does not participate). When $\sin{2\theta}=0$, we expect the steady-state solution of the ME given in Eq.(\ref{eqn:rate}) of the main text to have the highest fidelity, since the dark and bright states are then uncoupled. This is the case when $\theta=0$ (i.e., $\Omega_{1}=0$) or $\theta=\frac{\pi}{2}$ (i.e., $\Omega_{2}=0$). On the other hand, the coupling between the dark and bright states is maximized (and hence the fidelity of the steady state is minimized) when $\sin{2\theta}=1$ (i.e., $\theta=\frac{\pi}{4}$), which corresponds to $\Omega_{1}=\Omega_{2}$. This explains the dip in Fig.(\ref{figure3}a).
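The above algebra can be verified symbolically. The following hedged \texttt{sympy} sketch (ours; the matrix representation in the ordered basis $(|\hat{1}\rangle,|\hat{2}\rangle,|\hat{0}\rangle)$ and all variable names are conventions of the sketch, not part of the derivation) checks that the dbc change of basis is unitary, that $|d\rangle$ decouples from $|c\rangle$, that the bright-common coupling is $\Omega/2$, and that the dark-bright coupling carries the factor $\sin{2\theta}$:
\begin{verbatim}
# Hedged symbolic check (ours) of the dbc-basis algebra; basis ordering
# (|1>,|2>,|0>) is our own convention.
from sympy import symbols, sin, cos, exp, I, Matrix, eye, simplify

th, f1, f2, g, b, Om = symbols('theta phi1 phi2 gamma_e b Omega', real=True)
Om1, Om2 = Om*sin(th), Om*cos(th)      # tan(theta) = Omega_1/Omega_2
# H_eff in the rotating basis, cf. Eq. (eqn:effective_hamiltonian):
H = Matrix([[-g*b, 0, Om1/2*exp(-I*f1)],
            [0,  g*b, Om2/2*exp(-I*f2)],
            [Om1/2*exp(I*f1), Om2/2*exp(I*f2), 0]])
# Rows of U are <d|, <b|, <c| expressed in the rotating basis:
U = Matrix([[exp(-I*f2)*cos(th), -exp(-I*f1)*sin(th), 0],
            [exp(I*f1)*sin(th),   exp(I*f2)*cos(th),  0],
            [0, 0, 1]])
assert (U*U.H).applyfunc(simplify) == eye(3)   # U is unitary
Hdbc = (U*H*U.H).applyfunc(simplify)
assert simplify(Hdbc[0, 2]) == 0               # |d> decoupled from |c>
assert simplify(Hdbc[1, 2] - Om/2) == 0        # bright-common coupling Omega/2
# expected: -gamma_e*b*sin(2*theta)*exp(-I*(phi1+phi2)) (up to trig form)
print(simplify(Hdbc[0, 1]))
\end{verbatim}
\vspace{0.35cm} \end{appendix} \bibliographystyle{unsrt}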
\section{C-Space Singularities of the Four Bar mechanism}\label{sec:four_bar} Recently, efforts have been made in the kinematics community to define and categorize kinematic singularities of linkages in a rigorous way \cite{muller:local_geometry}, \cite{muller:higher_order}, \cite{muller:sing_conf}. It has been observed~\cite[Ex. 6.3.4]{muller:sing_conf} that closed 6R-chains exist with rank drop in the constraint equation but nevertheless smooth configuration spaces. This makes it necessary to decide for singular points in the configuration space whether they are C-Space Singularities, which are defined as non-manifold points of the configuration space~\cite{muller:sing_conf}. Compare also \cite[p. 227]{piippo:planar}, where this question is asked for some well-known planar linkages. In this section we would like to apply some of the theory developed so far to the example of Four Bar Linkages. Conditions on the design parameters such that there exist points with a rank drop in the constraint equations are well known, see e.g.\ \cite{piippo:planar}, \cite{bottema:theoretical_kin} (Grashof criterion). Less well known are methods to show that these points are C-Space Singularities, i.e.\ non-manifold points. We will be able to show this with computational methods for all mechanisms in the class of singular four bars. The Four Bar mechanism is one of the oldest and most widely used planar mechanisms in kinematics and mechanical engineering. It is also one of the first examples where singularities in the configuration space were described and analyzed in a systematic way \cite{gosselin:singularities}. In its basic form it consists of four bars connected in a circular arrangement by rotational joints with one bar fixed to the ground: \begin{center} \begin{tikzpicture}[scale=3.5, gelenk/.style={circle, draw=blue!50, fill=white, thick, inner sep=0pt,minimum size=1.6mm}, akt_gelenk/.style={circle, draw=blue!50, fill=white!80, thick, inner sep=0pt,minimum size=2.2mm}] \draw[->] (0,0) -- (1.5,0) node[right,fill=white] {}; \draw[->] (0,0) -- (0,1) node[above,fill=white] {}; \node[akt_gelenk,label=below:{\scriptsize $A$}] (base) at (0,0) {}; \node[akt_gelenk,label=below:{\scriptsize $B$}] (base2) at (1,0) {}; \node[gelenk,label={[yshift=2.8pt,xshift=-1pt]below right:{\scriptsize $(x,y)$}}] (g1) at (0.171301045, 0.46974030) {} ; \node[gelenk,label={[yshift=2.8pt,xshift=-1pt]below right:{\scriptsize $(u,v)$}}] (g2) at (1.0192522, 0.9998146) {} ; \node[gelenk] (b2) at (-0.24954, 0.4332779) {}; \node[gelenk] (c2) at (0.62099022, 0.92539266) {}; \draw[-,thick] (base) -- node[right,xshift=2pt] {\scriptsize $l_2$} (g1) -- node[below,yshift=-3pt] {\scriptsize $l_4$} (g2) -- node[right, xshift=1pt]{\scriptsize $l_3$} (base2); \draw[-,thick, dashed] (base) -- (b2) -- (c2) -- (base2); \end{tikzpicture} \end{center} The configuration space, defined as the set of all possible assembly configurations, can be represented by the real algebraic set $X = \algrv(I)$, where $I = \langle p_1,p_2,p_3 \rangle \leq \R[x,y,u,v]$ is generated by the polynomials \begin{align*} p_1 & = x^2 + y^2 - l_2^2,\\ p_2 & = (u - 2)^2 + v^2 - l_3^2,\\ p_3 & = (u - x)^2 + (v - y)^2 - l_4^2. \end{align*} Here $l_2,l_3,l_4$ are the parameters of the four bar, which are assumed to be positive real numbers. We fix the length $l_1 = |AB| = 2$ of the ground bar, since any other length can be treated by scaling the system. \paragraph{Dimension of $I$} We will assume $l_2 \ne 2$, $l_4 \ne 2$, since the complementary cases can be analyzed in the same way.
Now we calculate a pseudo Gr\"obner basis of $I$ with respect to the polynomial ordering $(dp(2),dp(2))$ and the enumeration $v,y,u,x$. We can do all the calculations in $B = \Q(l_2,l_3,l_4)[x,y,u,v]$ but we have to be careful to avoid dividing by elements of $\Q(l_2,l_3,l_4) \backslash \Q$ in all Gr\"obner base calculations, since these could be zero for valid parameters $l_2,l_3,l_4$. In \texttt{Singular} we can achieve this by setting \texttt{option(intStrategy)} and \texttt{option(contentSB)}. We get $6$ polynomials $g_1, \ldots, g_6$, with the leading terms \begin{alignat*}{2} \leadt(g_1) & = -16\,u^2\,x & \leadt(g_4) & = y^2 \\ \leadt(g_2) & = (-2\,l_2^2+8)\,v\,u \qquad & \leadt(g_5) & = 2\,v\,y \\ \leadt(g_2) & = -2\,v\,x^2 & \leadt(g_6) & = v^2 \end{alignat*} According to Exercise~2.3.8 of \cite{greuel:singular_commutative} $\{g_1, \ldots, g_6\}$ is a Gr\"obner basis of $I$ as long as $l_2 \ne \pm 2$ which we assumed in the beginning but then we can calculate the dimension of $I$ with \[ \dim I = \dim \, \langle u^2\,x, v\,u, v\,x^2, y^2, v\,y, \, v^2 \rangle. \] With a simple combinatorial argument \cite[Prop. 9.1.3]{cox:ideals} we see, that the dimension of the right ideal is $1$ and consequently $\dim I = 1$. Since $I$ can be generated by the $3$ elements $p_1,p_2,p_3$, $A := \R[x,y,u,v]/I$ must be a complete intersection ring and consequently equidimensional Cohen-Macaulay~\cite[Proposition~18.13]{eisenbud:comm_alg}. \paragraph{Singular Locus} According to \cite{piippo:planar} there only exist singular points in $X$, iff \[ l_2 \pm l_3 \pm l_4 = 2. \] We restrict our investigation to the case $l_2 - l_3 + l_4 = 2$, i.e. $l_3 = l_2 + l_4 - 2 > 0$, since other cases can be handled in a similar way. Since $\dim I = 1$ equidimensional we need to analyze the ideal $J$ generated by $I$ and all the $3$-minors of the jacobian of $(p_1, p_2, p_3)$. With a \texttt{Singular} Gr\"obner base computation we get $J = \langle s_1, s_2,s_3,s_4 \rangle$, with \begin{align*} s_1 & = q_1(l_2,l_4)\,x + c_1(l_2,l_4) \\ s_2 & = q_2(l_2,l_4)\,u + r_2(l_2,l_4)\,x + c_2(l_2,l_4) \\ s_3 & = q_3(l_2,l_4)\,y \\ s_4 & = q_4(l_2,l_4)\,v + f(l_2,l_4,x,y,u), \end{align*} where all the coefficients are polynomials in $l_2,l_4$ or $l_2,l_4,x,y,u$ respectively. We need to carefully examine the coefficients of the leading monomials of the $s_i$ to make sure that $\{s_1,s_2,s_3,s_4\}$ is a Gr\"obner basis of $J$ in $A$. A quick calculation in \texttt{Singular} shows: \begin{align*} q_1(l_2,l_4) & = l_4^2 \cdot (l_4-2) \cdot (l_2+l_4) \cdot (l_2+l_4-2)^2 \cdot (l_2 + 2)^2 \cdot l_2 \cdot (3\,l_2 - 8), \\ q_2(l_2,l_4) & = l_4^2 \cdot (l_2+2 \,l_4-2) \cdot (l_2+l_4-2)^2 \cdot (l_2+2)^2 \cdot (l_2 - 2), \\ q_3(l_2,l_4) & = l_4^2 \cdot (l_2 + l_4 - 2)^2 \cdot (l_2 + 2), \\ q_4(l_2,l_4) & = l_2^2 \cdot (l_2 + 2) \cdot (l_2 - 2), \end{align*} Taking in account our assumptions, that $l_4,l_2 > 0$, $l_2 + l_4 - 2 = l_3 > 0$, $l_2 \ne 2$, $l_3 \ne 2$ and in addition $l_2 \ne \frac{8}{3}$ (which we will also need to check separately), we see, that none of the $q_i$ will vanish and $s_1, s_2,s_3,s_4$ forms a Gr\"obner basis of $J$ for all possible values of $l_2, l_4$. Clearly then $\dim J = 0$ and since $A$ is Cohen-Macaulay, we can infer from \cite[Theorem~18.15]{eisenbud:comm_alg}, that $I$ must be a radical ideal. But then the Singular Locus of $I$ is given by all the prime ideals containing $J$. We now set $p = (l_2, 0, l_2 + l_4,0) \in \R^4$. 
As we can check quickly by substitution in $(s_1,s_2,s_3,s_4)$, we have $J \leq \frm_p$, so $p$ is the only singularity of $X_\C = \algv(I)$. \paragraph{Manifold Points} To check whether $p$ is a non-manifold point with Theorem~\ref{thm:curves} we need to calculate the integral closure $C$ of $A_{\frm_p}$. We could do this by applying the normalization algorithm described in \cite{greuel:singular_commutative} and implemented in \texttt{Singular}, but it has proven difficult to check the validity of the Gr\"obner basis calculations in each step for the considered values of $l_2,l_4$. We could still analyze the situation for generic values of $l_2,l_4$, but we want a statement for all valid values. Instead we will determine the blow-up $\pi \colon \tilde{X} \to X$ at $p$, since $\tilde{X}$ will be nonsingular after one blow-up and is then the normalization of $X$. First we move $p$ to the origin and consider $I_{\textrm{bl}} = \langle p'_1,p'_2,p'_3,b_1,\ldots,b_6 \rangle \leq \R[x,y,u,v,\hat{x}, \hat{y},\hat{u},\hat{v}]$ given by \begin{align*} p'_1& = p_1(x + l_2,y,u + l_2 + l_4,v) = x^2+y^2+ 2 l_2 x, \\ p'_2& = p_2(x + l_2,y,u + l_2 + l_4,v) = u^2+v^2+ (2 l_2 + 2 l_4-4) u, \\ p'_3& = p_3(x + l_2,y,u + l_2 + l_4,v) = x^2+y^2-2 x u+u^2-2 y v+v^2 - 2 l_4 x + 2 l_4 u, \end{align*} and the homogeneous polynomials \begin{alignat*}{2} b_1 & = x\,\hat{y} - y\, \hat{x}, \qquad \qquad& b_4 &= y\, \hat{u} - u \, \hat{y},\\ b_2 &= x\, \hat{u} - u\, \hat{x}, & b_5 &= y\,\hat{v} - v \, \hat{y},\\ b_3 &= x\, \hat{v} - v \, \hat{x}, & b_6 &= u\, \hat{v} - v \, \hat{u}. \end{alignat*} Then we go to the chart $\hat{y} = 1$ and get the system \begin{align*} p''_1 & = y \cdot (y \, \hx^2+y+(2 l_2) \hx), \\ p''_2 & = y \cdot (y \hu^2+y \hv^2+(2 l_2+2 l_4-4) \hu),\\ p''_3 & = y \cdot (y \,\hx^2-2 y \hx \hu+y \hu^2+y \hv^2-2 y \hv+y+(-2 l_4) \hx+(2 l_4) \hu). \end{align*} We set $I_y := \langle p''_1/y, p''_2/y, p''_3/y \rangle \leq \R[\hx,y,\hu,\hv]$. To get the equations of the strict transform on this chart, we need to remove the exceptional divisor, so we have to calculate the saturation \[ J := (I_y : \langle y \rangle^{\infty}). \] This can easily be achieved with the command \texttt{sat} in \texttt{Singular}. But again we have to be careful to check whether the Gr\"obner basis calculations stay valid for all assumed values of $l_2,l_4$, so we will calculate the saturation manually.
First we calculate $I_y \cap \langle y \rangle$, which we get by eliminating $t$ from \[ I_y \, t + \langle (1-t)\,y \rangle . \] Now we divide each generator of $I_y \cap \langle y \rangle$ by $y$, and after checking that none of the coefficients of the leading monomials vanish after substituting values for $l_2,l_4$, we normalize the generators and get the following Gr\"obner basis of $J = (I_y:\langle y \rangle)$: \scriptsize \begin{align*} f_1 & =\hu^2 \hx^2+\frac{-2 l_2-2 l_4}{l_2+2} \hu \hx^3+ \frac{l_2^2+2 l_2 l_4+ l_4^2}{l_2^2+4 l_2+4} \hx^4+\frac{l_2^2-4 l_2+4}{l_2^2+4 l_2+4} \hu^2+ \frac{-2 l_2^2-2 l_2 l_4+4 l_2-4 l_4}{l_2^2+4 l_2+4} \hu \hx+\frac{l_2^2+2 l_2 l_4+ l_4^2}{l_2^2+4 l_2+4} \hx^2, \\ f_2 & =y \hx^2+y+(2 \, l_2) \hx, \\ f_3 & =y \hu^2-y \hu \hx+ \frac{l_2^2+4 l_2+4}{4} \hu^2 \hx+ \frac{- l_2^2- l_2 l_4-2 l_2-2 l_4}{2} \hu \hx^2+ \frac{l_2^2+2 l_2 l_4+ l_4^2}{4} \hx^3, \\ f_4 & = \hv \hx+ \frac{l_2+2}{2 l_2} \hu \hx^2+ \frac{- l_2- l_4}{2 l_2} \hx^3+ \frac{- l_2+2}{2 l_2} \hu + \frac{- l_2- l_4}{2 l_2} \hx, \\ f_5 & = \hv \hu+ \frac{-3 l_2^2-3 l_2 l_4+6 l_2-2 l_4}{ l_2^2-4 l_2+4} \hv \hx+ \frac{ l_2^2+4 l_2+4}{2 l_2^2-4 l_2} \hu^2 \hx^3+ \frac{- l_2^2- l_2 l_4-2 l_2-2 l_4}{ l_2^2-2 l_2} \hu \hx^4+ \frac{ l_2^2+2 l_2 l_4+ l_4^2}{2 l_2^2-4 l_2} \hx^5+ \frac{3 l_2^2+4}{2 l_2^2-4 l_2} \hu^2 \hx \\ & \quad + \frac{-4 l_2^3-4 l_2^2 l_4+6 l_2^2-2 l_2 l_4+4 l_2+4 l_4}{ l_2^3-4 l_2^2+4 l_2} \hu \hx^2+ \frac{5 l_2^3+10 l_2^2 l_4-10 l_2^2+5 l_2 l_4^2-12 l_2 l_4-2 l_4^2}{2 l_2^3-8 l_2^2+8 l_2} \hx^3+ \frac{2 l_2^2+4 l_2 l_4-4 l_2+2 l_4^2-4 l_4}{ l_2^2-4 l_2+4} \hx, \\ f_6 & = \hv y+y \hu \hx+ \frac{-2 l_2- l_4+2}{2 l_2} y \hx^2+ \frac{-2 l_2- l_4+2}{2 l_2} y+ ( l_2-2) \hu+(- l_2+2) \hx, \\ f_7 &= \hv^2+ \frac{-2 l_2-2 l_4+4}{ l_2-2} \hv+ \hu^2+ \frac{-2 l_2-2 l_4+4}{ l_2-2} \hu \hx+ \frac{ l_2^2+2 l_2 l_4-2 l_2+ l_4^2-2 l_4}{ l_2^2-2 l_2} \hx^2+ \frac{ l_2^2+2 l_2 l_4-2 l_2+ l_4^2-2 l_4}{l_2^2-2 l_2}. \end{align*} \normalsize If we try to repeat the process we get the same ideal, hence $J = (I_y:\langle y \rangle^{\infty})$ is the ideal of the strict transform of $X$ on the chart $\hy = 1$. Now we check that the ideal generated by $J$ and the $3$-minors of the jacobian of $(f_1, \ldots, f_7)$ is the whole ring, so that $\tilde{X}$ is nonsingular. We can do this as before in the calculation of the singular locus of $X$. Now we need to identify all points $q$ in the fiber over the origin, so we calculate a pseudo Gr\"obner basis of $J + \langle y \rangle$ and get \begin{align*} g_1 &= (2 \, l_2) \, \hx, \\ g_2 &= (l_2-2) \, \hu + (-l_2+2) \, \hx, \\ g_3 &= y, \\ g_4 &=\hv^2+ \frac{-2 l_2-2 l_4+4}{ l_2-2} \, \hv+\hu^2+\frac{-2 l_2-2 l_4+4}{ l_2-2} \, \hu \hx \\ &\quad + \frac{ l_2^2+2 l_2 l_4-2 l_2+ l_4^2-2 l_4}{ l_2^2-2 l_2} \hx^2+\frac{ l_2^2+2 l_2 l_4-2 l_2+ l_4^2-2 l_4}{ l_2^2-2 l_2}. \end{align*} After checking that this is a Gr\"obner basis for all assumed values of $l_2,l_4$, we substitute $\hx=0$, $\hu=0$ from $g_1,g_2$ into $g_4$ and multiply by $(l_2^2 - 2 \,l_2)$. Then we get \[ g'(\hv) = (l_2^2 - 2\,l_2) \, \hv^2 - l_2\,(2\,l_2 + 2\,l_4 - 4) \hv + (l_2^2+2 l_2 l_4-2 l_2+ l_4^2-2 l_4). \] $g'$ is a quadratic equation in $\hv$ with discriminant \[ 8 \, l_2 \, l_4 \, (l_2 + l_4 - 2) = 8 \, l_2 \, l_3 \, l_4 > 0. \] Consequently all points lying over the origin in the chart $\hy = 1$ are real. One can check analogously that for all other charts the same points (if any) lie over the origin, and therefore the extension $I_{p} \, \R[[x,y,u,v]]$ must be real too, where $I_p$ is the translated ideal.
It follows that $(l_2,0,l_2 + l_4,0)$ is not a manifold point of $X$. \section{Higher Dimensions}\label{sec:higher_dim} If $X$ is of dimension greater than one, it is difficult in general to analyze $\hat{I}$, since we cannot compute effectively in the ring of power series. However, we can use a criterion by Efroymson to check if $I''$ is real: \begin{thm}[Efroymson~\cite{efroymson:local_reality}]\label{thm:efroymson} Let $I$ be a real prime and $\CR$ integrally closed. Then $I''$ is real if and only if the origin is contained in the euclidean closure of the real nonsingular points of $X$. \end{thm} We can use this criterion in many cases to decide if a singular real point of $X$ is a non-manifold point, since according to~Corollary~\ref{cor:real_mani} it is enough to show that the extension $I''$ of $I$ in $\R[[\bx]]$ is real at this point. We want to demonstrate this on the configuration space $X$ of the 3RRR-parallel linkage from figure~\ref{fig:3rrr}, but the same arguments can be used for a large class of linkages. \begin{figure} \centering \begin{tikzpicture} [scale=1.5, joint/.style={circle, inner sep=0pt, draw=blue!50, fill=white, minimum size=1.6mm}, ajoint/.style={circle,inner sep=0pt, draw=blue!50, fill=white, minimum size=2.2mm}, sjoint/.style={circle,inner sep=0pt, fill=black, minimum size=1mm}] \def\jointbox{ +(0.18,-0.08) arc (0:180:0.18) -- cycle} \def\jointboxdown{ +(-0.18,0.08) arc (180:360:0.18) -- cycle} \def\sc{1.6} \def\trfix{ ++(-60:0.18) -- ++(-0.18,0) -- cycle} \draw[->] (0,0) -- (4,0) node[right,fill=white] {\small $x$}; \draw[->] (0,0) -- (0,2) node[above,fill=white] {\small $y$}; \draw[thick, black] (0*1.6,0*1.6) node[right, xshift=7pt, yshift=7pt] {\scalebox{.91}{$\theta_1$}} -- (0.55635083*\sc,0.830947501*\sc) node[left, xshift=-9pt, yshift=2pt] {\scalebox{.91}{$\theta_2$}} coordinate(a) -- (1.5*\sc,0.5*\sc) coordinate (b) -- +(30:1*1.6) node[right, xshift=3pt, yshift=0pt]{\scalebox{.91}{$\theta_7,\, \theta_9$}} coordinate (c) -- +(90:1*1.6) node[above right, xshift=3pt, yshift=-1pt]{\scalebox{.91}{$\theta_4$}} coordinate (d) -- (b); \draw (0.7,0) arc (0:56:0.7); \draw[thick] (1*1.6,0*1.6) coordinate (A3)-- +(0.99752066*\sc,0.070374129410*\sc) node[right, xshift=3pt, yshift=2pt] {\scalebox{.91}{$\theta_8$}}coordinate(e) -- (c); \draw[thick] (1*1.6,1*1.6) coordinate (A2)-- +(-0.4114378*\sc,0.9114378*\sc) node[right, xshift=3pt, yshift=6pt] {\scalebox{.91}{$\theta_5$}}coordinate(f) -- (d); \draw[line width=1.5pt] (0,0) -- (a) -- (b); \draw[line width=1.5pt] (A2) -- (f) -- (d); \draw[line width=1.5pt] (A3) -- (e) -- (c); \draw[-] ($(a) + (0.26,0)$) arc (0:340:0.26); \draw[dashed] (a) -- +(0.5,0); \draw[-] ($(b) + (0.5,0)$) arc (0:30:0.5) node[right, xshift=1pt, yshift=-4pt] {\scalebox{.91}{$\theta_6$}}; \draw[dashed] (b) -- +(0.8,0); \draw[-] ($(b) + (0.26,0)$) arc (0:90:0.26) node[right, xshift=1pt, yshift=3pt] {\scalebox{.91}{$\theta_3$}}; \node at (0,0) [ajoint] {}; \node at (a) [joint] {}; \node at (b) [joint] {}; \node at (c) [joint] {}; \node at (d) [joint] {}; \node at (e) [joint] {}; \node at (f)
[joint] {}; \node at (A2) [ajoint] {}; \node at (A3) [ajoint] {}; \end{tikzpicture}\caption{A plane 3RRR-mechanism}\label{fig:3rrr} \end{figure} As in \cite{piippo:planar}, we set $\cos(\theta_i) = c_i$, $\sin(\theta_i) = s_i$; then $X$ is the real zero set of $I = \langle p_1, \ldots, p_{15} \rangle \leq \R[\{c_i,s_i \mid i=1,\ldots,9\}]$, where \begin{align*} p_1 &= c_1 + c_2 + c_3 + c_4 + c_5 - 1;\\ p_2 &= s_1 + s_2 + s_3 + s_4 + s_5 - 1;\\ p_3 &= c_1 + c_2 + c_6 + c_7 + c_8 - 1;\\ p_4 &= s_1 + s_2 + s_6 + s_7 + s_8;\\ p_5 &= c_6 + c_9 - c_3;\\ p_6 &= s_6 + s_9 - s_3;\\ p_{6+i} &= c_i^2 + s_i^2 - 1, \quad i=1,\ldots,9. \end{align*} We can check with \texttt{Singular} that $\dim I = 3$, but that the ideal $J$ generated by $I$ and the $15$-minors of the jacobian of $(p_1, \ldots, p_{15})$ has dimension $0$, which can be confirmed by analyzing the dimensions of $I + J_k$, where $J_1, \ldots, J_s$ are all the ideals given by the factorizing Gr\"obner basis algorithm (\texttt{facstd} in \texttt{Singular}). We also know that the coordinate ring $A = \R[\{c_i,s_i\}]/I$ is a complete intersection ring, since $I$ is generated by $15$ elements; but then $A$ is Cohen-Macaulay and we conclude: \begin{itemize} \item[(1)] $I$ is equidimensional and radical, \cite[Cor. 18.14, Theorem~18.15]{eisenbud:comm_alg}. \item[(2)] The singular locus of $X$ is zero-dimensional, which follows from (1) and the general Jacobian criterion~\cite[Thm. 5.7.1]{greuel:singular_commutative}. \item[(3)] $A$ is a normal ring, \cite[Theorem~18.15]{eisenbud:comm_alg}. \item[(4)] All the components of $X$ are disjoint, since by Hartshorne's Connectedness Theorem~\cite[Thm. 18.13]{eisenbud:comm_alg} any intersection of components would be a singular set of codimension $1$, contradicting (2). \end{itemize} Now at any point the local ring is the local ring of an irreducible, normal, affine $\R$-variety, because of (4) and (3), and we can apply Efroymson's criterion. We see that at any point $p$ of $X$ the extension of $I$ to the power series ring is real if and only if $p$ is not isolated in the set of nonsingular real points of $X$. But the singular locus is zero-dimensional by (2), so we only need to check that a singularity is not isolated in $X$ to prove that it is not a manifold point. Since $X$ is given as the configuration space of a linkage, we can often achieve this by geometric arguments.
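Such a rank computation can also be carried out directly. The following hedged \texttt{sympy} sketch (ours) evaluates the jacobian of $(p_1,\ldots,p_{15})$ at the singular configuration shown in the figure below, with the values taken from the accompanying table; a rank below $15 = \height I$ confirms that the configuration is a singular point:
\begin{verbatim}
# Hedged check (ours): jacobian rank of (p1,...,p15) at the singular
# configuration of the following figure (with c3 = c6 = -sqrt(3)/2).
from sympy import symbols, sqrt, Rational, Matrix

c = list(symbols('c1:10')); s = list(symbols('s1:10'))
p = [c[0]+c[1]+c[2]+c[3]+c[4]-1,
     s[0]+s[1]+s[2]+s[3]+s[4]-1,
     c[0]+c[1]+c[5]+c[6]+c[7]-1,
     s[0]+s[1]+s[5]+s[6]+s[7],
     c[5]+c[8]-c[2],
     s[5]+s[8]-s[2]] + [c[i]**2 + s[i]**2 - 1 for i in range(9)]
vals = dict(zip(c, [sqrt(3)/2, 1, -sqrt(3)/2, 0, 0, -sqrt(3)/2, 0, 0, 0]))
vals.update(zip(s, [-Rational(1,2), 0, -Rational(1,2), 1, 1,
                    Rational(1,2), 1, -1, -1]))
print({f.subs(vals) for f in p})     # {0}: a valid configuration
J = Matrix(p).jacobian(c + s).subs(vals)
print(J.rank())                      # 14 < 15 = height(I): rank drop
\end{verbatim}
For these values the printed rank is $14 < 15$, consistent with the claimed rank drop.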
For example, in the following singular configuration of the mechanism (used in the sketch above), one of the legs can rotate freely, so $X$ is not a manifold there: \vspace{0.1cm} \begin{figure}[h] \begin{minipage}{0.5\textwidth} \begin{center} \begin{tikzpicture} [scale=1.1,joint/.style={circle,inner sep=0pt, draw=blue!50,fill=white, minimum size=1.3mm}, ajoint/.style={circle,inner sep=0pt, draw=blue!50,fill=white, minimum size=2.2mm}, sjoint/.style={circle,inner sep=0pt, fill=black, minimum size=1mm}] \def\jointbox{ +(0.18,-0.08) arc (0:180:0.18) -- cycle} \def\jointboxdown{ +(-0.18,0.08) arc (180:360:0.18) -- cycle} \def\trfix{ ++(-60:0.18) -- ++(-0.18,0) -- cycle} \draw[->] (0,0) -- (4,0) node[right,fill=white] {\scriptsize $x$}; \draw[->] (0,0) -- (0,2) node[above,fill=white] {\scriptsize $y$}; \draw[thick, black] (0*1.6,0*1.6) -- ++(0.8660254*1.6,-0.5*1.6) coordinate (a) -- ++(1*1.6,0*1.6) coordinate (b) -- ++ (-0.8660254*1.6,0.5*1.6) coordinate (c) -- ++(0*1.6,-1*1.6) coordinate (d) -- ++(0.8660254*1.6,0.5*1.6); \draw[line width=1.2pt, black] (1*1.6,1*1.6) coordinate (e) -- (1*1.6,0) -- (1*1.6,-1*1.6); \draw[line width=1.2pt, black] (1*1.6,0*1.6) -- (2*1.6,0) coordinate (f) ; \draw[line width=1.2pt, black] (0*1.6,0*1.6) -- (a) -- (b); \node at (0,0) [ajoint] {}; \node at (a) [joint] {}; \node at (b) [joint] {}; \node at (c) [ajoint] {}; \node at (d) [joint] {}; \node at (e) [ajoint] {}; \node at (f) [joint] {}; \end{tikzpicture} \end{center} \end{minipage}% \begin{minipage}{0.5\textwidth} \begin{tabular}{@{}cc} \toprule Variable & Value \\ \midrule $(c_1,s_1)$ & $\left(\frac{\sqrt{3}}{2}, -\frac{1}{2}\right)$ \\ $(c_2,s_2)$ & $(1,0)$ \\ $(c_3,s_3)$ & $\left(-\frac{\sqrt{3}}{2}, -\frac{1}{2}\right)$ \\ $(c_4,s_4)$ & $(0,1)$ \\ $(c_5,s_5)$ & $(0,1)$ \\ $(c_6,s_6)$ & $\left(-\frac{\sqrt{3}}{2}, \frac{1}{2} \right)$ \\ $(c_7,s_7)$ & $(0,1)$ \\ $(c_8,s_8)$ & $(0,-1)$ \\ $(c_9,s_9)$ & $(0,-1)$ \\ \bottomrule \end{tabular} \end{minipage}\caption{A singular configuration of the 3RRR-mechanism} \end{figure} \section{Introduction} For any zero set $X = \algrv(I)$ of an ideal $I = (g_1, \ldots, g_k) \leq \R[\bx]$, $\bx = (x_1, \ldots, x_n)$, there is the question of identifying points where $X$ is not locally a submanifold of $\R^n$. The standard approach to this problem is to look for points $p \in X$ where the rank of the jacobian of $(g_1, \ldots, g_k)$ drops below the height of $I$, which is the codimension of $\algcv(I)$. Unfortunately, this is in general not enough to imply that $X$ is \textbf{not} locally a submanifold. Obviously problems arise if $I$ is not radical or equidimensional (cf. Ex.~\ref{ex:introduction} (ii),(iii)), and techniques to handle those problems are well known (although not computationally feasible in some cases); but there are more intricate difficulties for real algebraic sets, where the localization of the reduced coordinate ring is not regular and $X = \algrv(I)$ is still a smooth submanifold of $\R^n$ at this point (cf. Ex.~\ref{ex:introduction}~(vi)). The following examples show different kinds of behavior of real algebraic sets at points where the jacobian drops rank. \begin{exs}\label{ex:introduction} In all examples we set $\frm := \langle \bx \rangle \leq \R[\bx]$, $A=\R[\bx]/I$.
\begin{itemize} \item[(i)] The simple node $I = \langle y^2 - x^2 - x^3 \rangle \leq \R[x,y]$ shows the expected behavior. $A_{\frm}$ is not regular and $X = \algrv(I)$ is not locally a manifold at the origin. \item[(ii)] Let $I = \langle x^2,x y \rangle \leq \R[x,y]$. Then $X = \algrv(I)$ is just the $y$-axis, which is locally a manifold at the origin, although $A_{\frm}$ is not regular. The problem here is clearly that $I$ is not a radical ideal; indeed, $A_{\mathrm{red}} = A/\sqrt{(0)}$ localized at $\frm$ is regular. In theory $\sqrt{I}$ is algorithmically computable with Gr\"obner basis methods (\texttt{radical(I)} calculates the radical in \texttt{Singular}, for example). Unfortunately the computation is infeasible in many cases. But we will see that we can avoid the computation of the radical for many systems of polynomials which come from engineering problems. \item[(iii)] Let $I = \langle (z-1)xy, z(z-1) \rangle \leq \R[x,y,z]$. Then $\algrv(I)$ is the union of the $x$-axis, the $y$-axis and the plane given by $z=1$. Obviously $X$ is locally not a submanifold at the origin, but the rank of the jacobian at the origin equals $\height I = 1$. Note that $I$ is radical, but not equidimensional, and $A_{\frm}$ is not regular. In this case we need to calculate an equidimensional decomposition before applying the jacobian criterion. This is possible in general, again with Gr\"obner basis methods (\texttt{primdecGTZ(I)} calculates a primary decomposition in \texttt{Singular}), but as hard as the computation of the radical. We will see a well-known criterion to decide when $I$ is already equidimensional, which is useful for many problems in kinematics. \item[(iv)] Let $I = \langle x^2 + y^2 \rangle \leq \R[x,y,z]$. Then $X = \algrv(I)$ is the $z$-axis, which is a submanifold of $\R^3$, although the rank of the jacobian drops at any point of $X$ and $I$ is radical and equidimensional. This difficulty only appears in real geometry, since $X_{\C} = \algv(I)$ is \textbf{not} locally a complex manifold at any point of the $z$-axis. The problem is that $I \neq \algi_\R(X) = \{f \in \R[x,y,z] \mid f|_{X} \equiv 0 \} = \langle x,y \rangle$, since clearly $(\R[x,y,z]/\langle x,y \rangle)_\frm$ is a regular local ring. There are algorithms to compute the real radical $\sqrt[r]{I} = \algi_\Q(X)$ from $I \leq \Q[\bx]$ (e.g.\ \texttt{realrad(I)} computes the real radical over $\Q$ in \texttt{Singular}), but this computation is harder than that of the normal radical. Also we have in general $\sqrt[r]{I} \cdot \R[\bx] \ne \sqrt[r]{I \cdot \R[\bx]} = \algi_\R(X)$ (see example (v)), in contrast to the usual radical. If this is the case, not much can be gained by computing $\algi_{\Q} (X)$. We will present a very useful criterion by T. Y. Lam \cite{lam:real_alg} to check for an ideal $I$ whether $\algi_\R(X) = I$. \item[(v)] Let $I = \langle x^3 - 5y^3 \rangle \leq \Q[x,y]$ and $X = \algrv(I)$, which is just the line given by $x = \sqrt[3]{5}\,y$. The jacobian drops rank at the origin, but $X$ is an analytic submanifold of $\R^2$. Note that $\algi_\Q(X) = I$, but $\algi_\R(X) = \langle x - \sqrt[3]{5}\,y \rangle$. \item[(vi)] This example motivated this paper. Let $I = \langle y^3 + 2\,x^2\,y - x^4 \rangle \leq \R[x,y]$ and $X = \algrv(I)$. We will see that $A_{\frm}$ is not regular and even $\algi_\R(X) = I$, but $X = \algrv(I)$ is the analytic submanifold of $\R^2$ shown in figure~\ref{fig:curve_2xy2}. We notice that $I = \algi_\R(X)$ and $A_\frm$ not regular does not imply that $X$ is ``nonsmooth'' at the origin.
The reason here is that some analytic branches are not visible in the real picture. We will carefully investigate this case by analyzing the completion of the local ring at this point. \item[(vii)] Let $I = \langle y^3 - x^{10} \rangle \leq \R[\bx]$ and $X = \algrv(I)$. Here $A_\frm$ is not regular and $\algi_\R(X) = I$ again. But in this case $X$ is not locally an analytic submanifold at the origin, although the real picture looks very ``smooth''; this is because $X$ is a $C^3$-submanifold (but not $C^4$). It is well known that any real algebraic set which is (locally) $C^{\infty}$ is also $C^{\omega}$, so any ``nonanalytic'' point is at most ``finitely differentiable''. This example emphasizes the need for an algebraic criterion to discern between the singularities seen in the last two examples, because the real picture can be very deceiving. Criteria to identify points which are not locally topological submanifolds are beyond the scope of this article, although we will see that we can rule out this case in many situations. \end{itemize} \end{exs} In this paper we want to show strategies to effectively deal with all the problems seen in the examples when analyzing a singular point of a real algebraic set. This is of great importance in the theory of linkages \cite{muller:sing_conf}, \cite{muller:local_geometry}, when studying local kinematic properties, since the configuration space of a linkage will usually be given as a real algebraic set. For a further discussion we refer the reader to the final sections~\ref{sec:four_bar} and \ref{sec:higher_dim}, where we investigate the configuration space of a class of planar linkages with the developed techniques. We will be able to address all questions raised in \cite{piippo:planar}. The rest of the paper is structured as follows: In sections 2 and 3 we review some well-known facts from commutative algebra, real algebra and differential geometry, which will enable us to make precise the notion of manifold point and deal with examples (i)-(v). We will also put a focus on base extensions of affine algebras, which comes in very handy if one needs to extend results gained by calculations in polynomial rings over $\Q$ to polynomial rings over $\R$. In section 4 we will build the theoretical foundation for local analysis of real algebraic sets. Central to the exposition is Theorem~\ref{thm:mani_reg}, which gives an algebraic condition for manifold points and shows, together with Risler's analytic Nullstellensatz, that this is an intrinsic property. Section 5 deals with the problem that the extension of a prime ideal of $\R[\bx]$ to the ring of formal power series $\R[[\bx]]$ will not be prime in general, and that effective symbolic calculation in $\R[[\bx]]$ is not possible. Instead we will investigate the integral closure of the local ring to separate the analytic branches. The main result, Theorem~\ref{thm:normal}, goes back to Zariski~\cite{zariski:comm_alg} and was extended by Ruiz~\cite{ruiz:power_series} to a complete description of the normalization of $\CF$, where $\CF$ is the local ring $\R[[\bx]]/\left(I \cdot \R[[\bx]] \right)$. Finally, in section~6 we formulate and prove Theorem~\ref{thm:curves}, which decides the case completely for real algebraic curves. This extends results of \cite{maria:plane_curves}. For further reading regarding local properties of real algebraic sets the following authors, without whom (among many others) this paper would not have been possible, are recommended: H.
Whitney~\cite{whitney:local_prop}, \cite{whitney:tangents} for his work on (tangents of) analytic varieties; T. Lam~\cite{lam:real_alg} for his introduction to real algebra; the book of J. Ruiz~\cite{ruiz:power_series}, covering the basic theory of power series rings; G. Efroymson~\cite{efroymson:local_reality} for the fundamental work on the realness of local ring completions; R. Risler for the (analytic) real Nullstellensatz~\cite{risler:real_nullstellensatz},\cite{ruiz:power_series}; and D. O'Shea, L. Charles \cite{shea:limits_tangent} for their work on geometric Nash fibers, limits of tangent spaces and real tangent cones. \begin{figure}\label{fig:curve_2xy2} \centering \begin{tikzpicture} \begin{axis}[xmin=-1,xmax=0.5*2,ymin=0, ymax=1, xtick={-0.8,0.8}, ytick={-0.8,0.8}, unit vector ratio*=1 1 1, axis lines=center, axis on top, xlabel={$x$}, ylabel={$y$}] \addplot[very thick, blue] file {data_2xy2}; \end{axis} \end{tikzpicture}% \caption{$\algrv(y^3 + 2x^2y - x^4)$}% \end{figure} \section{Local real algebraic geometry}\label{sec:local_real} We now assume $\K = \R$ and that the singular point of $X$ is at the origin. So we have an ideal $I \leq \R[\bx]$ with $I \subset \langle \bx \rangle =: \frm$ and $A_{\frm}$ not regular, where $A = \R[\bx]/I$. As we have seen in Example~\ref{ex:introduction}~(vi), we need to investigate the extension of $I$ in the ring of convergent power series or the completion of the local ring $A_{\frm}$. The following notations will be used: \begin{definition}\label{def:list}\hfill \begin{itemize} \item[(i)] $I_{\frm} = I \R[\bx]_{\frm}, \quad \CR = \R[\bx]_\frm/I_{\frm} = A_{\frm}, \quad \frr = \frm \CR$ \item[(ii)] $I' = I \R\{ \bx \}, \quad \CO = \R\{ \bx \}/I', \quad \fro = \frm \CO$ \item[(iii)] $I'' = I \R[[ \bx ]], \quad \CF = \R[[ \bx ]]/I'', \quad \frf = \frm \CF$ \end{itemize} \end{definition}\noindent Since the ring extensions $\R[\bx]_\frm \to \R\{\bx\} \to \R[[\bx]]$ are faithfully flat, we have the following chain of local rings \[ \CR \subset \CO \subset \CF. \] We will also need the fact that $\CF$ is the $\frr$-adic completion of $\CR$: \[ \CF = \varprojlim_{k} \CR/\frr^k. \] Now we define the following ideal of $\R\{\bx\}$, which is usually called the vanishing ideal of the set germ $(X_\R,0)$ \cite{risler:real_nullstellensatz}. We can do a similar construction for $\R[[\bx]]$, but since $f(p)$ is there in general not defined for $f \in \R[[\bx]]$ and points $p \in \R^n$, we need to replace points in $\R^n$ with tuples of formal Puiseux series without constant term. See \cite[Def. IV.4]{ruiz:power_series} for this approach. \begin{definition}\label{def:hat_i} \[ \hat{I} = \left\{ f \in \R\{\bx\} \Biggm| \parbox{18em}{$ \exists \ U \ni 0$ euclidean neighborhood with $f$ \\ converging on $U$ and $f \equiv 0$ on $X_\R \cap U$ } \right\}, \quad \hat{\CO} = \R\{ \bx \}/\hat{I}. \] \end{definition} \begin{thm}\label{thm:mani_reg} The origin is a manifold point of $X$ if and only if $\hat{\CO}$ is regular. \end{thm} \begin{proof} First let the origin be a manifold point of $X$ (of dimension $d$) with parametrization \begin{align*} \psi \colon U & \to \R^n, \\ (x_1, \ldots, x_d) & \mapsto (x_1, \ldots, x_d, \phi_1(x_1, \ldots, x_d), \ldots, \phi_{n-d}(x_1, \ldots, x_d)), \end{align*} where $U$ is a euclidean neighborhood of the origin and $\psi(0) = 0$. We set \[ K := \langle x_{d+1} - \phi_1(x_1, \ldots, x_{d}), \ldots, x_{n} - \phi_{n-d}(x_1, \ldots, x_{d}) \rangle \leq \R\{\bx\} \] and claim that $K = \hat{I}$.
Clearly we have $K \subset \hat{I}$, so let $a \in \hat{I}$. Since $\psi(0) = 0$, we can compose $a$ and $\psi$ and get a converging power series \begin{equation}\label{eq:proof_germ_manifold_smooth} a(x_1, \ldots, x_d, \phi_1(x_1, \ldots, x_{d}), \ldots, \phi_{n-d}(x_1, \ldots, x_d)) = 0, \end{equation} which follows because $a \circ \psi$ is identically zero close to the origin. We now set $\psi_i := x_{d + i} - \phi_i(x_1, \ldots, x_d) \in \R\{\bx\}$ and note that $\psi_i$ is of $x_{d+i}$-order $1$. According to the Weierstrass Division Theorem \cite[3.2]{ruiz:power_series} we have a representation \[ a = q_1 \cdot \psi_1 + r, \] with $q_1 \in \R\{\bx\}$ and $r \in \crps{x_1, \ldots ,x_d, x_{d+2}, \ldots, x_{n}}$. If we iterate this process with $r$ instead of $a$, we obtain a decomposition \[ a = q_1 \cdot \psi_1 + \ldots + q_{n-d} \cdot \psi_{n-d} + r, \] with $r \in \crps{x_1, \ldots, x_d}$. Because of \eqref{eq:proof_germ_manifold_smooth} and \[ \psi_i(x_1, \ldots, x_d, \phi_1(x_1, \ldots, x_d), \ldots, \phi_{n-d}(x_1, \ldots, x_d)) = \phi_i(x_1, \ldots, x_d) - \phi_i(x_1, \ldots, x_d) = 0, \] we have \[ r(x_1, \ldots, x_d) = 0, \] so $r = 0$ and therefore $a \in K$. It now remains to check that $\crps{x_1, \ldots, x_n}/K$ is a regular local ring. We will use Nagata's Jacobian Criterion~\cite[4.3]{ruiz:power_series}. With $\frm'$ the maximal ideal of $\R\{\bx\}$, it is enough to show that $J_{n-d}(K) \not\subset \frm'$ and $\height(K) \leq n-d$, where $J_{n-d}(K)$ is the jacobian ideal of order $n-d$ of $K$ \cite[4.1]{ruiz:power_series}. Then $\R\{\bx\}/K$ is a regular local ring of dimension $n - (n - d) = d$. Since $K$ is generated by $n-d$ elements, $\height(K) \leq n-d$ follows easily from Krull's height theorem~\cite[11.16]{atiyah:intro_comm}. Now the jacobian determinant satisfies \[ \frac{D(\psi_1, \ldots, \psi_{n-d})}{D(x_{d+1}, \ldots, x_{n})} = 1, \] hence $J_{n-d}(K) = \R\{\bx\} \not\subset \frm'$. Now on the contrary let $\hat{\CO}$ be regular with $\dim \hat{\CO} = d$. According to Nagata's Jacobian Criterion we have $J_{n-d}(\hat{I}) \not\subset \frm'$. Since clearly $\hat{I} \subset \frm'$, there must exist $g_1, \ldots, g_{n-d} \in \hat{I}$ such that, w.l.o.g., \[ \frac{D(g_1, \ldots, g_{n-d})}{D(x_1, \ldots,x_{n-d})} \notin \frm'. \] But this means that the determinant of the first $(n-d)$ columns of the jacobian matrix of the $g_i$, evaluated at the origin, is nonzero. Let $U$ be a euclidean neighborhood of the origin such that $U$ is contained in the region of convergence of $g_i$ for all $i$. We then set \[ X' := \{\, x \in U \mid g_i(x) = 0,\ \text{for all $i$} \,\}. \] According to the analytic implicit function theorem the origin is then a manifold point of $X'$, and we only have to show that $X'$ agrees with $X$ on a neighborhood of the origin, which follows easily if we can prove $K \coloneqq \langle g_1, \ldots, g_{n-d} \rangle = \hat{I}$. By our choice of $g_1, \ldots, g_{n-d}$ we clearly have $K \subset \hat{I}$. On the other hand, since $J_{n-d}(K) \not\subset \frm'$ and $\height(K) \leq {n-d}$, we can apply Nagata's Jacobian Criterion again to see that $\R\{\bx\}/K$ is a regular local ring of dimension $d$. Then $\R\{\bx\}/K$ is in particular a domain, so $K$ is a prime ideal, and because $\R\{\bx\}$ is Cohen-Macaulay we have $\height(K) = \dim \R\{\bx\} - \dim \R\{\bx\}/K = n-d$. But $\hat{I}$ is prime as well with $\height(\hat{I}) = n-d$. Since $K \subset \hat{I}$, we have $K = \hat{I}$. This completes the proof.
\end{proof} The following theorem, due to Risler (R\"uckert for the complex case), allows the calculation of $\hat{I}$. \begin{thm}[{{Risler's Real Analytic Nullstellensatz \cite[Th\'{e}or\`{e}me~4.1]{risler:real_nullstellensatz}}}]\label{thm:nullstellensatz} \[ \hat{I} = \sqrt[r]{I'} = \left\{\, f \in \R\{ \bx \} \bigm| f^{2m} + b_1^2 + \ldots + b_k^2 \in I', \quad m,k \geq 0,\ b_i \in \R\{\bx\} \,\right\}. \] \end{thm} The next proposition collects some well-known facts on the relationship of the rings $\CR$, $\CO$, $\CF$. \begin{prop}\label{prop:compare}\hfill \begin{itemize} \item[(a)] $\CR$ reduced $\Leftrightarrow$ $\CO$ reduced $\Leftrightarrow$ $\CF$ reduced. \item[(b)] $\CR$ normal domain $\Leftrightarrow$ $\CO$ normal domain $\Leftrightarrow$ $\CF$ normal domain. \item[(c)] $\CR$ regular $\Leftrightarrow$ $\CO$ regular $\Leftrightarrow$ $\CF$ regular. \item[(d)] $I'$ real $\Leftrightarrow$ $I''$ real, i.e.\ $\CO$ real $\Leftrightarrow$ $\CF$ real. \item[(e)] $\CO/\sqrt[r]{(0)}$ is regular if and only if $\CF/\sqrt[r]{(0)}$ is regular. \item[(f)] If $\CF,\CO$ regular, then $\CF,\CO$ real. \end{itemize} \end{prop} \begin{proof}[Proof of Proposition~\ref{prop:compare}] The proofs for (a),(b),(c),(d) can be found in \cite[ch.~V,VI]{ruiz:power_series}. (e) is also an easy consequence of results in \cite{ruiz:power_series}. We will carry out a proof for completeness' sake. We only need to prove that $\sqrt[r]{I'}\, \R[[\bx]] = \sqrt[r]{I''}$; the statement then follows from one of Nagata's comparison results~\cite[Prop.~V.4.5]{ruiz:power_series}. The inclusion $\sqrt[r]{I'}\,\R[[\bx]] \subset \sqrt[r]{I''}$ is clear, so we proceed to demonstrate $\sqrt[r]{I'}\,\R[[\bx]] \supset \sqrt[r]{I''}$ by slightly adjusting the proof of Theorem V.4.2 in \cite{ruiz:power_series}. Let $f \in \sqrt[r]{I''}$, which means \[ f^{2s} + p_1^2 + \ldots + p_k^2 \in I'', \] for elements $p_1, \ldots, p_k \in \R[[\bx]]$. Consequently we have \[ (\bar{f})^{2s} + \bar{p}_1^2 + \ldots + \bar{p}_k^2 = 0 \] in $\CF$. According to M. Artin's Approximation Theorem in the form of \cite[Prop. V.4.1]{ruiz:power_series}, for every $\alpha \geq 1$ we find elements $\hat{f}, \hat{p}_1, \ldots, \hat{p}_k \in \CO$ such that \[ \hat{f}^{2s} + \hat{p}_1^2 + \ldots + \hat{p}_k^2 = 0 \] and $\bar{f} \equiv \hat{f} \bmod \frf^{\alpha}$ (recall that $\frf$ is the maximal ideal of $\CF$). Then for every $\alpha \geq 0$: \[ f \in \sqrt[r]{I'} \, \R[[\bx]] + (\frm'')^{\alpha}, \] where $\frm'' = \frm \, \R[[\bx]]$ is the maximal ideal of $\R[[\bx]]$. It follows that \[ f \in \bigcap_{\alpha} (\sqrt[r]{I'} \, \R[[\bx]] + (\frm'')^{\alpha}) = \sqrt[r]{I'}\R[[\bx]], \] since any ideal of $\R[[\bx]]$ is closed in the $\frm''$-adic topology. Now we go on to show (f). By assumption $\CF$ is a regular local ring, with real residue field $\CF/\frf \cong \CR/\frr \cong \R$. Then $\CF$ must be real according to \cite[Prop. 2.7]{lam:real_alg}, and the realness of $\CO$ follows with (d). \end{proof} With some minor modifications all the theory so far in section~\ref{sec:local_real} (except the statements about realness) would also work if we exchange $\R$ with $\C$ and Theorem~\ref{thm:nullstellensatz} with R\"uckert's analytic Nullstellensatz~\cite[Theorem~2.20, Theorem~3.7]{gunning:analytic_functions}, which states that $\hat{I} = \sqrt{I'}$ in the complex setting. From Proposition~\ref{prop:compare} we then see easily why there is usually no need in complex algebraic geometry to consider the completion of $\CR$ to answer questions about the regularity of $\hat{\CO}$.
Because then $\hat{\CO} = \CO/\sqrt{(0)} = \CO$ if $\CR$ is reduced, and $\CO$ is regular iff $\CR$ is regular. In the real case, it is not enough for $I$ to be real to imply the realness of $I'$, see Example~\ref{ex:introduction}~(vi); hence $\hat{I}$ is in general bigger than $I'$, and the nonregularity of $\CR$ does not imply the nonregularity of $\hat{\CO}$. On the other hand, if $\CR$ is regular, then $\CO$ is regular and real, hence also $\hat{\CO} = \CO$. \begin{cor}\label{cor:real_mani} Let $I''$ or $I'$ be real. Then the origin is a manifold point of $X = \algrv(I)$ if and only if the origin is nonsingular. \end{cor} \section{Normalization and Analytic Branches} In order to decompose the extended ideal $I''$ we look to the normalization of $\CF$, which can be compared to the normalization of $\CR$. In this section we assume again $\K=\R$, but now we also require $I \leq \R[\bx]$ to be a radical ideal, with minimal decomposition \[ I = \frp_1' \cap \ldots \cap \frp_k'. \] Now let w.l.o.g.\ $\frp_1', \ldots, \frp_s' \subset \frm = \langle \bx \rangle$ and $\frp_{s+1}', \ldots, \frp'_{k} \not\subset \frm$. In $\CR = (\R[\bx]/I)_{\frm}$ we consequently have the following minimal decomposition of the zero ideal: \begin{equation}\label{eq:dec_zero_r} (0) = \frp_1 \cap \ldots \cap \frp_s, \end{equation} where $\frp_i$, $i=1, \ldots, s$, is the prime ideal generated by $\frp_i'$ in $\CR$. From now on we will use the notation $\CR_i = \CR/\frp_i$, and for any reduced ring $A$ we will write $\ov{A}$ for the integral closure of $A$ in its total ring of fractions. The following lemma collects some well-known facts about the integral closure of reduced local rings. \begin{lemma}\label{prop:local_normal} We have \[ \ov{\CR} = \ov{\CR_1} \times \ldots \times \ov{\CR_s}, \] a product of semi-local normal domains. Additionally we have \[ \sqrt{\frr \ov{\CR}} = (\frn_{11} \cap \ldots \cap \frn_{1k_1}) \cap \ldots \cap (\frn_{s1} \cap \ldots \cap \frn_{sk_s}), \] where the $\frn_{ij}$ are the maximal ideals of $\ov{\CR}$ in the form \[ \frn_{ij} = \ov{\CR_1} \times \ldots \times \ov{\CR_{i-1}} \times \frn_{ij}' \times \ov{\CR_{i+1}} \times \ldots \times \ov{\CR_{s}}, \] and $\frn_{ij}'$ is one of the $k_i$ maximal ideals of $\overline{\CR_i}$. Also we have the minimal decomposition $\sqrt{\frr \overline {\CR_i}} = \frn_{i1}' \cap \ldots \cap \frn_{ik_i}'$ and \[ \ov{\CR}_{\frn_{ij}} \cong (\ov{\CR_i})_{\frn'_{ij}}. \] \end{lemma} We now want to compare the normalization of $\CF$ and the completion of $\ov{\CR}$, so we need to investigate what form $\ov{\CR}_{\frn}$ can take for $\frn \leq \ov{\CR}$ maximal. Since $\ov{\CR}_{\frn} = (\ov{\CR_i})_{\frn'}$ for some $i$ and some maximal $\frn' \leq \ov{\CR_i}$, we may assume that $\CR$ is a domain. The following exposition is taken from~\cite[VI.4]{ruiz:power_series}, to which we refer for details. Since \[ \R = \CR/\frr \subset \ov{\CR}/\frn \] is an algebraic field extension, we must have $\ov{\CR}/\frn = \C$ or $\R$. We distinguish between the following three cases: \begin{itemize} \item[(a)] $\ov{\CR}/\frn = \R$. Since $\ov{\CR}$ is finitely generated over $\CR$, we can extend a surjection $\R[\bx]_{\frm} \to \CR$ to a surjection $\R[\bx,\by]_{\langle \bx,\by \rangle} \to \ov{\CR}_{\frn}$. Hence $\ov{\CR}_{\frn} \cong \R[\bx,\by]_{\langle \bx, \by \rangle}/J$, and its formal completion $(\ov{\CR}_{\frn})^* = \R[[\bx,\by]]/J\R[[\bx,\by]]$ is the $\frn$-adic completion of $\ov{\CR}_{\frn}$. \item[(b)] $\ov{\CR}/\frn = \C$ and $\sqrt{-1} \in Q(\CR)$.
Since $\ov{\CR}$ is integrally closed, $\C \subset \ov{\CR}$. Then we get a surjection $\C[\bx,\by]_{\langle \bx, \by \rangle} \to \ov{\CR}_{\frn}$, and the formal completion $(\ov{\CR}_{\frn})^* = \C[[\bx,\by]]/J\C[[\bx,\by]]$ is the $\frn$-adic completion of $\ov{\CR}_{\frn}$. \item[(c)] $\ov{\CR}/\frn = \C$ and $\sqrt{-1} \notin Q(\CR)$. Now we need to adjoin $\sqrt{-1}$ to $\ov{\CR}$; we get a unique maximal ideal $\frn'$ in $\ov{\CR}[\sqrt{-1}]$ over $\frn$, and the formal completion $(\ov{\CR}_{\frn})^*$ is taken to be the formal completion of $(\ov{\CR}[\sqrt{-1}])_{\frn'}$ as in (b). One needs to take care though, since this is not the $\frn$-adic completion of $\ov{\CR}_{\frn}$. \end{itemize} Now we set $\CF_i := \CF/(\frp_i \CF) = \R[[\bx]]/\frp_i'\,\R[[\bx]]$, for $i=1,\ldots,s$, which is the formal completion of $\CR_i$. \begin{prop}[{{Ruiz, Zariski \cite[Prop.~VI.4.4]{ruiz:power_series}}}]\label{thm:normal} For any $i=1, \ldots, s$ \[ \ov{\CF_i} = [(\ov{\CR_i})_{\frn_{i1}}]^* \times \ldots \times [(\ov{\CR_i})_{\frn_{ik_i}}]^* \] and $[(\ov{\CR_i})_{\frn_{ij}}]^* \cong \ov{\CF_i/\frq_{ij}}$, where $\frq_{i1}, \ldots, \frq_{ik_i}$ are the associated primes of $(0)$ in $\CF_i$. Additionally \[ \ov{\CF} = \ov{\CF_1} \times \ldots \times \ov{\CF_s}. \] \end{prop} \begin{remark} The importance of Proposition~\ref{thm:normal} for us lies in the fact that $\CF$ is real if and only if $\overline{\CF}$ is real, so we can check realness on completions of local rings of normal varieties and use Theorem~\ref{thm:efroymson}, for example. \end{remark} \begin{proof} The only thing missing from the proof in \cite{ruiz:power_series} is to take into account non-domains $\CR$, so we need to check that \[ \ov{\CF} = \ov{\CF_1} \times \ldots \times \ov{\CF_s}. \] According to Chevalley's Theorem~\cite[Prop.~VI.2.1]{ruiz:power_series} we have a minimal decomposition \[ \frp_i \CF = \frq_{i1}' \cap \ldots \cap \frq_{ik_i}', \] with $\frq_{ij}'$ prime of height $\height \frp_i =: d_i$ and $\frq_{ij} = \frq_{ij}' \CF_i$. It only remains to show that \[ (0) = (\frp_1 \cap \ldots \cap \frp_s) \CF = \frq_{11}' \cap \ldots \cap \frq_{1k_1}' \cap \ldots \cap \frq_{s1}' \cap \ldots \cap \frq_{sk_s}' \] is a minimal decomposition of $(0)$ in $\CF$, because then \[ \overline{\CF} = \bigtimes_{i,j} \overline{\CF/\frq_{ij}'} = \ov{\CF_1} \times \ldots \times \ov{\CF_s}. \] Now suppose w.l.o.g.\ $\frq_{11}' \supset \bigcap_{(i,j) \ne (1,1)} \frq_{ij}'$. Then, because $\frq_{11}'$ is prime, there exists $\frq_{ij}' \subset \frq_{11}'$, where clearly $i \ne 1$. If we can show that $\frq_{ij}' \cap \CR = \frp_i$, we are done, since \eqref{eq:dec_zero_r} is a minimal decomposition. Assume $\frp_i \subsetneq \fra \coloneqq \frq'_{ij} \cap \CR$. Then, since $\fra$ is prime, $\height \fra > d_i = \height \frp_i$. Consequently, according to Chevalley's Theorem every associated prime of $\fra \CF$ is of height greater than $d_i$. Since $\fra \CF \subset \frq'_{ij}$ and $\height \frq'_{ij} = d_i$, this is a contradiction. \end{proof} \section{Real Algebraic Curves} Now we will apply the theory of the last section to singularities of real algebraic curves. Let $\dim I = 1$. Then the analysis of $\hat{\CF} = \CF/\sqrt[r]{(0)}$ will be especially satisfying, since the real radical of an associated prime $\frq$ of $I''$ will be either $\frq$ itself or the maximal ideal $\frm'' = \frm \R[[\bx]]$ of $\R[[\bx]]$: \begin{lemma}\label{lm:ideal_height_real} Let $\frq \leq \R[[\bx]]$ be a prime with $\height \frq = n-1$.
Then \[ \sqrt[r]{\frq} = \begin{cases} \frq & \text{$\frq$ real}\\ \frm'' & \text{$\frq$ not real} \end{cases} \] \end{lemma} \begin{lemma}\label{lm:local_ring_normal} Let $(A,\frm_A) \subset (B,\frm_B)$ be a finite extension of local rings. Assume $\frm_A B = \frm_B$ and \[ B/\frm_B = A/\frm_A. \] Then $A = B$. \end{lemma} \begin{proof} For any element $b \in B$ there exists $a \in A$ with $b - a \in \frm_B = \frm_A B$, since $A/\frm_A = B/\frm_B$. It follows that \[ B = A + \frm_A B. \] Since $\frm_A$ is the Jacobson radical of $A$ and $B$ is a finite $A$-module, the statement of the lemma follows from Nakayama's Lemma. \end{proof} We can now formulate the main result of this section. \begin{thm}\label{thm:curves} Let $\dim I = 1$ and $I$ radical. The origin is a manifold point of $\algrv(I)$ if and only if one of the following two conditions holds: \begin{itemize} \item[(a)] There is exactly one real maximal ideal $\frn \leq \overline{\CR}$ lying over $\frr = \frm \CR$, and $\frn$ is an isolated primary component of $\frr \overline{\CR}$. \item[(b)] None of the maximal ideals $\frn \leq \overline{\CR}$ is real. In this case the origin is an isolated point of $\algrv(I)$. \end{itemize} \end{thm} \begin{proof} First let $(0) = \frq_1 \cap \ldots \cap \frq_r$ be a primary decomposition in $\CF$. According to Chevalley's Theorem, $\CF$ is reduced~\cite[Prop.~VI.2.1]{ruiz:power_series}, hence all the $\frq_i$ are prime. Write now $\frq'_i$ for the preimage of $\frq_i$ in $\R[[\bx]]$, so that $\CF/\frq_i = \R[[\bx]]/\frq_i'$. According to Proposition~\ref{thm:normal} there exist maximal ideals $\frn_1, \ldots, \frn_r$ in $\overline{\CR}$ with \[ \overline{\CF/\frq_i} \cong (\overline{\CR}_{\frn_i})^*. \] First we will show the following statement: \begin{equation}\label{eq:st_proof_curves} \CF/\frq_i \text{ real} \Leftrightarrow \frn_i \text{ real}. \end{equation} Clearly $\overline{\CF/\frq_i}$ is real if and only if $\CF/\frq_i$ is real, since both are contained in the quotient field of $\CF/\frq_i$, so we need to show that $(\overline{\CR}_{\frn_i})^*$ is real if and only if $\frn_i$ is real. If $\frn_i$ is not real, then $\overline{\CR}/\frn_i \cong \C$, and one can see from the construction before Proposition~\ref{thm:normal} that $(\overline{\CR}_{\frn_i})^*$ will not be real (since $\C \subset (\overline{\CR}_{\frn_i})^*$). On the other hand, let $\frn_i$ be real; then $(\overline{\CR}_{\frn_i})^*$ will be the $\frn_i \overline{\CR}_{\frn_i}$-adic completion of the local ring $\overline{\CR}_{\frn_i}$. Since $\overline{\CR}$ is normal of dimension $1$, we also know that $\overline{\CR}_{\frn_i}$ is regular, according to Serre's regularity criterion $R_1$ \cite[Theorem~39]{matsumura:comm_alg}. Then $(\overline{\CR}_{\frn_i})^*$ is regular too, with residue field $\overline{\CR}/\frn_i = \R$. Now $(\overline{\CR}_{\frn_i})^*$ must be real because of \cite[Prop. 2.7]{lam:real_alg}. We consider now \begin{equation}\label{eq:rera_dec} \sqrt[r]{I''} = \sqrt[r]{\frq_1'} \cap \ldots \cap \sqrt[r]{\frq_r'}, \end{equation} where $\frq_i'$ is, as above, the preimage of $\frq_i$ in $\R[[\bx]]$. As one checks easily, $\frq_i$ is real if and only if $\frq_i'$ is real. If none of the $\frn_i$ is real, then none of the $\frq_i'$ is real, and according to Lemma~\ref{lm:ideal_height_real} we would get $\sqrt[r]{I''} = \frm''$ from \eqref{eq:rera_dec}, so that $\hat{\CF} = \R[[\bx]]/\sqrt[r]{I''} \cong \R$ is regular.
Since $\hat{I} = \sqrt[r]{I'} = \sqrt[r]{I''} \cap \R\{\bx\}$ (see the proof of Proposition~\ref{prop:compare}~(e)), we would also have $\hat{I} = \frm' = \frm \R\{\bx\}$, and one checks easily with Definition~\ref{def:hat_i} that the origin must be an isolated point of $X = \algrv(I)$. If two of the $\frn_i$ are real, then $\hat{\CF} = \R[[\bx]]/\sqrt[r]{I''}$ would not be a domain and therefore not regular. Then the origin cannot be a manifold point of $X$ according to Theorem~\ref{thm:mani_reg}. Now we investigate the case that exactly one $\frn_i$ is real; w.l.o.g.\ we choose $\frn_1$ real. Then $\sqrt[r]{I''} = \frq_1'$. We have the following commutative diagram: \begin{equation}\label{diag:curves_proof} \begin{tikzcd} \overline{\CR}_{\frn_1} \ar{rr}{\psi} & & \left (\overline{\CR}_{\frn_1} \right)^* \ar{d}{\cong}[swap]{\eta} \\ \overline{\CR} \ar{u}{l} \ar{r} & \overline{\CF} \ar{r}& \left( \overline{\CF/\frq_1}\right) \ar{u} \\ \CR \ar{u} \ar{r} & \CF \ar{r} \ar{u} & \CF/\frq_1 \ar{u}{\iota} \end{tikzcd} \end{equation} First we assume that $\frn_1$ is an isolated primary component of $\frr \overline{\CR}$. We proceed in several steps. (1) $\gamma(\frr) \overline{\CR}_{\frn_1} = \frn_1 \overline{\CR}_{\frn_1}$, where $\gamma \colon \CR \to \overline{\CR}_{\frn_1}$ denotes the composition in diagram~\eqref{diag:curves_proof}. Since $\frn_1$ is an isolated prime of $\frr \overline{\CR}$, we find a minimal primary decomposition \[ \frr \overline{\CR} = \frn_1 \cap \frs_2 \cap \ldots \cap \frs_k. \] For any $x \in \frn_1 \overline{\CR}_{\frn_1}$ we have $x = a \cdot \frac{p}{q}$ with $p,q \in \overline{\CR}$, $a \in \frn_1$ and $q \notin \frn_1$. Now choose $f_i \in \frs_i \backslash \frn_1$. Then \[ b \coloneqq a \cdot f_2 \cdots f_k \in \frr \overline{\CR} \] and \[ x = a \cdot \frac{p}{q} = b \cdot \frac{1}{f_2} \cdots \frac{1}{f_k} \cdot \frac{p}{q} \in \gamma(\frr) \overline{\CR}_{\frn_1}. \] (2) $\iota(\fro)$ generates the maximal ideal of $\overline{\CF/\frq_1}$, where $\fro$ is the maximal ideal of $\CF/\frq_1$. Since $\psi$ is the $\frn_1 \overline{\CR}_{\frn_1}$-adic completion of $\overline{\CR}_{\frn_1}$, we know by (1) that $\psi(\gamma(\frr))$ generates the maximal ideal of $(\ov{\CR}_{\frn_1})^{*}$. But $\psi(\gamma(\frr)) = \eta(\iota(\fro))$, and we conclude that $\iota(\fro)$ generates the maximal ideal of $\overline{\CF/\frq_1}$. (3) $\CF/\frq_1$ is regular. We have already seen that the residue field of $(\overline{\CR}_{\frn_1})^*$ is $\R$, hence the same is true of the residue field of $\ov{\CF/\frq_1}$. Also, we know that $\ov{\CF/\frq_1}$ is finite over $\CF/\frq_1$ \cite[Prop.~III.2.3]{ruiz:power_series}, and in (2) we have seen that the maximal ideal of $\CF/\frq_1$ generates the maximal ideal of $\overline{\CF/\frq_1}$. Now we are exactly in the situation of Lemma~\ref{lm:local_ring_normal} with $A = \CF/\frq_1$ and $B = \overline{\CF/\frq_1}$. It follows that $\ov{\CF/\frq_1} = \CF/\frq_1$. Then $\CF/\frq_1$ is a normal local ring of dimension at most $1$. With Serre's regularity criterion $R_1$ we see that $\hat{\CF} = \CF/\frq_1$ is regular, and according to Theorem~\ref{thm:mani_reg} the origin must be a manifold point of $X = \algrv(I)$. Now suppose on the contrary that $\CF/\frq_1$ is regular and $\frn_1$ is not an isolated primary component of $\frr \overline{\CR}$. Since $\CF/\frq_1$ is regular, it is a Cohen--Macaulay domain. It fulfills $S_2$ and $R_1$ and is normal by Serre's normality criterion~\cite[Theorem~39]{matsumura:comm_alg}. Therefore $\CF/\frq_1 = \overline{\CF/\frq_1} \cong (\overline{\CR}_{\frn_1})^*$.
Let $\frb$ be the ideal generated by $\gamma(\frr)$ in $\overline{\CR}_{\frn_1}$. Because diagram~\eqref{diag:curves_proof} commutes and $\iota$ is an isomorphism, we have that $\psi(\frb)$ generates the maximal ideal $\fra$ of $(\overline{\CR}_{\frn_1})^{*}$. But since $\psi$ is faithfully flat, we have \[ \frb = \psi(\frb)(\overline{\CR}_{\frn_1})^* \cap \overline{\CR}_{\frn_1} = \fra \cap \overline{\CR}_{\frn_1} = \frn_1 \overline{\CR}_{\frn_1}. \] Since $\frn_1$ is not an isolated primary component of $\frr \overline{\CR}$, there exists a primary ideal $\frs$ with $\frr \overline{\CR} \subset \frs \subsetneq \frn_1$ (remember that $\sqrt{\frr \overline{\CR}}$ is the intersection of all maximal ideals of $\overline{\CR}$). But $\frn_1 \overline{\CR}_{\frn_1} = \frb = \langle l(\frr \overline{\CR}) \rangle \subset \frs \overline{\CR}_{\frn_1}$. Therefore \begin{equation}\label{eq:proof_curve} \frs \overline{\CR}_{\frn_1} = \frn_1 \overline{\CR}_{\frn_1}. \end{equation} Now choose $r \in \frn_1 \backslash \frs$. Because of \eqref{eq:proof_curve} there exist $p,q,s \in \overline{\CR}$ with $q \notin \frn_1$, $s \in \frs$ and \[ \frac{s \, p}{q} = r. \] Thus there is $q' \notin \frn_1$ with $q'\,r \, q = q'\,s \, p \in \frs$, which is primary. Because $r \notin \frs$, we must have $(q\,q')^k \in \frs \subset \frn_1$ for some $k$. But then $q \, q' \in \frn_1$, a contradiction. This completes the proof. \end{proof} \begin{ex} We can test the conditions of Theorem~\ref{thm:curves} with any CAS in which a normalization algorithm for polynomial rings is implemented. Consider the following run in \texttt{Singular} to test whether the origin is a manifold point of $X = \algrv(y^3 + 2yx^2 - x^4)$: \begin{lstlisting}[columns=fullflexible, basicstyle=\footnotesize, language=bash] > ideal I = y^3 + 2*y*x^2 - x^4; > def nor = normal(I); > def S = nor[1][1]; > setring S; > ideal M = norid + ideal(x,y); > primdecGTZ(M); [1]: [1]: _[1]=T(2) _[2]=y _[3]=x _[4]=-T(2)^2+T(1)-2 [2]: -- same [2]: [1]: _[1]=T(2)^2+2 _[2]=y _[3]=x _[4]=-T(2)^2+T(1)-2 [2]: -- same \end{lstlisting} \end{ex} With $A = \R[x,y]/\langle y^3 + 2yx^2 - x^4 \rangle$, we see that $\frm \overline{A} = \frn_1' \cap \frn_2'$, where $\frn_1'$ is real and $\frn_2'$ is not. It follows easily that there is exactly one real maximal ideal $\frn_1$ lying over $\frr$, and that $\frn_1$ is an isolated prime of $\frr \overline{\CR}$. From Theorem~\ref{thm:curves} we deduce that the origin is a manifold point of $X$. \section{Algebraic Preliminaries}\label{sec:alg_prelim} In this section let $\K$ be a field with $\Q \subset \K \subset \R$, let $f_1, \ldots, f_n$ be polynomials in $\K[\bx]$, where $\bx = (x_1, \ldots, x_n)$, and let $I = \langle f_1, \ldots, f_n \rangle \leq \K[\bx]$ be the ideal generated by the $f_i$. We set $A = \K[\bx]/I$ and consider two sets associated to $A$: \begin{align*} X_\C & := \{\, x \in \C^n \mid f(x) = 0, \text{ for all $f \in I$} \,\} = \algv(I), \\ X & := \{\, x \in \R^n \mid f(x) = 0, \text{ for all $f \in I$} \,\} = \algrv(I). \end{align*} Sometimes we will call $X$ the \defm{real picture} of $X_\C$. Since we can usually only perform symbolic computations over the rational numbers, we need to investigate base changes of $A$. For any extension field $\K \subset \K'$ and any ideal $J \leq A$ we set \begin{align*} I_{\K'} & := \K' \otimes_\K I = I \cdot \K'[\bx], \quad A_{\K'} := \K' \otimes_\K A = \K'[\bx]/ I_{\K'} \\ J_{\K'} & := \K' \otimes_\K J = ( \hat{J} \cdot \K'[\bx]) / I_{\K'}, \quad \text{where $\hat{J} \leq \K[\bx]$ with $\hat{J}/I = J$}.
\end{align*} If $\K' = \C$, we call $A_\C$, $I_{\C}$ or $J_{\C}$ the complexification of $A$, $I$ or $J$, respectively. Finally, for any $p = (p_1, \ldots, p_n) \in \C^n$ we define the maximal ideal \[ \frm_p = \langle x_1 - p_1, \ldots, x_n - p_n \rangle \subset \C[\bx]. \] \begin{definition} The \defm{singular locus} of $A$ is the set of all prime ideals $\frp \in \spec A$ such that $A_{\frp}$ is not regular. A point $p \in X_{\C}$ is called a \defm{singularity} of $X_{\C}$ if $((A_{\C})_{\mathrm{red}})_{\frm_p}$ is not regular, i.e. if $\frm_p$ is in the singular locus of $(A_{\C})_{\mathrm{red}}$. \end{definition} \begin{remark} $(A_{\C})_{\mathrm{red}}$ denotes the reduction $A_{\C}/\sqrt{(0)}$, i.e. $A_\C$ without nilpotents. The stacking of subscripts in $((A_{\C})_{\mathrm{red}})_{\frm_p}$ is admittedly cumbersome, but we will see in Proposition~\ref{prop:base_change} that there is a certain freedom in the choice of the coefficient field, so we can get rid of the complexification and/or the reduction if $I$ is radical and/or $\frm_p \leq A$. \end{remark} \subsection{Base Change} We review some facts from commutative algebra regarding extensions of the coefficient field. \begin{prop}[Base Change]\label{prop:base_change} Let $\K'$ be any field extension of $\K$, where $\mathrm{char}(\K) = 0$. Then \begin{itemize} \item[(i)] $I_{\K'} \cap \K[\bx] = I$. \item[(ii)] $\height I_{\K'} = \height I$, $\dim A_{\K'} = \dim A$. \item[(iii)] $\sqrt{I_{\K'}} = \sqrt{I} \, \K'[\bx]$. \item[(iv)] Let $\frp \leq A$ be prime. Then $A_{\frp}$ is regular iff $(A_{\K'})_{\frP}$ is regular for one, and then for all, associated primes $\frP$ of $\frp_{\K'}$. \end{itemize} \end{prop} \begin{remark} Since we require $\mathrm{char}(\K) = 0$, $\K$ is a perfect field and therefore $\K'$ is separable over $\K$. This means that every finitely generated subextension is separably generated over $\K$ (note that $\K \subset \K'$ need not be algebraic); compare \cite[A1.2]{eisenbud:comm_alg}. Whereas (i) and (ii) hold for any field extension, (iii) and (iv) are in general false if $\K \subset \K'$ is not separable. \end{remark} \begin{proof} (i) and (ii) follow because $\K[\bx] \subset \K'[\bx]$ is a faithfully flat ring extension. (iii) is a consequence of the fact that any reduced $\K$-algebra is geometrically reduced~\cite[Lemma~10.42.6, Lemma~10.44.6]{stacks}. We will show (iv) with the general Jacobian criterion \cite[Thm. 5.7.1]{greuel:singular_commutative}, since there appears to be no reference in the usual literature on commutative algebra. First choose any associated prime $\frP$ of $\frp_{\K'}$ and let $\hat{\frp}$, $\hat{\frP}$ denote the preimages of $\frp$ and $\frP$ in $\K[\bx]$ and $\K'[\bx]$, respectively. Now write $K$ for the quotient field of $\K[\bx]/\hat{\frp}$ and $K'$ for the quotient field of $\K'[\bx]/\hat{\frP}$. Since $\hat{\frP} \cap \K[\bx] = \hat{\frp}$ \cite[VII Theorem 36]{zariski:comm_alg}, $K$ is clearly a subfield of $K'$. For any $K$-vector space $V$ we then have $\dim_K V = \dim_{K'} K' \otimes_K V$, since the tensor product commutes with direct sums. Consequently \[ \rank \left[ \frac{\partial f_i}{\partial x_j} \bmod \hat{\frp} \right]_{i,j} = \rank \left[ \frac{\partial f_i}{\partial x_j} \bmod \hat{\frP} \right]_{i,j} =: h, \] where $\langle f_1, \ldots, f_n \rangle = I$, as stated in the beginning of Section~\ref{sec:alg_prelim}.
Now assume $A_{\frp}$ is a regular local ring and choose an associated prime $\frq$ of $I$ with $\frq \subset \hat{\frp}$ (note that there can be only one prime with this property, since otherwise $A_{\frp}$ would not be regular). Then we conclude $\height \frq = h$ from the Jacobian criterion. Now any associated prime of $\frq_{\K'}$ has height $h$ as well \cite[VII Theorem 36]{zariski:comm_alg}, and one of them is contained in $\hat{\frP}$. But then $(A_{\K'})_{\frP}$ is regular according to the general Jacobian criterion. Conversely, assume $(A_{\K'})_{\frP}$ is regular. Then there exists an associated prime $\frQ$ of $I_{\K'}$ with $\frQ \subset \hat{\frP}$ and $\height \frQ = h$. Now since $\frQ$ is associated to $I_{\K'}$, it is associated to $\frr_{\K'}$ for a primary ideal $\frr \leq \K[\bx]$ which is part of a primary decomposition of $I$ (use $(J_1 \cap J_2)\, \K'[\bx] = J_1 \K'[\bx] \cap J_2 \K'[\bx]$ for ideals $J_1,J_2 \leq \K[\bx]$, and $\frQ = (I_{\K'} : \langle b \rangle)$ for some $b \in \K'[\bx]$). So $\frq := \sqrt{\frr}$ is a prime ideal associated to $I$. Now \[ \frr = \frr_{\K'} \cap \K[\bx] \subset \frQ \cap \K[\bx] \subset \hat{\frP} \cap \K[\bx] = \hat{\frp}. \] But then $\frq = \sqrt{\frr} \subset \hat{\frp}$. Also $h = \height \frQ \geq \height \frq$. Consequently $A_{\frp}$ is regular according to the general Jacobian criterion. \end{proof} \subsection{Real Algebra} We review some facts from real algebra. Most of them can be found in \cite{lam:real_alg} or \cite{bochnak:real_alg_geom}. \begin{definition} Let $B$ be any commutative ring and $I \leq B$ an ideal. $B$ is called (formally) \defm{real} iff any equation \[ b_1^2 + \ldots + b_k^2 = 0, \quad k \geq 1, \] implies $b_1 = \ldots = b_k = 0$. $I$ is called \defm{real} if $B/I$ is real. Also we define the \defm{real radical} \[ \sqrt[r]{I} = \{\, x \in B \mid x^{2r} + b_1^2 + \ldots + b_k^2 \in I, \text{ for some $r \geq 1$, $k \geq 0$, $b_i \in B$} \,\}, \] which is the smallest real ideal containing $I$, or all of $B$ if there is no real ideal between $I$ and $B$; cf.~\cite{bochnak:real_alg_geom}. Therefore $I$ is real if and only if $\sqrt[r]{I} = I$. \end{definition} The analogue of Hilbert's Nullstellensatz in real algebraic geometry is the following. \begin{prop}[Risler's Real Nullstellensatz \cite{lam:real_alg}]\label{prop:real_null} Let $I \leq \K[\bx]$ be any ideal. Then \[ \algi_{\K}(\algrv(I)) = \sqrt[r]{I}. \] \end{prop} \begin{exs}\label{ex:real}\hfill \begin{itemize} \item[(i)] $\C$ is clearly not real, since $1^2 + i^2 = 0$, but $\Q$ and $\R$ are. Also, any domain $B$ is real iff its field of fractions $Q(B)$ is real, which is the case iff $Q(B)$ can be ordered. \item[(ii)] Consider the ideal $I = \langle x^2 + y^2\rangle \leq \R[x,y,z]$ from Ex.~\ref{ex:introduction}~(iv). Then $I$ is not real, since $x^2 + y^2 \in I$ but $x,y \notin I$. We see easily from the definition that $x,y \in \sqrt[r]{I}$, and from the real Nullstellensatz~(see Proposition~\ref{prop:real_null}) it follows that $1 \notin \sqrt[r]I$. Hence $\sqrt[r]{I} = \langle x,y \rangle$. \item[(iii)] Let $I = \langle x^3 - 5\,y^3 \rangle \leq \Q[x,y]$ from Ex.~\ref{ex:introduction}~(v). Then $I$ is prime in $\Q[x,y]$. Since there exist points $p \in \algrv(I)$ with $(\R[x,y]/I_\R)_{\frm_p}$ regular, $I$ must be real in $\Q[x,y]$; see remark (i) after Proposition~\ref{prop:simple_point}. $I_\R$ is not real, however, since $\sqrt[r]{I_\R} = \algi_{\R}(\algrv(I_\R)) = \langle x - \sqrt[3]{5}\,y \rangle$.
This is different for the standard radical; see Proposition~\ref{prop:base_change}. \item[(iv)] $f(x,y) = y^3 + 2\,x^2y - x^4$ from Ex.~\ref{ex:introduction}~(vi) is an irreducible polynomial in $\R[\bx]$, and for any $x_0 \ne 0$ there exists a real solution $y_0 \in \R$ of $f(x_0,y) = 0$, since this is a polynomial of degree $3$ in $y$. Also, the local ring at $(x_0,y_0)$ is regular by the Jacobian criterion, hence $I = \langle f \rangle$ is a real ideal of $\R[\bx]$ according to~Proposition~\ref{prop:simple_point}. \end{itemize} \end{exs} \begin{prop}[Simple Point Criterion \cite{lam:real_alg}]\label{prop:simple_point} Let $\K=\R$ and $I \leq \R[\bx]$. Then $I$ is real if and only if $I$ is radical and for every associated prime $\frp$ of $I$ there exists $x \in \algrv(\frp)$ with $A_{\frm_x}$ regular. \end{prop} \begin{remarks}\hfill \begin{itemize} \item[(i)] We can easily modify the proof in \cite{lam:real_alg} to show the following (one-sided) generalization for $I \leq \K[\bx]$: assume $I$ is radical and for every associated prime $\frp$ of $I$ there exists $x \in \algrv(\frp)$ with $(A_{\R})_{\frm_x}$ regular; then $I$ is real. \item[(ii)] There exist algorithms to compute the real radical of an ideal $J \leq \Q[\bx]$ (e.g. \texttt{realrad} in \texttt{Singular}), but to the author's knowledge all implemented algorithms so far only compute over $\Q$, since there is an ambiguity in the orderings of field extensions of $\Q$ (in \texttt{Singular} we have \texttt{realrad(x\^{}3 - 5y\^{}3) = x\^{}3 - 5y\^{}3}). \end{itemize} \end{remarks} \section{Analytic Preliminaries} In the following we let $\K = \R$ or $\C$. Any open set $U \subset \K^n$ is meant to be Euclidean open. A function $f$ defined on an open set $U \subset \K^n$ is called analytic at $p \in U$ (or holomorphic for $\K = \C$) if \[ f(z) = \sum c_{i_1 \ldots i_n} \, (z_{1} - p_1)^{i_1} \cdots (z_{n} - p_n)^{i_n} \] in a neighborhood of $p$. A $d$-dimensional smooth (analytic, complex) \defm{submanifold} of $\K^n$ is a set $X \subset \K^n$ such that for every $p$ in $X$ there exists an open set $U \subset \K^n$ and a $C^\infty$-diffeomorphism ($C^{\omega}$-diffeomorphism, biholomorphism) $\phi \colon U \to V$ to an open set $V \subset \K^n$, with \[ X \cap U = \{ x \in U \mid \phi_{d+1}(x) = \ldots = \phi_{n}(x) = 0 \, \} . \] A set $X \subset \K^n$ with a point $p \in X$ is locally at $p$ \defm{the graph} of an analytic (smooth, holomorphic) mapping (in the first $d$ coordinates) if there exists an open neighborhood $U$ of $p$ and an analytic (smooth, holomorphic) mapping $\psi \colon \rho(U) \to \K^{n-d}$ such that \[ X \cap U = \{\, (y, \psi(y)) \mid y \in \rho(U) \,\}, \] where $\rho \colon \K^n \to \K^d$ is the projection to the first $d$ coordinates. Note that one has to check that this definition is local; we leave this to the reader. \begin{prop}\label{prop:manifold_point} Let $X \subset \K^n$ be any set and $p \in X$. Then the following conditions are equivalent: \begin{itemize} \item[(a)] There is an open neighborhood $U$ of $p$ such that $X \cap U$ is an analytic (smooth, holomorphic) submanifold of $\K^n$. \item[(b)] There exists a permutation $\pi \colon \K^n \to \K^n$ of coordinates such that \[ \pi(X) = \{\, \pi(x) \mid x \in X \,\} \] is locally the graph of an analytic (smooth, holomorphic) mapping at $\pi(p)$. \item [(c)] For a generic choice of $A \in \GL(n,\K)$, $A(X)$ is locally the graph of an analytic (smooth, holomorphic) mapping at $Ap$.
\end{itemize} \end{prop} \begin{definition} A point $x$ of a set $X \subset \K^n$ is an analytic (smooth, holomorphic) \defm{manifold point} of $X$ if any of the equivalent conditions of Proposition~\ref{prop:manifold_point} is fulfilled. \end{definition} Any smooth mapping parameterizing a real algebraic set will be a smooth semi-algebraic mapping whose component functions are known to be Nash functions~\cite[2.9.3]{bochnak:real_alg_geom} and in particular analytic. We get the following proposition: \begin{prop}\label{prop:alg_ana_smooth} Let $\K = \R$ and let $X \subset \R^n$ be a real algebraic set with $p \in X$. Then $p$ is an analytic manifold point of $X$ if and only if $p$ is a smooth manifold point of $X$. \end{prop} In light of Proposition~\ref{prop:alg_ana_smooth} it is enough to work with analytic manifold points when one considers algebraic subsets of $\R^n$. From now on, manifold point means analytic/holomorphic manifold point.
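For instance, for the curve $X = \algrv(y^3 + 2yx^2 - x^4)$ considered above, the cubic $y \mapsto y^3 + 2x^2y - x^4$ is strictly increasing for every fixed $x \neq 0$, so near the origin $X$ is the graph of the unique real root $y(x)$, with $y(x) \approx x^2/2$ for small $x$. The following lines give a quick numerical illustration of this graph description (a minimal \texttt{Python} sketch; it is meant purely as an illustration and is not part of the algebraic arguments): \begin{lstlisting}[columns=fullflexible, basicstyle=\footnotesize, language=Python]
import numpy as np

def real_roots(x):
    # real solutions y of y^3 + 2 x^2 y - x^4 = 0
    roots = np.roots([1.0, 0.0, 2.0 * x**2, -x**4])
    return roots[np.abs(roots.imag) < 1e-10].real

for x in [0.5, 0.1, 0.01]:
    print(x, real_roots(x), x**2 / 2.0)  # exactly one real root, approaching x^2/2
\end{lstlisting}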
1,116,691,501,027
arxiv
\section{Introduction} A few decades ago, a new topic in meson physics drew the attention of intermediate energy nuclear physicists. This was due to the finding that the interaction between the eta meson ($\eta$) and a nucleon is strongly attractive \cite{bhalerao} and that this interaction may generate sufficient attraction to give rise to an exotic bound state (also referred to as ``quasibound'' since it decays within a short time) when put in the nuclear environment. The prediction of the existence of such eta-mesic nuclei initiated considerable effort on both the experimental and the theoretical front \cite{otherreviews, ourreview}. Due to the lack of eta beams (as the eta meson is extremely short lived), experiments were performed in which the $\eta$ was produced in the final state with protons and photons incident on nuclei. However, apart from two controversial experiments \cite{mainz}, there has been no definite evidence for the existence of these states. Meanwhile, the interest has also shifted from $\eta$- to $\eta^{\prime}$-mesic nuclei \cite{etaprime}. Nevertheless, the WASA group \cite{wasapapers} is still active in the search for eta-mesic states in light nuclei (see also \cite{lightnucl} for theoretical works on eta-mesic helium nuclei). Many a time in physics, an experimental finding is not a direct measurement but rather a result deduced from the analysis of experimental data using theoretical inputs. For example, nuclear radii are not ``measured'' but rather extracted \cite{mccarthy} using theoretical relations involving electromagnetic form factors of nuclei, which are deduced from data on electron-nucleus scattering. The experimental searches for eta-mesic nuclei involve certain assumptions and theoretical inputs too. One of the (sufficiently justified) assumptions is that the interaction of the $\eta$ meson with the nucleus proceeds through the formation of the S11 N$^*$(1535) resonance. Hence, analyses of an anticipated eta-mesic nucleus model the eta-nucleon interaction to proceed via the formation of an N$^*$(1535) resonance which repeatedly decays, regenerates and propagates within the nucleus until it eventually decays into a free meson and a nucleon. The search for an $^4$He-$\eta$ bound state, which for example involves the analysis of the $d \,d \,\to \, ^3$He $\, N \pi$ reaction data, is performed by assuming that the reaction proceeds as follows \cite{magdathesis}: $d \,d \,\to (^4$He-$\eta)_{bound} \, \to \, (^3$He-N$^*) \,\to$ $^3$He $\, N \pi$. Thus it becomes necessary to incorporate the static properties and motion of the N$^*$ resonance inside the nucleus. One essential ingredient in these analyses is the relative momentum distribution of N$^*$-$^3$He inside the $^4$He nucleus (which contains an N$^*$ in place of one proton or neutron). This distribution is necessary to establish the detector system acceptance for the registration of the $d \,d \, \to \, (^3$He-N$^*) \,\to$ $^3$He $\, N \pi$ reaction and to determine the data selection criteria \cite{wasapapers}. However, since the knowledge of the N$^*$ interaction with nucleons is not sufficient (see, however, the discussion in the next section), it is common to use the momentum distribution of a nucleon inside the nucleus rather than that of the resonance. In fact, even though the momentum distributions inside nuclei provide information which is complementary to that obtained from electromagnetic form factors, much less experimental information is available on the former, even for normal nuclei.
In the present work, a model for the evaluation of the momentum distribution of an N$^*$ inside a nucleus is presented. In a recent work \cite{actaphysb}, the possibility of the existence of broad N$^*$-nucleus (quasi)bound states was proposed using some available sets of coupling constants for the N N$^*$ $\to$ N N$^*$ interaction. Since a few bound states in the N$^*$-$^3$He and N$^*$-$^{24}$Mg systems were indeed predicted, in this work we use these binding energies (as well as some others obtained by varying the coupling constants) to evaluate the momentum distribution of the N$^*$ resonance in these nuclei. In the next section, we shall briefly recall the formalism used in \cite{actaphysb} and proceed further to describe the evaluation of the momentum distributions. An interesting outcome of these investigations is that the momentum distribution of an N$^*$ resonance inside a nucleus is narrower than that of a nucleon inside a nucleus. This fact could indeed be of significant importance for the analyses done in connection with the searches for eta-mesic nuclei. \section{Model for the N$^*$-nucleus potential} Though the existence of a bound state of a baryon resonance and a nucleus is by itself an exotic idea, it has indeed been explored in connection with the $\Delta$ (spin-isospin 3/2) resonance \cite{deltas} in the past. In \cite{dillig}, the author calculated the momentum distribution of such a resonance too. As compared to the $\Delta$, the case of the N$^*$(1535) resonance is relatively simpler. It is a spin 1/2 (negative parity) S11 resonance which decays dominantly into a nucleon and a pion or eta meson. Hence, we shall use a one-meson-exchange N N$^* \to$ N N$^*$ interaction with the exchange of a $\pi$ and an $\eta$ meson. The N$^*$-nucleus potential is then obtained by folding the elementary N N$^*$ interaction with a nuclear density (see some remarks regarding the validity of the folding model in this work, above Eq.(\ref{potn})). We shall retain only the scalar part of the interaction. Since the N$^*$(1535) is a negative parity baryon, the spin dependent terms in the one-pion and one-eta exchange diagrams are indeed suppressed as compared to the leading scalar terms. As for the $\pi$NN$^*$ and $\eta$NN$^*$ coupling constants, a range of values appears in the literature \cite{couplings, roebigaver, vetmoal, ansagh, kanchan, osetgar, carras}. In the first reference in \cite{roebigaver}, for example, the cross sections for photoproduction of $\eta$ mesons from heavy nuclei were measured and compared with models of the quasifree $A(\gamma,p)X$ reaction. The authors adjusted the coupling constants from an Effective Lagrangian Approach (ELA) in \cite{carras} to reproduce the $p(\gamma,\eta)p$ and $d(\gamma,\eta)np$ data. With $g_{\pi N N^*}$ = 0.699 and $g_{\eta N N^*}$ = 2.005, the experimental $\eta$ photoproduction cross sections on complex nuclei were reproduced within the model of \cite{carras}. In the two references in \cite{vetmoal}, the authors found $g_{\pi N N^*}$ = 0.8 and $g_{\eta N N^*}$ = 2.22 while comparing calculations within a one-boson-exchange model with the $ N N \to N N \eta$ and $\pi^- p \to \eta n$ data. Somewhat bigger values of the ${\pi N N^*}$ coupling constant have been found in more recent years, with Ref. \cite{ansagh}, for example, reporting $g_{\pi N N^*}$ = 1.09 by comparing calculations within a chiral constituent quark model with the experimental data on the partial decay width of the S11(1535) resonance.
Mixing pseudoscalar meson-baryon with vector meson-baryon states in a coupled channels scheme with $\pi N$, $\eta N$, $K \Lambda$, $K \Sigma$, $\rho N$ and $\pi \Delta$, the coupling constants $g_{\pi N N^*}$ = 1.05 and $g_{\eta N N^*}$ = 1.6 were obtained in \cite{osetgar}. In a study of nonstrange meson-baryon systems, where the N$^*$(1535) was found to be generated as a result of the coupled channel dynamics of vector meson-baryon and pseudoscalar-baryon systems, the authors of \cite{kanchan} obtain $g_{\pi N N^*}$ = 0.95 and $g_{\eta N N^*}$ = 1.77. We shall present results with some of the sets of coupling constants mentioned above. The constants and the binding energies of possible N$^*$-$^3$He states are listed in Table I. As compared to the $\pi$N N$^*$ and $\eta$N N$^*$ couplings, the $\pi$N$^*$N$^*$ and $\eta$N$^*$N$^*$ couplings are even less well known. In view of the above uncertainties, and also the fact that the present work is aimed at finding out how much the N$^*$ momentum distribution in a nucleus differs from that of a nucleon, we do not attempt a more sophisticated calculation. \subsection{Elementary N N$^*$ interaction} The elementary interaction is considered to proceed by the exchange of a pion and an eta meson, as shown in Fig. 1. We consider an N$^*$ which is neutral. The calculation for a positively charged N$^*$ can be repeated in a similar way. \begin{figure}[h] \begin{center} \includegraphics[width=12cm,height=5cm]{figure1.eps} \caption{\label{fig:eps1} Elementary N N$^*$ $\to$ N N$^*$ processes considered in the interaction of the N$^*$ with a nucleus.} \end{center} \end{figure} Diagrams involving the N$^*$N$^*\,\pi$ or N$^*$N$^* \,\eta$ couplings, which are hardly known, will not be considered. Apart from this fact, for such diagrams the potential turns out to be spin dependent (and hence also suppressed as compared to the leading term in the potential of Fig. 1). The $\pi$NN$^*$ and $\eta$NN$^*$ couplings (with N$^*$(1535,1/2$^-$)) are given by the following interaction Hamiltonians \cite{osetetaNN}: \begin{eqnarray}\label{hamil} \delta H_{\pi N N^*} = g_{\pi N N^*} \bar{\Psi}_{N^*} {\vec \tau} \Psi_N \cdot {\vec \Phi_{\pi}} + {\rm h.c.}\\ \nonumber \delta H_{\eta N N^*} = g_{\eta N N^*} \bar{\Psi}_{N^*} \Psi_N \, \Phi_{\eta} + {\rm h.c.} \end{eqnarray} Let us consider the diagram for the N$^*$ n $\to$ n N$^*$ process in Fig. 1 and use the standard Feynman diagram rules with the non-relativistic approximation for the spinors, \begin{equation}\label{spinoreq3} u_i =\sqrt{2m_i}\left(\begin{array}{c} w_i\\ {\vec{\sigma}_i \cdot \vec{p}_i \over 2m_ic}\, w_i \end{array}\right) \, , \end{equation} to write the amplitude as \begin{equation} {g_{xNN^*}^2 \bar{u}_{N^*}(\vec{p}^{\, \prime}) \, u_n(\vec{p}) \, \bar{u}_n(-\vec{p}^{\, \prime})\, u_{N^*}(-\vec{p}) \over q^2 - m_x^2}\, , \end{equation} where $x = \pi$ or $\eta$ and $q^2= \omega^2 - \vec{q}^2$ is the four momentum squared carried by the exchanged meson ($q = p^{\prime} - p$ as shown in the figure). Here, for example, \begin{equation} \bar{u}_n(-\vec{p}^{\, \prime})\, u_{N^*}(-\vec{p}) = N \, \biggl( 1 \, -\, {\vec{\sigma}_n \cdot \vec{p}^{\,\prime} \, \vec{\sigma}_{N^*} \cdot \vec{p} \over 4 m_N m_{N^*} c^2} \biggr ), \end{equation} and we drop the second term in the brackets, which is spin dependent as well as $1/c^2$ suppressed.
The potential in momentum space obtained from the above amplitude is given as \begin{equation}\label{pot1} v_x(q) = {g^2_{xNN^*} \over q^2 - m_x^2} \, \biggl ({\Lambda^2_x - m_x^2 \over \Lambda_x^2 - q^2} \biggr )^2\, , \end{equation} where the last term in brackets has been introduced to take into account the off-shellness of the exchanged meson. The four momentum transfer squared, $q^2 = \omega^2 - \vec{q}^2$, in the present calculation is approximated simply as $q^2 \simeq - \vec{q}^2$. Since the mass of the N$^*$ is much bigger than that of the nucleon, the neglect of the energy transfer, $\omega$, in the elastic N N$^*\, \to$ N N$^*$ process is as such not well justified. However, we do not expect the relative momentum distribution of the N$^*$ in the nucleus to depend strongly on the mass of the N$^*$ (an expectation which will be verified numerically later). We thus proceed further without a non-zero $\omega$, which would give rise to poles in (\ref{pot1}) and make the calculation of the N$^*$-nucleus potential a formidable task. The potential in (\ref{pot1}) is Fourier transformed to obtain the potential in $r$-space. The Fourier transform of (\ref{pot1}) can be calculated analytically and we get \begin{equation}\label{potelement} v_x(r) = {g^2_{xNN^*} \over 4 \pi} \,\biggl [ {1\over r} \biggl ( e^{-\Lambda_x r} - e^{-m_x r} \biggr ) + {\Lambda_x^2 - m_x^2 \over 2 \Lambda_x} \, e^{-\Lambda_x r} \biggr ]\, . \end{equation} In order to evaluate the above potential, we need to know the coupling constants at the $\pi$NN$^*$ and $\eta$NN$^*$ vertices. One can find a range of values in the literature, as discussed above. In Fig. 2, we see the sensitivity of these potentials to the use of different sets of parameters. Whereas the first two sets are shown in order to display the sensitivity to the values of the cut-off parameters, the next set is the one which gives the highest binding of the N$^*$ in nuclei in this work. It gives rise to one bound N$^*$-$^3$He state at -4.78 MeV and three bound N$^*$-$^{24}$Mg states at -50.3, -22.5 and -3.25 MeV. Using this as well as the other sets listed in Table I, we perform an exploratory study of the momentum distributions of the N$^*$. \begin{figure}[h] \begin{center} \includegraphics[width=9cm,height=11cm]{figure2.eps} \caption{\label{fig:eps2} Elementary potential as given in Eq.(\ref{potelement}). Note that whereas $\pi + \eta$ exchange contributes to the n N$^* \to$ n N$^*$ potential, only the $\pi$ contributes to the p N$^* \to$ p N$^*$ potential in N$^*$-$^3$He.} \end{center} \end{figure} \subsection{N$^*$-nucleus potentials} Once the elementary potential has been defined, the folding model with \begin{equation} V(R) = \int \, d^3r\, \rho(r) \, v(|\vec{r} - \vec{R}|) \end{equation} is used to construct the N$^*$-nucleus potential $V(R)$, which is given by \begin{eqnarray}\label{nuclpot} V(R) &=& V_p(R) + V_n(R) \nonumber \\ &=& Z \,\int \, d^3r\, \rho_p(r) \, v_p(|\vec{r} - \vec{R}|) \, +\, N \,\int \, d^3r\, \rho_n(r) \, v_n(|\vec{r} - \vec{R}|) \,, \end{eqnarray} where $Z$ and $N$ are the numbers of protons and neutrons, $v_n(r) = v_{\pi^0}(r) + v_{\eta}(r)$, and, due to the isospin factor appearing in the $\pi^-$ exchange diagram (see Fig. 1 and Eq.(\ref{hamil})), $v_p(r) = v_{\pi^-}(r) \vec{\tau}_1 \cdot \vec{\tau}_2$. Note that in the case of $^3$He, with $Z = 2$ and due to the isospin factor in $v_p(r)$, the contribution of $V_p(R)$ to the total $V(R)$ is much larger than that of $V_n(R)$.
Since $v_p(r)$ (and hence $V_p(R)$) involves only the pion exchange diagram (Fig. 1[b]), the dominant contribution to the N$^*$-nucleus potential comes from pion exchange. Since the pion exchange potential is fairly long ranged, the folding model chosen in the present work, though not ideal, seems acceptable. After performing the angle integration, the above integral reduces, for example, to \begin{eqnarray}\label{potn} V_n(R) = {-2 \pi A \over R} \, \int \, \biggl \{ {e^{-m_x (|r - R|)} - e^{-m_x (r + R)} \over m_x} \, - \, {e^{-\Lambda_x (|r - R|)} - e^{-\Lambda_x (r + R)} \over \Lambda_x} \nonumber \\ + B \biggl [ \, \biggl({r+R \over \Lambda_x} +{1 \over \Lambda_x^2} \biggr ) \,e^{-\Lambda_x(r+R)}\, -\, \biggl ( {|r-R| \over \Lambda_x} + {1 \over \Lambda_x^2} \biggr ) \,e^ {-\Lambda_x |r - R|}\, \biggr] \,\biggr \}\, r \, dr\, \rho_n(r), \end{eqnarray} where $A = g^2_{xNN^*}/4\pi$ and $B= (\Lambda_x^2 - m_x^2)/2\Lambda_x$. In the case of the $^3$He nucleus, the majority of the information available in the literature concerns the charge density distribution of $^3$He obtained from electron scattering. The root mean square radius, $r_{ch}^{3He}$ = 1.88 $\pm$ 0.05 fm, obtained in \cite{mccarthy} from the $^{3}$He charge form factor, is a bit smaller than the value of 1.959 $\pm$ 0.03 fm in \cite{amroun}. Ref. \cite{amroun} also provides the charge form factor of $^3$H, with $r_{ch}^{3H}$ = 1.755 $\pm$ 0.086 fm. There exists a parametrization of the matter density given in \cite{cookgrif}, where a folding model analysis of $^3$He elastic scattering on heavy nuclei is performed. The authors fit the parameters in a Gaussian density to reproduce a $^3$He matter radius of 1.68 fm (calculated as $r_{mat}^2 = (r_{ch}^{3He})^2 - r_p^2$, with $r_p$ being the radius of the proton). There is, however, no direct experimental data for the neutron density distribution in $^3$He. We identify the neutron density distribution in $^3$He with the proton density in $^3$H, which not only seems reasonable provided that the charge symmetry breaking is small, but also agrees with the matter distribution given in \cite{cookgrif}. Such an approach of calculating the nuclear densities using the charge densities of $^3$He and $^3$H has also been used earlier in the literature \cite{tsushima}. Thus, for the proton density distribution $\rho_p(r)$, we choose a sum of Gaussians \cite{amroun}, namely \begin{equation}\label{helidensity} \rho(r) = {1 \over 2 \pi^{3/2} \gamma^3} \sum_{i=1}^N \, {Q_i \over 1 + 2R_i^2/\gamma^2} \, \biggl ( e^{-(r-R_i)^2/\gamma^2} + e^{-(r+R_i)^2/\gamma^2} \biggr)\, , \end{equation} where the parameters $Q_i$, $R_i$ and $\gamma$ for $^3$He and $^3$H can be found in \cite{amroun}. Thus, with $\rho_p = \rho_{ch}^{3He}$ and $\rho_n = \rho_{ch}^{3H}$ (both normalized to 1), the above integral can in principle be done analytically. However, the analytic results are lengthy expressions which include error functions and exponentials. They are not particularly enlightening and hence we rather perform the integral numerically. The density for $^{24}$Mg is assumed to have the following Woods-Saxon form \cite{magdensity}: \begin{equation}\label{woodensity} \rho(r) = {\rho_0 \over 1 + \exp{\biggl({r - c\over a} \biggr)}}\, , \end{equation} where $c = r_A\, [1 - (\pi^2 a^2/ 3 r_A^2)]$ with $a = 0.54$ fm and $r_A = 1.13 A^{1/3}$. The N$^*$-nucleus potentials thus evaluated (see \cite{actaphysb}) can be fitted reasonably well by potentials of Woods-Saxon form.
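The numerical folding itself is straightforward to reproduce. The following minimal \texttt{Python} sketch is meant only as an illustration of the procedure: it folds the elementary potential of Eq.~(\ref{potelement}) for a single exchanged pion (ignoring the isospin structure of Eq.~(\ref{nuclpot})) with the normalized $^{24}$Mg density of Eq.~(\ref{woodensity}), using the coupling and cut-off values quoted above; it is not the code used for the results of this work. \begin{lstlisting}[columns=fullflexible, basicstyle=\footnotesize, language=Python]
import numpy as np
from scipy.integrate import simpson

hbarc = 0.19733                       # GeV fm
g, m, Lam = 1.09, 0.138, 1.3          # pi-N-N* coupling, pion mass, cut-off (GeV)

def v_elem(r):
    # elementary one-meson-exchange potential in r-space; r in fm, result in GeV
    mf, Lf = m / hbarc, Lam / hbarc   # masses converted to fm^-1
    return (g**2 / (4.0 * np.pi)) * ((np.exp(-Lf * r) - np.exp(-mf * r)) / r
            + (Lf**2 - mf**2) / (2.0 * Lf) * np.exp(-Lf * r)) * hbarc

def rho_mg(r, A=24):
    # Woods-Saxon density, normalized to unity
    a, rA = 0.54, 1.13 * A**(1.0 / 3.0)
    c = rA * (1.0 - np.pi**2 * a**2 / (3.0 * rA**2))
    f = lambda x: 1.0 / (1.0 + np.exp((x - c) / a))
    rr = np.linspace(1e-3, 20.0, 2000)
    return f(r) / (4.0 * np.pi * simpson(rr**2 * f(rr), x=rr))

def V_fold(R, A=24):
    # V(R) = A * int d^3r rho(r) v(|r - R|), via the cos(theta) integration
    r = np.linspace(1e-3, 15.0, 600)
    u = np.linspace(-1.0, 1.0, 301)
    rr, uu = np.meshgrid(r, u, indexing="ij")
    d = np.sqrt(rr**2 + R**2 - 2.0 * rr * R * uu + 1e-12)
    inner = simpson(v_elem(d), x=u, axis=1)
    return A * 2.0 * np.pi * simpson(r**2 * rho_mg(r) * inner, x=r)

print(V_fold(0.0) * 1e3, "MeV at R = 0")
\end{lstlisting} The resulting $V(R)$ profiles indeed resemble Woods-Saxon shapes, which is what makes the fits described next possible.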
Such a fit facilitates the search for a possible N$^*$-nucleus bound state and the calculation of its wave function, and hence of the momentum distribution. The potentials corresponding to the various sets of parameters in Table I can be fitted by a Woods-Saxon potential with the depth parameter $V_0$ ranging from 14 to 42 MeV, $a$ = 0.8 fm and $R$ from 1.15 to 1.34 fm. \begin{table}[ht] \caption{ The $\pi N N^*$ and $\eta N N^*$ coupling constants and the binding energies of the possible N$^*$-$^3$He bound states obtained with the corresponding set of couplings in the N N$^*$ $\to$ N N$^*$ potential.} \begin{tabular}{|l|l|l|l|} \hline & $g_{\pi N N^*}$ & $g_{\eta N N^*}$ & \, E (MeV) \\ & & & \\ \hline Chiral constituent quark model & \, 1.09 & \, 2.07 & \, -4.78 \\ fits partial decay widths \cite{ansagh} & & & \\ \hline Hidden gauge formalism $^{\dagger}$ & \, 1.05 & \, 1.6 &\, -3.6 \\ fits partial widths and $\pi^- p \to \eta n$ \cite{osetgar} & & & \\ \hline vector- and pseudoscalar-baryon & \, 0.95 & \, 1.77 & \, -2.1 \\ coupled channel study \cite{kanchan} & & & \\ $^{\dagger}$ N$^*$(1535) is dynamically generated & & & \\ \hline One boson exchange model & \, 0.8 & \, 2.22 & \, -0.8 \\ fits $ p p \to p p \eta$ data \cite{vetmoal} & & & \\ \hline Data on $\eta$ photoproduction on heavy nuclei \cite{roebigaver} & \, 0.669 & \, 2.005 & \, -0.04 \\ fits $p(\gamma,\eta)p$ and $d(\gamma,\eta)np$ data within ELA \cite{carras}& & & \\ \hline \end{tabular} \end{table} \section{Momentum distribution of the N$^*$ in nuclei} The Schr\"odinger equation for the Woods-Saxon potential can be reduced to a hypergeometric differential equation \cite{WShypergm}, and a condition for the existence of bound states can be found. For a Woods-Saxon potential of the type \begin{equation} V(r) = - {V_0 \over 1 + e^{(r-R)/a}} \end{equation} the Schr\"odinger equation \begin{equation}\label{schrodeq} {d^2u \over dr^2} + {2 \over r} {du\over dr} + {2m\over \hbar^2} (E - V) u =0 \end{equation} may be transformed, using the independent variable $y = 1 / [1 + e^{(r - R)/a}]$, into a hypergeometric differential equation. After some algebra \cite{WShypergm} one obtains the following condition for bound states: \begin{equation}\label{boundcond} {\lambda R \over a} \, +\, \Psi \,-\, 2 \phi \, - \arctan{\lambda \over \beta} \, =\, (2n - 1) {\pi \over 2}, \quad n = 0, \pm 1, \pm 2, \ldots \end{equation} where \[{2 m E \over \hbar^2} \, a^2 = - \beta^2, \quad {2 m V_0 \over \hbar^2} \, a^2 = \gamma^2, \quad \lambda = \sqrt{\gamma^2 - \beta^2},\] and $\phi = \arg \Gamma (\beta + i \lambda)$, $\Psi = \arg \Gamma (2 i \lambda)$. Defining $u(r) = \chi(r)/r$ and $y = 1 / [1 + e^{(r - R)/a}]$, the solution of the hypergeometric differential equation can be found to be \begin{equation} \chi = y^{\nu} \, (1 - y)^\mu \, _2F_1(\mu+\nu, \mu+\nu+1, 2\nu+1;y), \end{equation} where $\nu = \beta$ and $\mu^2 = \beta^2 - \gamma^2$. Since the variable $y$ is given in terms of $r$, we essentially have the wave function $\chi(r)$, which can then be Fourier transformed as \begin{equation} \chi(p) = \biggl ( {2\over \pi} \biggr )^{1/2} \, \int_0^{\infty} \, r j_0(pr) \chi(r) dr \end{equation} to evaluate the momentum distribution $T(p)$ as \begin{equation} T(p) = {1 \over 4 \pi} \, |\chi(p)|^2\,\,p^2 .
\end{equation} $T(p)$ is normalized such that \begin{equation} 4 \pi \, \int \, T(p) \, dp = 1. \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=9cm,height=9cm]{figure3.eps} \caption{\label{fig:eps3} Momentum distribution of the N$^*$ in $^4$He. Calculations with different sets of coupling constants (given in Table I) are shown in the inset on a linear scale.} \end{center} \end{figure} Figure 3 displays the relative momentum distribution $T(p)$ of N$^*$-$^3$He inside a $^4$He nucleus which contains an N$^*$ instead of a neutron. The curve shown on the logarithmic scale corresponds to the set of parameters in Table I which gives the highest binding. The results for the other parameter sets in Table I are shown on a linear scale in the inset, since one does not see much difference at small momenta on the log scale. The off-shell cut-off parameters appearing in the elementary N N$^*$ $\to$ N N$^*$ potential are chosen to be $\Lambda_{\pi}$ = 1.3 GeV and $\Lambda_{\eta}$ = 1.5 GeV in all cases. Changing the cut-offs to $\Lambda_{\pi}$ = 0.9 GeV and $\Lambda_{\eta}$ = 1.3 GeV, for example, does not change the distribution significantly (except for a small shift at high momenta), and the corresponding curves are hence not shown in the figure. \subsection{Dependence on the N$^*$ mass} The N$^*$-$^3$He potentials do not depend on the mass of the N$^*$, but in the search for bound states using the condition (\ref{boundcond}), one has to introduce the N$^*$ mass to calculate the reduced mass appearing in that expression. In order to check the sensitivity of the results to the choice of the N$^*$ mass, we varied it between 1400 and 1550 MeV. The corresponding binding energies of states fulfilling the condition (\ref{boundcond}) varied from 4.34 to 4.84 MeV for the parameter set chosen \cite{ansagh}. This variation introduces a very small change in the form of the bound state wave function as well as of the momentum distribution, as can be seen in Fig. 4[a]. \begin{figure}[h] \begin{center} \includegraphics[width=16cm,height=8cm]{figure4.eps} \caption{\label{fig:eps4} Variation of the N$^*$-$^3$He momentum distribution in $^4$He with the N$^*$ mass (solid and dashed lines correspond to 1400 and 1550 MeV, respectively, in [a]). The dot-dashed line corresponds to [a] the momentum distribution and [b] the wave function of the neutron-$^3$He bound state (calculated within the same model). The wave function of N$^*$-$^3$He (dashed line) for m$_{N*}$ = 1550 MeV is also shown in [b].} \end{center} \end{figure} This finding complements earlier results from \cite{roebigaver}, which indicate little modification of the in-medium excitation of the S11(1535). Though some evidence of broadening was reported in \cite{yorita}, the N$^*$ mass of $\sim$1544 MeV calculated in the Quark Meson Coupling (QMC) model (with the N$^*$ interpreted as a 3-quark state) \cite{bassthomas} seems to be consistent with the former experimental findings as well as with the results of the present work. \subsection{Comparison with a nucleon momentum distribution in $^4$He} In order to compare the N$^*$-$^3$He relative momentum distribution in $^4$He with that of a nucleon in standard $^4$He, we replace the Woods-Saxon parameters by $V_0$ = 66 MeV, $R$ = 1.97 fm and $a$ = 0.65 fm, to get a neutron-$^3$He potential which produces a state at -20.6 MeV while fulfilling the condition in (\ref{boundcond}) with the reduced mass of a neutron and $^3$He. This is indeed close to the energy required to separate a neutron from $^4$He.
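These numbers are easy to check independently of the hypergeometric machinery. The following minimal \texttt{Python} sketch is an illustrative cross-check only: it replaces the analytic condition (\ref{boundcond}) by direct shooting and bisection on the radial equation, and assumes standard values for the neutron and $^3$He masses. \begin{lstlisting}[columns=fullflexible, basicstyle=\footnotesize, language=Python]
import numpy as np
from scipy.integrate import simpson

hbarc = 197.327                            # MeV fm
mu = 939.6 * 2808.4 / (939.6 + 2808.4)     # n-3He reduced mass (MeV)
V0, R, a = 66.0, 1.97, 0.65                # Woods-Saxon parameters quoted above

r = np.linspace(1e-4, 25.0, 6000)
h = r[1] - r[0]
V = -V0 / (1.0 + np.exp((r - R) / a))

def tail(E):
    # outward finite-difference integration of chi'' = (2 mu/hbar^2)(V - E) chi
    k2 = 2.0 * mu / hbarc**2 * (V - E)
    chi = np.empty_like(r)
    chi[0], chi[1] = 0.0, 1e-6
    for i in range(1, r.size - 1):
        chi[i + 1] = 2.0 * chi[i] - chi[i - 1] + h * h * k2[i] * chi[i]
    return chi

Elo, Ehi = -60.0, -0.5    # bracket assumed to contain the single s-wave state
for _ in range(60):       # bisection on the sign of chi at the outer boundary
    Em = 0.5 * (Elo + Ehi)
    if tail(Elo)[-1] * tail(Em)[-1] < 0.0:
        Ehi = Em
    else:
        Elo = Em
E = 0.5 * (Elo + Ehi)
print(E)                  # close to the -20.6 MeV quoted in the text

cut = r <= 15.0           # drop the numerically contaminated exponential tail
rc, chic = r[cut], tail(E)[cut]
chic = chic / np.sqrt(simpson(chic**2, x=rc))      # int chi^2 dr = 1
p = np.linspace(0.02, 2.0, 100)                    # fm^-1
chip = np.array([np.sqrt(2.0 / np.pi)
                 * simpson(np.sin(pp * rc) / pp * chic, x=rc) for pp in p])
T = p**2 * chip**2 / (4.0 * np.pi)                 # then 4 pi int T(p) dp = 1
\end{lstlisting}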
Even if the curve for the momentum distribution of the neutron calculated in this manner does not have the authenticity of one evaluated using few-body equations, it is quite close to a realistic calculation \cite{nogga} (see Fig. 5 and the discussion below) and serves the purpose of comparison. In Fig. 4[b] we see the difference between the bound state wave functions of the N$^*$-$^3$He and neutron-$^3$He systems, which explains the difference between the distributions in Fig. 4[a]. With N$^*$-$^3$He being loosely bound (-4.78 MeV) (as compared to the neutron, which is bound by -20.6 MeV), the wave function of N$^*$-$^3$He is more spread out in $r$-space (Fig. 4[b]). This causes the momentum distribution to be narrower. Other sets of parameters for the N N$^*$ interaction, leading to weaker binding, lead to even narrower distributions, as seen in the inset of Fig. 3. A better agreement on the $\pi$NN$^*$ and $\eta$NN$^*$ coupling constants would be useful in order to perform a more accurate estimate of the momentum distribution of the N$^*$ in the nucleus. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm,height=9cm]{figure5.eps} \caption{\label{fig:eps5} Comparison of the proton-$^3$H momentum distribution in $^4$He of the present work (solid line) with others calculated using Faddeev-Yakubovsky equations \cite{nogga} with the AV18+TM (dashed) and CD-Bonn+TM (dot-dashed) potentials.} \end{center} \end{figure} In order to test the validity of the calculations done in the present work, we repeat a similar calculation for the proton-$^3$H system in $^4$He, for which some results using few-body equations exist in the literature. Though the momentum distribution for $n$-$^3$He is not expected to be very different from that of $p$-$^3$H in $^4$He, we perform this calculation in order to compare with the available few-body results. With the Woods-Saxon parameters $V_0$ = 66 MeV, $R$ = 1.93 fm and $a$ = 0.65 fm, which reproduce the $p$-$^3$H binding of 19.8 MeV, we obtain a distribution which agrees at small and medium momenta with the more sophisticated calculations of \cite{nogga} shown in Fig. 5. The disagreement occurs only in the region of large momenta, where the magnitude of $T(p)$ has fallen by three orders of magnitude. Thus, the conclusion drawn from the calculations of the present work, namely that the N$^*$-$^3$He momentum distribution is narrower than the neutron-$^3$He distribution, seems quite reliable. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm,height=9cm]{figure6.eps} \caption{\label{fig:eps6} N$^*$-$^{24}$Mg momentum distribution in $^{25}$Mg for three possible binding energies corresponding to the number of nodes $n = 1$, $2$ and $3$ in the bound wave functions.} \end{center} \end{figure} \subsection{N$^*$-$^{24}$Mg bound states} Using the set of parameters from \cite{ansagh} for the $\pi$NN$^*$ and $\eta$NN$^*$ coupling constants, the condition (\ref{boundcond}) allows three bound states of N$^*$-$^{24}$Mg at energies of -50.3, -22.5 and -3.25 MeV for n = 1, 2 and 3, respectively. The Woods-Saxon potential parameters of the N$^*$-$^{24}$Mg system are: $V_0$ = 80 MeV, $a$ = 0.97 fm and $R$ = 2.85 fm. In Fig. 6 one can see the corresponding distributions with one, two and three nodes. \section{Summary} The broad S11 baryon resonance N$^*$(1535) enters as one of the most essential ingredients in reactions involving the production of the neutral pseudoscalar eta meson ($\eta$), and hence also in the analyses of possible eta-mesic nuclei.
Since the low energy $\eta$N interaction predominantly proceeds by producing an N$^*$ resonance which propagates, decays and regenerates inside the nucleus, it seems legitimate to ponder the possible existence of an N$^*$-nucleus bound state too. Indeed, performing such an investigation in \cite{actaphysb}, it was found that, depending on the strength of the N N$^*$ interaction, loosely bound, broad quasibound states of the N$^*$ with the $^3$He and $^{24}$Mg nuclei can be formed. In the present work, the investigation is continued to evaluate the momentum distribution of such an N$^*$ inside the nucleus. Being aware of the fact that neither does any experimental evidence for N$^*$-nuclei exist, nor is the N N$^*$ interaction accurately known, we perform the calculations within a folding model where the elementary N N$^*$ $\to$ N N$^*$ potential is folded with known nuclear densities. The present work finds that, since the N$^*$ is loosely (or even very loosely, depending on the $\pi$NN$^*$ and $\eta$NN$^*$ couplings) bound to a nucleus, the bound state wave function of an N$^*$, as compared to that of a nucleon, is more spread out in $r$-space, and hence the momentum distribution is narrower than in the case of the nucleon. This finding is important in view of the fact that experimental analyses generally approximate the momentum distribution of an N$^*$ by that of a nucleon in a nucleus. The present work is a first attempt to evaluate the N$^*$(1535) resonance momentum distribution in nuclei. This distribution, as mentioned in the beginning, is necessary to establish the detector system acceptance for the registration of the $d \,d \, \to \, (^3$He-N$^*) \,\to$ $^3$He $\, N \pi$ reaction and to determine the data selection criteria \cite{wasapapers}. A calculation of the momentum distribution of N$^*$-d in $^3$He would be necessary for the analysis of the $p \, d \, \to \, (d$-N$^*) \,\to$ d $\, N \pi$ reaction aimed at searching for $\eta$-mesic $^3$He, whose prospects seem higher due to the fact that a strong enhancement has already been seen in the $p \, d\,\to \, ^3$He $\eta$ reaction near threshold. Such a calculation would, however, be better performed using a few-body formalism for the N$^*$-p-n system. An improved knowledge of the N$^*$ coupling constants and experimental searches for N$^*$ nuclei could motivate such sophisticated few-body calculations in the future. \begin{acknowledgments} The author thanks Prof. Pawel Moskal for useful comments and discussions. The author also thanks the Faculty of Science at the University of Los Andes, Colombia, for financial support (project no. P15.160322.009/01-01-FISI02). \end{acknowledgments}
1,116,691,501,028
arxiv
\section{Introduction} Methods from semigroup theory provide an elegant abstract framework in which to establish wellposedness, stability and convergence results for large classes of evolutionary partial differential equations, such as those which govern geometric heat flows. For such purposes, the most useful semigroups are those which admit a holomorphic continuation in the `time' variable. There is a nice characterization of these {\it analytic semigroups} in terms of their infinitesimal generators. More specifically, write the semigroup as $e^{-tA}$, where $A$ is a closed operator acting on a certain Banach space $X$. A classical theorem states that the function $t \mapsto e^{-tA}$ from $\mathbb R^+$ to the space $\mathcal L(X)$ of bounded operators on $X$ admits a holomorphic extension to a neighbourhood of $\mathbb R^+$ in $\mathbb C$ if and only if the operator $A$ satisfies a property known as {\it sectoriality}, which, as indicated below, involves restrictions on the spectrum of $A$, along with an estimate on its resolvent $(\lambda I - A)^{-1}$. We refer to \cite[Chapters 5 and 6]{Angenent} for an elementary introduction to sectoriality and analytic semigroups. In this work, we consider a broad class of geometric operators defined on complete Riemannian manifolds with bounded geometry and show that they are indeed sectorial. This is the content of our main result -- Theorem \ref{thm:main-A} -- which is proven in Section \ref{sec:proofThm} using tools from microlocal analysis. However, the main consequence of our work is that if the operators corresponding to geometric heat flows on manifolds of bounded geometry act on the naturally associated (little) H\"older spaces, and if their symbols satisfy readily verified algebraic conditions, then the initial value problems for those flows are well posed and have good stability and convergence properties. More precisely, in applications to PDE, the generator $A$ is a (typically elliptic) differential operator, and the sectoriality of such operators is known in a variety of settings. Our first goal in this paper is to prove the sectoriality estimate for strongly elliptic differential operators which satisfy a certain uniformity property, acting between sections of vector bundles over a complete Riemannian manifold of bounded geometry. We characterize such operators as \emph{admissible}. The Banach spaces on which we let these operators act are little H\"older spaces; we emphasize these because of their use in applications to nonlinear problems. The immediate examples of such operators are generalized Laplacians, i.e., operators of the form $\nabla^* \nabla + \mathcal R$, where $\mathcal R$ is an endomorphism usually constructed from the curvature tensor of the underlying metric and its covariant derivatives. However, the method of proof extends naturally to allow us to prove this estimate for more general higher order operators as well. Our second goal is to apply this sectoriality to deduce stability estimates for nonlinear parabolic evolution equations on these manifolds, acting, as before, on little H\"older spaces. We are particularly interested in geometric curvature flows, e.g., the Ricci flow, the mean curvature flow, and certain relatively unexplored higher order flows such as the one associated to the ambient obstruction tensor (see \S \ref{sec:applicats} for a description of this). As discussed below, these flows typically require some sort of gauge fixing in order to become suitably parabolic.
The applications in this paper illustrate how one can easily establish wellposedness of quite general flows, on spaces that are not necessarily compact, using this sectoriality property. In a subsequent paper we describe an application in which sectoriality is a key part of the proof of a long-time stability and convergence result. Sectoriality for admissible operators on spaces of uniformly bounded geometry has, in fact, been treated previously, notably by H. Amann and his collaborators, see for example \cite{Amann2}, \cite{ES}. Those techniques are spread over several papers, are considerably more abstract and, from a geometric point of view, perhaps less accessible. Our goal here is to provide a straightforward and hopefully more approachable proof which should be more convenient for geometric applications. Let us briefly recall the functional analytic setting in more detail. The fundamental idea when applying semigroup theory to differential equations is to recast the problem as an ordinary differential equation with values in some Banach space. Let $X$ be a complex Banach space and $D$ a dense linear subspace. Consider the $X$-valued autonomous ordinary differential equation \begin{align} \label{eqn:ODEinBanach} \frac{d u}{dt} = F( u(t) ), \end{align} where $u: [0,T) \to X$ is a $\mathcal C^1$ mapping. Here $F: D \to X$ is a (nonlinear) Fr\'echet differentiable map satisfying certain structural assumptions. The most important of these is that the linearization $L$ of $F$ at $u_0 \in D$ is a sectorial operator on $X$. We now explain this hypothesis. The resolvent set of a closed linear operator $L: X \to X$ is the subset $\mathbf{res}_X(L) \subset \mathbb C$ consisting of all numbers $\lambda$ such that $(\lambda I - L): D \to X$ has an inverse which is a bounded operator on $X$. $L$ is called {\it sectorial} on $X$ if it satisfies: \begin{enumerate} \item [\underline{S1}.] There exist $\omega \in \mathbb R$ and $\theta \in (0,\pi/2)$ such that the resolvent set $\mathbf{res}_{X}(L)$ contains the sector \[ S_{\omega, \theta} := \{ \lambda \in \mathbb C \setminus \{\omega\}: |\arg(\lambda - \omega) | > \theta\}, \] i.e., the complement of a closed sector of opening angle $2\theta < \pi$ about the ray $[\omega, \infty)$; in particular, $\mathbf{res}_X(L)$ contains the left half-plane $\mathrm{Re}\,\lambda \leq \omega$ (minus the point $\omega$ itself), and \item [\underline{S2.}] There exists a constant $C > 0$ so that for all $\lambda \in S_{\omega, \theta}$, we have \begin{equation*} \|(\lambda I - L)^{-1} \|_{\mathcal{L}(X)} \leq \frac{C}{|\lambda - \omega|}. \end{equation*} \end{enumerate} These conditions on $L$ turn out to be equivalent to the analyticity of the semigroup $e^{-tL}$, and from this a wealth of wellposedness and regularity results becomes available. Many nonlinear parabolic partial differential operators can be cast into this framework; see \cite{Lunardi} for a thorough account of this general theory. Of particular importance is that spectral stability analysis of $L$ yields stability results for the nonlinear problem in some cases. As for the geometric applications, recall that a complete Riemannian manifold $(M,g)$ is said to have bounded geometry (of a certain order) if its injectivity radius is bounded from below and there is a uniform bound for the norm of the curvature tensor and its covariant derivatives up to some order. This is equivalent to uniform control of the coefficients of the metric and its inverse in an atlas of normal coordinate balls of uniform radius.
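Before describing this class of spaces further, we remark that conditions \underline{S1} and \underline{S2} are concrete enough to probe numerically in finite-dimensional toy models. The following short \texttt{Python} sketch is purely illustrative, with a discrete Dirichlet Laplacian on the unit interval standing in for the operators studied in this paper: it samples $|\lambda| \, \|(\lambda I - L)^{-1}\|$ along the ray $\arg \lambda = 3\pi/4$, which lies well inside $S_{0,\pi/4}$, and observes the uniform bound predicted by \underline{S2}; since the matrix is real and symmetric, the conjugate ray behaves identically. \begin{lstlisting}[columns=fullflexible, basicstyle=\footnotesize, language=Python]
import numpy as np

n = 200
h = 1.0 / (n + 1)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # -Delta_h, spectrum > 0

worst = 0.0
for s in np.logspace(-1, 6, 25):               # |lambda| from 0.1 to 1e6
    lam = s * np.exp(3j * np.pi / 4.0)         # arg(lambda) = 3 pi/4 > theta = pi/4
    Rlam = np.linalg.inv(lam * np.eye(n) - L)  # resolvent at lambda
    worst = max(worst, abs(lam) * np.linalg.norm(Rlam, 2))
print(worst)                                   # stays O(1), as S2 (with omega = 0) predicts
\end{lstlisting}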
This class of spaces includes compact Riemannian manifolds, of course, but also complete noncompact manifolds which are asymptotically Euclidean, conic, cylindrical or hyperbolic, or more generally which are asymptotically modeled on other noncompact symmetric or homogeneous spaces. On any such space $(M,g)$ we consider elliptic differential operators, acting between sections of vector bundles, which satisfy uniformity conditions on their coefficients in these local uniform coordinate charts. The most obvious examples are operators determined directly from the metric $g$, for example, generalized Laplace-type operators \[ L = \nabla^* \nabla + \mathcal R, \] acting on sections of some tensor bundle over $M$. Here $\nabla$ is the induced covariant derivative on this bundle, $\nabla^*$ its adjoint, and $\mathcal R$ a symmetric endomorphism built out of tensor products of contractions of the curvature tensor and its covariant derivatives. Many aspects of the mapping properties of $L$ on $L^2(M, dV_g)$ can be deduced from Hilbert space techniques. However, it is often more convenient for nonlinear geometric problems to consider this operator acting on weighted H\"older spaces instead. Consider a general weighted H\"older space $X = \mathfrak{w} \, \mathcal{C}^{k,\alpha}(M; E)$, where $\mathfrak{w}$ is a (strictly positive) weight function. The assumptions imposed on $\mathfrak{w}$ are specified in Definition~\ref{wthyp}. More generally, we consider $L$ to be a strongly elliptic operator which satisfies certain uniformity conditions specified in Definition~\ref{admop}, acting on weighted H\"older sections of some vector bundle. Note that by replacing $L$ with $\mathfrak w^{-1} L \mathfrak w$ we may as well consider $L$ as acting simply on $\mathcal C^{k,\alpha}(M;E)$. The conditions on $\mathfrak w$ are precisely the ones necessary for this conjugated operator to satisfy the same uniformity hypothesis. The first main result of this paper states that any uniform strongly elliptic operator as above is {\it sectorial}. \begin{maintheorem} \label{thm:main-A} Let $(M^n, g)$ be a complete Riemannian manifold of bounded geometry of order $\ell + \alpha' > m+k + \alpha$, where $m \in \mathbb N$, $k \in \mathbb N_0 = \mathbb N \cup \{0\}$ and $0 < \alpha < \alpha' < 1$, and suppose that $L$ is an admissible elliptic operator of order $m$, i.e., it is strongly elliptic and satisfies the uniformity hypotheses of Definition~\ref{admop2} below, which we let act on $X = \mathcal C^{k,\alpha}(M;E)$ for some bundle $E$ over $M$. Then $L$ is sectorial on $X$. \end{maintheorem} We briefly enumerate the main ideas in the proof. We begin with the observation that the sectoriality estimate is equivalent to a uniform estimate for the associated {\it semiclassical operator} $\zeta I - \varepsilon^m L$, where $\varepsilon = |\lambda|^{-1/m}$ and $\zeta = \lambda/|\lambda|$. The first step is to show that this operator is actually invertible on $X$ when $\varepsilon$ is sufficiently small. This is deduced by constructing an approximation (a parametrix) for the inverse of this operator, the \emph{semiclassical resolvent}. This involves a detour into the methods of geometric microlocal analysis, and the construction itself is sketched in some detail in Section \ref{scpc} in order to be as self-contained as possible. (This geometric microlocal analytic construction has appeared implicitly in the literature before, but does not seem to appear explicitly in a readily available form elsewhere.)
Having established the existence of $(\zeta I - \varepsilon^m L)^{-1}$ as a bounded operator on $X = \mathcal C^{0,\alpha}(M)$ for each $\varepsilon > 0$ sufficiently small, we need to establish uniformity of its norm. This is argued by contradiction: we show using a number of rescaling and blowup arguments that the failure of uniformity of this estimate would lead to various impossible conclusions. A key feature of this argument is that we parlay the (essentially tautological) uniform estimates of this operator acting on {\it semiclassical} H\"older spaces (see \S \ref{sec:background}) into uniformity for the action of this operator on standard H\"older spaces. The passage from sectoriality on $\mathcal C^{0,\alpha}$ to sectoriality on $\mathcal C^{k,\alpha}$ is a straightforward extension. The key motivation for all this work is its application to proving wellposedness of geometric flows on complete noncompact manifolds. We obtain the following theorem: \begin{maintheorem} \label{thm:main-B} Let $(M,g)$ be a complete Riemannian manifold of bounded geometry of order $k+m+\alpha'$, where $m \in \mathbb N$, $k \in \mathbb N_0 = \mathbb N \cup \{0\}$ and $0 < \alpha < \alpha' < 1$, let $U$ be an open subset of $\mathcal C^{m+k,\alpha}$, and let $F: U \to \mathcal C^{k,\alpha}$ be a smooth elliptic operator of order $m$ such that the linearization $DF_u$ at any $u \in U$ is admissible. Then for any $u_0 \in U$, there exists $T > 0$ so that the equation \[ \frac{du}{dt} = F( u(t) ), \; u(0) = u_0 \] has a unique solution $u: [0,T) \to \mathcal C^{k,\alpha}$. Moreover, any two solutions $v(t)$ and $w(t)$ with initial values $v_0$ and $w_0$ in $U$ satisfy \[ \|v(t) - w(t)\|_{\mathcal C^{m+k,\alpha}} \leq C \| v_0 - w_0\|_{\mathcal C^{m+k,\alpha}} \; \; \mbox{for all} \; t \in [0,T). \] \end{maintheorem} There are many possible applications. We illustrate this by focusing on a flow of metrics involving the ambient obstruction tensor, $\mathcal O_n$, developed by Fefferman and Graham \cite{FG}. As we review in \S \ref{sec:applicats}, this is a conformally invariant tensor that involves $n$ derivatives of the metric. Due to the higher-order nature of the system of equations for this flow, the usual technique of proving existence using an exhaustion and maximum principles is not easy to apply. To our knowledge, this is the only wellposedness result to date for this flow. Our result is: \begin{maintheorem} \label{thm:main-C} Let $(M^n,g)$ be a complete Riemannian manifold of bounded geometry of order $2n + \alpha'$, with even dimension $n = 2\ell$, and where $0 < \alpha < \alpha' < 1$. If $g_0$ is any smooth metric on $M$, then there exist $T > 0$ and a unique family of metrics $g: [0,T) \to \mathcal C^{n,\alpha}(M,g)$ solving the ambient obstruction flow \begin{align} \begin{cases} \partial_t g &= \mathcal{O}_n(g) + c_n (-1)^{\frac{n}{2}} ( (-\Delta)^{\frac{n}{2}- 1} S ) g \\ g(0) &= g_0, \end{cases} \end{align} where $c_n = (2^{n/2 - 1} ( \frac{n}{2} - 2)! (n-2) (n-1))^{-1}$ and $S$ is the scalar curvature of $g$. \end{maintheorem} The remainder of this paper is structured as follows. In \S \ref{sec:background} we describe the analytic and geometric background. After discussing sectoriality, we define manifolds of bounded geometry and the operators of interest.
We discuss pointed limits of manifolds of bounded geometry, and prove a relationship, Proposition \ref{limspecrel}, between the resolvent set of an operator and those of its limiting operators under this construction, which may be of independent interest. In \S \ref{sec:proofThm} we explain the reduction of sectoriality to semiclassical estimates and prove Theorem \ref{thm:main-A}. This section presumes the existence of uniform bounds for the semiclassical resolvent of an admissible operator, and we give a detailed construction of this resolvent in \S \ref{scpc} using the techniques of geometric microlocal analysis. Finally in \S \ref{sec:applicats} we apply our results to prove Theorem \ref{thm:main-B}, and conclude with the proof of the wellposedness result for the ambient obstruction flow, Theorem \ref{thm:main-C}. \subsubsection*{Acknowledgments} The authors thank Jack Lee, Yoshihiko Matsumoto, Andr\'as Vasy, and Guofang Wei for useful conversations during this work. This work was supported by collaboration grants from the Simons Foundation (\#426628, E. Bahuaud and \#283083, C. Guenther). J. Isenberg was supported by NSF grant PHY-1707427. \section{Background} \label{sec:background} \subsection{Sectoriality} \label{sec:analytic-bkgd} We begin by defining what it means for a closed unbounded operator acting on a Banach space to be sectorial. The abstract notion of sectoriality, and its precise relationship with the theory of analytic semigroups, is classical and can be found, for example, in \cite[Chapter IX]{Yosida}. The monographs \cite{Amann1, Lunardi} contain applications of sectoriality to the study of evolution equations, and the papers \cite{BGI, GIK} focus on its specific application to Ricci flow. Let $X$ be a complex Banach space and $\mathcal{L}(X)$ the space of bounded linear operators on $X$; we denote the operator norm by $\| \cdot \|_{\mathcal{L}(X)}$. Suppose that $L$ is a closed {\it unbounded} linear operator on $X$ which has dense domain $D \hookrightarrow X$. The \emph{resolvent set} of $L$, $\mathbf{res}_{X}(L)$, is the set of $\lambda \in \mathbb C$ for which the resolvent operator \[ R_L(\lambda) := (\lambda I - L)^{-1} \] lies in $\mathcal{L}(X)$. The range of $R_L(\lambda)$ is the domain $D$. The \emph{spectrum} of $L$, denoted $\mathbf{spec}_X(L)$, is the complement $\mathbb C \setminus \mathbf{res}_{X}(L)$. \begin{definition} \label{sectorialdef} A closed unbounded operator $L: X \to X$ with domain $D$ is sectorial in $X$ if: \begin{enumerate} \label{def-sec} \item [\underline{S1}.] The resolvent set $\mathbf{res}_{X}(L)$ contains a sector which in turn contains a left half-plane $\mathrm{Re}\,\lambda \leq \omega$ for some $\omega \in \mathbb{R}$, i.e., there exists $\theta \in (0,\pi/2)$ such that \[ \mathbf{res}_{X}(L) \supset S_{\omega, \theta} := \{ \lambda \in \mathbb C \setminus \{\omega\}: |\arg(\lambda - \omega) | > \theta\}, \quad \mathrm{and} \] \item [\underline{S2}.] There exists a constant $C > 0$ so that \begin{equation} \label{ResEst} \| R_L(\lambda) \|_{\mathcal{L}(X)} \leq \frac{C}{|\lambda - \omega|}\ \ \mbox{for all}\ \ \lambda \in S_{\omega,\theta}. \end{equation} \end{enumerate} We often simply say that $L$ is sectorial if the space $X$ is understood. \end{definition} \begin{remark} We adopt the convention that the spectrum of $L$ lies in a sector with an acute opening angle and is strictly contained in a {\it right} half-plane $\mathrm{Re}\, \lambda \geq \omega$.
In the applications below, $L$ is a differential operator with leading part equal to an iterated Laplacian $\Delta^k$, and our convention then agrees with the one where the $L^2$ spectrum of $\Delta$ lies in the positive half-line. Note that our convention differs from that of our earlier work \cite{BGI} and of the monograph \cite{Lunardi}. \end{remark} Sectoriality is equivalent to an apparently weaker condition: \begin{lemma}[Proposition 2.1.11 of \cite{Lunardi}]\label{lem:halfplane} Let $X$ be a complex Banach space, and $L: X \to X$ a closed linear operator with dense domain $D$ such that $\mathbf{res}_X(L)$ contains a closed half-plane $\{\lambda \in \mathbb C: \Re \lambda \leq \omega\}$, for some $\omega \in \mathbb R$. If there exists a constant $C > 0$ such that \begin{equation} \| \lambda (\lambda - L)^{-1} \|_{\mathcal{L}(X)} \leq C, \label{sectest} \end{equation} for all $\lambda$ in this half-plane, then $L$ is sectorial. \end{lemma} \begin{proof} By \eqref{sectest}, $||R_L(\omega + i\mu)||_{\mathcal L(X)} \leq \frac{C}{|\omega + i\mu|}$, so if $|\lambda - (\omega + i\mu)| \leq |\omega + i\mu|/(2C)$, then \[ \begin{split} (\lambda I - L) = & ((\omega + i\mu) I - L) + (\lambda - (\omega + i\mu)) I \\= &((\omega + i\mu)I - L) \left( I + \left(\lambda - \left(\omega + i\mu\right) \right) R_L(\omega + i\mu) \right). \end{split} \] The second factor on the right is of the form $I + A$ where $||A|| \leq 1/2$, and the first factor on the right is invertible by hypothesis, so their product is invertible and hence $\lambda \in \mathbf{res}_X(L)$. Since the radii of the balls around $\omega + i\mu$ on which the resolvent is defined grow asymptotically linearly in $|\mu|$, the union of the original half-plane together with these balls contains a sector $S_{\omega,\theta}$ for some $\theta \in (0,\pi/2)$. The estimate \eqref{ResEst} follows. \end{proof} \subsection{Manifolds of bounded geometry} \label{sec:geo-bkgd} As stated in the introduction, we consider the sectoriality of a general class of \emph{admissible} elliptic differential operators of even order $m = 2m'$, $m' \in \mathbb{N}$, acting between H\"older spaces, $L: \mathcal C^{m+k,\alpha}(M,g)~\to~\mathcal C^{k,\alpha}(M,g)$, where $(M,g)$ is a complete manifold with bounded geometry of order at least $m+ k + \alpha'$ for some $\alpha' \in (\alpha, 1)$. We may consider any such $L$ as an {\it unbounded} operator on $\mathcal C^{k,\alpha}(M,g)$. In this paper we work exclusively with the `little' H\"older spaces, which by definition are the closure of $\mathcal C^\infty$ in the corresponding H\"older norm. For a given $k, \alpha$, the little H\"older space of this order is a separable closed subspace of the full H\"older space. To lighten the notational burden, we denote this little space by the same symbol $\mathcal C^{k,\alpha}$, with the understanding that we never use the big H\"older spaces here. In this section we begin with a description of manifolds of bounded geometry, and direct the reader to Subsections \ref{subsec:funct-spcs} and \ref{sec-admiss} for more detail on the operators and function spaces which appear below. Briefly, key examples of the operators we consider are elliptic operators arising naturally in geometric analysis of the form \[ L = (\nabla^* \nabla)^{m'} + \text{lower order terms} \] where $\nabla$ is the covariant derivative acting on sections of some Hermitian vector bundle $V$ over $M$, and where the lower order terms involve the curvature tensor of the underlying metric.
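Before proceeding, it may be useful to record the flat model case, where the sectoriality estimate can be checked by hand; this is a standard computation, included here only for orientation. For $L = \Delta = \nabla^* \nabla$ acting on $L^2(\mathbb R^n)$, the Fourier transform converts $\lambda I - L$ into multiplication by $\lambda - |\xi|^2$, so
\[
\| (\lambda I - L)^{-1} \|_{\mathcal L(L^2)} = \sup_{\xi \in \mathbb R^n} \frac{1}{\big| \lambda - |\xi|^2 \big|} = \frac{1}{\mathrm{dist}\left( \lambda, [0,\infty) \right)},
\]
and an elementary estimate gives $\mathrm{dist}(\lambda, [0,\infty)) \geq \sin(\theta)\, |\lambda - \omega|$ for all $\lambda \in S_{\omega,\theta}$ when $\omega < 0$. Thus conditions S1 and S2 hold, with $\mathbf{spec}_{L^2}(\Delta) = [0,\infty)$. The content of Theorem \ref{thm:main-A} is that operators of the form just described satisfy the analogous estimate on H\"older spaces over an arbitrary manifold of bounded geometry, where no Fourier transform is available.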
More generally, we also consider such operators acting between weighted (little) H\"older spaces: \begin{equation} L: \, \mathfrak w \, \mathcal C^{m+k,\alpha}(M,g) \longrightarrow \mathfrak w \, \mathcal C^{k,\alpha}(M,g), \label{wtL} \end{equation} where $\mathfrak w$ is a weight function satisfying certain uniformity hypotheses, see Definition \ref{wthyp}. The mapping \eqref{wtL} is equivalent to \[ (\mathfrak w)^{-1} L \mathfrak w = (\nabla^* \nabla)^{m'} + S: \mathcal C^{m +k ,\alpha}(M,g) \longrightarrow \mathcal C^{k,\alpha}(M,g), \] where $S$ is an operator of order $m-1$ which includes both the conjugate of the lower order terms in $L$ and also $(\mathfrak w)^{-1} [ (\nabla^* \nabla)^{m'}, \mathfrak w]$. Let us begin by recalling the definition of a manifold of bounded geometry: \begin{definition} A complete Riemannian manifold $(M,g)$ is said to have bounded geometry of order $\ell + \alpha'$, where $\ell \in \mathbb N_0$ and $0 \leq \alpha' < 1$, if: \begin{itemize} \item[a)] There exists a radius $r_0 > 0$ such that for every $q \in M$, the exponential map $\exp_q: \{v \in T_qM: |v| < r_0\} \to B_{r_0}(q)$ is a diffeomorphism, i.e., the injectivity radius at $q$ is greater than $r_0$; \item[b)] For every $q \in M$, the components of the pulled back metric $\exp_q^* g$ are bounded in $\mathcal C^{\ell,\alpha'}$ and the components of the matrix inverse of $\exp_q^*g$ are bounded in $\mathcal C^0$ on $\{ v \in T_q M: |v| < r_0\}$, where the bounds are independent of $q$, and hence uniform over $M$. \end{itemize} \end{definition} \begin{remark} We have denoted the fractional part of this uniformity order by $\alpha'$ to distinguish it from the $\alpha$ index in the H\"older spaces we are using. We need the extra room given by the inequality $\alpha' > \alpha$ when taking limits using the Arzela-Ascoli Theorem. \end{remark} It is often easier to check an intrinsic version of condition b). As discussed in \cite{Eichhorn}, for example, if $\ell \geq 1$ and $\alpha'= 0$, then b) is implied by \begin{itemize} \item [b')] $\sup_{j \leq \ell} | \nabla^j \mathrm{Riem} | \leq C_\ell$ for some constant $C_\ell$. \end{itemize} The proof of that implication presumably generalizes without difficulty to the case where $\alpha' \neq 0$. Any compact Riemannian manifold has bounded geometry of order equal to the regularity class of the metric. There are many other natural examples of manifolds with bounded geometry. We list some familiar classes: \begin{itemize} \item[i)] Asymptotically Euclidean or asymptotically conic manifolds, \item[ii)] Manifolds with asymptotically cylindrical ends, \item[iii)] Asymptotically (real) hyperbolic manifolds, \item[iv)] Asymptotically complex hyperbolic manifolds, \item[v)] Any symmetric space $M = G/K$ of noncompact type, with invariant metric $g$, or indeed, any perturbation $g = g_0 + h$ of the symmetric metric $g_0$, where $| \nabla^j h|_{g_0} \leq C_j$ for $j \leq \ell$, \item[vi)] Any infinite cover $(M,g)$ of a compact manifold $(M_0, g_0)$. \end{itemize} Let us briefly recall each of these classes. Before doing so, observe that if $(M,g)$ has bounded geometry of order $\ell + \alpha'$ and if $\tilde{g} = g + h$, where $|h|_g \leq 1-\epsilon$ for some $\epsilon \in (0,1)$ (so that $\tilde{g}$ is boundedly equivalent to $g$) and the $\mathcal C^{\ell,\alpha'}$ norms of the components of $h$ are controlled in $g$ normal coordinate charts, then $\tilde{g}$ also has bounded geometry of order $\ell + \alpha'$.
This means that we may as well describe these various classes of spaces in their simplest model forms. Bounded geometry then follows for any metrics which are perturbations of these models in the sense above. We are particularly interested in perturbations which decay to the appropriate model metrics in a suitable sense at infinity, and shall mention the rate of decay in each of these cases. \ \noindent{\bf Asymptotically Euclidean and asymptotically conic metrics.} A Riemannian manifold $(M^n,g)$ is called \emph{conic at infinity} if there exists a compact Riemannian manifold $(Y, h_0)$ of dimension $n-1$, a compact set $K$ in $M$ and a diffeomorphism from $M \setminus K$ to $[r_0, \infty) \times Y$, such that \[ g = dr^2 + r^2 h_0. \] More generally, $(M,g)$ is called \emph{asymptotically conic} (AC) if it can be written as the sum of a metric which is conic at infinity and an extra term $k$ which satisfies $|\nabla^j k|_g \leq C r^{-\beta - j}$ for some $\beta > 0$ and for $0 \leq j \leq \ell$, and $[\nabla^\ell k]_{0,\alpha'} \leq C r^{-\beta - \ell - \alpha'}$. In the following examples, we shall simply state a decay rate, e.g.\ $r^{-\beta}$, but with corresponding decay rates on the derivatives implicit. An AC space is called \emph{asymptotically Euclidean} (AE) if $(Y, h_0)$ is isometric to the standard sphere. Elliptic theory on this class of spaces has been very thoroughly studied for several decades; see \cite{Bartnik} for a survey of results from a `classical' perspective, and \cite{Melrose-Mendoza} for another approach which appears frequently below. There are important generalizations of AE and AC spaces that arise in various geometric settings, including the classes of \emph{quasi-asymptotically conic} (QAC) manifolds \cite{DegMaz}, certain of the four-dimensional `gravitational instantons' (of types ALE/F/G/H), along with their higher dimensional generalizations \cite{ChenChen}, etc. \ \noindent{\bf Asymptotically cylindrical metrics.} If $(M^n, g)$ is cylindrical at infinity, then outside some compact set it is isometric to a product cylinder $(a,\infty)\times Y^{n-1}$ with metric $dt^2 + h_0$. Setting $r = e^{-t}$ we arrive at the equivalent form \[ g = \frac{dr^2}{r^2} + h_0, \] which is conformal to the exact conic metric $dr^2 + r^2 h_0$, a fact which is useful for translating results from one setting to the other. The allowable perturbations in this setting decay like $e^{-\beta t}$ as $t \to \infty$, or equivalently, like $r^{\beta}$ as $r \to 0$. \ \noindent{\bf Asymptotically hyperbolic metrics.} Next, suppose that $M$ is a compact Riemannian manifold with boundary. Fix a smooth boundary defining function $\rho$ on $M$, i.e., $\rho \geq 0$ on $M$, $\rho^{-1}(0) = \partial M$, and $d\rho \neq 0$ at the boundary. Fix also a metric $h_0$ on $\partial M$. The class of `exact' asymptotically hyperbolic (AH) metrics consists of metrics taking the form \[ g = \frac{d\rho^2 + h_0}{\rho^2} \] near $\rho = 0$. The metric $\overline{g} = \rho^2 g$ is called a conformal compactification of $g$. Allowable perturbations in this case are tensors which decay like $\rho^\mu$ for some $\mu > 0$. This geometry mimics that of the Poincar\'e ball model of hyperbolic space, where \[ g = \frac{ 4 |dz|^2}{ (1-|z|^2)^2}. \] Thus $\rho = (1-|z|^2)/2$ and the Euclidean metric $|dz|^2$ equals a particular choice of conformal compactification $\bar{g}$.
Since $\rho$ is not canonically defined in terms of $g$, only the conformal class of $\overline{g}$ (and in particular, $\overline{g}|_{T \partial M}$) is intrinsic to $g$. To see that an AH space has bounded geometry, note first that a simple calculation shows that the sectional curvatures of any AH metric tend to $-1$ and covariant derivatives of the curvature tensor tend to $0$, all as $\rho \to 0$. If $q \in M$ and $\rho(q) = \epsilon$, then the $\overline{g}$ ball $B_{\epsilon/2}(q)$ has inradius and diameter which are uniformly bounded away from both $0$ and $\infty$, and the restriction of $g$ to any such ball converges to a hyperbolic metric on a ball of nonzero radius. We refer to \cite{Lee} for more details on this. \ \noindent{\bf Asymptotically complex hyperbolic metrics.} One generalization of this last example is to the class of asymptotically complex hyperbolic manifolds. There are various ways to define these spaces; we refer to \cite{Biquard} for one approach and a more extended discussion than the one below. Proceeding as in the AH case, let $M$ be a compact Riemannian manifold with boundary, of even real dimension $2n$, and $\rho$ a boundary defining function. Suppose that $\eta$ is a contact form on $\partial M$, i.e., $\eta$ is a $1$-form such that $\eta \wedge (d\eta)^{n-1}$ is everywhere nonvanishing. Let $T$ denote the Reeb vector field on $\partial M$, i.e., the unique vector field such that $\eta(T) \equiv 1$ and $d\eta(T, \cdot) \equiv 0$. Finally, choose a set of smooth independent vector fields $X_1, \ldots, X_{2n-2}$ which span the kernel of $\eta$ in $T \partial M$, all on $\partial M$. Let $\eta, \omega_1, \ldots, \omega_{2n-2}$ be the coframe dual to $T, X_1, \ldots, X_{2n-2}$. Fixing a product decomposition of a collar neighborhood of $\partial M$ in $M$, we say that $g$ is (exact) asymptotically complex hyperbolic (ACH) if \[ g = \frac{d\rho^2 + \sum \omega_j^2}{\rho^2} + \frac{\eta^2}{\rho^4} \] in that neighborhood. (Here $\omega_j^2 = \omega_j \otimes \omega_j$ and $\eta^2 = \eta \otimes \eta$.) The difference with the AH case is of course simply that the metric blows up faster in the $\eta^2$ direction. A CR (Cauchy-Riemann) structure involves not only the hyperplane bundle $\mathrm{ker}\, \eta$ but also an endomorphism $J$ on this subbundle which satisfies $J^2 = -I$; however, this almost complex structure is not relevant to these metric asymptotics. An allowable perturbation $k$ again decays like some $\rho^\mu$ as $\rho \to 0$. This mimics a standard representation of the complex hyperbolic metric on the unit ball in $\mathbb C^n$ (with holomorphic sectional curvature $-4$), \[ g = \frac{ g_{\mathrm{Euc}} }{1-r^2} + \frac{r^2 \left( dr^2 + ( J dr )^2 \right) }{(1-r^2)^2}, \] where $g_{\mathrm{Euc}}$ is the Euclidean metric, $r = |z|$, and $z \in \mathbb C^n$. The monograph \cite{Biquard} explains the relationship of this construction to CR geometry of the boundary. Bounded geometry of ACH metrics can be proved quite similarly to the AH case. The `cubes' of approximate radius $1$ centered at a point where $\rho = \epsilon$ have (approximate) dimensions $\epsilon$ in the $\partial_\rho$ and $X_j$ directions and $\epsilon^2$ in the $T$ direction. There are further generalizations to classes of exact and asymptotically quaternion hyperbolic metrics and (asymptotically) octonion hyperbolic planes. These involve generalizing the contact structures used to define the ACH metric; see \cite{Biquard}.
\ \noindent{\bf Other examples.} A Riemannian symmetric space $M = G/K$ of noncompact type, or more generally a Riemannian homogeneous space $M = G/H$ with invariant metric, again has bounded geometry. The definitions are a bit more intricate, and we point to one of the many standard references on the subject, for example \cite{Hel}, at least for the symmetric space case. Infinite covers of compact Riemannian manifolds, where the metric is obtained by pullback via the covering map, are an interesting class of spaces. General results about the $L^2$-resolvent of even the scalar Laplacian on such manifolds are almost nonexistent, except if the covering group is `small' (e.g., amenable). Perhaps surprisingly, we are able to carry out the analysis below for the resolvents of admissible operators on $\mathcal C^{0,\alpha}$, not only on these classes of spaces, but even on general manifolds of bounded geometry. \ \noindent{\bf Pointed limits of manifolds of bounded geometry.} To conclude this subsection, we recall an important construction in the category of manifolds with bounded geometry which shows that this class of spaces is complete in a certain sense. Let $(M,g)$ have bounded geometry of some order $\ell + \alpha'$ and consider the sequence of pointed spaces $(M, g, p_j)$ where $p_j$ is a sequence of points in $M$ which diverges to infinity. Then there is a complete Riemannian manifold $(M_\infty, g_\infty,p_\infty)$, which is the pointed Gromov-Hausdorff limit of the sequence $(M,g,p_j)$. More specifically, for any $R > 0$, the $g$-ball of radius $R$ around $p_j$ in $M$ converges, at least up to passing to a subsequence, as a Riemannian space, to the $g_\infty$ ball of radius $R$ around $p_\infty$. This convergence can be shown to occur in $\mathcal C^{\ell, \alpha''}$ for any $0 < \alpha'' < \alpha'$. This is a standard fact in the `convergence theory' of Riemannian manifolds. This type of construction is originally due to Cheeger, and admits many generalizations. A version that encompasses the particular statement above appears as Theorem 11.36 in Petersen's book \cite{Petersen2016}. (We are grateful to Guofang Wei for pointing out this reference.) As a very brief sketch of how this is proved, one first shows that some subsequence of the $(M, g, p_j)$ converges in the Gromov-Hausdorff topology to $(M_\infty, g_\infty, p_\infty)$; this topology is, of course, quite weak and in fact one initially only obtains its metric space structure. Further analysis shows that this convergence happens in a much stronger topology. Indeed, using the bounds on the metric tensor in normal coordinates, we may extract a subsequence of metrics on each ball $B_g(p_j, c)$ which converges in $\mathcal C^{\ell,\alpha''}$. The curvature bounds imply that any ball of larger radius $R$ can be covered by a controlled number of balls of radius $r_0$. By a successive diagonalization argument, the metric tensor converges on each one of these. The lower bound on the injectivity radius is used to show that there is no collapsing in the limit. Here are two examples of this sort of convergence. If $(M,g)$ has an asymptotically cylindrical end, and if $p_j$ diverges along this cylindrical end, then the corresponding limit space $M_\infty$ is the Riemannian product cylinder $\mathbb R \times Y$. Similarly, if $(M,g)$ is asymptotically hyperbolic, and $p_j$ diverges to some point $\bar{p}$ on the boundary of the conformal compactification of $M$, then $M_\infty$ is a copy of hyperbolic space $\mathbb H^n$. 
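Likewise, if $(M,g)$ is asymptotically Euclidean, or more generally asymptotically conic, and $p_j$ is any divergent sequence, then (again up to passing to a subsequence) the limit space is flat Euclidean space $\mathbb R^n$, regardless of the cross-section $(Y, h_0)$: the curvature of a conic metric decays like $r^{-2}$, and geodesic loops through points at distance $r$ from the vertex have length on the order of $r$, so the limit is complete, flat and has infinite injectivity radius.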
These illustrate that the limit space can `lose' a lot of the topology of the original manifold $M$. \subsection{Function spaces} \label{subsec:funct-spcs} The definition of bounded geometry of order $\ell + \alpha'$ relies on the H\"older norms in uniform local coordinate charts, and the standard local Euclidean definition can be used. To define H\"older spaces globally on $M$ we need to say a bit more. Fix a manifold $(M,g)$ with bounded geometry of order $\ell +\alpha'$. We define the H\"older spaces $\mathcal C^{\kappa,\alpha}(M,g)$ for any $0 \leq \kappa \leq \ell$ and $0 < \alpha < \alpha' < 1$. Introduce the $\mathcal C^{\kappa,\alpha}$ norm \[ ||u||_{\kappa,\alpha} := \sum_{j = 0}^\kappa \sup |\nabla^j u|_g + \sup_B \sup_{x, y \in B \atop x \neq y} \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{ \mathrm{dist}_g(x,y)^\alpha}. \] The supremum in the final term on the right is over all geodesic balls $B \subset M$ of radius $r_0$ (as given in the definition of bounded geometry). We assume that the tensor bundles over each such $B$ are trivialized, for example using the exponential map based at the centers of these balls. If we are considering sections of some other vector bundle $V \to M$, we assume the existence of a uniform set of local trivializations, relative to some `uniform' cover of $M$ by balls $B_{r_0}(q_j)$. The details are straightforward and left to the reader. As noted in the Introduction, in this paper we use the \emph{`little' H\"older spaces} $\mathcal C^{\kappa,\alpha}(M,g)$ exclusively; by definition, these are the completion of $\mathcal C^\infty$ with respect to the norms above. These little H\"older spaces have several advantages: they are separable, and it is possible to use approximation arguments with them; further, one can easily define interpolation spaces that allow access to maximal regularity theory for nonlinear applications (see Chapter 35 of \cite{RFIV}). We recall the simple and useful fact that an equivalent norm is obtained by taking the supremum over all $x \neq y$ in $M$ in the final H\"older seminorm, rather than just over $x, y \in B$; in other words, we claim that \[ \sup_{x, y \in M \atop x \neq y} \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{\mathrm{dist}_g(x,y)^\alpha} \leq C ||u||_{\kappa,\alpha} \] for some fixed $C > 0$. If $\mathrm{dist}_g(x,y) < r_0$, this is obvious, while if $\mathrm{dist}_g(x,y) \geq r_0$, then \[ \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{\mathrm{dist}_g(x,y)^\alpha} \leq 2 r_0^{-\alpha} \sup |\nabla^\kappa u|_g. \] It is also clear that for each $B$, \[ \sup_{x,y \in B \atop x \neq y} \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{\mathrm{dist}_g(x,y)^\alpha} \leq \sup_{x,y \in M \atop x \neq y} \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{\mathrm{dist}_g(x,y)^\alpha}. \] Hence, taking the supremum over all balls $B$ on the left, we conclude that we may define the $\mathcal C^{\kappa,\alpha}$ seminorm either as we have done initially, or else by replacing the final term in that definition with one where the supremum is taken over all distinct $x, y \in M$. It is clear that it makes sense to consider the spaces $\mathcal C^{\kappa,\alpha}$ on a manifold of bounded geometry of order $\ell + \alpha'$ only when $\kappa \leq \ell$, and, if $\kappa = \ell$, only when $\alpha \leq \alpha'$.
We assume the strict inequality $\alpha < \alpha'$ because, as described in the last subsection, the pointed limit of spaces of order $\ell + \alpha'$ may only have order $\ell + \alpha''$ for any $\alpha'' < \alpha'$. (Actually, by a standard real analysis argument, the limiting space does have order $\ell + \alpha'$, but the convergence only takes place in the weaker norm, which may be important at certain points.) We also define the family of {\it semiclassical} H\"older spaces $\mathcal C^{\kappa,\alpha}_\varepsilon(M,g)$, where $\varepsilon$ is a parameter in $(0,1]$. The name comes from their natural association with families of operators undergoing semiclassical degeneration, which is described below. These spaces appear in a fundamental way in the arguments of Section \ref{sec:proofThm}. For any given $\kappa \leq \ell$ and $\alpha$ (as usual with $\alpha \leq \alpha'$ if $\kappa = \ell$), the space $\mathcal C^{\kappa,\alpha}_\varepsilon(M,g)$ consists of all functions $u$ such that \[ ||u||_{\kappa, \alpha, \varepsilon} := \sum_{j = 0}^\kappa \varepsilon^j \sup |\nabla^j u|_g + \varepsilon^{\kappa + \alpha} \sup_B \sup_{x, y \in B \atop x \neq y} \frac{ |\nabla^\kappa u(x) - \nabla^\kappa u(y)|}{ \mathrm{dist}_g(x,y)^\alpha} < \infty. \] In other words, every derivative is accompanied by a power of $\varepsilon$ and the $\alpha$ H\"older seminorm has an additional factor of $\varepsilon^\alpha$. In fact, these semiclassical spaces are simply the ordinary H\"older spaces associated to the family of rescaled metrics $\varepsilon^{-2} g = g_\varepsilon$: \[ \mathcal C^{\kappa,\alpha}_\varepsilon(M,g) \cong \mathcal C^{\kappa,\alpha}(M, g_\varepsilon). \] Indeed, $|\nabla^j u|_{g_\varepsilon} = \varepsilon^j |\nabla^j u|_g$ and $\mathrm{dist}_{g_\varepsilon} = \varepsilon^{-1}\, \mathrm{dist}_g$, so each term of the H\"older norm computed relative to $g_\varepsilon$ agrees with the corresponding term of $\|\cdot\|_{\kappa,\alpha,\varepsilon}$. Clearly each $g_\varepsilon$ has bounded geometry of order $\ell + \alpha'$, and the bounds are uniform as $\varepsilon \to 0$. \begin{remark} From these last remarks, it is clear that in the definition of the semiclassical H\"older seminorm above, we may take the supremum either over all balls $B$ of radius $1$ or alternately of radius $\varepsilon$ with respect to $g$. \label{scnormballs} \end{remark} \subsection{Admissible differential operators}\label{sec-admiss} We shall prove our main sectoriality estimate for any differential operator $L$ which is strongly elliptic, and satisfies an additional uniformity condition. We begin by recalling that the principal symbol of $L$ of order $m$ is a smooth function $\sigma_m(L)(x, \xi)$ on $T^*M$ which restricts to be a homogeneous polynomial of order $m$ (matrix-valued if $L$ acts between bundles) on each fiber $T_x^*M$. There are various invariant ways to define this principal symbol, but the most familiar is that it is obtained by dropping all terms of order less than $m$ and then replacing each derivative $\partial_x^\alpha$, $|\alpha| = m$, by the monomial $(i \xi)^\alpha$. (The factor of $i$ is customary because of the relationship of this symbol with the Fourier transform.) \begin{definition} We say that $L$ is \emph{strongly elliptic} if $\sigma_m(L)(x,\xi)$ has numerical range (or spectrum, if it is a matrix) contained in a sector in the right half-plane: \[ \mathrm{spec}\, \sigma_m(L)(x,\xi) \subset \{\lambda \in \mathbb C: |\arg(\lambda)| \leq \theta' < \pi/2\} \] for all $(x,\xi) \in T^* M$ with $\xi \neq 0$. \end{definition} It is straightforward to check that if $L$ is strongly elliptic, then its order $m$ is even. If $L$ is symmetric and has real-valued coefficients, then $\sigma_m(L)(x,\xi)$ is real-valued.
For example, if \[ L = (\nabla^* \nabla)^{m/2} + \text{lower order terms}, \] then $\sigma_m(L)(x,\xi) = |\xi|^m$ (or $|\xi|^m$ times the identity matrix), which clearly satisfies this condition. \begin{definition} \label{def-admiss} We say that $L$ is uniform of order $\ell + \alpha'$ (relative to a metric $g$ with bounded geometry of order $\ell + \alpha'$ on a manifold $M$) if the following two conditions are satisfied: \begin{itemize} \item[i)] the pullback by $\exp_q$ of the operator $L$ has coefficients bounded in $\mathcal C^{\ell, \alpha'}$ in each ball $B_{r_0}(q)$, with bounds independent of $q$; \item[ii)] there exists a closed cone $\Gamma$ strictly contained in $\{\lambda \in \mathbb C: \mathrm{Re}\, \lambda > 0\} \cup \{0\}$ such that if $\zeta \in \mathbb C \setminus \Gamma$, then the endomorphism $\zeta I - \sigma_m(L)(x,\xi)$ is invertible, with the inverse satisfying \[ || (\zeta I - \sigma_m(L)(x,\xi))^{-1}|| \leq C (1 + |\xi|)^{-m} \] for some fixed $C$ (depending on $\zeta$) for all $(x,\xi)$ in the cotangent bundle of $M$. \end{itemize} \label{admop} \end{definition} \begin{definition} The elliptic differential operator $L$ is called admissible if it is both strongly elliptic and uniform of order $\ell + \alpha'$. \label{admop2} \end{definition} It is important for our purposes that admissibility is preserved under passage to a limiting space: \begin{lemma} Suppose that $(M,g)$ has bounded geometry of order $\ell + \alpha'$, and let $p_j$ be a diverging sequence of points in $M$. Let $L$ be an admissible differential operator on $M$, as described above. If $(M, g, p_j)$ converges to some limiting space $(M_\infty, g_\infty, p_\infty)$ in $\mathcal C^{\ell, \alpha''}$, then (some subsequence of) the restrictions of the operator $L$ to balls $B_R(p_j)$ converges in $\mathcal C^{\ell, \alpha''}$, as both $j \to \infty$ and $R \to \infty$, to an operator $L_\infty$ on this limiting space, and any such limiting operator $L_\infty$ obtained in this way is uniform on its space of definition. \label{adm-persists} \end{lemma} \begin{proof} By a diagonalization process using Arzela-Ascoli, it is clear that $L$ induces a limiting operator $L_\infty$ on $M_\infty$, and that $L_\infty$ is again strongly elliptic. (Its coefficients are only in $\mathcal C^{\ell, \alpha''}$ for any $\alpha'' < \alpha'$.) Moreover, since the coefficient bounds in i) and the symbol bounds in ii) of Definition \ref{admop} hold uniformly over $M$, they are preserved under $\mathcal C^{\ell,\alpha''}$ convergence, so any limiting operator $L_\infty$ is again uniform on $M_\infty$. \end{proof} There is an additional important relationship between $L$ and its limiting operators: \begin{prop} There is an inclusion \[ \bigcap \mathbf{res}_{\mathcal C^{k,\alpha}}\, (L_\infty) \supset \mathbf{res}_{\mathcal C^{k,\alpha}}\,(L), \] or equivalently, \[ \bigcup \mathbf{spec}_{\mathcal C^{k,\alpha}} (L_\infty) \subset \mathbf{spec}_{\mathcal C^{k,\alpha}}(L). \] In both cases, the intersection or union is over all possible limiting spaces $M_\infty$ and model operators $L_\infty$. \label{limspecrel} \end{prop} \begin{proof} Using this second formulation, suppose that $\lambda \in \mathbf{spec}_{\mathcal C^{k,\alpha}}(L_\infty)$ for some limit $L_\infty$, i.e., $\lambda I - L_\infty$ is not boundedly invertible on $\mathcal C^{k,\alpha}(M_\infty, g_\infty)$. In the following considerations, note that the natural domain of this unbounded map is $\mathcal C^{k+m,\alpha}(M_\infty, g_\infty)$.
There are three ways that invertibility might fail: \begin{itemize} \item[a)] there exists a nontrivial function $u \in \mathcal C^{m+k,\alpha}$ such that $L_\infty u = \lambda u$; \item[b)] the range of $\lambda I - L_\infty$ is dense in $\mathcal C^{k,\alpha}$, but not closed; \item[c)] the closure of the range of $\lambda I - L_\infty$ is equal to some proper closed subspace of $\mathcal C^{k,\alpha}$. \end{itemize} We show that each of these three possibilities is incompatible with the assumption that $\lambda \not\in \mathbf{spec}_{\mathcal C^{k,\alpha}}(L)$. In the first case, suppose that $u$ is a $\mathcal C^{m+k,\alpha}$ solution of this limiting equation. Choose a sequence of radii $R_i \to \infty$, and let $\chi_i$ be a sequence of smooth cutoff functions on $M_\infty$ such that \[ \chi_i = \begin{cases} & 1\ \mbox{on}\ B_{R_i/2}(p_\infty) \\ & 0\ \mbox{outside}\ B_{R_i}(p_\infty) \end{cases}, \] $0 \leq \chi_i \leq 1$ everywhere, and \[ |\nabla^q \chi_i| \leq C/R_i^q \] for $q \leq m$ and with $C$ independent of $i$. For fixed $i$, the ball $B_{R_i}(p_\infty)$ in $M_\infty$ is a limit as $j \to \infty$ of balls $B_{R_i}(p_j)$ in $M$. Using this, we may transplant $\chi_i u$ to $B_{R_i}(p_j)$ for $j = j(i)$ sufficiently large, and then extend it to equal $0$ on the rest of $M$. We now compute that \[ (\lambda I - L) (\chi_i u) := h_i =\chi_i ( \lambda I - L_\infty) u + \chi_i (L_\infty - L) u + [\chi_i, L] u. \] Clearly, there exist constants $c_1, c_2$ such that $0 < c_1 \leq ||\chi_i u||_{k,\alpha} \leq c_2$, uniformly in $i$, and using the limiting properties in this construction, $||h_i||_{k,\alpha} \to 0$. Since $||\chi_i u||_{k,\alpha} \geq c_1 > 0$ while $||(\lambda I - L)(\chi_i u)||_{k,\alpha} \to 0$, the operator $\lambda I - L$ cannot be boundedly invertible on $\mathcal C^{k,\alpha}$, contrary to the choice of $\lambda$. Next suppose that $(\lambda I - L_\infty)$ has dense but nonclosed range, so it does not have a bounded inverse. There exists a sequence $u_i$ on $M_\infty$ with infinite dimensional span such that $||u_i||_{k,\alpha} = 1$ and $||(\lambda I - L_\infty) u_i||_{k,\alpha} \to 0$. Precisely the same transplantation argument used above shows that $c_1 \leq ||\chi_i u_i||_{k,\alpha} \leq c_2$ and $||( \lambda I - L) (\chi_i u_i)||_{k,\alpha} \to 0$ on $M$, which is once again a contradiction. Finally, suppose that the range of $(\lambda I - L_\infty)$ is equal to, or at least dense in, some proper closed subspace. Then the dual operator $(\bar{\lambda} I - L_\infty^*)$ has nontrivial nullspace in $(\mathcal C^{k,\alpha})^*$. This dual space is distributional, of course, but since $L_\infty^*$ is elliptic, any element $v$ of its nullspace is again as regular as the coefficients of the operator and the metric allow, and this element must be bounded as well (else it would be easy to find some $\phi \in \mathcal C^{k,\alpha}$ such that $\langle v, \phi \rangle$ is undefined). We are then in the situation of the first case, once we observe that the dual operator $L^*$ is admissible. This completes the proof. \end{proof} \subsection{Examples of admissible operators} \label{subsec:geomlap} The main examples of admissible operators that we have in mind are generalized Laplacians on Riemannian manifolds with bounded geometry, or more generally, operators of the form $(\nabla^* \nabla)^{m/2} + S$, where $S$ is an operator of order $m-1$ usually closely associated with the metric $g$. We begin with the second order case.
By definition a generalized Laplacian is an operator of the form \[ \nabla^* \nabla + \mathcal K, \] acting on sections of some tensor bundle $E$ over $M$ (or slightly more generally, a twisted spin bundle -- the key feature is that its connection is induced from the Levi-Civita connection for $g$). Here $\nabla^*$ is the adjoint of the covariant derivative with respect to the natural inner product on each fiber of $E$ and with respect to the volume form $dV_g$. The term $\mathcal K$ is a symmetric endomorphism of $E$ obtained via contractions of sums of tensor products of the curvature tensor and its covariant derivatives. The following is a list of standard examples of such operators: \begin{itemize} \item[a)] The scalar Laplacian $\Delta_g = \nabla^* \nabla$ acts on the trivial rank $1$ bundle; slightly more generally, we also consider the Hodge Laplace operator $\Delta_{g,p} = d \delta + \delta d$ acting on sections of the bundle of exterior $p$-forms, $p = 0, \ldots, n$. The original Weitzenb\"ock formula states that \[ \Delta_{g,p} = \nabla^* \nabla + \mathcal K_p, \] where $\mathcal K_p$ is an endomorphism of $\bigwedge^pM$ constructed from the curvature tensor. For example, $\mathcal K_1 = \mathrm{Ric}$, considered as a symmetric endomorphism on $1$-forms. \item[b)] Next, consider the Lichnerowicz Laplacian \[ \nabla^* \nabla + 2 (\mathrm{Ric} - \mathrm{Riem}), \] where $\mathrm{Ric}$ is the Ricci tensor and $\mathrm{Riem}$ the full curvature tensor; here $\mathrm{Ric}$ and $\mathrm{Riem}$ act as symmetric endomorphisms on symmetric $2$-tensors via \begin{equation*} \begin{split} h_{ij} & \mapsto (\mathrm{Ric}(h))_{ij} = \frac12 \left( \mathrm{Ric}_{ik} h^k_j + \mathrm{Ric}_{jk} h^k_i\right), \\ h_{ij} & \mapsto (\mathrm{Riem}(h))_{ij} = R_{ipjq}h^{pq}. \end{split} \end{equation*} \item[c)] Generalizing a) in a different way is the conformal Laplacian \[ \nabla^* \nabla + \frac{n-2}{4(n-1)} R_g, \] acting on scalar functions, where $R_g$ is the scalar curvature of the metric. \end{itemize} \medskip Each of the operators in the list above is symmetric, i.e., \[ \langle L u, v \rangle = \langle u, L v \rangle\ \ \mbox{for}\ \ \ u, v \in \mathcal C^\infty_0(M). \] A classical theorem due to Chernoff \cite{Chernoff} states that, because $g$ is complete, each of these operators has a unique self-adjoint extension as an unbounded operator on $L^2(M, dV_g)$. Self-adjointness guarantees that the $L^2$ spectrum lies on the real line. If $g$ has bounded geometry of high enough order, the pointwise norm of the endomorphism $\mathcal K$ is uniformly bounded. Since $\nabla^* \nabla \geq 0$, we deduce that \[ \nabla^* \nabla + \mathcal K \geq -C \Longrightarrow \mathbf{spec}_{L^2}( \nabla^* \nabla + \mathcal K) \subset [-C, \infty). \] There are many interesting higher order elliptic operators associated to the metric $g$. The most well-known are the higher order ``GJMS operators'', which generalize the conformal Laplacian above. The GJMS operator $P_m$ of order $m$ is a conformally covariant operator which is simply equal to $(\nabla^*\nabla)^{m/2}$ if $g$ is the flat Euclidean metric, but in general has a (complicated) set of lower order terms involving the curvature tensor. This operator exists only for $m \leq n$ if $n$ is even, and for all $m$ if $n$ is odd. A (very thoroughly studied) example is the Paneitz operator \[ P_4 = \Delta^2 - \delta ( (n-2) J -4V) d + (n-4)Q, \] where $V$ is the so-called Schouten tensor of the metric $g$, $J$ its trace, and $Q$ an associated scalar quantity called the $Q$-curvature.
It is known \cite{Graham-Zworski} that these operators are symmetric, and the same reasoning as above implies that the $L^2$ spectrum lies in a half-line $[-C,\infty)$. Notice that we have said nothing about the spectrum of any of these operators on $\mathcal C^{0,\alpha}(M,g)$, and in general the relationship between the $\mathcal C^{0,\alpha}$ and the $L^2$ spectrum may be quite difficult to understand. In many geometric problems, however, we actually wish to study the action of $L$ not on $\mathcal C^{0,\alpha}$ itself, but between some {\it weighted} H\"older spaces: \begin{equation} L: \mathfrak w\, \mathcal C^{m,\alpha}(M,g) \longrightarrow \mathfrak w\, \mathcal C^{0,\alpha}(M,g), \label{mapwhs} \end{equation} where, by definition, $\mathfrak w$ is a strictly positive $\mathcal C^\infty$ (or $\mathcal C^{\ell,\alpha'}$) function and \[ \mathfrak w\, \mathcal C^{\kappa,\alpha}(M,g) = \{ u = \mathfrak w\, v: v \in \mathcal C^{\kappa,\alpha}(M,g)\}. \] Observe that the mapping \eqref{mapwhs} is equivalent to \begin{equation} L_{\mathfrak w} = {\mathfrak w}^{-1} L \mathfrak w: \mathcal C^{m,\alpha}(M,g) \longrightarrow \mathcal C^{0,\alpha}(M,g). \label{mapwhs2} \end{equation} Observe also that since $\lambda I - L_{\mathfrak w} = \mathfrak w^{-1}( \lambda I - L) \mathfrak w$, the spectra of \eqref{mapwhs} and \eqref{mapwhs2} are the same. In most of the specific examples of manifolds of bounded geometry we have given, it is possible to prove that there exist weight functions $\mathfrak w$ for which \eqref{mapwhs} (or equivalently \eqref{mapwhs2}) is Fredholm, i.e., has closed range and finite dimensional kernel and cokernel. It is sometimes possible to choose $\mathfrak w$ so that $\mathfrak w\, \mathcal C^{0,\alpha}(M,g) \subset L^2$, and if this is the case, and if $f \in \mathfrak w\, \mathcal C^{0,\alpha}(M,g)$, then so long as $\lambda \not\in \mathbf{spec}_{L^2}(L)$, there exists a function $u \in L^2$ with $(\lambda I - L)u = f$. Local elliptic regularity implies that $u \in \mathcal C^{m,\alpha}$ on each ball $B_{r_0}(p)$. If we could then somehow show that $u \in \mathfrak w\, \mathcal C^{m,\alpha}$, we would have obtained information about the spectrum of $L$ acting on $\mathfrak w\, \mathcal C^{0,\alpha}(M,g)$. With this in mind, we now list the assumptions we impose on these weight functions. \begin{definition} A weight function $\mathfrak w$ is called uniform with respect to $L$ if: \begin{itemize} \item[a)] the mappings \eqref{mapwhs} and \eqref{mapwhs2} are Fredholm; \item[b)] the conjugated operator $L_{\mathfrak w}$ is admissible with respect to the metric $g$. \end{itemize} \label{wthyp} \end{definition} It is clear that $L_{\mathfrak w}$ is strongly elliptic if $L$ is, since they have the same leading order terms. The uniformity of $L_{\mathfrak w}$ imposes a strong condition on the weight function and its derivatives. For example, if $L$ is the scalar Laplacian, then \[ \Delta_{\mathfrak w} = \Delta + \mathfrak w^{-1} [\Delta, \mathfrak w] = \Delta - 2 \frac{\nabla \mathfrak w}{\mathfrak w} \cdot \nabla + \frac{\Delta \mathfrak w}{\mathfrak w} \] involves first and second derivatives of $\mathfrak w$. Thus all these lower order terms must be uniform in the sense we have described earlier. \subsection{The resolvent on asymptotically hyperbolic spaces} We conclude this section with a description of one particular setting, namely the class of asymptotically hyperbolic spaces, where there is a somewhat more direct path to understanding the $\mathcal C^{0,\alpha}$ spectrum of generalized Laplacians.
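For orientation, here is the simplest example of a uniform weight. On an asymptotically hyperbolic space with boundary defining function $x$ (the setting described in detail just below), take $\mathfrak w = x^\mu$. For the exact normal form $g = (dx^2 + h)/x^2$ one has $|dx/x|_g = 1$, and $\Delta x^\mu = \mu(n-1-\mu)\, x^\mu + \mathcal O(x^{\mu+1})$, so the first and zeroth order terms in the conjugated operator displayed above are bounded, along with the appropriate number of derivatives; this gives condition b) of Definition \ref{wthyp}, while condition a) holds for $\mu$ in a suitable interval determined by the indicial roots of $L$, a notion we recall next.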
Suppose that $(M,g)$ is asymptotically hyperbolic, as described in Section \ref{sec:geo-bkgd}, and let $L = \nabla^* \nabla + \mathcal K$. Choose coordinates $(x,y)$ on $M$ near the boundary of $M$, where $x$ is a boundary defining function and $y$ is a local coordinate on the boundary, extended to a collar neighborhood of the boundary. Using this we identify the collar neighborhood with $[0,1) \times \partial M$. It is known that we can choose these functions in such a way that \[ g = \frac{dx^2 + h(x,y)}{x^2}, \] where $x \mapsto h(x, \cdot)$ is a family of tensors on $\partial M$ in this collar neighborhood decomposition, and $h(0,y)$ is any prescribed metric representing the conformal class on $\partial M$ associated to $g$. We have already noted that the $L^2$ spectrum of $L$ lies in some half-line $[-C,\infty)$, hence $\mathbf{res}_{L^2}(L) \supset \mathbb C \setminus [-C,\infty)$. Let $R_L$ denote both the resolvent of $L$ as an abstract operator and its Schwartz kernel, which is a distribution on $M \times M$. This distribution, $R_L(\lambda; x, y, \tilde{x}, \tilde{y})$, is singular along the diagonal, where $x = \tilde{x}$ and $y = \tilde{y}$, and has additional singularities along the boundaries where $x \to 0$ or $\tilde{x} \to 0$. The nature of these singularities can be understood in a very detailed way using the methods of geometric microlocal analysis. We refer to \cite{MazzeoEdge} for the construction of this distribution. The question we wish to consider is whether there exists a range of $\mu$ for which \begin{equation} R_L: x^\mu \mathcal C^{k,\alpha}(M,g) \longrightarrow x^\mu \mathcal C^{k,\alpha}(M,g), \label{resahmu} \end{equation} is bounded if $\mathrm{Re}\,\lambda \leq -C$, or equivalently, whether the conjugated Schwartz kernel \begin{equation} x^{-\mu} R_{L}(\lambda; x,y, \tilde{x}, \tilde{y}) (\tilde{x})^\mu \label{conjker} \end{equation} acting by convolution induces a bounded mapping on $\mathcal C^{0,\alpha}$. This is true for $\mu$ lying in a certain interval determined by the so-called indicial roots of $L$. To describe this more carefully, recall that an indicial root of $L$ is a number $\gamma \in \mathbb C$ such that \[ L (x^\gamma u(x,y)) = \mathcal O(x^{\gamma+1}), \] where $u$ is any function smooth up to $\partial M$. (We consider only the scalar case for notational simplicity.) It is not hard to see that this can only happen if there is some leading order cancellation, which depends only on an algebraic condition determined by $\gamma$ and the values at $x=0$ of certain of the coefficients of $L$. For a second order scalar operator, this algebraic condition is a quadratic polynomial in $\gamma$, and hence there are two indicial roots. For a higher order operator or system, there are more. Again in this second order setting where $L$ is assumed to be symmetric on $L^2$, these indicial roots take the form \[ \gamma^\pm = \frac{n-1}{2} \pm \zeta_0 \] for some $\zeta_0$ which is either real and nonnegative or else purely imaginary. We can define the indicial roots of $\lambda I - L$ in the same way, and write these as \[ \gamma^\pm(\lambda) = \frac{n-1}{2} \pm \zeta_0(\lambda). \] If $\lambda$ is real and sufficiently negative, then $\zeta_0(\lambda) > 0$. There exists some $C_0 \in \mathbb R$ such that if $\lambda > C_0$ then $\zeta_0(\lambda)$ is purely imaginary. If $\lambda \in \mathbb C \setminus [C_0, \infty)$, then $\mathrm{Re}\, \zeta_0(\lambda) > 0$ and tends to infinity as the distance from $\lambda$ to $[C_0,\infty)$ gets larger.
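As a model computation, take $L = \Delta = \nabla^* \nabla$ acting on functions. Using the normal form $g = (dx^2 + h)/x^2$ above, one checks directly that $\Delta (x^\gamma u) = \gamma(n-1-\gamma)\, x^\gamma u + \mathcal O(x^{\gamma+1})$ for any $u$ smooth up to the boundary, so the indicial roots of $\lambda I - \Delta$ are the solutions of $\gamma^2 - (n-1)\gamma + \lambda = 0$, i.e.,
\[
\gamma^\pm(\lambda) = \frac{n-1}{2} \pm \sqrt{\frac{(n-1)^2}{4} - \lambda}, \qquad \mbox{so that} \quad \zeta_0(\lambda) = \sqrt{\frac{(n-1)^2}{4} - \lambda} \quad \mbox{and} \quad C_0 = \frac{(n-1)^2}{4}.
\]
This is consistent with the classical fact that the $L^2$ spectrum of the Laplacian on hyperbolic space $\mathbb H^n$ is the ray $[(n-1)^2/4, \infty)$.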
One consequence of this is that any $\lambda > C_0$ lies in the continuous $L^2$-spectrum of $L$, see \cite{MazzeoUniCtn}. Fix the half-plane $\mathrm{Re}\, \lambda \leq -B$, and define \[ \delta = \inf_{\mathrm{Re}\, \lambda \leq -B} \mathrm{Re} \, \zeta_0(\lambda). \] Note that, by the remarks immediately above, we can make $\delta$ as large as we like by increasing $B$. We need a key structural theorem from \cite{MazzeoEdge} about the pointwise behavior of the Schwartz kernel of $R_L(\lambda)$: \begin{prop} \label{prop:AHresolventdecay} The distribution $R_L(\lambda; x, y, \tilde{x}, \tilde{y})$ satisfies \begin{equation*} \begin{split} & | R_L( \lambda; x,y, \tilde{x}, \tilde{y})| \leq C x^{\frac{n-1}{2} + \delta},\ \ x \to 0,\ \tilde{x} \geq c > 0; \\ & | R_L( \lambda; x,y, \tilde{x}, \tilde{y})| \leq C \tilde{x}^{\frac{n-1}{2} + \delta},\ \ \tilde{x} \to 0,\ x \geq c > 0. \end{split} \end{equation*} \end{prop} Actually, there is a considerably sharper structural theorem which also describes the precise behavior of $R_L$ as $x, \tilde{x} \to 0$, but for the present purposes we do not need this. We now state and prove a basic result on the spectrum of $L$ acting on weighted H\"older spaces. \begin{theorem} \label{thm:ah} Let $(M^n,g)$ be an asymptotically hyperbolic space and let $L$ be a generalized Laplacian acting on some tensor bundle over $M$. For every $\delta > 0$, there exists $\omega = \omega(\delta) > 0$ so that if \begin{align} \label{eqn:loc-interval} \mu \in \left(\frac{n-1}{2} -\delta,\frac{n-1}{2} + \delta\right), \end{align} then $\mathbf{res}_{x^{\mu} \mathcal C^{0,\alpha}}(L) \supset \{ \lambda: \Re \lambda \leq -\omega\}$. \end{theorem} \begin{proof} For simplicity we confine our discussion to the scalar case. Given a choice of $\delta > 0$, the remarks above concerning $\zeta_0$ show that we need only take $\omega$ large enough to ensure that $\mathrm{Re}\, \zeta_0(\lambda) \geq \delta$ on the half-plane $\mathrm{Re}\, \lambda \leq -\omega$; Proposition \ref{prop:AHresolventdecay} then applies with this value of $\delta$. By that proposition, the conjugated kernel \eqref{conjker} decays at least like $\tilde{x}^{\frac{n-1}{2} + \delta + \mu}$ as $\tilde{x} \to 0$ and at least like $x^{\frac{n-1}{2} + \delta - \mu}$ as $x \to 0$. We must determine whether \[ (\tilde{x}/x)^\mu R_L(\lambda; x, y, \tilde{x}, \tilde{y}): \mathcal C^{0,\alpha}(M,g) \longrightarrow \mathcal C^{0,\alpha}(M,g) \] is bounded, where this kernel acts by \[ u(x,y) \longmapsto \int (\tilde{x}/x)^\mu R_L(\lambda; x, y, \tilde{x}, \tilde{y}) u(\tilde{x}, \tilde{y})\, \frac{d\tilde{x}d\tilde{y}}{\tilde{x}^n}. \] (The singular measure is uniformly equivalent to the $L^2$ measure for $g$.) Since $u$ does not decay, it is certainly necessary for convergence of the integral that the product of these factors (including the singular Jacobian factor) be bounded by $\tilde{x}^{-1 + \epsilon}$ for some $\epsilon > 0$. This implies the necessity of \[ \frac{n-1}{2} + \delta + \mu - n > -1 \Leftrightarrow \mu > \frac{n-1}{2} - \delta. \] On the other hand, the rate of growth (or decay) of the output is determined by the behavior of this conjugated kernel in two regions: the first as both $x, \tilde{x} \to 0$ and the second as $x \to 0$, $\tilde{x} > 0$. We refer to \cite[Theorem 3.27]{MazzeoEdge} for the precise explanation. The kernel is bounded in this first regime, and is bounded by $x^{ (n-1)/2 + \delta - \mu}$ in the second. This is then bounded if and only if \[ \mu < \frac{n-1}{2} + \delta. \] The case of equality must be omitted here because in that special case the solution might behave like $\log x$ as $x \to 0$.
Altogether then, we have argued that \[ (\tilde{x}/x)^\mu R_L(\lambda; x, y, \tilde{x}, \tilde{y}): \mathcal C^{0,0}(M,g) \longrightarrow \mathcal C^{0,0}(M,g) \] is bounded if and only if \begin{equation} \frac{n-1}{2} - \delta < \mu < \frac{n-1}{2} + \delta. \label{limitsmu0} \end{equation} Boundedness on H\"older spaces now follows readily from elliptic estimates. \end{proof} We have described the asymptotically hyperbolic case in some detail, but note that it is possible to prove similar results in the asymptotically conic, cylindrical, complex hyperbolic and symmetric or homogeneous cases described above. There is no such argument (to our knowledge) for infinite covers of compact manifolds. This material was included to indicate that the spectral hypothesis of admissibility can be verified in different ways. In Section \ref{scpc} we describe a different sort of parametrix construction which turns out to be sufficient for our purposes. It works for arbitrary manifolds of bounded geometry, but only if $\lambda$ has sufficiently large negative real part. In many ways, that parametrix construction is simpler than the one needed in the AH case, but is perhaps less familiar. \section{Sectoriality of admissible operators on H\"older spaces} \label{sec:proofThm} We now turn to the proof of the main sectoriality theorem, Theorem \ref{thm:main-A}. The first key observation is that if $L$ is an elliptic differential operator of order $m$, then the sectoriality of $L$ is equivalent to a certain estimate for the {\it semiclassical} resolvent of $L$, which means the following. Define $\varepsilon = |\lambda|^{-1/m}$ and $\zeta = \lambda/|\lambda|$. We then rewrite the operator that appears in Lemma \ref{lem:halfplane} characterizing sectoriality as \[ \lambda (\lambda I - L)^{-1} = \frac{\lambda}{|\lambda|} ( (\lambda/|\lambda|) I - |\lambda|^{-1} L)^{-1} = \zeta ( \zeta I - \varepsilon^m L)^{-1}. \] Disregarding the harmless unit prefactor $\zeta$, the operator $(\zeta I - \varepsilon^m L)^{-1}$ is called the semiclassical resolvent of $L$. Since $\mathrm{Re}\, \lambda \leq \omega$ and we may as well assume that $\omega < 0$, we need only consider $\zeta$ with $\mathrm{Re}\,\zeta < 0$. Altogether then, an equivalent formulation of the sectoriality of $L$ is that \begin{equation} (\zeta I - \varepsilon^m L)^{-1}: \mathcal C^{k,\alpha}(M,g) \longrightarrow \mathcal C^{k,\alpha}(M,g) \label{scformulation} \end{equation} exists and has norm which is uniformly bounded independently of $\varepsilon \in (0, \varepsilon_0)$, for some $\varepsilon_0 >0$, and $\zeta$ with $|\zeta| = 1$, $\mathrm{Re}\, \zeta \leq -c < 0$. In other words, \begin{quotation} {\bf sectoriality is equivalent to the uniform boundedness of the {\it semiclassical} resolvent on regular, i.e., {\it non-semiclassical}, function spaces.} \end{quotation} Part of our assertion is that this resolvent exists as a bounded operator provided $\varepsilon$ is sufficiently small, i.e., $\lambda I - L$ is invertible on $\mathcal C^{k,\alpha}(M,g)$ when $\mathrm{Re}\, \lambda$ is sufficiently negative. As explained at the end of this section, the case $k > 0$ follows from the case $k=0$, so we assume that $X = \mathcal C^{0,\alpha}$ until the final part of this section. We prove this in a series of steps, outlined here and carried out in the rest of this section.
In the first step, we show that $(\zeta I - \varepsilon^m L)^{-1}$ exists as a bounded operator for each sufficiently small $\varepsilon$ and for every $\zeta$ with $\mathrm{Re}\, \zeta < 0$, but with no claim about uniformity. Here it is irrelevant whether standard or semiclassical H\"older spaces (as defined in Section \ref{subsec:funct-spcs}) are used since they are equivalent for each fixed $\varepsilon$. This is a `perturbative result', and follows from the existence of a semiclassical parametrix. We state this result carefully in the next proposition; for the reader's convenience, the parametrix construction, which uses techniques of geometric microlocal analysis, is sketched in Section \ref{scpc}, where this methodology is explained. Next we recall the `easy' semiclassical elliptic Schauder estimate associated to any strongly elliptic semiclassical family $\zeta I - \varepsilon^m L$. This is equivalent to the uniform boundedness of the inverse of this semiclassical operator between {\it semiclassical} H\"older spaces. The main step of the whole proof is to upgrade this to an estimate between standard H\"older spaces $\mathcal C^{0,\alpha}$ which is independent of $\varepsilon > 0$. This is done by establishing uniform bounds for this semiclassical resolvent as a map $\mathcal C^j \to \mathcal C^j$ for $j = 0, 1$, and applying interpolation. \begin{prop} Let $L$ be an admissible operator of order $m$ on a manifold $(M,g)$ of bounded geometry. Then the unbounded operator $\zeta I - \varepsilon^m L: \mathcal C^{0,\alpha}_\varepsilon(M,g) \to \mathcal C^{0,\alpha}_\varepsilon(M,g)$ has a bounded inverse for each sufficiently small $\varepsilon > 0$ and for each $\zeta$ with $|\zeta| = 1$ and $\mathrm{Re}\, \zeta < 0$. \label{scparprop} \end{prop} \begin{proof} This proof uses the machinery of microlocal analysis extensively. A detailed introduction to these methods is provided in Section \ref{scpc}. As is carefully defined and explained in that section, there exists a parametrix $G_\varepsilon$ for $\zeta I - \varepsilon^m L$ which is an element of order $-m$ in the semiclassical uniform pseudodifferential calculus $\Psi^{*,*}_{\mathrm{sc-unif}}(M,g)$ (see Definition \ref{def:sc-pseud}). Thus for each $\varepsilon$, $G_\varepsilon$ is an approximate inverse for $\zeta I - \varepsilon^m L$; its discrepancy from being an exact inverse is captured by the `residual operators' $Q_{j,\varepsilon} \in \Psi^{-\infty,\infty}_{\mathrm{sc-unif}}(M,g)$, $j = 1, 2$, via the identities \[ (\zeta I - \varepsilon^m L) G_\varepsilon = I - Q_{1,\varepsilon}, \quad G_\varepsilon (\zeta I - \varepsilon^m L) = I - Q_{2,\varepsilon}. \] Each $Q_{j,\varepsilon}$ is a smoothing operator (this is the meaning of the first superscript $-\infty$) with Schwartz kernel supported in some fixed neighborhood of the diagonal $\{(z,\tilde{z}): \mathrm{dist}_g(z, \tilde{z}) \leq C\}$, and vanishing to all orders as $\varepsilon \searrow 0$ (which is the meaning of the second superscript, $+\infty$). In more detail, $G_\varepsilon$ and the $Q_{j,\varepsilon}$ are one-parameter families of operators, with Schwartz kernels $G(\varepsilon, z, \tilde{z})$ and $Q_{j}(\varepsilon, z, \tilde{z})$, $z, \tilde{z} \in M$. For each $\varepsilon > 0$, $G(\varepsilon, z, \tilde{z})$ is the Schwartz kernel of an ordinary pseudodifferential operator of order $-m$ which is a parametrix for $\zeta I - \varepsilon^m L$, and the $Q_j$ are the smoothing error terms. These Schwartz kernels vary smoothly in $\varepsilon$ for $\varepsilon > 0$.
The important new feature is their behavior as $\varepsilon \to 0$. First, if $z \neq \tilde{z}$, then $G(\varepsilon, z, \tilde{z})$ and $Q_j(\varepsilon, z, \tilde{z})$ decay faster than $\varepsilon^N$ for every $N$. This convergence is uniform on any region where $\mathrm{dist}_g(z, \tilde{z}) \geq c''$, for each fixed $c'' > 0$. The behavior of $G(\varepsilon, \cdot, \cdot)$ near the diagonal as $\varepsilon \to 0$ requires a bit more work to describe; this is done in Section \ref{scpc}. On the other hand, for $j = 1, 2$, $Q_{j}(\varepsilon, z, \tilde{z}) \in \mathcal C^\infty([0,\varepsilon_0) \times M^2)$, and these kernels decay rapidly along with all derivatives as $\varepsilon \to 0$, uniformly on $M \times M$. This construction works assuming that $\mathrm{Re}\, \zeta < 0$, but the rate of decay (which is actually exponential) diminishes as $\mathrm{Re}\, \zeta \to 0$. It is straightforward to deduce from this structure that $||Q_{j,\varepsilon}||_{\mathcal L(\mathcal C^{0,\alpha})} \to 0$ as $\varepsilon \to 0$. Hence both $(I - Q_{1,\varepsilon})^{-1}$ and $(I - Q_{2,\varepsilon})^{-1}$ exist as bounded operators on $\mathcal C^{0,\alpha}$ for any fixed sufficiently small $\varepsilon > 0$. Thus we can write \[ (\zeta I - \varepsilon^m L)^{-1} = G_\varepsilon (I - Q_{1,\varepsilon})^{-1} = (I - Q_{2,\varepsilon})^{-1} G_\varepsilon ; \] this proves that $(\lambda I - L)^{-1}$ exists for any $\lambda$ with $\mathrm{Re}\, \lambda$ sufficiently negative. \end{proof} In the next step we continue in this same semiclassical vein: \begin{prop}[Semiclassical elliptic estimate] \label{lemma:sc} There exists a constant $C > 0$ such that for all $\varepsilon \in (0,1)$, $u \in \mathcal C^{m,\alpha}_\varepsilon(M,g)$ and $\zeta$ with $|\zeta| = 1$ and $\mathrm{Re}\, \zeta \leq c' < 0$, \begin{equation} \|u\|_{m,\alpha,\varepsilon} \leq C \left( \| (\zeta I - \varepsilon^m L) u\|_{0,\alpha,\varepsilon} + \sup_M |u| \right). \label{scholderest} \end{equation} \end{prop} \begin{proof} As already hinted at in the earlier definition of the semiclassical H\"older spaces, we show that \eqref{scholderest} is simply the `standard' Schauder estimate relative to the metric $g_{\varepsilon} = \varepsilon^{-2} g$. If $z$ is a normal coordinate for $g$ in a ball $B(r_0)$ of radius $r_0$ about any point, then $w = z/\varepsilon$ is a normal coordinate for the metric $g_\varepsilon = \varepsilon^{-2}g$ on a ball of radius $r_0/\varepsilon$. Indeed, \[ g = g_{ab}(z) dz^a dz^b= (\delta_{ab} + E_{ab}(z)) dz^a dz^b, \qquad |E|_g = \mathcal O(|z|^2), \] hence \[ g_{\varepsilon}= (\varepsilon^{-2} \delta_{ab} + \varepsilon^{-2} E_{ab}(\varepsilon w)) dz^a dz^b = (\delta_{ab} + E_{ab}(\varepsilon w)) dw^a dw^b. \] Using the hypothesis of bounded geometry, this shows that the coefficients and derivatives of $g_{\varepsilon}$ are uniformly controlled in the rescaled normal coordinates. Denoting this dilation operator on a given ball by $S_\varepsilon$, the admissibility hypothesis shows that the rescaled operator $\varepsilon^m S_\varepsilon^*L$ is strongly elliptic on each such ball. The standard local elliptic estimate on any geodesic ball $B$ for $g_{\varepsilon}$ states that if $B'$ is the ball of half the radius and same center, then for all $u \in \mathcal C^{m,\alpha}(g_{\varepsilon})$, \[ \| u \|_{B', m, \alpha, g_{\varepsilon}} \leq C \left( \|(\zeta I - \varepsilon^m L) u\|_{B, 0, \alpha, g_{\varepsilon}} + \sup_B |u| \right).
\] Taking the supremum of the right hand side over all balls $B$ of fixed radius (provided by bounded geometry), and then taking the supremum over the corresponding balls $B'$ on the left yields the global estimate. Crucially, the constant $C$ is independent of $\varepsilon \leq 1$. The proof is now completed by observing that the usual H\"older norm for $\mathcal C^{m,\alpha}(g_{\varepsilon})$ is precisely the same as the semiclassical H\"older norm for $\mathcal C^{m,\alpha}_\varepsilon(g).$ \end{proof} We now establish the $\mathcal C^0$ version of the sectoriality estimate. \begin{prop} There is a constant $C > 0$ such that \[ ||u||_{0} \leq C ||(\zeta I - \varepsilon^m L)u||_{0} \] for all unit $\zeta$ with $\mathrm{Re}\, \zeta \leq c' < 0$, $\varepsilon \in (0,1]$ and $u \in \mathcal C^{m,\alpha}(M,g)$. \label{prop3.3} \end{prop} \begin{proof} If this assertion were false, there would exist sequences $\zeta_j$, $\varepsilon_j $ and $u_j \in \mathcal C^{m,\alpha}$ such that \begin{equation} \| u_j \|_{0} > j \| ( \zeta_j - \varepsilon_j^m L) u_j \|_0. \end{equation} Replace $u_j$ by $v_j = u_j / \| u_j \|_0$ and set $f_j = ( \zeta_j - \varepsilon_j^m L ) v_j$, so that $\|v_j\|_0=1$ and $\|f_j\|_0 \leq 1/j \to 0$. Passing to a subsequence, we assume that $\zeta_j \to \zeta_*$. Next, choose a point $p_j \in M$ where $|v_j(p_j)| > 1/2$. By virtue of the bounded geometry of $(M,g)$, the restrictions of the metric $g$ and of the coefficients of the operator $L$ (expressed in normal coordinates) to the balls $B_j := B_{r_0}(p_j)$ are bounded in $\mathcal C^{m,\alpha}$, independently of $j$. There are two cases to consider. First suppose that $\varepsilon_j \to \varepsilon_* > 0$. If the $p_j$ remain in a compact set of $M$, then it is straightforward to extract a limit $v \in \mathcal C^{0,\alpha}$ of the sequence $v_j$ which is not identically zero, and which satisfies $(\zeta_* I - \varepsilon_*^m L) v = 0$. This is impossible given our hypothesis that $\varepsilon_*^{-m}\zeta_*$ does not lie in the spectrum. Now turn to the case where the sequence $p_j$ diverges. Following the discussion of ``pointed limits of manifolds of bounded geometry'' in Section \ref{sec:geo-bkgd}, choose a subsequence so that $(M,g, p_j)$ converges to a limiting space $(M_\infty, g_\infty, p_\infty)$ as pointed Riemannian spaces in the $\mathcal C^{\ell,\alpha'}$ topology, and so that $L$ converges in this construction to an operator $L_\infty$ on $M_\infty$. By Lemma~\ref{adm-persists}, $L_\infty$ is admissible. Since $1/2 \leq |v_j(p_j)| \leq 1 = \sup |v_j|$, it follows as before that some subsequence of these functions converges to a nontrivial limiting function $v_\infty$ on $M_\infty$ which satisfies $(\zeta_* I - \varepsilon_*^m L_\infty) v_\infty = 0$. Clearly $|v_\infty| \leq 1$ everywhere and by local Schauder estimates, there exists a constant $C$ such that $1 \leq ||v_\infty||_{0,\alpha} \leq C$. We may now employ exactly the same transplantation argument as in the proof of Lemma \ref{adm-persists} to show that the existence of this solution $v_\infty$ contradicts the fact that $\varepsilon_*^{-m} \zeta_*$ does not lie in the spectrum of $L$.
The only minor modification from that proof is that we now write \begin{multline*} ( \zeta_j I - \varepsilon_j^m L) ( \chi_i v_{\infty} ) := h_i = \chi_i ( \zeta_* I -\varepsilon_*^m L_{\infty}) v_{\infty} + \\ (\zeta_j - \zeta_*) \chi_i v_{\infty} - \chi_i (\varepsilon_j^m - \varepsilon_*^m) L_{\infty} v_{\infty} - \chi_i \varepsilon_j^m (L - L_{\infty}) v_{\infty} -\varepsilon_j^m [ L, \chi_i ] v_{\infty}, \end{multline*} but this tends to $0$ in norm, as before. Thus we reach a contradiction in this case too. We have now reduced to the case where $\varepsilon_j \to 0$. If $z$ is a normal coordinate in $B_j$, then as discussed above, $w = z/\varepsilon_j$ is a normal coordinate for the metric $g_j = \varepsilon_j^{-2}g$ on a ball of radius $r_0/\varepsilon_j$. Indeed, \[ g_j= (\delta_{ab} + E_{ab}(\varepsilon_j w)) dw^a dw^b,\ \ E_{ab}(\varepsilon_j w) = \mathcal O( \varepsilon_j^2C^2) \ \ \mbox{for} \ |w| \leq C. \] Thus $g_j$ converges uniformly to the Euclidean metric on any compact set of $\mathbb R^n$. A very similar computation shows that if $L_j$ denotes the operator $L$ expressed in these rescaled coordinates, then \[ \varepsilon_j^m L_j \to L_E, \] a constant coefficient operator on $\mathbb R^n$; this limit is uniform on any compact subset of $\mathbb R^n$. Observe that any term in $L$ involving a derivative of order less than $m$ tends to $0$ in this limit. In fact, using multi-index notation, \[ \mbox{if}\ L = \sum_{|J| \leq m} a_J(z) \partial_z^J, \ \mbox{then}\ L_E = \sum_{|J| = m} a_J(0) \partial_w^J. \] (In the language developed in \S \ref{scpc} below, $L_E$ is simply the constant coefficient operator associated to the semiclassical symbol $\sigma_m^\mathrm{sc}(L)(z,\xi)$ of $L$ at $z=0$; we refer to that section for more on this.) Passing to a further subsequence, we may assume that $v_j \to v$ in $\mathcal C^m_{\mathrm{loc}}(\mathbb R^n)$. Clearly $|v| \leq 1$; furthermore, $1/2 \leq |v_j(0)| \leq 1$, so the function $v$ is nontrivial. It also must satisfy \[ (\zeta_* - L_E) v = 0 \] on all of $\mathbb R^n$. The existence of such a bounded solution is easily ruled out by Fourier analysis. Indeed, if we regard $v$ as a tempered distribution, then taking the Fourier transform converts this equation to \[ (\zeta_* - \sigma_m(L)(0,\xi)) \hat{v}(\xi) = 0. \] Using our definition of strong ellipticity, and the fact that $\mathrm{Re}\, \zeta_* \leq 0$, $\zeta_* \neq 0$, the factor $(\zeta_* - \sigma_m(L)(0,\xi))$ is invertible for all $\xi \in \mathbb R^n$, hence $\hat{v} = 0$, so $v = 0$, contradicting $|v(0)| \geq 1/2$. We have arrived at a final contradiction, and have thus established the $\mathcal C^0$ bound. \end{proof} The (nearly) final step in the proof of Theorem \ref{thm:main-A} is to establish the corresponding $\mathcal C^1$ sectoriality estimate, by reducing it to the $\mathcal C^0$ estimate. \begin{prop} \label{prop:second-resolvent-estimate} For any fixed $c' < 0$, there exists a constant $C$ such that for all unit $\zeta$ with $\mathrm{Re}\, \zeta \leq c' < 0$ and $\varepsilon \in (0,1]$, \begin{equation} \| u \|_{1} \leq C \| (\zeta - \varepsilon^m L) u \|_{1} \end{equation} for all $u \in \mathcal C^{m+1,\alpha}(M,g)$. \end{prop} \begin{proof} Begin by differentiating both sides of the equation $(\zeta - \varepsilon^m L) u = f$, and then commute derivatives to obtain \[ (\zeta - \varepsilon^m L) (\nabla u) = \nabla f - \varepsilon^m [\nabla, L] u.
\] The $\mathcal C^0$ bound in the previous proposition yields \[ ||\nabla u||_0 \leq C || \nabla f||_0 + C \varepsilon^m || [\nabla,L]u||_0. \] However, $[\nabla,L]$ is a differential operator of order $m$ with uniformly bounded coefficients, and the semiclassical estimate in Proposition \ref{lemma:sc} shows that \[ \varepsilon^m || [\nabla,L] u ||_0 \leq C ||u||_{m,\alpha, \varepsilon} \leq C' ( || f||_{0,\alpha, \varepsilon} + \sup_M |u| ) \leq C' ||f||_1. \] We have used the $\mathcal C^0$ estimate from the previous proposition in this last inequality to bound $\sup_M |u| \leq C \sup_M |f| \leq C ||f||_1$. This completes the proof of Proposition \ref{prop:second-resolvent-estimate}. \end{proof} The proof of the sectoriality estimate on $\mathcal C^{0,\alpha}$ is completed by an interpolation argument. The two results above show that there exist constants $C_0, C_1$ such that \[ ||(\zeta - \varepsilon^m L)^{-1} f||_\ell \leq C_\ell ||f||_{\ell}, \quad \ell = 0, 1, \] for all $f \in \mathcal C^\infty(M,g)$ and for all unit $\zeta$ with $\mathrm{Re}\, \zeta \leq c' < 0$, $\varepsilon \in (0,1]$. The space $\mathcal C^{0,\alpha}(M,g)$ is identified with the interpolation space $[\mathcal C^0(M,g), \mathcal C^1(M,g)]_\alpha$, and from this we conclude that \[ ||(\zeta - \varepsilon^mL)^{-1} f||_{0,\alpha} \leq C_0^{1-\alpha} C_1^{\alpha} ||f||_{0,\alpha} \] for all unit $\zeta$ with $\mathrm{Re}\, \zeta \leq c' < 0$, $\varepsilon \in (0,1]$ and $f \in \mathcal C^{\infty}(M,g)$; by density of $\mathcal C^\infty$ in the little H\"older spaces, the estimate then extends to all $f$ in the little H\"older space. This is the sectoriality estimate on $\mathcal C^{0,\alpha}$. We conclude this section by showing how sectoriality on $\mathcal C^{k,\alpha}$, $k \geq 1$, follows from the case $k = 0$. \begin{corollary} If $(M,g)$ has bounded geometry of order $k + m + \alpha'$ for some $0 < \alpha' < 1$, and $L$ is an admissible operator with coefficients uniform of order $k + \alpha'$, then $L$ is sectorial on $\mathcal C^{k,\alpha}(M,g)$. \label{higherregsect} \end{corollary} \begin{proof} We have proved the uniform boundedness of the norm of $\lambda (\lambda I - L)^{-1}: \mathcal C^{0,\alpha} \to \mathcal C^{0,\alpha}$. Suppose that $f \in \mathcal C^{k,\alpha}(M,g)$, and write $u = u_\lambda = \lambda (\lambda I - L)^{-1} f$. The $\mathcal C^{0,\alpha}$ sectoriality estimate we have just proved shows that $||u_\lambda||_{0,\alpha} \leq C \|f \|_{0,\alpha}$ uniformly in $\lambda$. Furthermore, by local elliptic regularity, $u_\lambda \in \mathcal C^{k+m,\alpha}_{\mathrm{loc}}$, and if $B$ is any ball of radius $\frac12 r_0$ and $B'$ the ball of radius $r_0$ with the same center, then \[ ||u_\lambda||_{B, k+m,\alpha} \leq C( ||f||_{B', k, \alpha} + ||u_\lambda||_{B', 0}) \leq C ( ||f||_{B', k, \alpha} + ||u_\lambda||_{B', 0, \alpha}). \] Taking the supremum over all such balls, we obtain \[ ||u_\lambda||_{k+m,\alpha} \leq C (||f||_{k,\alpha} + ||u_\lambda||_{0,\alpha}) \leq C (||f||_{k,\alpha} + ||f||_{0,\alpha}) = C' ||f||_{k,\alpha}, \] which is the sectoriality estimate on $\mathcal C^{k,\alpha}(M,g)$. \end{proof} This concludes the proof of Theorem \ref{thm:main-A}. \section{The semiclassical parametrix construction} \label{scpc} We now provide details about the construction of the semiclassical parametrix for the family of operators $(\zeta I - \varepsilon^m L)$. There are two novel features in our presentation.
The first is a minor one: the semiclassical calculus is best documented in the setting of operators on $\mathbb R^n$, cf.\ \cite{Dim-Sj, Martinez}, or, in more modern expositions, on certain other special manifolds \cite{Zworski}. It adapts easily to the setting of manifolds of bounded geometry. The second is that, unlike the approaches in these citations, we carry out this construction using geometric microlocal analysis. This particular application of that theory was observed by Melrose and partially developed in his lecture notes \cite{Mel-berkeley}. We review this method to keep this paper relatively self-contained. Strictly speaking, the techniques here tacitly assume that both the operator $L$ and the Riemannian manifold $(M,g)$ are $\mathcal C^\infty$. Therefore we shall assume that this is the case, i.e., that all data are smooth, until near the end of this section. Only there will we show how to obtain the main conclusion of this section, i.e., the existence of the semiclassical resolvent, when $L$ and $g$ only have finite regularity. \smallskip \noindent{\bf The semiclassical double space.} A family of operators $A = A_\varepsilon$ is called a \emph{semiclassical family of pseudodifferential operators} if each $A_\varepsilon$ is a pseudodifferential operator in some standard calculus of such operators on $M$ (see below). The Schwartz kernel $K_{A_\varepsilon}$ of each $A_\varepsilon$ is a distribution on $M \times M$ which has a classical conormal, or polyhomogeneous, singularity along the diagonal $\mathrm{diag} \subset M^2$ (see Definition \ref{def:conormal} for the precise definitions of these terms). As $\varepsilon \searrow 0$, the distribution $K_{A_\varepsilon}$, and in particular its singularity along the diagonal, must degenerate somehow. The ``geometric'' microlocal way to describe this involves regarding this family of Schwartz kernels as a single distribution $K_A(\varepsilon, z, \tilde{z})$ on the augmented double-space $[0,\varepsilon_0)_\varepsilon \times M^2$. As such, it has a singularity along the family of diagonals $(0,\varepsilon_0)\times \mathrm{diag}$, and as we now describe, an extra singularity along $\{0\} \times \mathrm{diag}$. Our goal is to describe this extra singularity. To do this, we pass to a slightly larger space obtained by taking the real blow up of the submanifold $\{0\} \times \mathrm{diag}$ in $[0,\varepsilon_0)\times M^2$. This process -- which should be carefully distinguished not only from the `complex blow-up' which is common in algebraic geometry, but also from the use of the phrase `blow-up' in connection with various sorts of rescaling arguments in PDE -- amounts to introducing a singular cylindrical coordinate system around this submanifold and `adding' the points where `$r=0$' as a new boundary hypersurface. Assuming that $z, \tilde{z}$ denote the same coordinate system on the two copies of $M$, introduce polar coordinates $r = | (\varepsilon, z-\tilde{z})| \geq 0 $, $\omega = (\varepsilon, z-\tilde{z})/r \in S^n_+$, and declare this new {\it semiclassical double space}, which we write as $M^2_\mathrm{sc}$, to be the manifold with corners on which $(r, \omega, \tilde{z})$ is a nonsingular smooth coordinate system. Note that $M^2_{\mathrm{sc}} \setminus \{r=0\}$ is canonically isomorphic to $[0,\varepsilon_0) \times M^2 \setminus (\{0\} \times \mathrm{diag})$, but each point on this diagonal at $\varepsilon=0$ is replaced by the fiber of the inward-pointing spherical normal bundle at that point.
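Before describing the structure of $M^2_{\mathrm{sc}}$ further, we pause for a model computation which motivates the construction; it is included only for orientation and is not used below. Take $M = \mathbb R^n$ with the Euclidean metric and $L = \Delta = -\sum_j \partial_{z_j}^2$ (so $m = 2$ and the principal symbol is $|\xi|^2$), and fix $\zeta$ with $\mathrm{Re}\, \zeta < 0$. Substituting $\xi = \eta/\varepsilon$ in the Fourier representation of the resolvent kernel gives \[ (\zeta I - \varepsilon^2 \Delta)^{-1}(z, \tilde{z}) = \frac{1}{(2\pi)^n} \int_{\mathbb R^n} e^{i(z-\tilde{z})\cdot \xi} \left(\zeta - \varepsilon^2 |\xi|^2\right)^{-1} d\xi = \varepsilon^{-n}\, K_\zeta\!\left( \frac{z - \tilde{z}}{\varepsilon} \right), \] where $K_\zeta$ denotes the Schwartz kernel of $(\zeta I - \Delta)^{-1}$ (a notation used only in this example). Thus the exact resolvent kernel is $\varepsilon^{-n}$ times a fixed function of $w = (z-\tilde{z})/\varepsilon$: it degenerates on $[0,\varepsilon_0) \times M^2$ precisely along $\{0\} \times \mathrm{diag}$, but becomes ($\varepsilon^{-n}$ times) a smooth kernel in the coordinates $(\varepsilon, w, \tilde{z})$ introduced below. When $n = 3$, for instance, $K_\zeta(w) = -e^{-\sqrt{-\zeta}\,|w|}/(4\pi |w|)$ with $\mathrm{Re}\, \sqrt{-\zeta} > 0$, which already exhibits the exponential decay in $|w|$ that we establish for the model Green function $\overline{H}_\zeta$ later in this section.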
The entire hypersurface $\{r=0\}$ is called the front face and denoted by $\mathrm{ff}$. It is the total space of a fibration over $\{0\} \times \mathrm{diag}$, with each fiber a closed hemisphere $S^n_+$. The points of $\mathrm{ff}$ correspond to directions of approach to $\{0\}\times \mathrm{diag}$. Although the polar coordinates give a nonsingular coordinate system, it is usually more convenient to use projective coordinates instead. Therefore we introduce \[ \varepsilon, \quad w = (z-\tilde{z})/\varepsilon, \quad \tilde{z} \] as a coordinate system on $M^2_\mathrm{sc}$ away from the `original' face $\{\varepsilon = 0\} \times (M^2 \setminus \mathrm{diag})$. These are defined and smooth up to points in the interior of $\mathrm{ff}$, but are undefined at $\varepsilon=0$ away from the diagonal. In their region of definition, $\varepsilon$ serves as a defining function for $\mathrm{ff}$. Using these, the interior of each hemisphere fiber in $\mathrm{ff}$ is identified with $\mathbb R^n$, with linear coordinate $w$. In fact, this projective identification of each fiber of $\mathrm{ff}$ with $\mathbb R^n$ is {\it well-defined up to linear transformations}. In other words, if $z'$ is any other choice of local coordinates, and $\tilde{z}'$ the same coordinate system on the second copy of $M$, then $w' = (z' - \tilde{z}')/\varepsilon = A w + \mathcal O(\varepsilon)$ for some matrix $A \in \mathrm{Gl}_n$ (which may depend smoothly on $\tilde{z}$, i.e., vary with the hemisphere fibers), and hence $w' = Aw$ at $\varepsilon = 0$. Let $\pi: M^2_{\mathrm{sc}} \to [0,\varepsilon_0)$ be the composition of the blowdown $M^2_{\mathrm{sc}} \to [0,\varepsilon_0)\times M^2$ with the projection onto the first factor. Each level set $\pi^{-1}(\varepsilon)$, $\varepsilon > 0$, is a copy of $M^2$. The preimage $\pi^{-1}(0)$ is the union of two manifolds with boundary: namely, the manifold with boundary obtained by blowing up $M^2$ along its diagonal, and the closure of $\mathrm{ff}$, which is a bundle of closed hemispheres. The intersection of these two hypersurfaces is naturally identified with the spherical normal bundle of the diagonal in $M^2$. \medskip \noindent{\bf Lifts of semiclassical differential operators.} The blowup construction in this particular setting is motivated by a simple computation: consider the lift of $\zeta I - \varepsilon^m L$ first to the left factor of $M$ in $M^2$ (i.e., differentiating in $z$ rather than $\tilde{z}$), then to $[0,\varepsilon_0)\times M^2$ and finally to $M^2_\mathrm{sc}$. To compute this lift, first observe that \[ \varepsilon \partial_{z_j} = \partial_{w_j}; \] hence, writing $L = \sum_{|\beta| \leq m} a_\beta(z) \partial_z^\beta$ and using that $z = \tilde{z} + \varepsilon w$, we arrive at \[ \varepsilon^m \sum_{|\beta| \leq m} a_\beta(z) \partial_z^\beta = \sum_{|\beta| = m} a_\beta(\tilde{z}) \partial_w^\beta + \sum_{|\beta| \leq m} c_\beta(\tilde{z},w, \varepsilon) \varepsilon^{m-|\beta|}\partial_w^\beta. \] The coefficients $c_\beta(\tilde z, w, \varepsilon)$ depend smoothly on all three variables, and arise from the Taylor expansions in $\varepsilon$ of the functions $a_\beta(\tilde{z} + \varepsilon w)$. The first set of terms on the right in this equality are the terms of order $0$ in this Taylor expansion when $|\beta| = m$; these contain no positive powers of $\varepsilon$.
On the other hand, the coefficient functions with $|\beta| = m$ in the second sum on the right arise only from the higher order terms of the Taylor expansions of $a_\beta(\tilde{z} + \varepsilon w)$ for the same $\beta$, and hence these vanish at least like $\varepsilon$. The key point in this calculation is that the lift of $\varepsilon^m L$ is the sum of a homogeneous operator of order $m$ whose coefficients depend only on $\tilde{z}$, and which therefore has {\it constant coefficients} on each fiber of $\mathrm{ff}$ at $\varepsilon = 0$, and a remainder term which is a differential operator of order $m$ with coefficients depending smoothly on $\varepsilon$, $w$ and $\tilde{z}$, and which vanishes at $\varepsilon=0$. Using this, we now write \[ \zeta I - \varepsilon^m L = (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) + \text{(error term vanishing at $\varepsilon=0$)}. \] Here, by definition, the {\it semiclassical symbol} of $L$, $\sigma_m^{\mathrm{sc}}(L)(\partial_w)$, is the constant coefficient operator which appears in the computation above. A standard microlocal notation is to set $D_{w_j} = (1/i) \partial_{w_j}$; under Fourier transform this corresponds to the linear variable $\xi_j$. This semiclassical symbol is closely related to the principal symbol of $L$ by \[ \sigma_m \left(\sigma_m^{\mathrm{sc}}(L)( (1/i)\partial_w) \right) = \sigma_m(L)(\tilde{z}, \xi). \] (Since $\sigma_m^{\mathrm{sc}}$ is homogeneous and $m$ is even, the factor of $i$ reduces to an even more harmless factor of $\pm 1$.) The semiclassical symbol is well defined up to a linear change of coordinates in $w$. It also depends smoothly on $\tilde{z}$, but the dependence is only parametric, and we frequently drop it from the notation below. The key point in all that follows is that there exists a distribution $H_\zeta(w) = \varepsilon^{-n}\overline{H}_\zeta(w)$ (which depends smoothly on $\tilde{z}$) such that \[ (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) \overline{H}_\zeta(w) = \overline{H}_\zeta(w) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) =\delta(w). \] This is a consequence of the strong ellipticity of $L$; we explain this, and develop some further properties of $\overline{H}_\zeta$, in the next subsection. The appearance of the slightly odd-looking factor of $\varepsilon^{-n}$ will be explained there too. The full parametrix construction is then perturbative: this distribution serves as the leading term of a formal series in $\varepsilon$ which is constructed to be a `formal' inverse of $\zeta I - \varepsilon^m L$. This is then readily converted to a good parametrix with rapidly decaying error terms, and then to an actual inverse if $\varepsilon$ is small. \medskip \noindent{\bf Green function for the model problem.} Here we establish the existence and certain properties of $\overline{H}_\zeta(w)$. \medskip \begin{lemma} \label{lem:greenfunc} There exists a distribution $\overline{H}_\zeta(w)$ on $\mathbb R^n$ such that $\overline{H}_\zeta(w - \tilde{w})$ is the Schwartz kernel of a translation-invariant pseudodifferential operator on $\mathbb R^n$ of order $-m$, with the following two properties. First, \begin{equation} (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) \circ \overline{H}_\zeta = \overline{H}_\zeta \circ (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) = \delta(w).
\label{modeldelta} \end{equation} In addition, $\overline{H}_\zeta$ depends smoothly on $\tilde{z}$, and for $|w| \geq 1$ satisfies $|\overline{H}_\zeta(w)| \leq C e^{-\delta |w|}$ for some $\delta > 0$, provided $\mathrm{Re}\, \zeta < 0$. \end{lemma} \begin{proof} Strong ellipticity of $L$ implies that the function $\xi \mapsto (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}$ is $\mathcal C^\infty$ and polynomially bounded, hence is an element of $\mathcal S'$, the space of tempered distributions. As such, we may take its inverse Fourier transform and define \[ \overline{H}_\zeta(w) = \frac{1}{(2\pi)^n} \int_{\mathbb R^n} e^{iw \cdot \xi} (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi. \] This too is an element of $\mathcal S'$. The integral converges absolutely if $m > n$, but if this is not the case, we can make sense of it as the distributional limit \[ \lim_{\delta \to 0} \frac{1}{(2\pi)^n} \int_{\mathbb R^n} e^{iw \cdot \xi} \chi(\delta \xi) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi, \] where $\chi \in \mathcal C^\infty_0(\mathbb R^n)$ equals $1$ in some neighborhood of the origin. In other words, pairing this expression with a Schwartz function $\phi(w)$, we note that the limit in $\delta$ can be taken in the classical sense on the right side of \begin{multline*} \int_{\mathbb R^n} \left(\int_{\mathbb R^n} e^{iw \cdot \xi} \chi(\delta \xi) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi\right)\, \phi(w)\, dw = \int_{\mathbb R^n} \hat{\phi}(-\xi) \chi(\delta \xi) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi.\end{multline*} Using similar standard distributional manipulations, we can also justify that \[ (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) \overline{H}_\zeta(w) = \overline{H}_\zeta(w) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) = (2\pi)^{-n} \int_{\mathbb R^n} e^{iw\cdot \xi} \, d\xi = \delta(w). \] We recall also that since $(\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}$ is $\mathcal C^\infty$, its Fourier transform decays rapidly. Indeed, interpreted again as Fourier transforms of tempered distributions, this follows from the (distributional) identity \[ \begin{aligned} \overline{H}_\zeta(w) & = |w|^{-2\ell} (2\pi)^{-n}\int_{\mathbb R^n} \left(\Delta_\xi^\ell e^{iw\cdot \xi}\right) (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi \\ & = |w|^{-2\ell} (2\pi)^{-n}\int_{\mathbb R^n} e^{iw\cdot \xi} \Delta_\xi^\ell (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}\, d\xi \end{aligned} \] for any $\ell > 0$, and the fact that the integral is classically convergent once $-n > -m - 2\ell$, i.e., $\ell > (n-m)/2$. Next, apply any derivative $\partial_w^\beta$ to both sides of this equality, where $\beta$ is a multi-index, and choose $\ell > (n-m + |\beta|)/2$ so that the integral is still absolutely convergent, to deduce that $\overline{H}_\zeta(w)$ is smooth away from $w=0$. For an even more refined statement, apply $w^\alpha \partial_w^\beta$ where $|\alpha|= |\beta|$. Passing through the Fourier transform, this becomes a constant multiple of $\partial_\xi^\alpha \xi^\beta$ acting on the exponential, which integrates by parts to an expression where $\xi^\beta (-\partial_\xi)^\alpha$ acts on $\Delta_\xi^\ell (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}$. The resulting integral is once again convergent if $\ell > (n-m)/2$. This shows that $\overline{H}_\zeta(w)$ has `stable regularity' with respect to repeated differentiation by the vector fields $w_i \partial_{w_j}$, a property which is known as conormality with respect to the origin $\{w=0\}$.
By a further analysis, which is left to the reader, the expansion of $\Delta_\xi^\ell (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi))^{-1}$ as $|\xi| \to \infty$ is transformed to an expansion in powers of $w$ as $w \to 0$, which is the assertion that $\overline{H}_\zeta(w)$ is polyhomogeneous at $w=0$. We now explain the assertion about exponential decay. The integrand is a smooth function of $\xi$. By the uniformity of $L$ as a function of $\tilde{z}$, there exists some sufficiently small $\delta' > 0$, depending on $\mathrm{Re}\, \zeta < 0$, so that if $\eta \in \mathbb R^n$ and $|\eta| < \delta'$, then the deformed map $\xi \mapsto (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi + i\eta))^{-1}$ remains smooth in $\xi$ and decays as $\xi \to \infty$ at the same rate as the undeformed function. Using these observations and a similar renormalization scheme to make the integral convergent, we can deform the contour of integration from $\mathbb R^n$ to $\mathbb R^n + i\eta$ and write \[ \overline{H}_\zeta(w) = e^{- w \cdot \eta} \frac{1}{(2\pi)^n} \int_{\mathbb R^n} e^{iw \cdot \xi} (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\xi + i\eta))^{-1}\, d\xi. \] The integral is defined as before, using the same observations and calculations as above, and is bounded as $w \to \infty$, but is now accompanied by the prefactor $e^{- w\cdot \eta}$, which decays exponentially in any cone properly contained in the open half-space $w \cdot \eta > 0$. Since $\eta$ can point in any direction, this shows that $\overline{H}_\zeta(w)$ decays exponentially, at some rate depending on $\mathrm{Re}\, \zeta < 0$, as $|w| \to \infty$. The solution $\overline{H}_\zeta$ is smooth in the parameters $\zeta$ and $\tilde{z}$. This establishes the lemma. \end{proof} Using these results, it is now straightforward to see that $(\overline{H}_\zeta \star f)(w)$ is exponentially decreasing in $w$ if $f \in \mathcal C^\infty_0(\mathbb R^n)$. \medskip \noindent{\bf Manifolds with corners, blowups and polyhomogeneity.} To get further into the details, we first recall the following terminology and notions. All of these are described more carefully in \cite{MazzeoEdge}. First, suppose that $X$ is a manifold with corners. This means that near any point $q \in X$ there is a local coordinate system $(x_1, \ldots, x_k, y_1, \ldots, y_{\ell-k})$, $\ell = \dim X$, such that each $x_i \geq 0$ and each $y_i \in (-\epsilon, \epsilon)$, and $(x,y) = (0,0)$ at $q$. We then say that $q$ lies on a corner of codimension $k$. We next define the space $\mathcal V_b(X)$ of $b$-vector fields on $X$ to consist of all smooth vector fields which are unconstrained in the interior of $X$ and which are tangent to all boundary faces. In terms of the coordinates above, any $V \in \mathcal V_b(X)$ is, near $q$, a linear combination, with smooth coefficients, of the basic vector fields $x_i \partial_{x_j}$, $i, j = 1, \ldots, k$ and $\partial_{y_i}$, $i = 1, \ldots, \ell-k$. \begin{definition} \label{def:conormal} A distribution $u$ on $X$ is said to be conormal to $\partial X$ if there exists some fixed function space $E$ such that \[ V_1 \ldots V_N u \in E,\ \ \text{for all}\ V_i \in \mathcal V_b(X)\ \text{and all}\ N \in \mathbb N. \] \end{definition} Typically we let $E$ be a weighted $L^2$ or $L^\infty$ space. Thus if $u$ is conormal, then $u$ is $\mathcal C^\infty$ on the interior of $X$, but may have singular behavior along the boundary. These singularities are, however, `tangentially smooth'.
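As a toy illustration of these definitions (the specific functions here are chosen only for this example), let $X = [0,1)_x$, so that $\mathcal V_b(X)$ is spanned over $\mathcal C^\infty(X)$ by $x\partial_x$. Since $(x\partial_x)^N x^\gamma = \gamma^N x^\gamma$, the function $u(x) = x^\gamma$ is conormal with $E = x^{\mathrm{Re}\, \gamma} L^\infty$; it is also polyhomogeneous, in the sense of the next definition, with a one-term expansion. By contrast, $u(x) = \sin(1/x)$ is smooth and bounded in the interior, but $(x\partial_x)^N \sin(1/x)$ grows like $x^{-N}$ as $x \to 0$, so no single fixed space $E$ works, and this $u$ is not conormal.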
\begin{definition} \label{def:polyhom} A distribution $u$ on $X$ is said to be polyhomogeneous at $\partial X$ if it is conormal, and in addition has a classical expansion at each boundary face and product-type expansions at the corners. Thus, near a codimension one face $x=0$, for example, with $y$ a coordinate system along that face, \[ u(x,y) \sim \sum_{\mathrm{Re} \gamma_j \to \infty} \sum_{k=0}^{N_j} a_{jk}(y) x^{\gamma_j} (\log x)^k, \] for some discrete set of exponents $\gamma_j$, where each coefficient $a_{jk}$ is smooth. Near each corner, where $x_1 = \ldots = x_k = 0$, $u$ has an expansion involving various (possibly nonintegral) powers of each $x_j$ and, for each monomial in the series, additional factors which are positive integer powers of each $\log x_i$. \end{definition} These expansions hold for all derivatives of $u$ as well, in the sense that any derivative of $u$ has an expansion whose summands are the corresponding derivatives of each term in the series for $u$. These series are asymptotic, but (usually) not convergent. The space of all such polyhomogeneous distributions is denoted $\mathcal A_{\mathrm{phg}}(X, \partial X)$. We may easily extend this to define polyhomogeneous sections of vector bundles over $X$ as well. A submanifold $Z \subset X$ is called a \emph{$p$-submanifold} if near any $q \in Z$ there is a set of coordinates for $X$ as above such that in some neighborhood of $q$, $Z = \{x_1 = \ldots = x_r = 0,\ y_1 = \ldots = y_s = 0\}$. Thus locally near $q$, $X$ is the product of (some small neighborhood of) $Z$ with another manifold with corners; the $p$ stands for `product'. If $Z$ is such a $p$-submanifold, then we may define the blowup of $X$ around $Z$, denoted $[X; Z]$, to be the union of $X \setminus Z$ and the inward-pointing spherical normal bundle of $Z$. Thus $[X ; Z]$ is a new manifold with corners, with a new boundary hypersurface created by this blowup. As in our special case of the semiclassical double space defined earlier, we may `construct' this blowup by taking cylindrical coordinates around $Z$ in $X$, say $(r, \omega, z)$, where $z \in Z$, $\omega$ is a spherical normal vector and $r \geq 0$, and ``adding the $r=0$ face''. We denote the new `front face' of this blowup by $\mathrm{ff} [X;Z]$. \medskip \noindent{\bf Pseudodifferential operators via their Schwartz kernels.} Now let $M$ be a manifold (possibly open, but without boundary). We may define the blowup $[M^2; \mathrm{diag}]$. This has a front face, $\mathrm{ff}$, which is the spherical normal bundle of $\mathrm{diag}$ in $M^2$. \begin{definition} A pseudodifferential operator $A:C_c^{\infty}(M) \to \mathcal{D}^{\prime}(M)$ on $M$ is a linear operator for which the Schwartz kernel $K_A$ of $A$ is the sum of an element of $\mathcal A_{\mathrm{phg}}( [M^2; \mathrm{diag}], \mathrm{ff})$ and a distribution which is conormal and supported along $\mathrm{ff}$. \end{definition} This is a purely intrinsic and `geometric' definition of the space of pseudodifferential operators. It is not easy to work with computationally, however, and it is more customary to use other definitions via oscillatory integrals, cf.\ \cite{Shubin} for example, which are better suited for proving such things as the fact that the composition of two pseudodifferential operators is again pseudodifferential.
The inclusion in this definition of the extra terms which are supported on the front face may seem a technical annoyance, but it is worth pointing out that the Schwartz kernel of the identity operator, $\delta(z - \tilde{z})$, has this property. In fact, if $P$ is any differential operator (with smooth coefficients) on $M$, then the Schwartz kernel of $P$ is equal to $P$ (acting in the $z$ variable) applied to $\delta(z - \tilde{z})$, and hence is again supported on this front face. \medskip \noindent{\bf Semiclassical pseudodifferential operators.} We are now in a position to define semiclassical families of pseudodifferential operators on $M$. The semiclassical double-space $M^2_{\mathrm{sc}}$ has a distinguished submanifold $\mathrm{diag}_{\mathrm{sc}}$, which we call the lifted diagonal. It is the closure in $M^2_{\mathrm{sc}}$ of $(0,\varepsilon_0) \times \mathrm{diag}$. This closure intersects the front face of $M^2_{\mathrm{sc}}$ in a submanifold which meets each hemisphere fiber in the single point $\{w=0\}$. Clearly $\mathrm{diag}_{\mathrm{sc}}$ is a $p$-submanifold of $M^2_{\mathrm{sc}}$, so we may pass to the blowup $[M^2_{\mathrm{sc}}; \mathrm{diag}_{\mathrm{sc}}]$. This has three boundary hypersurfaces: the original face at $\varepsilon=0$ away from the diagonal, the front face $\mathrm{ff}$ obtained by blowing up the diagonal at $\varepsilon=0$, and the new front face obtained in this final blowup. We say that $A_\varepsilon$ is a \emph{semiclassical pseudodifferential operator} \label{ref:sc-psido} if its Schwartz kernel $K_A(\varepsilon, z, \tilde{z})$ lifts to $[M^2_{\mathrm{sc}}; \mathrm{diag}_{\mathrm{sc}}]$ to be the sum of a distribution polyhomogeneous at all boundaries of this manifold with corners and a conormal distribution supported on the new front face, and if $K_A$ vanishes faster than any power of $\varepsilon$ along the original face. It makes sense to restrict $K_A$ to each level set $\pi^{-1}(\varepsilon)$ for $\varepsilon > 0$, and on each of these we must obtain the Schwartz kernel of a pseudodifferential operator. This definition now imposes precise constraints on how these Schwartz kernels degenerate as $\varepsilon \to 0$. In particular, if we change to coordinates $\varepsilon, w, \tilde{z}$ on $M^2_{\mathrm{sc}}$, then the `pseudodifferential singularity' occurs at $w=0$ for each $\varepsilon$, and at $\varepsilon = 0$ there is an expansion $K_A \sim \sum \varepsilon^{\gamma_j} K_A^{(j)}(w,\tilde{z})$, where each coefficient $K_A^{(j)}$ is the Schwartz kernel of a pseudodifferential operator on the hemisphere fibers of $\mathrm{ff}$ (and, as before, $\{\gamma_j\}$ is a discrete set of exponents with real parts tending to infinity). In fact, we are only interested in Schwartz kernels for which the expansion as $\varepsilon \to 0$ is of the form \[ K_A(\varepsilon, w, \tilde{z}) \sim \sum_{j=-n}^\infty \varepsilon^j K_{A,j}(w,\tilde{z}). \] The reason for starting this series at $-n$ is as follows. As we have already described, the lift of $\varepsilon^m L$ (as a differential operator, not its Schwartz kernel) is an operator with coefficients smooth up to $\mathrm{ff}$; in particular, it has a series expansion as $\varepsilon \to 0$ with leading term $\sigma_m^{\mathrm{sc}}(L)(\partial_w)$. On the other hand, the identity operator has Schwartz kernel \[ \delta(z - \tilde{z}) = \delta( \varepsilon w) = \varepsilon^{-n} \delta(w).
\] Thus if we expand each factor in the equation \begin{equation} (\zeta I - \varepsilon^m L) G(\varepsilon, w, \tilde{z}) = \varepsilon^{-n} \delta(w) \label{expand} \end{equation} in powers of $\varepsilon$, and continue to think of the first factor on the left as a differential operator instead of a Schwartz kernel, then at least formally we expect $G$ to have a series expansion involving the powers $\varepsilon^j$ for $j \geq -n$. In any case, this illustrates how the introduction of the space $M^2_{\mathrm{sc}}$ provides a setting where it is possible to ``see'' the transition from the ordinary pseudodifferential operators on each $\pi^{-1}(\varepsilon)$ to the limiting model problem on $\mathbb R^n$. Accordingly, we now define a class of semiclassical pseudodifferential operators whose expansions at $\mathrm{ff}$ involve only integer powers: \begin{definition} \label{def:sc-pseud} If $(M,g)$ is a manifold of bounded geometry, then $\Psi^{k,\ell}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ consists of those semiclassical pseudodifferential operators on $M$ which have pseudodifferential order $k$ on each level set $\pi^{-1}(\varepsilon)$ and which have a series expansion in $\varepsilon$ at $\mathrm{ff}$ with only integer powers, with initial term $\varepsilon^\ell$. The subscript `unif' indicates that we restrict further to allow only kernels which are supported in some neighborhood $\mathrm{dist}_g(z, \tilde{z}) \leq C$ of $\mathrm{diag}_{\mathrm{sc}} \cup \mathrm{ff}$, where the constant $C$ may depend on the operator. \end{definition} \medskip \noindent{\bf Parametrix construction.} We now commence with the parametrix construction. Our goal is to find an element $G \in \Psi^{-m,-n}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ such that, for each $\zeta$ with $|\zeta| = 1$ and $\mathrm{Re} \, \zeta < 0$, \[ (\zeta I - \varepsilon^m L) G = I - Q_{1}, \quad G (\zeta I - \varepsilon^m L) = I - Q_{2}, \] where $Q_1, Q_2 \in \Psi^{-\infty, \infty}_{\mathrm{sc}-\mathrm{unif}}(M,g)$. As explained above, we do this `formally', i.e., in Taylor series, at $\mathrm{ff}$, and then take a Borel sum of the resulting formal expansion. More carefully, returning to \eqref{expand}, expand each factor into its formal series expansion: \[ \left( (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) + \sum_{j=1}^\infty \varepsilon^j E_j \right) \sum_{k=-n}^\infty \varepsilon^k G_k(w,\tilde{z}) = \varepsilon^{-n} \delta(w). \] Here the $E_j$ are the differential operators (of order $\leq m$) arising in the Taylor expansion of $\zeta I - \varepsilon^m L$. Carrying out the composition and collecting like powers of $\varepsilon$, we obtain a sequence of equations \[ \begin{aligned} (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) G_{-n}(w, \tilde{z}) & = \delta(w) \\ (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) G_{-n+k}(w, \tilde{z}) & = F_k(w,\tilde{z}), \end{aligned} \] where each $F_k = \sum_{i=1}^{k} E_i G_{-n+ k-i}$ is an `error term'. This is a sequence of equations on $\mathbb R^n_w$, where the various terms all depend smoothly on $\tilde{z}$. These equations indicate that we must set \[ G_{-n}(w) = \overline{H}_\zeta(w), \quad \mbox{and}\ \ \ G_{-n+k}(w) = \overline{H}_\zeta(w) \star F_k(w),\ \ k \geq 1, \] for each $\tilde{z}$. Since the equations with $k \geq 1$ are constant coefficient in $w$, they are solved by convolving with the fundamental solution $\overline{H}_\zeta(w)$.
Since each $E_i G_{-n+k-i}$ has pseudodifferential order at most $0$ and $\overline{H}_\zeta$ has order $-m$, each $G_{-n+k}$ is the Schwartz kernel of a translation-invariant (in $w$) pseudodifferential operator of order no more than $-m$. We may solve these equations inductively, given the properties of $\overline{H}_\zeta$ established in Lemma \ref{lem:greenfunc}, but are then faced with showing that the Borel sum of this series of singular kernels still has the correct behavior. There is a slightly easier way to proceed which allows us to work more directly with $\mathcal C^\infty$ kernels. Namely, we first find a pseudodifferential operator $\widetilde{G}(\varepsilon, \cdot, \cdot)$ on each level set $\pi^{-1}(\varepsilon)$ which solves \[ (\zeta I - \varepsilon^m L) \widetilde{G}(\varepsilon) = \varepsilon^{-n} \delta(w) + \varepsilon^{-n}\mathcal R(\varepsilon, w, \tilde{z}), \] where $\mathcal R$ is smooth in all variables, $\varepsilon, w, \tilde{z}$. This involves carrying out the complete parametrix construction for the nondegenerate operator on each level set in the standard pseudodifferential calculus, but carrying along $\varepsilon$ as a smooth parameter. To compensate for the factors $\varepsilon^{-n}$ on the right, we choose $\widetilde{G} \in \Psi^{-m, -n}_{\mathrm{sc}-\mathrm{unif}}$. Said differently, this is simply the nondegenerate elliptic parametrix construction (with parametrix cut off to have support in a neighborhood of the diagonal), carried out smoothly in the parameter $\varepsilon$. We now need to find additional terms in the parametrix which cancel off the full Taylor series in $\varepsilon$ of the remainder term $\varepsilon^{-n}\mathcal R$. This involves inductively solving a sequence of equations \[ \begin{aligned} (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) \widetilde{G}_{-n}(w, \tilde{z}) & = \mathcal R_0(w,\tilde{z}) \\ (\zeta I - \sigma_m^{\mathrm{sc}}(L)(\partial_w)) \widetilde{G}_{-n+k}(w, \tilde{z}) & = \sum_{i=1}^k E_i \mathcal R_{k-i}, \end{aligned} \] where $\mathcal R \sim \sum_{k \geq 0} \varepsilon^k \mathcal R_k$. The advantage is that the right hand sides are all smooth and we can assume that they are compactly supported in, say, $\{|w| \leq 1\}$ for all $\tilde{z}$. The solutions are given by \[ \widetilde{G}_{-n} = \overline{H}_\zeta\star \mathcal R_0 (w), \qquad \widetilde{G}_{-n+k} = \overline{H}_\zeta \star \left( \sum_{i=1}^k E_i \mathcal R_{k-i}\right)(w). \] \medskip \noindent{\bf Conclusion of the parametrix construction.} We have now constructed both $\widetilde{G}$ and the sequence of smooth, exponentially decaying terms $\widetilde{G}_{-n+k}$, $k \geq 0$. An important fact is that, given any such sequence, it is possible to construct a function $\widetilde{G}'(\varepsilon, w, \tilde{z})$ which is smooth in the interior of $M^2_{\mathrm{sc}}$, decays to all orders at the original face, and which has an expansion in powers of $\varepsilon$ at $\mathrm{ff}$ of the form \[ \widetilde{G}' \sim \sum_{k=0}^\infty \varepsilon^{-n+k} \widetilde{G}_{-n+k}(w). \] This is the Borel sum of this series and is an element of $\Psi^{-\infty, -n}_{\mathrm{sc}-\mathrm{unif}}$. Our final semiclassical parametrix is now defined by \[ G(\varepsilon, w, \tilde{z}) = \widetilde{G}(\varepsilon, w, \tilde{z}) + \widetilde{G}'(\varepsilon, w, \tilde{z}) \in \Psi^{-m, -n}_{\mathrm{sc}-\mathrm{unif}}.
\] \medskip \noindent{\bf Boundedness properties.} We conclude this section by sketching the proof of the boundedness properties of elements of $\Psi^{*,*}_{\mathrm{sc}-\mathrm{unif}}(M)$ on the semiclassical H\"older spaces $\mathcal C^{k,\alpha}_{\varepsilon}(M,g)$, as defined in \S \ref{sec:background}. First define the family of spaces \[ \varepsilon^\lambda \mathcal C^{k,\alpha}_{\varepsilon} = \{u(\varepsilon, z) = \varepsilon^\lambda \tilde{u}(\varepsilon, z)\ \mbox{where}\ \tilde{u} \in \mathcal C^\infty( \, [0, \varepsilon_0)\, ; \, \mathcal C^{k,\alpha}_{\varepsilon}(M,g) \, )\}. \] In other words, an element of this space can be represented by a formal series $u \sim \sum_{j \geq 0} \varepsilon^{\lambda+j} u_j$ with each $u_j \in \mathcal C^{k,\alpha}_{\varepsilon}$. \begin{proposition} If $A \in \Psi^{\kappa, \mu}_{\mathrm{sc}-\mathrm{unif}}$ for some $\kappa \in \mathbb Z$, then \[ A: \varepsilon^\lambda \mathcal C^{k,\alpha}_{\varepsilon} \longrightarrow \varepsilon^{\lambda + \mu + n}\mathcal C^{k-\kappa,\alpha}_{\varepsilon} \] is bounded provided $\kappa \leq k$. \label{bddprop} \end{proposition} \begin{remark} We can extend this result to allow operators of non-integral order, for example using a standard interpolation result, but omit this here since it is not needed. \end{remark} \begin{proof} The first observation is that if $\kappa \leq k$, and $P = \sum_{|\alpha| \leq \kappa} a_\alpha(z) \varepsilon^{|\alpha|}\partial_z^\alpha$ is a semiclassical {\it differential} operator with coefficients which are uniformly bounded in $\mathcal C^\infty$, then directly from the definition, \begin{equation} P: \mathcal C^{k,\alpha}_\varepsilon(M,g) \longrightarrow \mathcal C^{k-\kappa,\alpha}_{\varepsilon}(M,g) \label{bddP} \end{equation} is bounded. To relate this to a pseudodifferential boundedness theorem, we have previously noted that the Schwartz kernel of $P$ on $M^2_{\mathrm{sc}}$ is a distribution supported along $\mathrm{diag}_{\mathrm{sc}}$, namely \[ K_{P}(\varepsilon, w, \tilde{z}) = \varepsilon^{-n} \sum_{|\alpha|\leq \kappa} a_\alpha(\tilde{z} + \varepsilon w) (\partial_w^\alpha \delta(w)) \in \Psi^{\kappa, -n}_{\mathrm{sc}-\mathrm{unif}}. \] Thus \eqref{bddP} is a very special case of the boundedness in Proposition \ref{bddprop}, with $\lambda = 0$. Recalling the definition of a semiclassical pseudodifferential operator from page \pageref{ref:sc-psido}, we prove one boundedness property for kernels in $\Psi^{-\infty,-n}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ supported away from the diagonal, and another for kernels $A \in \Psi^{\kappa,-n}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ supported near the diagonal. Thus first suppose that $A \in \Psi^{-\infty,-n}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ has Schwartz kernel $K_A$ supported away from $\mathrm{diag}_{\mathrm{sc}}$ in $M^2_{\mathrm{sc}}$; to be specific, suppose $\mathrm{supp}(K_A) \subset \{\varepsilon \leq \mathrm{dist}(z, \tilde{z}) \leq C\}$, or equivalently, $1 \leq |w| \leq C/\varepsilon$.
We then have that, in the projective coordinates $(z, \tilde{w})$, $\tilde{w} = (\tilde{z} - z)/\varepsilon$, \begin{align*} \left|\int_M K_A(\varepsilon, z, \tilde{z}) u(\tilde{z})\, dV_g(\tilde{z})\right| &\leq C \|u\|_{\infty} \int_{1 \leq |\tilde{w}| \leq C/\varepsilon} |K_A(\varepsilon, z, \tilde{w})|\, \varepsilon^n dV(\tilde{w}) \\ &\leq C \|u\|_{\infty} \int_{1 \leq |\tilde{w}| \leq C/\varepsilon} ( 1 + |\tilde{w}|)^{-N}\, d\tilde{w} \leq C' \|u\|_{\infty} < \infty, \end{align*} since $K_A$ blows up like $\varepsilon^{-n}$ and $\varepsilon^n |K_A|$ is Schwartz in $\tilde{w}$, uniformly as $\varepsilon \searrow 0$. Furthermore, any semiclassical derivative $\varepsilon^{|\alpha|} \partial_z^\alpha$ applied to $K_A$ yields a kernel of the same form. This proves that if $A \in \Psi^{-\infty, -n}_{\mathrm{sc}-\mathrm{unif}}$ has kernel with this special support property, then \[ A: \mathcal C^{k,\alpha}_{\varepsilon}(M,g) \longrightarrow \mathcal C^{r,\alpha}_{\varepsilon}(M,g) \] is bounded for any $r \in \mathbb N$. Now let $A \in \Psi^{\kappa,-n}_{\mathrm{sc}-\mathrm{unif}}(M,g)$ have Schwartz kernel supported in the region $|w| \leq 1$, and as before, assume $\kappa \leq k$. We do not need to assume that $\kappa$ is an integer, but must then change the H\"older indices accordingly and interpret the borderline cases where $\kappa = k -i + \alpha$, $i \in \mathbb N$, in terms of Zygmund spaces. We can then proceed by a combination of a rescaling argument and the fact that an ordinary pseudodifferential operator of order $\kappa$ in a ball of size $2$ induces a map $\mathcal C^{k,\alpha}_0(B_1(0)) \rightarrow \mathcal C^{k-\kappa,\alpha}(B_1(0))$ between {\it ordinary} H\"older spaces (where elements of the domain space are compactly supported in $B_1(0)$). Indeed, given any $u \in \mathcal C^{k,\alpha}_{\varepsilon}$, choose a locally finite cover $B_{2\varepsilon}(q_j)$ of bounded covering multiplicity and a partition of unity $\chi_j$ for which there are uniform bounds on its semiclassical derivatives up to order $k$, and such that $\chi_j = 1$ on $B_{\varepsilon}(q_j)$. Then $u = \sum \chi_j u$, and since at most a fixed number of the $A(\chi_j u)$ have support at any one point, it suffices to estimate the norms of each of these summands. Recalling Remark~\ref{scnormballs}, we may compute the semiclassical H\"older norms by taking the supremum over balls of radius $\varepsilon$ (or any fixed multiple of $\varepsilon$). Thus, rescaling the coordinates of each summand by a factor of $1/\varepsilon$, we reduce to the action of a standard pseudodifferential operator of order $\kappa$ on a standard $\mathcal C^{k,\alpha}$ function on a ball of fixed radius, where the result is well known. Finally, if $A \in \Psi^{\kappa, \mu}_{\mathrm{sc}-\mathrm{unif}}(M,g)$, then it is clear from all of the above, and the basic definitions, that the assertion of Proposition \ref{bddprop} holds. \end{proof} \medskip \noindent{\bf Operators with finite regularity coefficients.} As explained at the beginning of this section, the geometric microlocal techniques used here require the smoothness of both the metric $g$ and the coefficients of $L$. Under that assumption, we have shown that the inverse $(\zeta I - \varepsilon^m L)^{-1}$ exists and is bounded on $\mathcal C^{k,\alpha}$ if $\varepsilon$ is sufficiently small. We now extend this to operators and metrics of lower regularity.
\begin{prop} Suppose that $(M,g)$ has bounded geometry of order $\ell + \alpha'$ where $\ell \geq k + m$ and $\alpha' \in (\alpha, 1)$, and that $L$ is an admissible elliptic operator of order $m$ with coefficients uniformly bounded in $\mathcal C^{k,\alpha'}$. Then there exists $\varepsilon_0 > 0$ such that $G_{\zeta, \varepsilon} := (\zeta I - \varepsilon^m L)^{-1}$ exists as a bounded operator on $\mathcal C^{k,\alpha}$ if $0 < \varepsilon < \varepsilon_0$. \label{finitereg} \end{prop} While it is possible to carry out some version of the parametrix construction under these regularity hypotheses, this would take extra work, particularly when the regularity order $k$ is small. Thus we prove this another way, using techniques close to those in Section \ref{sec:proofThm}. \begin{proof} Choose an approximating sequence of metrics $g^{(i)}$ and operators $L^{(i)}$, all smooth, such that $g^{(i)} \to g$ in $\mathcal C^{\ell, \alpha'}$ and the coefficients of $L^{(i)}$ converge to those of $L$ in $\mathcal C^{k,\alpha'}$. This can be done by selecting a locally finite open cover of $M$ by normal coordinate balls for $g$ and using a mollifier in each such ball. Although the norms on the spaces $\mathcal C^{j,\beta}$ depend on the metric, it is clear that we may define these all relative to any fixed smooth metric. In fact, we assume that the metric $g$ is fixed (and smooth) for simplicity, since the way it enters the argument below is minor. As has been shown earlier in this section, for each $i$, and for $0 < \varepsilon < \varepsilon_0^{(i)}$, there exists a bounded inverse \[ G^{(i)}_{\varepsilon,\zeta} = (\zeta I - \varepsilon^m L^{(i)})^{-1}: \mathcal C^{k,\alpha} \longrightarrow \mathcal C^{k,\alpha}. \] Denote the operator norm of this inverse by $A_{\varepsilon,\zeta, i}$. There are two main points we must address. The first is that there exists $\varepsilon_0 > 0$ such that $\varepsilon_0^{(i)} \geq \varepsilon_0$ for all $i$, and the second is that the operator norms $A_{\varepsilon,\zeta,i}$ remain bounded for each fixed $\varepsilon$ and $\zeta$ as $i \to \infty$. Suppose first then that there exist sequences $\varepsilon^{(i)} \to 0$ and $\zeta^{(i)}$ such that \[ P_i := (\zeta^{(i)} I - (\varepsilon^{(i)})^m L^{(i)}) \] does not have a bounded inverse. This failure occurs for one of three reasons: either $P_i$ has nullspace in $\mathcal C^{k,\alpha}$, or its range is dense but not closed, or the closure of its range has positive codimension. All of this is just as in Proposition~\ref{limspecrel}, and the proofs to rule out each of these cases are very similar to the proof of that Proposition, as well as to the arguments of Section \ref{sec:proofThm}. For that reason, we shall be brief. In the first of these cases, there exists a sequence $u^{(i)} \in \mathcal C^{k,\alpha}$ such that $||u^{(i)}||_{k,\alpha} = 1$ and $P_i u^{(i)} = 0$. We may extract a limit $u$ of this sequence by rescaling around a point $q_i \in M$ where $|u^{(i)}(q_i)| \geq 1/2$. Since $\varepsilon^{(i)} \to 0$, this limiting function $u$ lies in $\mathcal C^{k,\alpha}(\mathbb R^n)$ and satisfies $(\zeta I - L_E) u = 0$, where $\zeta$ is a limit of (a subsequence of) the $\zeta^{(i)}$, and $L_E$ is the constant coefficient strongly elliptic operator arising in this rescaling process. As shown in the proof of Proposition~\ref{prop3.3}, by the strong ellipticity of $L$ (and hence of $L_E$), there are no nontrivial solutions of this equation.
Next, if the range of $P_i$ is dense but not closed, there exist sequences $u^{(i)}, f^{(i)} \in \mathcal C^{k,\alpha}$ such that $P_i u^{(i)} = f^{(i)}$, with $||u^{(i)}||_{k,\alpha} = 1$ and $||f^{(i)}||_{k,\alpha} \to 0$. Just as in the previous paragraph, there is a nontrivial limit $u \in \mathcal C^{k,\alpha}(\mathbb R^n)$ such that $(\zeta I - L_E)u = 0$, which is impossible. Finally, if the closure of the range of $P_i$ is a proper subspace, then we may apply the same type of argument to the sequence of distributions $v^{(i)}\in (\mathcal C^{k,\alpha})^*$ which satisfy $P_i^* v^{(i)} = 0$. To do this, we must note that since $v^{(i)}$ satisfies this elliptic equation, it lies in $\mathcal C^{k+m,\alpha}$. There is a limiting function $v$ which satisfies $(\bar{\zeta} I - L_E^*)v = 0$, which again cannot happen. This proves that the inverses $G_{\varepsilon,\zeta}^{(i)}$ all exist for $\varepsilon$ lying in some fixed interval $(0, \varepsilon_0)$. Now fix any $\varepsilon$ in this interval, and any $\zeta$, and suppose that the norms $A_i = ||G_{\varepsilon,\zeta}^{(i)}||_{\mathcal L(\mathcal C^{k,\alpha})}$ are unbounded as $i \to \infty$. This implies that there is a sequence $f_i \in \mathcal C^{k,\alpha}$ such that \[ ||G_{\varepsilon, \zeta}^{(i)} f_i||_{k,\alpha} \geq \frac12 A_i ||f_i||_{k,\alpha}. \] Writing $u_i = G^{(i)}_{\varepsilon,\zeta} f_i$, this is the same as \[ ||u_i||_{k,\alpha} \geq \frac12 A_i || (\zeta I - \varepsilon^m L^{(i)}) u_i||_{k,\alpha}. \] Normalizing so that $||u_i||_{k,\alpha} = 1$ for all $i$, we find that $||f_i||_{k,\alpha} \leq 2/A_i \to 0$. Passing to a limit as usual, but recalling that $\varepsilon$ is fixed, there exists a limiting function in $\mathcal C^{k+m,\alpha}$, defined either on $M$ or on one of its limiting spaces $M_\infty$, such that $(\zeta I - \varepsilon^m L)u = 0$ or $(\zeta I - \varepsilon^m L_\infty)u_\infty = 0$. The proof is complete once we show that these last possibilities cannot occur. Let us focus on the first, since the second is essentially the same. The point is simply that $\zeta I - \varepsilon^m L$ cannot have nullspace for arbitrarily small values of $\varepsilon$. Indeed, if this operator were to have nullspace in $\mathcal C^{k,\alpha}$ for some sequence $\varepsilon_i\to 0$, then the same rescaling argument that we have used several times already would yield a limiting function $u$ in $\mathcal C^{k,\alpha}$ on $\mathbb R^n$ such that $(\zeta I - L_E)u = 0$, which we know is impossible. \end{proof} \begin{remark} It is perhaps worth emphasizing the flow of logic in this argument. We first show that for every one of the approximating operators $L^{(i)}$, the operator $\zeta I - \varepsilon^m L^{(i)}$ is invertible for $\varepsilon < \varepsilon_0$ where $\varepsilon_0$ does not depend on $i$. However, it may be necessary to restrict to a slightly smaller interval $0 < \varepsilon < \varepsilon_1 < \varepsilon_0$ in order to guarantee that $\zeta I - \varepsilon^m L$ does not have nullspace. The main point is that it is only the constant-coefficient operators $\zeta I - L_E$ which we can check specifically do not have nullspace. These operators only appear as limits when $\varepsilon \to 0$, and the argument ruling out nullspace when $\varepsilon$ is sufficiently small, while not quantitative, is insensitive to the regularity of the coefficients of $L$. \end{remark} \section{Applications} \label{sec:applicats} In this section we present an application of the results proven in this paper.
We begin by stating a general theorem which establishes short-time existence, uniqueness and continuous dependence on initial conditions for a large class of geometric flows on manifolds with bounded geometry. We then illustrate its application by obtaining a new result concerning the short-time existence and stability of the higher-order `ambient obstruction flow' on open manifolds with bounded geometry. \subsection{A general result} For $k, m \in \mathbb{N}$, $0 < \alpha < \alpha' < 1$, suppose that $(M,g)$ is a complete Riemannian manifold with bounded geometry of order $k + m + \alpha'$, and let $F$ be a smooth, possibly nonlinear, elliptic partial differential operator of order $m$ acting on an open subset of the space of sections of a uniform vector bundle over $M$. Set $X = \mathcal C^{k,\alpha}(M,g)$ and $D = \mathcal C^{m+k,\alpha}(M,g) \subset X$. (As above, these are little H\"older spaces.) Let $\mathcal U$ be an open subset of $D$ for which \[ F: \mathcal U \longrightarrow X \] is a smooth mapping. Now consider the Cauchy problem for Banach-valued sections: \begin{equation} \frac{du}{dt} = F( u(t) ), \qquad u(0) = u_0 \in \mathcal U. \label{eqn:CPforu} \end{equation} Assume that the linearization $DF_u$ at any $u \in \mathcal U$ is admissible; hence by Theorem \ref{thm:main-A}, each such $DF_u$ is sectorial as an unbounded map from $X$ to itself. Following \cite{Lunardi}, we then obtain the wellposedness result of Theorem \ref{thm:main-B}, restated here for convenience: \begin{theorem} \label{thm:STE} Assuming the notation as well as the hypotheses above, \begin{enumerate} \item (\emph{Short-time existence, uniqueness}) There exists $T > 0$ such that the initial value problem \eqref{eqn:CPforu} has a unique smooth solution for $t \in [0,T)$. \item (\emph{Continuous dependence}) In addition, the estimate \begin{equation} \label{wplocest} \|v(t) - w(t)\|_{D} \leq C \| v_0 - w_0\|_D, \; \; \mbox{for all} \; t \in [0,\varepsilon) \end{equation} is valid for any two solutions $v(t)$ and $w(t)$ with initial values $v_0, w_0 \in \mathcal U$. \end{enumerate} \end{theorem} \begin{proof} Observe that since we are using little H\"older spaces, $D$ is dense in $X$. Note also that since sectoriality is an open condition, see \cite[Proposition 2.4.2]{Lunardi}, it suffices to prove sectoriality of $L = DF_{u_0}$ at any one particular point $u_0 \in \mathcal U$ in order to prove sectoriality at every $u'_0$ in some perhaps slightly smaller neighborhood $\mathcal U'$. Invoking Theorem \ref{thm:main-A}, we may now apply Theorem 8.1.1 and Corollary 8.1.2 in \cite{Lunardi} to obtain existence and uniqueness; continuous dependence in $t$ is addressed in Section 8.3 of \cite{Lunardi}. \end{proof} \subsection{Ambient obstruction flows} In their study of conformal invariants of a compact manifold endowed with a conformal structure, Fefferman and Graham introduced the ambient obstruction tensor \cite{FG}. If $n=2\ell$ is even, the ambient obstruction tensor $\mathcal{O}_n$ on a manifold $(M^n, g)$ is a conformally covariant, trace-free, divergence-free symmetric $2$-tensor associated to the metric $g$. Its expression involves $n-2$ derivatives of the Ricci tensor.
In the particular case $n=4$, the obstruction tensor $\mathcal{O}_4$ coincides with the Bach tensor \begin{align} B_{ij} = {P_{ij,k}}^k - {P_{ik,j}}^k - P^{kl} \, W_{kijl}, \end{align} where the Schouten tensor is $P_{ij} = \frac{1}{2} \left( Rc_{ij} - \frac{S}{6} g_{ij} \right)$, and $W_{ijkl}$ and $S$ are the Weyl tensor and the scalar curvature of $g$, respectively. We refer to \cite{FG}, where the importance of this tensor to conformal geometry is explained. We now study a flow associated to this ambient obstruction tensor. If this flow exists and converges as $t \to \infty$, the limit must be ``obstruction-flat'', a condition describing a natural class of canonical metrics in higher dimensions. On compact manifolds, the wellposedness and uniqueness of solutions to this flow are the topic of the two papers \cite{BahuaudHelliwell, BahuaudHelliwell2} by the first author here and Helliwell. As an application of the methods of the present paper, we generalize these results to the setting of complete manifolds of bounded geometry. We describe this briefly here and refer the reader to \cite{BahuaudHelliwell, BahuaudHelliwell2} for more detail. The obstruction flow itself, namely $\partial_t g = \mathcal O_n(g)$, is degenerate both because of the underlying conformal covariance and because of the usual diffeomorphism invariance. To counter the first of these, we introduce the modified obstruction flow \begin{align} \label{eqn:AOF} \begin{cases} \partial_t g &= \mathcal{O}_n(g) + c_n (-1)^{\frac{n}{2}} ( (-\Delta)^{\frac{n}{2}- 1} S ) g \\ g(0) &= g_0, \end{cases} \end{align} where \begin{align} c_n = \frac{1}{2^{n/2 - 1} ( \frac{n}{2} - 2)! (n-2) (n-1)}. \end{align} In $4$ dimensions this is the modified Bach flow \begin{align} \begin{cases} \partial_t g &= B(g) - \frac{1}{12} (\Delta S ) g \\ g(0) &= g_0. \end{cases} \label{mBF} \end{align} This modification breaks the conformal gauge in the sense that stationary points of this modified flow are obstruction-flat metrics with harmonic scalar curvature. The scalar curvature condition is the normalization within a conformal class. The proof of this uses the fact that $\mathcal{O}_n$ is trace-free. The invariance under diffeomorphisms can be handled using a version of DeTurck's method, which is an effective tool for handling this degeneracy for the Ricci flow (see Chapter 2, Section 6 of \cite{CLN}). We now describe this method in the present setting. Fix a background metric $\widetilde{g}$; then any smooth one-parameter family of metrics $g(t)$ now defines a time-dependent vector field \begin{align} \label{def:DeTvf} V(t) = \sum V^k(t,z)\, \partial_{z_k}, \quad \mbox{where}\qquad V^k(t,z) := g^{pq}(t) \left( \Gamma(g(t))^k_{pq} - \Gamma(\widetilde{g})^k_{pq} \right) \end{align} using the Christoffel symbols $\Gamma$ of the indicated metrics. From this we define the DeTurck vector field \begin{align} \label{eqn:DT-vectorfield} U = c_n (n-1) (-1)^{\frac{n}{2}-1} (-\Delta)^{\frac{n}{2}-1} V + \frac{c_n ( n-2) (-1)^\frac{n}{2} }{2} (-\Delta)^{\frac{n}{2}-2} \, \nabla S \,, \end{align} and finally the obstruction-DeTurck flow \begin{align} \label{eqn:ODT} \begin{cases} \partial_t g &= \mathcal{O}_n(g) + c_n (-1)^{\frac{n}{2}} ((-\Delta)^{\frac{n}{2}- 1} S ) g + L_U g \\ g(0) &= g_0. \end{cases} \end{align} As usual, one must show that solutions of this gauged flow lead to solutions of the original (modified) flow \eqref{eqn:AOF}.
To this end, given a solution $g(t)$ to \eqref{eqn:ODT}, we solve the ODE \begin{align} \label{eqn:DT-trick} \begin{cases} \frac{d}{dt} \phi_t&=-U \circ \phi_t\\ \phi_0&=\mathrm{id}, \end{cases} \end{align} to obtain the one-parameter family of diffeomorphisms $\phi_t$ generated by $-U$. The fact that $\widetilde{g}$ and $g(t)$ have bounded geometry implies that $\phi_t$ exists at least for $t$ in some small interval around $0$. A short calculation, using $\frac{d}{dt}\, \phi_t^* g(t) = \phi_t^* \left( \partial_t g(t) - L_U g(t) \right)$ together with the naturality of $\mathcal{O}_n$ and of the scalar curvature term under diffeomorphisms, then shows that $\bar{g}(t) = \phi^*_t g(t)$ solves \eqref{eqn:AOF}. Uniqueness of solutions to the gauged flow \eqref{eqn:ODT} follows directly from the semigroup method that we invoke below. Uniqueness of solutions to the ungauged flow \eqref{eqn:AOF} requires more work. This is explained carefully in \cite{BahuaudHelliwell2}, but the main ideas are as follows. Given a particular solution $\overline{g}(t)$ to \eqref{eqn:AOF} and a choice of reference metric $(M,\widetilde{g})$, one may again use semigroup techniques to solve a higher-order analogue of the harmonic map heat flow equation for a family of diffeomorphisms $\phi_t$ from $(M,\overline{g}(t))$ to $(M,\widetilde{g})$. This equation is chosen exactly so that the pullback $g(t) = (\phi_t^{-1})^* \overline{g}(t)$ solves \eqref{eqn:ODT} with reference metric $\widetilde{g}$ and $U$ and $V$ as above. The various uniqueness statements then imply that $\overline{g}$ is uniquely determined. We now expand on both the existence and uniqueness statements. Taking the reference metric $\widetilde{g}$ equal to the initial metric, i.e.,\ $\widetilde{g} = g_0$, we define \[ F( g ) := \mathcal{O}_n(g) + c_n (-1)^{\frac{n}{2}} ( (-\Delta)^{\frac{n}{2}- 1} S ) g + L_U g. \] As proved in \cite{BahuaudHelliwell}, \begin{align} DF_{g_0}[h] = \left. \frac{d}{ds} F( g_0 + s h) \right|_{s=0} &= (-1)^{\frac{n}{2}-1} A_{g_0} h + \mathcal{P}( \partial^{n-1} h, \partial^n g_0, g_0^{-1},\partial^n \widetilde{g}, \widetilde{g}^{-1} ), \end{align} where the leading term \begin{align} (A_{g_0} h)_{jk} := g_0^{r_1 s_1} g_0^{r_2 s_2} \cdots g_0^{r_{n/2} s_{n/2}} \partial_{r_1} \partial_{s_1} \ldots \partial_{r_{n/2}} \partial_{s_{n/2}} h_{jk} \end{align} is an operator of order $n$ and $\mathcal{P}$ is a polynomial expression in the input tensors and their derivatives of appropriate order. Note that $A_{g_0}$ is the leading term in $\Delta^{n/2}$ and is strongly elliptic. We are now ready to prove Theorem \ref{thm:main-C}, restated here for convenience. \begin{theorem} Let $(M^n,g)$ be a complete Riemannian manifold of bounded geometry of order $2n + \alpha'$, with even dimension $n = 2\ell$, and where $0 < \alpha < \alpha' < 1$. If $g_0$ is any smooth metric on $M$, then there exist $T > 0$ and a unique family of metrics $g: [0,T) \to \mathcal C^{n,\alpha}(M,g)$ solving the ambient obstruction flow \begin{align} \begin{cases} \partial_t g &= \mathcal{O}_n(g) + c_n (-1)^{\frac{n}{2}} ( (-\Delta)^{\frac{n}{2}- 1} S ) g \\ g(0) &= g_0. \end{cases} \end{align} \end{theorem} \begin{proof} Set $D = \mathcal C^{2n,\alpha} \subset X = \mathcal C^{n,\alpha}$ and observe that $F: D \longrightarrow X$ is smooth. Since $DF_g$ is constructed in terms of the metric tensor, the uniform geometry implies that $DF_g$ is admissible. Hence by Theorem \ref{thm:STE}, there is a short-time solution to the obstruction-DeTurck flow.
As explained above, the equation \eqref{eqn:DT-trick} can then be solved to obtain the family of diffeomorphisms $\phi_t$, and we then deduce that $\bar{g}(t) = \phi_t^* g(t)$ is a short-time solution to the obstruction flow with initial condition $g_0$. To argue uniqueness, suppose that $\overline{g}_i(t)$, $i=1,2$, are two solutions to \eqref{eqn:AOF} with the same initial condition $g_0$. Again choose the reference metric $\widetilde{g} = g_0$. Following Section 5.2 of \cite{BahuaudHelliwell2}, for each $i$ we set \[ E(\phi_i) := (-1)^{n/2} c \Delta_{\overline{g}_i, \widetilde{g}}^{n/2}\phi_i + \mathcal P(\phi_i), \] where $\Delta_{\overline{g}_i, \widetilde{g}}$ is the Laplacian associated to the `map covariant derivative' for the identity map $(M, \overline{g}_i(t)) \to (M, \widetilde{g})$, as described in \cite{BahuaudHelliwell2}, and where $\mathcal P$ is a nonlinear differential operator of order $n-1$ acting on $\phi_i$. Combining this with the ODE for $\phi_i$ itself, we arrive at the strictly parabolic equation \begin{equation} \partial_t \phi_i = E(\phi_i), \; \phi_i(0) = \mathrm{id}. \label{deq} \end{equation} Taking advantage of the explicit coordinate expression for $E$ in \cite{BahuaudHelliwell2}, and using the bounded geometry of $M$ with respect to either of the metrics $g(t)$ or $\overline{g}(t)$ (valid in some fixed time interval), we see that $DE$ is an admissible elliptic operator, and hence Theorem~\ref{thm:STE} may be applied to \eqref{deq} to conclude that this equation has a unique solution, which remains a diffeomorphism, on some short time interval. The remainder of the argument finishes exactly as in Section 5.3 of \cite{BahuaudHelliwell2}. \end{proof}
\section{Introduction} \label{sec:introduction} Topology optimization finds the optimal distribution of material in a given design domain $D$ to minimize a cost function and satisfy constraint function inequalities. In the classic element-wise uniform density based method, which in this paper we refer to as the element-wise uniform volume fraction method\footnote{We opt to use volume fraction over density to avoid confusion with the physical quantity density, which is often used in topology optimization applications such as elastodynamics, fluids, etc.}, the optimization algorithm places material in individual elements of a background mesh to define the geometry of the optimal design. Regions devoid of material are meaningless, hence the motivation to coarsen the mesh in these regions. By the same token, regions which contain material require a higher mesh resolution. This cost-saving strategy of distributing elements of different sizes within the mesh is known as Adaptive Mesh Refinement (AMR). If the elements have different sizes, as in AMR, it is intuitively wrong to assume that all design variables contribute equally to the design. In particular, we explain why it is necessary to account for the element size when calculating the inner products involving the design variables within the NLP algorithms. However, this is not done within most NLP algorithms used in the topology optimization community, as they assume the design is a vector in the real vector space $\mathbb{R}^n$, i.e., simply a vector of length equal to the number of elements in the mesh, $n$, cf. IPOPT \citep{wachter2006}, SNOPT \citep{snoptmanual}, MMA \citep{mma}, FMINCON \citep{matlab} and Optimality Criteria \citep{ocmethod}. On the other hand, the NLP libraries Optizelle \citep{optizelle}, Moola \citep{moola}, ROL \citep{ridzal} and TAO \citep{tao-user-ref} contain, to various extents, the capacity to treat design fields as elements of their underlying function spaces. Related work by \cite{funke_book} compares mesh-independent and mesh-dependent versions of the steepest descent algorithm for an unconstrained problem and estimates their rates of convergence. It is also necessary to mention that in the PDE-constrained optimization community, NLP algorithms are inherently mesh-independent, as they are implemented in the corresponding function space. For instance, \cite{ulbrich2009primal} implements an infinite-dimensional (inf-dim) primal-dual interior-point method with a Newton solver, \cite{ziems2011adaptive} implements an inexact sequential quadratic programming method with an adaptive multilevel mesh refinement scheme and \rojo{\cite{Blank2017} solves a phase field based topology optimization with a projected gradient method for cases where the cost function is only differentiable in $L^{\infty}$.} We choose the $L^2$ function space to represent the set of possible designs on the design domain $D$. It is equipped with an inner product and discretized to be piecewise uniform\footnote{We use uniform to describe functions that do not change in space and constant to describe functions that do not change in time.} over the finite elements. Other infinite-dimensional function space choices, e.g.\ $H^1$, are possible and should be addressed in the future. The inconsistency of using a design field in $L^2$ with an NLP algorithm formulated in $\mathbb{R}^n$ is generally not a problem because most topology optimization studies use uniform meshes.
However, when using meshes with different element sizes, as is the case in AMR, the $\mathbb{R}^n$ viewpoint yields mesh-dependent designs, whereas the $L^2$ approach does not. \hl{An immediate corollary is that, contrary to what is commonly accepted in the topology optimization community, restriction by filtration alone does not ensure mesh-independent designs.} This work is laid out as follows: Section \ref{sec:mathprelim} presents the mathematical tools we need to implement mesh-independent NLP algorithms. We use these tools in Section \ref{sec:mmasec} to make one of the most popular NLP algorithms in the topology optimization community, the Globally Convergent Method of Moving Asymptotes (GCMMA/MMA) algorithm, mesh-independent. In Section \ref{sec:examples}, \hl{we first validate the NLP algorithm by solving three common problems in topology optimization with contrived meshes specifically built to increase the ill-conditioning of the optimization problem. We then apply the algorithm to a three-dimensional problem with a uniform mesh and different levels of refinement to showcase the importance of our algorithm for large-scale problems. Finally, we solve two design problems with different physics and AMR applied during the optimization.} Section \ref{sec:conclusions} briefly summarizes our findings and presents conclusions. The function space concepts used in this article require a rigorous mathematical discussion to be absolutely precise. Namely, it is necessary to show the differentiability of the convex approximation within the GCMMA to ensure that applying Newton's method is mathematically sound. These details would quickly obscure our main focus. Our intent is to convey the differences between the $L^2$ and $\mathbb{R}^n$ NLP algorithms as simply as possible and to demonstrate their practical consequences. We therefore opt to take a more pragmatic approach in our discussions at the expense of glossing over important mathematical details. \section{Mathematical Preliminaries} \label{sec:mathprelim} A topology optimization algorithm converges in a mesh-independent fashion by treating the design as a field, here a field in the $L^2$ space, using concepts from functional analysis. The design field is then discretized in a consistent manner using the finite element basis, i.e. \rojo{resulting in} the \rojo{widely} used element-wise uniform volume fraction field\footnote{ \rojo{\label{ft:elemwise}In this work, for simplicity, we focus on the most popular topology optimization approach, where the design is defined by an element-wise uniform material volume fraction in the Hilbert space $L^2$, but it could be extended to other parametrizations such as an $H^1$ nodal-based material volume fraction.}}. Notably, the norms in the NLP algorithm that check for convergence are discretized in this finite element space. To illustrate the proper discretization, consider the unconstrained minimization problem \begin{equation} \begin{aligned} \begin{split} & \underset{\nu \in V}{\text{min}} & & \theta(\nu)\,, \\ \end{split} \label{eq:orig_problem} \end{aligned} \end{equation} with the functional \begin{align} \theta & :V\rightarrow \mathbb{R}\,, \end{align} where $\nu: D \rightarrow \mathbb{R}$ is our volume fraction design field; it belongs to $V$, a Hilbert space on the domain $D$, equipped with an inner product $(\cdot, \cdot)_V$, which induces the primal norm $\norm{\cdot}_V$.
For our topology optimization, $V=L^2(D)$, which is equipped with the norm \begin{align} \norm{\nu}_{L^2} = \sqrt{(\nu, \nu)_{L^2}} = \left( \int_{D} \nu^2 ~ dV \right)^{1/2}\,. \end{align} The space of all bounded linear functionals that map $V$ to $\mathbb{R}$ is the dual space $V^*$; it is a subset of the space $\mathscr{L}(V, \mathbb{R})$ of linear operators from $V$ to $\mathbb{R}$, i.e. $V^* \subset \mathscr{L}(V, \mathbb{R})$. Both the primal $\norm{\cdot}_V$ and dual $\norm{\cdot}_{V^*}$ norms can be used to check for convergence in NLP algorithms. To formulate NLP algorithms on the function space $V$, we need the Riesz map from the Riesz representation theorem: Let $V$ be a Hilbert space with inner product $(\cdot, \cdot)_V$ and dual space $V^*$. For every $\varphi \in V^*$ there is a unique element $u\in V$ such that $\varphi (v) = \left( u, v \right)_V$ for all $v\in V$. This one-to-one map is the Riesz map $\Phi:V \rightarrow V^*$ defined such that $\Phi(u) = \varphi$; it is an isometry between $V$ and $V^*$. The discretization of the primal and dual spaces follows from \cite{funke_book}. We approximate the volume fraction field $\nu \in V$ with $\nu_h \in V_h$, where $V_h$ is the span of basis functions $\mathscr{P} = \left\{\phi_1, ..., \phi_n \right\}$, $\phi_i \in \rojo{V_h}$ and $n$ is the dimension of $V_h$. Our approximation now reads \newcommand{\sumonn}[1]{\sum_{#1=1}^{n}} \begin{align} \nu(\mathbf{x}) \approx \nu_h(\mathbf{x}) & = \sumonn{i} \nu^i \phi_i(\mathbf{x}) = \boldsymbol{\nu}^T \boldsymbol{\phi}(\mathbf{x})\,. \label{eq:l2_disc} \end{align} We similarly approximate $\iota\in V$ with $\iota_h \in V_h$ so that the inner product definition yields \begin{equation} \begin{aligned} (\nu_h,\iota_h)_{V_h} & = \int_{D} (\boldsymbol{\nu}^T \boldsymbol{\phi}) (\boldsymbol{\iota}^T \boldsymbol{\phi}) dV \\ & = \boldsymbol{\nu}^T \int_{D} \boldsymbol{\phi} \boldsymbol{\phi}^T ~dV ~ \boldsymbol{\iota} \\ & = \boldsymbol{\nu}^T \mathbf{M} \boldsymbol{\iota} \,,\label{eq:disc_inner} \end{aligned} \end{equation} where \begin{align} \mathbf{M} & = \int_{D} \boldsymbol{\phi} \boldsymbol{\phi}^T ~dV \label{eq:mass_matrix} \end{align} is the mass matrix that reflects the mesh discretization. By construction, $\mathbf{M}$ is symmetric and invertible. The discretized design field $\nu_h$ is in the Hilbert space $V_h=\left(\mathbb{R}^n, (\cdot,\cdot)_{\mathbf{M}}\right)$, i.e. it is a vector in $\mathbb{R}^n$ of dimension $n$ with an $\mathbf{M}$ inner product. This inner product induces the norm $\norm{\nu_h}_{V_h} = \norm{\boldsymbol{\nu}}_{\mathbf{M}}= (\boldsymbol{\nu}^T \mathbf{M} \boldsymbol{\nu})^{1/2}$. Clearly the $L^2$ norm $\norm{\nu_h}_{L^2} = (\boldsymbol{\nu}^T \mathbf{M} \boldsymbol \nu)^{1/2}$ differs from the $\mathbb{R}^n$ norm $\norm{\nu_h}_{\mathbb{R}^n} = (\boldsymbol{\nu}^T \boldsymbol \nu)^{1/2}$. In topology optimization, $\nu_h$ is usually discretized via piecewise uniform functions over the individual elements, so $\mathbf{M} = \mathrm{diag}\left( |\Omega_1|, ..., |\Omega_n|\right)$, where $|\Omega_e|$ is the volume of the element $\Omega_e$. Hence, if the mesh is uniform, $\mathbf{M} = |\Omega_e| \mathbf{I}$ and $\norm{\nu_h}_{L^2} = \sqrt{|\Omega_e|} \norm{\nu_h}_{\mathbb{R}^n}$. The basis $\mathscr{P}=\left\{\phi_1, ..., \phi_n \right\}$ induces a unique dual basis $\mathscr{P}^* = \left\{\phi^{*1}, ..., \phi^{*n} \right\}$ for $V_h^*$ defined such that $\phi^{*i} \in \rojo{V_h^*}$ and $\phi^{*i}(\phi_j)= \delta_{j}^i ~\forall ~i,j=1,...,n$.
This $\mathscr{P}^*$ basis is used to discretize any $F\in V^*$ as $F_h \in V_h^*$ such that for all $\iota_h \in V_h$ \begin{align} F_h(\iota_h) = \sum^{n}_{i=1} F_i \phi^{*i} (\iota_h)\,, \end{align} \rojo{where $F_i=F(\phi_i)$ for $i=1,...,n$, i.e. the vector components $F_i$ are interpolated from $F\in V^*$.} In this way, $F_h(\iota_h)$ is computed as \begin{equation} \begin{aligned} F_h(\iota_h) & = \sumonn{i} F_i \phi^{*i} \left( \sumonn{j} \iota^j \phi_j \right) \,, \\ & = \sumonn{i} F_i \sumonn{j} \iota^j \phi^{*i}(\phi_j) \,, \\ & = \sumonn{i} F_i \iota^i \,, \\ & = \mathbf{F}^T \boldsymbol{\iota} \,,\label{eq:dual_norm_disc} \end{aligned} \end{equation} where we used the biorthogonality of the bases $\mathscr{P}$ and $\mathscr{P}^*$ and the linearity of $\phi^{*i}$. From the Riesz representation theorem, there exists $\nu_h\in V_h$ such that $\Phi(\nu_h) = F_h$ or $\Phi^{-1}(F_h) = \nu_h$, where \rojo{$\Phi^{-1}: V_h^* \rightarrow V_h$}. Therefore, \begin{equation} \begin{aligned} F_h(\iota_h) & = (\overbrace{\Phi^{-1}(F_h)}^{\nu_h}, \iota_h)_{V} \,, \\ & = (\nu_h, \iota_h)_{V} \,, \\ & = \boldsymbol{\nu}^T \mathbf{M} \boldsymbol{\iota} \,.\label{eq:dual_norm_disc2} \end{aligned} \end{equation} From Equations \eqref{eq:dual_norm_disc} and \eqref{eq:dual_norm_disc2} we can see that \begin{align} \mathbf{F}^T \boldsymbol{\iota} = \boldsymbol{\nu}^T \mathbf{M} \boldsymbol{\iota}\,. \end{align} Therefore, \begin{align} \mathbf{F} = \mathbf{M} \boldsymbol{\nu} \,, \end{align} and the discrete Riesz map and its inverse are defined such that \begin{align} \Phi_h(\boldsymbol{\nu})=\mathbf{M}\boldsymbol{\nu} \end{align} and \begin{align} \Phi_h^{-1}(\mathbf{F}) & = \mathbf{M}^{-1} \mathbf{F} \,. \label{eq:rieszmap} \end{align} Recalling that the Riesz map is an isometry between the spaces $V_h$ and $V_h^*$, we can now define and calculate the norm of an object $F_h \in V_h^*$ as \begin{equation} \begin{aligned} \norm{F_h}_{V^*_h} & = \norm{\Phi^{-1} (F_h)}_{V_h} \,, \\ & = \norm{\mathbf{M}^{-1} \mathbf{F}}_{\mathbf{M}} \,, \\ & = \sqrt{\mathbf{F}^T \mathbf{M}^{-1} \mathbf{M} \mathbf{M}^{-1} \mathbf{F} } \,, \\ & = \norm{\mathbf{F}}_{\mathbf{M}^{-1}} \,, \end{aligned} \end{equation} where we used the definition of the discrete inner product in Equation \eqref{eq:disc_inner}. In this work, we use the Fr\'echet derivative $D\theta(\nu) \in V^*$ of the function $\theta : V \rightarrow \mathbb{R}$ at $\nu$. If it exists, this derivative is defined such that\footnote{The ``little-$o$ notation'' $o(\norm{h}_V)$ for a functional $q: V \rightarrow \mathbb{R}$ means $\lim_{\norm{h}_V \to 0} \frac{q(h)}{ \norm{h}_V} =0$} \begin{align} \theta(\nu + h) - \theta(\nu) - D\theta(\nu)[h] = o( \norm{h}_V) \label{eq:frechet_deriv} \end{align} for all $h\in V$. By definition, $D\theta(\nu)\in V^*$, and hence the Riesz representation theorem tells us there is an object in $V$ that we will denote $\nabla \theta (\nu) \in V$, i.e. the gradient of $\theta$ at $\nu$, such that $D\theta(\nu)[h]=(\nabla \theta(\nu), h)_V$ for all $h\in V$. Using the Riesz map, $\Phi(\nabla \theta(\nu)) = D\theta(\nu)$, and because the Riesz map depends on the inner product $(\cdot, \cdot)_V$, so does $\nabla \theta(\nu) \in V$.
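To make these discrete objects concrete, the following minimal NumPy sketch (our own illustration; it is not taken from any of the cited libraries) assembles the diagonal mass matrix of a nonuniform one-dimensional mesh and evaluates the primal norm $\norm{\boldsymbol{\nu}}_{\mathbf{M}}$, the plain $\mathbb{R}^n$ norm, and the dual norm $\norm{\mathbf{F}}_{\mathbf{M}^{-1}}$ via the discrete Riesz map \eqref{eq:rieszmap}: \begin{verbatim}
import numpy as np

# Piecewise uniform field on a nonuniform 1-D mesh of D = [0, 1]
nodes = np.unique(np.concatenate([np.linspace(0.0, 1.0, 11),
                                  np.linspace(0.9, 1.0, 21)]))
vol = np.diff(nodes)                  # element volumes = diagonal of M
nu = np.random.rand(vol.size)         # one design value per element

norm_M = np.sqrt(nu @ (vol * nu))     # primal norm ||nu||_M
norm_Rn = np.sqrt(nu @ nu)            # mesh-blind R^n norm
F = vol * nu                          # Riesz map: F = M nu lives in V_h^*
norm_dual = np.sqrt(F @ (F / vol))    # dual norm ||F||_{M^{-1}}

print(norm_M, norm_Rn, norm_dual)     # norm_M == norm_dual; norm_Rn differs
\end{verbatim} Note that the primal norm of $\boldsymbol{\nu}$ and the dual norm of $\mathbf{F} = \mathbf{M}\boldsymbol{\nu}$ coincide, as they must, since the Riesz map is an isometry, whereas the $\mathbb{R}^n$ norm depends on the element sizes.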
This dependence of the gradient on the inner product is crucial in our NLP algorithm because the inner product encodes the mesh discretization; notably, from \eqref{eq:rieszmap} we have \begin{align} \boldsymbol{\nabla \theta} = \mathbf{M}^{-1} \boldsymbol{D \theta} \label{eq:rieszmap_grad} \end{align} where $\boldsymbol{\nabla \theta}$ and $\boldsymbol{D \theta}$ are the discrete counterparts of $\nabla \theta(\nu)$ and $D\theta(\nu)$. We are now in a position to show how these functional analysis concepts apply to NLP algorithms in the inf-dim space. We start by examining the most basic NLP algorithm for the solution of the simple unconstrained minimization problem \eqref{eq:orig_problem}, i.e. the steepest descent algorithm, for which the iterate $\nu^{(k)}$ is updated as \begin{align} \nu^{(k+1)} = \nu^{(k)} - \gamma \nabla \theta(\nu^{(k)}) \,, \label{eq:steepestdesc} \end{align} where $\gamma \geq 0$ is the step length. The discretized Equation \eqref{eq:steepestdesc} becomes \begin{equation} \begin{aligned} \boldsymbol{\nu}^{(k+1)} & = \boldsymbol{\nu}^{(k)} - \gamma \boldsymbol{\nabla \theta} \,, \\ & = \boldsymbol{\nu}^{(k)} - \gamma \mathbf{M}^{-1} \boldsymbol{D \theta} \,. \label{eq:steepestdescdisc} \end{aligned} \end{equation} When calculating the norm to check for convergence, we use $\norm{\nabla \theta (\nu)}_V$, which upon discretization is $\norm{\boldsymbol{\nabla \theta}}_{\mathbf{M}}$. It seems intuitive that $\nu$ and $\nabla \theta(\nu)$ must be in the same function space since they are added together. This motivates us to use the gradient $\nabla \theta(\nu)$ and not the derivative $D\theta(\nu)$ in \eqref{eq:steepestdesc}, which is contrary to most topology optimization algorithms. For a uniform mesh, $\mathbf{M}=|\Omega_e|\mathbf{I}$, so $\boldsymbol{\nabla \theta} = \frac{1}{|\Omega_e|} \mathbf{I} \boldsymbol{D\theta} $ \rojo{ and hence $\boldsymbol{\nabla \theta}$ and $\boldsymbol{D\theta}$ are parallel and there is no difference in the search direction.} \rojo{However}, the number of iterations to convergence will be different due to the difference in the inner product. The second NLP algorithm uses Newton's method, wherein we iterate to find $\nu$ such that \begin{align} D\theta(\nu)[\delta \nu]= 0~ \forall \delta \nu \in V. \end{align} To do so, we linearize around $\nu^{(k)}$ and solve for the update $\Delta \nu^{(k)}$ via \begin{align} D^2\theta(\nu^{(k)})[\Delta \nu^{(k)},\delta \nu] =-D\theta(\nu^{(k)})[\delta \nu]~~ \forall \delta \nu \in V \,, \label{eq:newtonstep} \end{align} where $D^2\theta(\nu^{(k)})[ \cdot, \cdot] \in \mathscr{L} (V \times V, \mathbb{R})$ is the Hessian, i.e. the second derivative of $\theta$ at $\nu^{(k)}$; it is a bilinear map from $V \times V$ to $\mathbb{R}$. The difference here is that we need to supply the NLP algorithm with the derivative $D\theta(\nu)$ (and the Hessian $D^2\theta(\nu)$) and not the gradient $\nabla \theta(\nu)$ as in the steepest descent algorithm.
Upon discretization, Equation \eqref{eq:newtonstep} becomes \begin{align} \boldsymbol{D}^2\boldsymbol{\theta} \, \boldsymbol{\Delta \nu}^{(k)} =-\boldsymbol{D\theta} \,. \end{align} When calculating the norm to check for convergence, we use $\norm{D \theta (\nu)}_{V^*}$, whose discretization is $\norm{\boldsymbol{D \theta}}_{\mathbf{M}^{-1}}$. Inspired by \cite{funke_book}, we showcase the difference between $\nabla \theta(\nu)$ and $D\theta(\nu)$ with the following one-dimensional convex unconstrained optimization problem\label{page:exampleproblem} \begin{align} \underset{\nu\in V}{\text{min}} ~ \theta(\nu) = \frac{1}{2}( \nu, c \, \nu )_V - ( \nu, b )_V \,, \end{align} where $b$ and $c$ are given functions on $V=L^2(D)$ with $D=[1, 10]$, $c(x) = \sin\left(\frac{x}{4}\right)$ and $b(x) = x$. The solution is trivially calculated by the stationary condition \begin{equation} \begin{aligned} D\theta(\nu) [v] = 0 & = ( \nabla \theta, v )_V \,, \\ & = ( c \, \nu - b,v )_V \,, \end{aligned} \label{eq:derivexample} \end{equation} which must hold for all $v \in V$, and hence $\nu(x)=\frac{b(x)}{c(x)}$. We proceed to discretize the function $\theta$ with piecewise uniform elements, resulting in the expression \begin{align} \theta(\boldsymbol{\nu}) = \frac{1}{2} \boldsymbol{\nu}^T \mathbf{H} \boldsymbol{\nu} - \boldsymbol{\nu}^T \mathbf{M b} \,, \end{align} whose derivative is \begin{align} \boldsymbol{D \theta}= \mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b} \,, \label{eq:discderiv} \end{align} where the Hessian matrix $\mathbf{H}$ is calculated as \begin{align} \mathbf{H} = \int_D c(x) \boldsymbol \phi(x) \boldsymbol \phi(x)^T ~dV \,, \end{align} \rojo{using one-point quadrature per element. The vectors $\boldsymbol \nu$ and $\mathbf{b}$ represent the values of the functions $\nu(x)$ and $b(x)$ at the quadrature points.} Note that Equation \eqref{eq:discderiv} is the discretization of $D\theta(\nu)$ in Equation \eqref{eq:derivexample}. Applying the steepest descent algorithm in $\mathbb{R}^n$ yields \begin{equation} \begin{aligned} \boldsymbol{\nu}^{(k+1)} & = \boldsymbol{\nu}^{(k)} - \gamma \boldsymbol{D \theta} \,, \\ & = \boldsymbol{\nu}^{(k)} - \gamma(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})\,. \label{eq:steepestexample} \end{aligned} \end{equation} The optimal step size $\gamma$ is calculated with the closed-form expression \begin{align} \gamma = \frac{(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})^T(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})}{(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})^T \mathbf{H} (\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})} \,, \end{align} which we use in Equation \eqref{eq:steepestexample} to obtain the fixed point iteration \begin{align} \boldsymbol{\nu}^{(k+1)} = \boldsymbol{\nu}^{(k)} - \frac{(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})^T(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})}{(\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})^T \mathbf{H} (\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b})} (\mathbf{H}\boldsymbol{\nu}^{(k)} -\mathbf{M}\mathbf{b}) \,.
\label{eq:fixedsteepest} \end{align} On the other hand, in the $L^2$ reformulation, we replace the descent direction $\boldsymbol{D \theta}$ with the gradient $\boldsymbol{\nabla \theta} = \mathbf{M}^{-1} \boldsymbol{D\theta} = \mathbf{M}^{-1}\mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b}$ in the first line of Equation \eqref{eq:steepestexample} and calculate the optimal step size \begin{align} \gamma = \frac{(\mathbf{M}^{-1} \mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b})^T\mathbf{M} (\mathbf{M}^{-1}\mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b})}{(\mathbf{M}^{-1}\mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b})^T \mathbf{M} (\mathbf{M}^{-1}\mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b})} = 1 \end{align} to obtain the fixed point iteration \begin{align} \boldsymbol{\nu}^{(k+1)} = \boldsymbol{\nu}^{(k)} - (\mathbf{M}^{-1}\mathbf{H}\boldsymbol{\nu}^{(k)} - \mathbf{b})\,. \label{eq:newfixedsteepest} \end{align} We run both fixed point iterations, Equations \eqref{eq:newfixedsteepest} and \eqref{eq:fixedsteepest}, starting from $\boldsymbol \nu^{(0)}=\boldsymbol 0$. Convergence is declared when the error $e \leq 10^{-7}$, where $e=\norm{\boldsymbol \nu - \mathbf{b}\oslash \mathbf{c}}$ for $\mathbb{R}^n$ (the operator $\oslash$ is the \rojo{Hadamard} division) and $e=\norm{\nu - b / c}_{L^2}$ for $L^2$. Table \ref{tab:steeptable} reports the iteration history of both methods with $n$ design variables over a one-dimensional mesh with nodes at positions $x_r=10^{y_r}$, where $y_r=\{\frac{r}{n+1}\}_{r=0}^{n+1}$. The convergence of the $\mathbb{R}^n$ NLP algorithm deteriorates with the number of elements, whereas the number of iterations of the $L^2$ NLP algorithm remains nearly constant. The slower convergence of the $\mathbb{R}^n$ NLP algorithm is attributed to the fact that we are adding members from different spaces ($\boldsymbol\nu \in V_h$ and $\boldsymbol{D\theta}\in V_h^*$). \begin{table}[h!] \begin{center} \begin{tabular}{c|c|c|c} \diagbox{Method}{$n$} & $10^1$ & $10^3$ & $10^5$ \\ \hline $\mathbb{R}^n$ & 195 & 270 & 302 \\ $L^2$ & 52 & 56 & 56 \\ \end{tabular} \caption{Iteration counts for the discrete steepest descent algorithms of Equations \eqref{eq:fixedsteepest} and \eqref{eq:newfixedsteepest}.} \label{tab:steeptable} \end{center} \end{table} \section{GCMMA in function space} \label{sec:mmasec} We are now motivated to formulate the first-order GCMMA algorithm \citep{gcmma} in the $L^2$ space. GCMMA \citep{gcmma} and its non-globally-convergent version MMA \citep{mma} are widely used NLP algorithms in the topology optimization community. Following the implementation given in \cite{svanbergmma}, we highlight here the necessary changes to make the GCMMA algorithm converge in a mesh-independent fashion. To begin, we consider the optimization problem \begin{equation} \begin{aligned} \label{eq:minproblem} \begin{split} & \underset{\nu \in V}{\text{min}} & & \theta_0(\nu) \,,\\ & \text{s.t.} & & \theta_i(\nu) \leq 0, \; i = 1, \ldots, m\,, \\ &&& \nu_{\text{min}} \leq \nu \leq \nu_{\text{max}} \; \text{a.e.}\,.
\end{split} \end{aligned} \end{equation} First, the artificial optimization variables $\mathbf{y} = (y_1,...,y_m)$ \hl{are added to ensure feasibility} and $z$ \hl{is added to make certain subclasses of problems, like least squares or minmax problems, easier to formulate, i.e.} \begin{equation} \begin{aligned} & \underset{ \subalign{ \nu & \in V \\ \boldsymbol{y} & \in \mathbb{R}^m \\ z & \in \mathbb{R} } } {\text{min} } & & \theta_0(\nu) + a_0z + \sum\limits_{i=1}^m \left( c_i y_i + \frac{1}{2}d_i y_i^2 \right) \,, \\ & \text{s.t.} & & \theta_i(\nu) - a_iz - y_i\leq 0, \; i = 1, \ldots, m\,, \\ & & & \nu_{\text{min}} \leq \nu \leq \nu_{\text{max}}\; \text{a.e.}\,, \\ & & & \boldsymbol{y} \geq 0 \,, \\ & & & z\geq 0 \,, \end{aligned} \label{eq:mmaproblem} \end{equation} where $a_0, a_i, c_i$ and $d_i$ are real numbers which satisfy $a_0 > 0, a_i \geq 0, c_i \geq 0, d_i \geq 0$ and $c_i + d_i > 0$ for all $i$, and also $a_i c_i > a_0$ for all $i$ \citep{gcmma}. Note that we recover the original problem \eqref{eq:minproblem} for $z=0$ and $\boldsymbol{y}=0$. For each outer optimization iteration $k$, we build a convex approximate subproblem of Equation \eqref{eq:mmaproblem} from the cost and constraint functions, their derivatives, and the values at the current iterate $(\nu^{(k)},\boldsymbol{y}^{(k)},z^{(k)})$. Ultimately, we iterate by solving \begin{equation} \begin{aligned} (\nu^{(k+1)}, \boldsymbol{y}^{(k+1)}, & z^{(k+1)}) = \underset{ \subalign{ \nu & \in V \\ \boldsymbol{y} & \in \mathbb{R}^m \\ z & \in \mathbb{R} } }{\text{arg min}} & & \tilde{\theta}_0(\nu) + a_0z + \sum\limits_{i=1}^m \left( c_i y_i + \frac{1}{2}d_i y_i^2 \right) \,, \\ & \text{s.t.} & & \tilde{\theta}_i(\nu) - a_iz - y_i\leq 0, \; i = 1, \ldots, m\,, \\ & & & \alpha \leq \nu \leq \beta\; \text{a.e.} \,, \\ & & & \boldsymbol{y} \geq 0 \,, \\ & & & z\geq 0 \,, \end{aligned} \label{eq:mmasubproblem} \end{equation} where the newly introduced functions $\tilde{\theta}_0$ and $\tilde{\theta}_i$ and bounds $\alpha$ and $\beta$ are defined momentarily. In our formulation of the above subproblem, we replace the summations of the approximating functionals $\tilde{\theta}_i$ in \cite{svanbergmma} with integrals over the domain: \begin{align} \tilde{\theta}_i(\nu) & =\int_D \left(\frac{p_{i}}{U^{(k)} - \nu}+\frac{q_{i}}{\nu - L^{(k)}}\right)~dV + r_i , \; i = 0,1, \ldots, m \,, \label{eq:convex_approx} \\ r_i & = \theta_i(\nu^{(k)}) - \int_D \left(\frac{p_{i}}{U^{(k)} - \nu^{(k)}}+\frac{q_{i}}{\nu^{(k)} - L^{(k)}}\right)~dV \,, \label{eq:mma_integrals} \end{align} where \begin{align} p_{i} & = (U^{(k)} - \nu^{(k)})^2\left(1.001\left(\nabla \theta_i(\nu^{(k)})\right)^+ +0.001\left(\nabla \theta_i(\nu^{(k)})\right)^- +\frac{\rho_i^{(k,j)}}{\nu_{\text{max}} - \nu_{\text{min}}}\right) \label{eq:pfunc} \,, \\ q_{i} & = (\nu^{(k)} - L^{(k)})^2\left(0.001\left(\nabla \theta_i(\nu^{(k)})\right)^+ +1.001\left(\nabla \theta_i(\nu^{(k)})\right)^- +\frac{\rho_i^{(k,j)}}{\nu_{\text{max}} - \nu_{\text{min}}}\right) \label{eq:qfunc} \,, \end{align} and $U^{(k)}$ and $L^{(k)}$ are the soon-to-be-defined moving upper and lower asymptotes; they are all elements of $V$. We emphasize here that the original GCMMA implementation does not make a distinction between gradients and derivatives when building the convex approximation in Equation \eqref{eq:convex_approx}. As shown in this paper, it is vital to use the gradients $\nabla \theta_i(\nu^{(k)})$ for $i=0,1, \ldots,m$.
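To make the practical impact of this distinction concrete, the following minimal NumPy sketch of ours runs the two fixed point iterations \eqref{eq:fixedsteepest} and \eqref{eq:newfixedsteepest} for the one-dimensional example of page~\pageref{page:exampleproblem}, with one-point quadrature; it is an illustration only, not the code behind Table~\ref{tab:steeptable}, so the iteration counts may differ slightly: \begin{verbatim}
import numpy as np

n = 1000                              # mesh resolution parameter
y = np.arange(n + 2) / (n + 1)
x = 10.0**y                           # graded nodes x_r = 10^{y_r} on [1, 10]
vol = np.diff(x)                      # element volumes = diagonal of M
xq = 0.5 * (x[:-1] + x[1:])           # one quadrature point per element
c, b = np.sin(xq / 4.0), xq
H = vol * c                           # H = M diag(c): diagonal Hessian matrix
exact = b / c                         # stationary solution nu = b / c

# R^n iteration, Eq. (fixedsteepest): step along the derivative D(theta)
nu, k = np.zeros(vol.size), 0
while np.linalg.norm(nu - exact) > 1e-7:
    d = H * nu - vol * b              # discrete derivative D(theta)
    nu -= (d @ d) / (d @ (H * d)) * d
    k += 1
print("R^n iterations:", k)

# L2 iteration, Eq. (newfixedsteepest): step along the gradient, gamma = 1
nu, k = np.zeros(vol.size), 0
while np.sqrt(vol @ (nu - exact)**2) > 1e-7:
    nu -= c * nu - b                  # gradient M^{-1} D(theta) = c nu - b
    k += 1
print("L2 iterations:", k)
\end{verbatim} As in Table~\ref{tab:steeptable}, the $L^2$ iteration count is essentially insensitive to the refinement, while the $\mathbb{R}^n$ count grows with it.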
It is important to warn readers that the convex approximations in Equation \eqref{eq:convex_approx} are not Fr\'echet differentiable in $L^2$ where $\nu=0$. We do not allow this $\nu=0$ situation, but a more mathematically rigorous rederivation of the method to accept such cases should be considered in the future. To ensure the subproblem is convex, we use the ramp-like functions $\left(a\right)^+ = \text{max} (0, a)$ and $\left(a\right)^-= \text{max} (0 ,-a)$. The bounds (now fields in $V$) $\alpha$ and $\beta$ are taken as \begin{equation} \begin{aligned} \alpha & = \text{max} \{\nu_{\text{min}}, L^{(k)} + 0.1 (\nu^{(k)} - L^{(k)}), \nu^{(k)} - 0.5 (\nu_{\text{max}} -\nu_{\text{min}} )\} \,, \\ \beta & = \text{min} \{\nu_{\text{max}}, U^{(k)} - 0.1 (U^{(k)} - \nu^{(k)}), \nu^{(k)} + 0.5 (\nu_{\text{max}} -\nu_{\text{min}} )\} \,. \end{aligned} \label{eq:alphabeta} \end{equation} The GCMMA differs from the MMA in its attempt to achieve global convergence by controlling the parameter $\rho_i^{(k,j)}$ (which in the MMA is a fixed small positive value, usually lower than $10^{-5}$) in Equations \eqref{eq:pfunc} and \eqref{eq:qfunc}. Here, the added superscript $j$ corresponds to the inner iteration within the GCMMA. For the initial $j=0$ inner iteration, the solution $(\nu^{(k,0)},\boldsymbol{y}^{(k,0)},z^{(k,0)})$ of the Equation \eqref{eq:mmasubproblem} subproblem, whose details are explained later, is accepted if \begin{align} \tilde \theta_i(\nu^{(k,j)}) \geq \theta_i(\nu^{(k,j)}) \,, ~ ~ i=0,...,m \,, \label{eq:condition_check} \end{align} whereupon the outer iteration $k+1$ commences with the initial iterate $(\nu^{(k+1)},\boldsymbol{y}^{(k+1)},z^{(k+1)}) = (\nu^{(k,0)},\boldsymbol{y}^{(k,0)},z^{(k,0)})$. Otherwise, the $j+1$ subproblem \eqref{eq:mmasubproblem} is solved with a more conservative convex approximation by replacing $\rho_i^{(k,j)}$ with $\rho_i^{(k,j+1)} > \rho_i^{(k,j)}$, and the Equation \eqref{eq:condition_check} inequality is re-examined. If Equation \eqref{eq:condition_check} is satisfied, we begin the outer iteration $k+1$ with the initial iterate $(\nu^{(k+1,0)},\boldsymbol{y}^{(k+1,0)},z^{(k+1,0)}) = (\nu^{(k, j+1)}, \boldsymbol{y}^{(k, j+1)}, z^{(k, j+1)})$; otherwise, the inner $j+2$ subproblem is solved, and so on, cf. Figure \ref{fig:gcmmainner}. The termination criteria will be explained in detail later.
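For readers who prefer code to flowcharts, the outer/inner loop logic just described (and depicted in Figure~\ref{fig:gcmmainner}) can be sketched in a few lines of Python. All callables passed in below are hypothetical placeholders standing in for the equations of this section; they are not functions from any cited library: \begin{verbatim}
# Sketch of the GCMMA outer/inner loops, cf. Figure (fig:gcmmainner).
# The callables are user-supplied placeholders for the equations cited.
def gcmma(nu, theta, grad, initial_rho, increase_rho,
          build_approx, solve_subproblem, terminated, max_outer=200):
    for k in range(max_outer):
        vals = [th(nu) for th in theta]    # theta_i(nu^(k))
        grads = [g(nu) for g in grad]      # gradients M^{-1} D theta_i,
                                           # Eq. (rieszmap_grad)
        rho = initial_rho(grads)           # Eq. (rho_mechanism)
        while True:                        # inner iterations j = 0, 1, ...
            approx = build_approx(nu, vals, grads, rho)  # Eqs. (convex_approx)-(qfunc)
            nu_new = solve_subproblem(approx)            # Eq. (mmasubproblem)
            # Accept only a conservative approximation, Eq. (condition_check)
            if all(a_i(nu_new) >= th(nu_new)
                   for a_i, th in zip(approx, theta)):
                break
            rho = increase_rho(rho, nu_new, approx)      # more conservative rho
        nu = nu_new
        if terminated(nu, grads):          # termination criteria (detailed later)
            break
    return nu
\end{verbatim} The only change relative to an $\mathbb{R}^n$ implementation is that the supplied gradient routines apply the discrete Riesz map $\mathbf{M}^{-1}$ before the approximation is built.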
\begin{figure*} \centering \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=3cm, text centered, rounded corners, minimum height=0.5cm] \tikzstyle{input} = [draw, ellipse, fill=green!20, text width=3cm, text centered, minimum height=0.8cm] \tikzstyle{output} = [draw, ellipse, fill=red!20, text width=3cm, text centered, minimum height=0.8cm] \tikzstyle{decision} = [diamond, aspect= 3, draw, fill=orange!20, inner sep=-6pt, text width=5cm, text centered, minimum height=0.5cm] \tikzstyle{decision2} = [diamond, aspect= 3, draw, fill=orange!20, inner sep=-6pt, text width=5cm, text centered, minimum height=0.4cm] \tikzstyle{line} = [draw, -latex] \begin{tikzpicture}[node distance = 1.2cm, auto] \sffamily \node [input] (nu_k) {$\nu^{(0)}$}; \node [block, below of = nu_k] (obj_grad) {Calculate $\theta_i(\nu^{(k)}), ~\nabla \theta_i(\nu^{(k)})$}; \node [block, below of = obj_grad] (param_gc) {Obtain $\rho^{(k,0)}$}; \node [block, below of = param_gc, node distance = 1.2cm] (fapp) {Build $\tilde \theta_i^{(k,j)}(\nu^{(k)})$}; \node [block, text width=5cm, below of = fapp] (nu_new) {Solve for $\nu^{(k,j)}$ in subproblem Equation \eqref{eq:mmasubproblem}}; \node [block, text width=5cm, below of = nu_new] (new_f) {Calculate $\theta_i (\nu^{(k,j)}) ,\tilde{\theta}_i(\nu^{(k,j)})$}; \node [decision, below of = new_f, node distance = 2.5cm] (f_condition) { \begin{align*} \text{If} ~ \tilde \theta_i(\nu^{(k,j)}) & \geq \theta_i(\nu^{(k,j)}) \\ \forall i=0,...,m & \end{align*} }; \node [output, below of = f_condition, node distance = 3cm] (new_opti) {$\nu^{(k+1, 0)} = \nu^{(k,j)}$}; \node [block, right of = f_condition, node distance = 6cm] (repeat) {Obtain $\rho^{(k,j+1)}$}; \path [line] (repeat) |- node[near start]{$j=j+1$} (fapp); \path [line] (f_condition) -- node{No} (repeat); \path [line] (f_condition) -- node{Yes} (new_opti); \path [line] (nu_k) -- (obj_grad); \path [line] (obj_grad) -- (param_gc); \path [line] (param_gc) -- node {$j=0$} (fapp); \path [line] (fapp) -- (nu_new); \path [line] (nu_new) -- (new_f); \path [line] (new_f) -- (f_condition); \node [decision2, below of = new_opti, node distance = 1.5cm] (stopping) {Termination criteria}; \path [line] (stopping) -- node{No} ($(stopping) + (-5,0)$) |- node[near start]{$k=k+1$} (obj_grad); \path [line] (new_opti) -- (stopping); \node [block, below of = stopping, node distance = 1.5cm] (final) {Solution}; \path [line] (stopping) -- node{Yes} (final); \end{tikzpicture} \caption{GCMMA algorithm.} \label{fig:gcmmainner} \end{figure*} The parameters $\rho_i^{(k,j)}$ are calculated following \cite{gcmma}, but with integrals over $D$ replacing summations. For subproblem $(k,0)$ \begin{align} \rho^{(k,0)}_i = \frac{0.1}{\hat V} \int_D \lvert \nabla \theta_i (\nu^{(k,0)}) \rvert \left( \nu_{\text{max}} - \nu_{\text{min}} \right) ~dV ~\text{for}\; i = 0,1, \ldots, m \,, \label{eq:rho_mechanism} \end{align} where $\hat V$ is the volume (area) of $D$. For the subsequent $(k,j+1)$ subproblems \begin{equation} \begin{aligned} \rho^{(k, j+1)}_i & = \text{min} \left\{ 1.1 \left(\rho^{(k, j)}_i + \delta_i^{(k,j)} \right) , 10 \rho_i^{(k,j)} \right\} & \text{if} ~\delta_i^{(k,j)} > 0 \,, \\ \rho^{(k, j+1)}_i & = \rho^{(k, j)}_i & \text{if} ~\delta_i^{(k,j)} \leq 0\,, \end{aligned} \end{equation} where \begin{align} \delta^{(k,j)}_i = \frac{\theta_i\left(\nu^{(k,j)}\right) - \tilde{\theta}_i\left(\nu^{(k,j)}\right)}{d\left(\nu^{(k,j)}\right)} \,. 
\end{align} with \begin{align} d (\nu) = \int_D \frac{\left( U^{(k)} - L^{(k)} \right) \left(\nu - \nu^{(k)} \right)^2}{\left(U^{(k)} - \nu\right) \left( \nu - L^{(k)} \right) \left( \nu_{\text{max}} - \nu_{\text{min}} \right)} ~dV \,. \label{eq:d_mechanism} \end{align} The moving asymptote fields $L \in V$ and $U \in V$ are updated via heuristic rules. For iterations $k=1$ and $k=2$, \begin{equation} \begin{aligned} L^{(k)} = \nu^{(k)} - 0.5(\nu_{\text{max}} - \nu_{\text{min}}) \,, \\ U^{(k)} = \nu^{(k)} + 0.5(\nu_{\text{max}} - \nu_{\text{min}}) \,. \end{aligned} \label{eq:LUupdate1} \end{equation} For iterations $k \geq 3 $, \begin{equation} \begin{aligned} L^{(k)} = \nu^{(k)} - \gamma^{(k)}(\nu^{(k-1)} - L^{(k-1)}) \,, \\ U^{(k)} = \nu^{(k)} + \gamma^{(k)}(U^{(k-1)} - \nu^{(k-1)}) \,. \end{aligned} \label{eq:asympt} \end{equation} The field $\gamma$ that appears in Equations \eqref{eq:asympt} is determined by the values of $\nu$ in the last three outer iterations. When there is no oscillation in $\nu$, we reduce the convexity by pushing the asymptotes further apart, i.e. by choosing a larger $\gamma$, to accelerate convergence. Otherwise, we use a smaller value to move the asymptotes closer together. Specifically, we assign \begin{equation} \begin{aligned} \gamma^{(k)} = \begin{cases} 0.7 & \text{if}~(\nu^{(k)} - \nu^{(k-1)})(\nu^{(k-1)} - \nu^{(k-2)}) < 0 \,, \\ 1.2 & \text{if} ~(\nu^{(k)} - \nu^{(k-1)})(\nu^{(k-1)} - \nu^{(k-2)}) > 0 \,, \\ 1 & \text{if} ~(\nu^{(k)} - \nu^{(k-1)})(\nu^{(k-1)} - \nu^{(k-2)}) = 0\,, \end{cases} \end{aligned} \label{eq:gammaupdate} \end{equation} subject to the inequalities \begin{equation} \begin{aligned} L^{(k)} & \leq \nu^{(k)} - 0.01(\nu_{\text{max}} - \nu_{\text{min}}) \,, \\ L^{(k)} & \geq \nu^{(k)} - 10(\nu_{\text{max}} - \nu_{\text{min}}) \,, \\ U^{(k)} & \geq \nu^{(k)} + 0.01(\nu_{\text{max}} - \nu_{\text{min}}) \,, \\ U^{(k)} & \leq \nu^{(k)} + 10(\nu_{\text{max}} - \nu_{\text{min}}) \,. \end{aligned} \label{eq:LUrestrict} \end{equation} \rojo{We remark that the pointwise Equations \eqref{eq:alphabeta} and \eqref{eq:LUupdate1}--\eqref{eq:LUrestrict} are discretized directly by using their element-wise counterparts, which is consistent with our $L^2$ element-wise piecewise uniform parameterization of $\nu$. We do not worry here about the existence of these discretized counterparts as it is outside the scope of this paper.} \rojo{From here, one can solve the MMA subproblem following steps similar to those in the original article \citep{svanbergmma}, with the exception of the calculation of the norms in their corresponding function spaces. We provide these details in \ref{app:appendixA}, where we also summarize all the necessary changes to the original GCMMA. } \section{Numerical examples} \label{sec:examples} To illustrate the effectiveness of incorporating the $L^2$ function space approach in the GCMMA, we solve six topology optimization problems, five in two (2D) and one in three (3D) dimensions. For the 2D cases we use triangular elements, and for the 3D case hexahedral elements, both with first-order Lagrange basis functions to represent the displacement. The volume fraction field discretization uses the typical topology optimization approach with element-wise uniform basis functions. All of the examples were solved using the finite element library Firedrake \citep{Rathgeber2016, Luporini2016, Homolya2017}, which uses PETSc \citep{petsc-efficient, petsc-user-ref, Dalcin2011} as the backend for the linear algebra.
We use the direct solver MUMPS \citep{MUMPS01, MUMPS02} in 2D and the PETSc GAMG preconditioner in 3D. The 2D optimizations ran on a single 2.60 GHz Intel Xeon E5-2670 processor. We employed up to 36 processors for the 3D cases. The modified GCMMA is an adaptation of a Python implementation of the original MMA algorithm from the GetDP finite element library \citep{dular1998general}. It was rewritten for better performance in parallel and to include an interface for the Firedrake-adjoint library \citep{Mitusch2019}. We use the MMA parameters $a_0 = 1$, $c_i = 10000$ and $a_i = d_i = 0$ for all $i \geq 1$. All results are visualized with ParaView \citep{paraview} and the graphs are plotted with Matplotlib \citep{Hunter:2007}. To launch all the simulations, we use Signac \citep{signac_commat} and Signac-flow \citep{signac_scipy_2018}. \subsection{Ill-conditioned meshes} \hl{In this subsection, we deliberately use meshes with highly refined regions which, at first glance, can be deemed cherry-picked to validate our approach. However, these meshes render the optimization problems ill-conditioned, and it is precisely this issue that we wish to highlight and resolve.} We first solve three common topology optimization problems in linear elasticity. The topology optimization problem is formulated as \begin{equation} \begin{aligned} \underset{\nu \in \rojo{V}}{\text{min}}~ \theta_0(\nu) & = \int_{D} \pi(\hat{\nu}, \mathbf{u})~ dV \,, \\ \text{such that} ~ \mathbf{u} \in \rojo{W} ~ \text{satisfies} ~ a(\nu; \mathbf{u},\mathbf{v}) & = L(\mathbf{v}) ~ \text{for all}~ \mathbf{v} \in \rojo{W} \,, \\ \theta_i(\nu) & = \int_{D} g_i(\hat{\nu}, \mathbf{u}) dV \leq 0 & i=1,2,\ldots,m\,, \end{aligned} \label{eq:elastic_optimization} \end{equation} where \begin{equation} a(\nu; \mathbf{u},\mathbf{v}) =\int_{D} r(\hat\nu) \mathbb{C}[\nabla \mathbf{u}] \cdot \nabla \mathbf{v} ~dV \,, \label{eq:weak_form_simp_mma} \end{equation} and \begin{equation} L(\mathbf{v}) =\int_{\Gamma_N} \mathbf{t} \cdot \mathbf{v} ~da \,. \end{equation} \rojo{ The function spaces used are \begin{equation} V = \{\nu \in L^2(D) ~|~ 0 \leq \nu \leq 1\} \end{equation} and \begin{equation} W = \{\mathbf{u} \in [H^1(D)]^3 ~|~ \mathbf{u}|_{\Gamma_D} = 0\} \end{equation} } \rojo{The domain boundary $\Gamma$ comprises three complementary regions: $\Gamma_D$, $\Gamma_N$ and $\Gamma_F$, over which the Dirichlet, non-homogeneous Neumann and homogeneous Neumann boundary conditions are applied, respectively. The functionals $\theta_i:L^2 \to \mathbb{R}$, $i=0,1,\ldots$, are assumed to be Fr\'echet differentiable. \label{page:costfunction}} The filtered volume fraction $\hat\nu$ in the above is obtained from the PDE-based filter \citep{boyanfilter} to generate a well-posed topology optimization problem \rojo{ \begin{equation} \begin{aligned} -\kappa \nabla^2 \hat{\nu} + \hat\nu & = \nu ~ & \text{in} ~ D & \,, \\ \kappa \nabla \hat{\nu} \cdot \mathbf{n} & = 0~ & \text{on} ~ \Gamma & \end{aligned} \label{eq:pdefiltermma} \end{equation} } where $\kappa$ determines the minimum length scale of the design such that a small (large) $\kappa$ allows for fine (coarse) scale design fluctuations. We solve Equation \eqref{eq:pdefiltermma} with a finite volume scheme to maintain the volume fraction values between 0 and 1. We use the SIMP penalization \citep{bendsoe2013topology} to encourage 0-1 designs, i.e. designs where $\nu=0$ or $\nu=1$ almost everywhere.
As such, \begin{align} r(\hat\nu) = \epsilon_{\nu} + (1 - \epsilon_{\nu})\hat{\nu}^3 \,, \label{eq:simp} \end{align} where $\epsilon_{\nu}=10^{-5}$ ensures the stiffness matrix in the finite element analysis is nonsingular. Finally, $\mathbb{C}$ is the elasticity tensor corresponding to an isotropic material with Young modulus $E=1$ and Poisson ratio $\upnu=0.3$ and $\mathbf{t}$ is the applied traction on the surface $\Gamma_N$. As usual, a reduced space approach is taken wherein we account for dependence of $\mathbf{u}$ on $\nu$, i.e. $\mathbf{u} \rightarrow \mathbf{u}(\nu)$ and the adjoint method is used to calculate the derivatives of the cost and constraint functions $\theta_i$. The first problem we study is the proverbial structural compliance minimization subject to a \rojo{maximum} volume constraint $\hat V = 0.3 |D|$ i.e. \begin{align} \theta_0 & = \int_{\Gamma} \mathbf{t} \cdot \mathbf{u} ~da \label{eq:compliance} \,, \\ \theta_1 & = \int_{D} \hat \nu ~dV - \hat V \label{eq:volume_comp} \,. \end{align} The design domain $D$, cf. Figure \ref{fig:compliancedomain}, is subject to the traction $\mathbf{t}=-1.0\mathbf{e}_2$ on $\Gamma_N$, the length scale parameter is $\kappa = 0.2$ and the initial design is a uniform field $\nu(\mathbf{x}) = 0.1$. We perform four different optimizations corresponding to uniform and nonuniform meshes with optimizations in $\mathbb{R}^n$ and $L^2$. The uniform mesh contains 128,000 elements. Our non-uniform mesh is illustrated in Figure \ref{fig:amr_compliance} and contains 80,577 elements. It is important to have meshes that are sufficiently refined so the infinite-dimensional response $\mathbf{u}$ is well approximated. We also include a highly refined arbitrary region on the top to clearly illustrate the deficiency of the NLP algorithm in $\mathbb{R}^n$. Both meshes are included as \textit{beam\_uniform.geo} and \textit{beam\_amr.geo} in the files to reproduce the results. In all cases, we run the optimization problems until the number of iterations reaches 200, although some designs converge sooner. The optimized designs in Table \ref{tab:complianceresults} show that the $\mathbb{R}^n$ NLP algorithm is mesh dependent as opposed to the $L^2$ NLP algorithm. Figures \ref{fig:compliancecostfunction} - \ref{fig:compliancekktfunction} show the evolution of the cost and constraint functions and the convergence metric, cf. Equations \eqref{eq:kkt_stopping_final}. It is well known that the topology optimization problem is not convex and hence different initial designs might lead to different local minima. The examples in Table \ref{tab:complianceresults} start from the same initial design. We therefore attribute the difference in the optimized designs to the mesh dependency of the NLP algorithm in $\mathbb{R}^n$. \newlength{\RUno} \newlength{\BeamW} \setlength{\BeamW}{5cm} \newlength{\BeamL} \setlength{\BeamL}{2.875\BeamW} \begin{figure}[!h] \center \tikzset{>=latex} \begin{tikzpicture}[ spring/.style = {decorate,decoration={zigzag,amplitude=6pt,segment length=4pt}} , scale=0.1, every node/.style={inner sep=0pt}] \node (SW) at (0,0) {}; \node (SE) at (100,0) {}; \node (NE) at (100,40) {}; \node (NW) at (0,40) {}; \draw[fill=gray!50!white] (0,0) -- (100,0) -- (100,40) -- (0,40) -- (0,0); \node (D) at (50, 20) {$D$}; \fill[pattern=north west lines] (SW) rectangle ($(NW) - (5.0, 0.0)$); \node (GammaD) at ($(NW) !.5! 
(SW) + (7.0, 0)$) {$\Gamma_D$}; \node (loadP) at ($0.5*(NE) + 0.5*(SE)$) {}; \draw[ultra thick] ($(loadP) + (0.0, 2.5)$) -- ($(loadP) + (0.0, -2.5)$); \node (Gamma) at ($(loadP) - (5, 0)$) {$\Gamma_N$}; \draw[->, ultra thick] (0,0) -- (0, 20) node[right]{$y$}; \draw[->, ultra thick] (0,0) -- (20, 0) node[right]{$x$}; \node (A) at ($(SW) - (0.0, 5.0)$) {}; \node (B) at ($(SE) - (0.0, 5.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=-5cm, extension end length=-5cm]{(A)}{(B)}{$L$}; \node (C) at ($(NW) - (15.0, 0.0)$) {}; \node (D) at ($(SW) - (15.0, 0.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=15cm, extension end length=15cm]{(D)}{(C)}{$\frac{2L}{5}$}; \dimline[line style = {line width=0.7}, extension start length=10cm, extension end length=10cm]{($(loadP) + (10, 2.5)$)}{($(loadP) + (10, -2.5)$)}{$\frac{4L}{50}$}; \end{tikzpicture} \caption{Compliance design domain, $L=100$.} \label{fig:compliancedomain} \end{figure} \definecolor{naranja}{RGB}{255, 165,53} \definecolor{rojo}{RGB}{193,64,77} \definecolor{granate}{RGB}{139,0,63} \begin{table}[h!] \centering \begin{tabular}{|M{0.5cm}|M{5.2cm}|M{5.2cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Uniform mesh} & \includegraphics[scale=0.12]{\MyPath/figures-compliance-uniform_euclidean.png} & \includegraphics[scale=0.12]{\MyPath/figures-compliance-uniform_L2.png} \\ \hline \rotatebox{90}{Non-uniform mesh} & \includegraphics[scale=0.12]{\MyPath/figures-compliance-non_uniform_euclidean.png} & \includegraphics[scale=0.12]{\MyPath/figures-compliance-non_uniform_L2.png} \\ \hline \end{tabular} \caption{Optimized designs for the compliance problem.} \label{tab:complianceresults} \end{table} \setlength\figureheight{8cm} \setlength\figurewidth{12cm} \begin{figure} \centering \input{\MyPath/tex_figures-compliance-convergence_plots1.tex} \caption{Cost function evolution for the compliance problem.} \label{fig:compliancecostfunction} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-compliance-convergence_plots2.tex} \caption{Constraint function evolution for the compliance problem.} \label{fig:complianceconstraintfunction} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-compliance-convergence_plots3.tex} \caption{Convergence metric evolution for the compliance problem.} \label{fig:compliancekktfunction} \end{figure} We next benchmark our $L^2$ algorithm with the compliant mechanism design problem, cf. Figure \ref{fig:mechanismdomain}, where we minimize the horizontal displacement $u=\mathbf{u}\cdot\mathbf{e}_1$ at the output port $\Gamma_2$, subject to a maximum volume constraint $\hat V = 0.3 |D|$, viz. \begin{align} \theta_0 & = \int_{\Gamma_2} u ~da \label{eq:compliant} \,, \\ \theta_1 & = \int_{D} \hat \nu ~dV - \hat V \,.\label{eq:volume_mech} \end{align} Consistent with \cite{de2015stress}, we introduce Robin boundary conditions into the formulation \eqref{eq:elastic_optimization}: \begin{equation} \begin{aligned} \mathbf{n} \cdot \mathbb{C}[\nabla \mathbf{u}] \mathbf{n} & = -k_{in} (\mathbf{u}\cdot \mathbf{n}) + f_x & ~ \text{on} ~\Gamma_1 \,, \\ \mathbf{n} \cdot \mathbb{C}[\nabla \mathbf{u}] \mathbf{n} & = -k_{out} (\mathbf{u}\cdot \mathbf{n}) & ~ \text{on} ~\Gamma_2 \,, \end{aligned} \end{equation} where $f_x=10$ and the spring coefficients are $k_{in}=\frac{1}{3}$ and $k_{out}=\frac{0.001}{3}$. Figures \ref{fig:mechanismdomain} and \ref{fig:amr_mechanism} illustrate the design domain and the non-uniform mesh.
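For readers implementing this problem in Firedrake, the sketch below shows one way the Robin (spring) terms could enter the weak form. It is our own illustration under assumed boundary markers: the surface ids 1 and 2 standing in for $\Gamma_1$ and $\Gamma_2$ are hypothetical, and the forms would simply be added to the elasticity bilinear form $a(\nu;\mathbf{u},\mathbf{v})$ and the load $L(\mathbf{v})$.

\begin{verbatim}
from firedrake import (UnitSquareMesh, VectorFunctionSpace, TrialFunction,
                       TestFunction, FacetNormal, Constant, dot, ds)

mesh = UnitSquareMesh(32, 32)            # placeholder domain
W = VectorFunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(W), TestFunction(W)
n = FacetNormal(mesh)

k_in = Constant(1.0 / 3.0)               # input spring stiffness
k_out = Constant(0.001 / 3.0)            # output spring stiffness
f_x = Constant(10.0)                     # input load

# Spring-like Robin contributions to the bilinear form on the ports;
# ds(1) and ds(2) are hypothetical markers for Gamma_1 and Gamma_2.
a_robin = (k_in * dot(u, n) * dot(v, n) * ds(1)
           + k_out * dot(u, n) * dot(v, n) * ds(2))
# Work-conjugate load on the input port.
L_robin = f_x * dot(v, n) * ds(1)
\end{verbatim}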
The uniform (\textit{mechanism\_uniform.geo}) and non-uniform (\textit{mechanism\_amr.geo}) meshes contain 57,600 and 199,404 elements, respectively. The initial design is again $\nu(\mathbf{x}) = 0.1$. We run all the optimization problems until the number of iterations reaches 200, by which point almost all of them have converged to optimized designs. The optimized designs for the length scale $\kappa=0.8$, summarized and illustrated in Table \ref{tab:mechanismresults} and Figures \ref{fig:mechanismcostfunction} -- \ref{fig:mechanismkktfunction}, again show that the original $\mathbb{R}^n$ NLP algorithm is mesh dependent. Notably, the design on the non-uniform mesh with the $\mathbb{R}^n$ algorithm does not even converge in 200 iterations. \begin{figure}[!h] \center \tikzset{>=latex} \begin{tikzpicture}[ spring/.style = {decorate,decoration={zigzag,amplitude=6pt,segment length=4pt}} , scale=0.07, every node/.style={inner sep=0pt}] \small \node (NW) at (0,50) {}; \draw[fill=gray!50!white] (0,0) -- (100,0) -- (100,50) -- (0,50) -- (0,0); \foreach \x in {0,4,...,100} \draw [thick] (NW) ++ (\x,2.0cm) circle (2.0cm); \draw[thick] (0,54.5) -- (100, 54.5); \fill[pattern=north west lines] (0,54.5) rectangle (100, 57); \draw ($(NW) - (0.0, 5.0)$) -- ($(NW) - (5.0, 5.0)$); \draw[spring] ($(NW) - (5.0, 5.0)$) -- ($(NW) - (15.0, 5.0)$); \draw ($(NW) - (15.0, 5.0)$) -- ($(NW) - (20.0, 5.0)$); \node (Gamma1) at ($(NW) - (5, 10)$) {$\Gamma_1$}; \draw ($(NW) - (20.0, 10.0)$) -- ($(NW) - (20.0, 0.0)$); \draw[very thick] (NW) -- ($(NW) + (0.0, -10.0)$); \fill[pattern=north west lines] ($(NW) - (25.0, 10.0)$) rectangle ($(NW) - (20.0, 0.0)$); \node (NE) at (100,50) {}; \draw ($(NE) - (0.0, 5.0)$) -- ($(NE) + (5.0, -5.0)$); \draw[spring] ($(NE) + (5.0, -5.0)$) -- ($(NE) + (15.0, -5.0)$); \draw ($(NE) + (15.0, -5.0)$) -- ($(NE) + (20.0, -5.0)$); \node (Gamma2) at ($(NE) + (5, -10)$) {$\Gamma_2$}; \draw ($(NE) + (20.0, 0.0)$) -- ($(NE) + (20.0, -10.0)$); \draw[very thick] (NE) -- ($(NE) + (0.0, -10.0)$); \fill[pattern=north west lines] ($(NE) + (20.0, 0.0)$) rectangle ($(NE) + (25.0, -10.0)$); \node (SW) at (0,0) {}; \node (SE) at (100,0) {}; \node (GammaD) at ($(SW) + (5, 2)$) {$\Gamma_D$}; \fill[pattern=north west lines] ($(SW) + (0.0, 2.0)$) rectangle ($(SW) - (5.0, 0.0)$); \dimline[line style = {line width=0.7}, extension start length=10cm, extension end length=10cm]{($(SW) - (10cm, 0.0)$)}{($(SW) + (-10cm, 2)$)}{$2$}; \foreach \y in {0, 2.5, 5.0, 7.5, 10.0} \draw [-{Latex[width=1mm]}] ($(NW) - (0.0, \y)$) -- ($(NW) + (15.0, -\y)$); \normalsize \node at ($(NW) + (20.0, -5.0)$) {$f_{x}$}; \foreach \y in {0, 2.5, 5.0, 7.5, 10.0} \draw [color=red, -{Latex[width=1mm]}] ($(NE) - (0.0, \y)$) -- ($(NE) - (15.0, \y)$); \node at ($(NE) - (25.0, 5.0)$) {$u_{out}$}; \small \node (A) at ($(SW) - (0.0, 5.0)$) {}; \node (B) at ($(SE) - (0.0, 5.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=-5cm, extension end length=-5cm]{(A)}{(B)}{120}; \node (C) at ($(NE) + (50.0, 0.0)$) {}; \node (D) at ($(SE) + (50.0, 0.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=-50cm, extension end length=-50cm]{(D)}{(C)}{60}; \node (E) at ($(C) + (-10.0, 0.0)$) {}; \node (F) at ($(E) + (0.0, -10.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=-40cm, extension end length=-40cm]{(F)}{(E)}{10}; \node (G) at ($(NW) + (-30.0, 0.0)$) {}; \node (H) at ($(G) + (0.0, -10.0)$) {}; \dimline[line style = {line width=0.7}, extension start length=-30cm, extension end
length=-30cm]{(G)}{(H)}{10}; \end{tikzpicture} \caption{Design domain and boundary conditions for the compliant mechanism problem. Domain symmetry is used whereby only the lower half of the structure is analyzed.} \label{fig:mechanismdomain} \normalsize \end{figure} \newlength{\Long} \setlength{\Long}{12cm} \newlength{\RWidth} \setlength{\RWidth}{0.5\Long} \begin{table}[h] \centering \begin{tabular}{|M{0.5cm}|M{6.0cm}|M{6.0cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Uniform mesh} & \includegraphics[scale=0.12]{\MyPath/figures-mechanism-uniform_euclidean.png} & \includegraphics[scale=0.12]{\MyPath/figures-mechanism-uniform_L2.png} \\ \hline \rotatebox{90}{Non-uniform mesh} & \includegraphics[scale=0.12]{\MyPath/figures-mechanism-non_uniform_euclidean.png} & \includegraphics[scale=0.12]{\MyPath/figures-mechanism-non_uniform_L2.png} \\ \hline \end{tabular} \caption{Optimized designs for the compliant mechanism problem.} \label{tab:mechanismresults} \end{table} \setlength\figureheight{8cm} \setlength\figurewidth{12cm} \begin{figure} \centering \input{\MyPath/tex_figures-mechanism-convergence_plots1.tex} \caption{Cost function evolution for the compliant mechanism problem.} \label{fig:mechanismcostfunction} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-mechanism-convergence_plots2.tex} \caption{Constraint function evolution for the compliant mechanism problem.} \label{fig:mechanismconstraintfunction} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-mechanism-convergence_plots3.tex} \caption{Convergence metric evolution for the compliant mechanism problem.} \label{fig:mechanismkktfunction} \end{figure} Our next example is the stress-constrained problem where the goal is to minimize the volume of an L-bracket subject to a \rojo{maximum} pointwise constraint on the von Mises stress field \rojo{$\sigma_{VM} \leq \sigma_y$}, cf. Figure \ref{fig:stressdomain}. We follow the formulation in \cite{novotny}, where the stress constraint is imposed via a penalty method, i.e.\ our unconstrained problem uses the cost function \begin{align} \theta_0 & = \int_{D} \hat \nu ~dV + \gamma \norm{\sigma_{VM} - \sigma_y}_+ \,, \end{align} \adapt{where the penalty parameter is $\gamma=10$,} \begin{align} \norm{\sigma_{VM} - \sigma_y}_+ = \int_{D} R_p\left(\frac{\sigma_{VM}}{\sigma_y}\right)dV \,, \end{align} \begin{align} R_p(x) = (1 + x^p)^{\frac{1}{p}} - 1\,, \label{eq:penalty_func} \end{align} \adapt{$\sigma_y=1.5$, and $p=8$ for the first 300 iterations and $p=20$ for the remaining 100.} The relaxed stress formulation \cite{chau} uses \begin{align} \sigma_{\eta} = \eta_c \mathbb{C}[\nabla \mathbf{u}] \,, \end{align} to calculate the von Mises stress $\sigma_{VM} = \sqrt{\frac{3}{2} \sigma_{\eta}^{'}:\sigma_{\eta}^{'}}$, where $\sigma_{\eta}^{'} = \sigma_{\eta} - \frac{1}{3} \text{tr}(\sigma_{\eta}) \textbf{I}$ and \begin{align} \eta_c (\hat\nu) = \hat\nu^{0.5}\,. \end{align} The filter parameter is $\kappa=1.2$ and the initial design is $\nu(\mathbf{x}) = 0.5$. To obtain a better design we extend the domain by adding a region $\Omega_i$ of finite elements near the reentrant corner, cf. Figure \ref{fig:amr_stress}. This region, however, is excluded from the design by enforcing the constraint $\int_{\Omega_i} \hat{\nu}~dV \leq 0$. This added region lessens the boundary effect of the filter operation in the reentrant corner region, which otherwise adversely affects our results \cite{wallin2020consistent}.
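To build intuition for the penalty \eqref{eq:penalty_func}, the short check below (our illustration) evaluates $R_p$ for a few stress ratios. The function behaves as a smooth approximation of $\max(0, x-1)$ that sharpens as $p$ grows, which motivates increasing $p$ from 8 to 20 during the optimization.

\begin{verbatim}
import numpy as np

def R_p(x, p):
    # (1 + x**p)**(1/p) - 1: close to 0 for x < 1 and
    # close to x - 1 for x > 1, sharpening as p grows.
    return (1.0 + x**p) ** (1.0 / p) - 1.0

x = np.array([0.5, 0.9, 1.0, 1.1, 1.5])
for p in (8, 20):
    print(p, np.round(R_p(x, p), 4))
# p=8  prints roughly [0.0005 0.0458 0.0905 0.1539 0.5072]
# p=20 prints roughly [0.     0.0058 0.0353 0.1077 0.5   ]
\end{verbatim}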
The non-uniform mesh (file \textit{lbracket\_amr.geo}) contains 53,122 elements, whereas the uniform mesh (\textit{lbracket\_uniform.geo}) contains 139,264 elements. The optimized designs in Table \ref{tab:stressresults} again illustrate the mesh dependence of the original $\mathbb{R}^n$ NLP algorithm. This time, however, the difference in the cost function values between the $\mathbb{R}^n$ and $L^2$ designs is barely noticeable, cf. the log plot in Figure \ref{fig:stresscostfunction}. Due to the high oscillations in the convergence metric, we plot each case separately in Figures \ref{fig:stresskktfunction1} -- \ref{fig:stresskktfunction4}. \newlength{\ChauWidth} \setlength{\ChauWidth}{10cm} \newlength{\ChauHeight} \setlength{\ChauHeight}{\ChauWidth} \newlength{\ChauCorner} \setlength{\ChauCorner}{0.4\ChauWidth} \begin{figure}[!h] \center \tikzset{>=latex} \begin{tikzpicture}[every node/.style={inner sep=0pt}] \fill[pattern=north west lines] (-0.05\ChauCorner,\ChauHeight) rectangle (1.05\ChauCorner, 1.05\ChauHeight); \draw[very thick] (-0.05\ChauCorner,\ChauHeight) -- (1.05\ChauCorner,\ChauHeight); \draw[fill=gray!50!white] (0,0) -- (\ChauWidth,0) -- (\ChauWidth,\ChauCorner) -- (\ChauCorner,\ChauCorner) -- (\ChauCorner,\ChauHeight) -- (0,\ChauHeight) -- (0,0); \node (GammaD) at (0.5\ChauCorner, 0.95\ChauHeight) {$\Gamma_D$}; \node (A) at (0, -0.5) {}; \node (B) at (\ChauWidth, -0.5) {}; \dimline[line style = {line width=0.7}, extension start length=-1cm, extension end length=-1cm]{(A)}{(B)}{$100$}; \node (C) at (-0.5, 0) {}; \node (D) at (-0.5, \ChauHeight) {}; \dimline[line style = {line width=0.7}, extension start length=1cm, extension end length=1cm]{(C)}{(D)}{$100$}; \node (F) at (0, 1.1\ChauHeight) {}; \node (E) at (\ChauCorner, 1.1\ChauHeight) {}; \dimline[line style = {line width=0.7}, extension start length=1cm, extension end length=1cm]{(F)}{(E)}{$40$}; \node (G) at (1.05\ChauWidth, 0) {}; \node (H) at (1.05\ChauWidth, \ChauCorner) {}; \dimline[line style = {line width=0.7}, extension start length=-1cm, extension end length=-1cm]{(G)}{(H)}{$40$}; \node (Gamma) at (0.97\ChauWidth, 0.95\ChauCorner) {$\Gamma_N$}; \node (L1) at (\ChauWidth, \ChauCorner) {}; \node (L1p) at (\ChauWidth, 1.2\ChauCorner) {}; \node (L2) at (0.97\ChauWidth, \ChauCorner) {}; \node (L2p) at (0.97\ChauWidth, 1.2\ChauCorner) {}; \node (L3) at (0.94\ChauWidth, \ChauCorner) {}; \node (L3p) at (0.94\ChauWidth, 1.2\ChauCorner) {}; \draw [thick,->] (L1p) -- (L1); \draw [thick,->] (L2p) -- (L2); \draw [thick,->] (L3p) -- (L3); \node (L4p) at (0.94\ChauWidth, 1.4\ChauCorner) {}; \node (L5p) at (\ChauWidth, 1.4\ChauCorner) {}; \dimline[line style = {line width=0.7}, extension start length=1.6cm, extension end length=1.6cm]{(L4p)}{(L5p)}{$5$}; \end{tikzpicture} \caption{Intended design domain for the stress-constrained problem.} \label{fig:stressdomain} \end{figure} \setlength\figureheight{8cm} \setlength\figurewidth{12cm} \begin{figure} \centering \input{\MyPath/tex_figures-stress-convergence_plots1.tex} \caption{Cost function evolution for the stress-constrained problem.} \label{fig:stresscostfunction} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-stress-convergence_plots_uniform_L2.tex} \caption{Convergence metric evolution for the stress-constrained problem with uniform mesh in $L^2$.} \label{fig:stresskktfunction1} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-stress-convergence_plots_uniform_R.tex} \caption{Convergence metric evolution for the stress-constrained problem with uniform
mesh in $\mathbb{R}^n$.} \label{fig:stresskktfunction2} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-stress-convergence_plots_nonuni_R.tex} \caption{Convergence metric evolution for the stress-constrained problem with non-uniform mesh in $\mathbb{R}^n$.} \label{fig:stresskktfunction3} \end{figure} \begin{figure} \centering \input{\MyPath/tex_figures-stress-convergence_plots_nonuni_L2.tex} \caption{Convergence metric evolution for the stress-constrained problem with non-uniform mesh in $L^2$.} \label{fig:stresskktfunction4} \end{figure} \begin{table} \centering \begin{tabular}{|M{0.5cm}|M{5.6cm}|M{5.6cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Uniform mesh} & \includegraphics[scale=0.1]{\MyPath/figures-stress-uniform_euclidean.png} & \includegraphics[scale=0.1]{\MyPath/figures-stress-uniform_L2.png} \\ \hline \rotatebox{90}{Non-uniform mesh} & \includegraphics[scale=0.1]{\MyPath/figures-stress-non_uniform_euclidean.png} & \includegraphics[scale=0.1]{\MyPath/figures-stress-non_uniform_L2.png} \\ \hline \end{tabular} \caption{Optimized designs for the stress-constrained problem.} \label{tab:stressresults} \end{table} \subsection{Uniform refinement} We return to the compliance problem for our next example, but now in three dimensions, cf. Figure \ref{fig:three_dim_domain}. The volume constraint is \hl{$\hat{V}= 0.15 |D|$}, the surface $\Gamma_N$ is subject to a traction $\mathbf{t} = -1.0 \mathbf{e}_2$, the length scale parameter is $\kappa=5 \times 10^{-5}$ and the initial design is \hl{$\nu(\mathbf{x})=0.15$.} \hl{To prevent the iterative solver from diverging, we set $\epsilon_{\nu}$ to $10^{-4}$.} \begin{figure} \centering \includegraphics[scale=0.5]{./figures-uniform_mesh.pdf} \caption{Design domain for the three dimensional compliance problem.} \label{fig:three_dim_domain} \end{figure} Instead of comparing two different meshes, we compare the influence of the mesh refinement starting from the same uniform mesh with $20 \times 10 \times 10$ hexahedral elements. \hl{As such, we perform an optimization with the initial mesh uniformly refined once, twice, three and four times, for a total of 16,000, 128,000, 1,024,000 and 8,192,000 elements.} The construction and refinement of the mesh are done by the utility meshing functions in Firedrake within the code shared along with this paper. Each hexahedral element is split into eight at each refinement level. We plot the evolution of the cost function \hl{for 1000 iterations} corresponding to the $L^2$ and $\mathbb{R}^n$ NLP algorithms for each refinement level in Figure \ref{fig:three_dim_cost_evolution}. The behavior of the $\mathbb{R}^n$ NLP algorithm clearly depends on the refinement level, whereas that of the $L^2$ algorithm does not. Notably, after 1000 iterations, the optimized designs with four levels of refinement attain compliance values of 10.06 and 2.91 for the $\mathbb{R}^n$ and $L^2$ NLP algorithms, respectively. \begin{figure} \centering \includegraphics[scale=0.5]{./uniform_convergence_plots.pdf} \caption{Cost function evolution of the compliance problem in three dimensions with 1, 2, 3 and 4 uniform refinements. Results plotted in log-log scale to highlight the differences.} \label{fig:three_dim_cost_evolution} \end{figure} We compare the designs with four levels of refinement obtained with the $\mathbb{R}^n$ and the $L^2$ NLP algorithms in Figure \ref{fig:complianceresults3D}.
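A refinement hierarchy of this kind can be set up with Firedrake's utility meshing functions; the sketch below is our own minimal stand-in, not the exact construction used here. In particular, \texttt{BoxMesh} as written yields tetrahedra rather than the hexahedra of this example and the box dimensions are hypothetical, so the cell counts differ, although each uniform refinement likewise splits every cell into eight.

\begin{verbatim}
from firedrake import BoxMesh, MeshHierarchy

# Hypothetical stand-in for the 20 x 10 x 10 element box domain.
coarse = BoxMesh(20, 10, 10, 2.0, 1.0, 1.0)

# Four uniform refinements; each level splits every cell into eight.
hierarchy = MeshHierarchy(coarse, 4)
fine = hierarchy[-1]          # finest mesh of the hierarchy
print(fine.num_cells())
\end{verbatim}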
\hl{Upon inspection of the designs, we noticed that the $\mathbb{R}^n$ algorithm fails to reach the lower bound of the volume fraction (0); it never dips below $\nu=10^{-2}$ for the void phase. Due to the volume constraint and the ``heavier'' void phase, the $\mathbb{R}^n$ optimizer cannot add more mass to the structure and, therefore, the compliance is higher. We conjecture that this is due to the difference between the derivative and the gradient. The $\mathbb{R}^n$ algorithm uses $D\theta$, whereas the $L^2$ algorithm uses $\nabla\theta = \mathbf{M}^{-1}D\theta$. Thus, although $D\theta$ and $\nabla \theta$ are componentwise parallel, since $\mathbf{M}$ is diagonal with entries $|\Omega_e|$, the sensitivities of the cost and constraint functions with respect to the design variables on small elements are less influential in the $\mathbb{R}^n$ algorithm than in the $L^2$ algorithm. We further conjecture that using an interpolation scheme with nonzero derivative values for $\nu=0$, such as RAMP \mbox{\citep{ramp_penal}}, could alleviate this issue.} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=0.35]{\MyPath/3D_uniform_compliance_L2.pdf} \caption{$L^2$} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=0.35]{\MyPath/3D_uniform_compliance_euclidean.pdf} \caption{$\mathbb{R}^n$} \end{subfigure} \caption{Optimized designs for the compliance problem in three dimensions with four levels of refinement, thresholded with volume fraction $\hat\nu$ greater than 0.5.} \label{fig:complianceresults3D} \end{figure} \subsection{\hl{Adaptive mesh refinement}} Our $L^2$ GCMMA implementation is especially well suited for use with AMR strategies during the optimization \citep{de2020three}. In our next two examples, we show the algorithm's utility when applying AMR during the optimization and compare it with the original $\mathbb{R}^n$ implementation. There are three questions to address when applying AMR in a topology optimization problem: where in the domain to apply it, what kind of AMR to apply (coarsening, refinement or both) and when to apply it during the optimization. We use an element-based error quantity to determine the regions subject to AMR. The details are explained in \ref{sec:amr_appendix}. We use two AMR strategies: only refinement or only coarsening. We apply the AMR after a pre-determined number of iterations during the optimization\footnote{We are not concerned with applying an optimal strategy for AMR during the optimization. See \citep{ziems2011adaptive} for a more sophisticated scheme.}. Table \ref{tab:ref_strategy} summarizes four AMR schemes wherein it is seen that refinement occurs either during early iterations as the design evolves or later once the design is well defined. Coarsening only occurs late in the design evolution as well. All problem/algorithm combinations are run for 600 iterations. \begin{table} \centering \begin{tabular}{l|ll} \hline \diagbox{Strategy}{AMR type}& Coarsening & Refinement \\ \hline A & 100, 150 & 10, 80 \\ B & 150, 200 & 100, 150 \\ \end{tabular} \caption{Four AMR strategies. Iteration number after which AMR is applied.} \label{tab:ref_strategy} \end{table} Our first AMR example is a repeat of our cantilever example, but now applying coarsening and refinement according to the strategies in Table \ref{tab:ref_strategy}. The optimized designs, shown in Tables \ref{fig:compliance_refinement} and \ref{fig:compliance_coarsening} for refinement and coarsening, respectively, differ when using the $\mathbb{R}^n$ algorithm.
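The gradient-versus-derivative distinction conjectured above can be made concrete with a toy calculation (our illustration). For a piecewise-constant design field, the derivative entries of a volume-type functional scale with element size, so a steepest-descent step in $\mathbb{R}^n$ barely updates small elements, whereas the $L^2$ gradient step is insensitive to the mesh:

\begin{verbatim}
import numpy as np

# Toy "mesh" of three elements: two large and one tiny.
vol = np.array([1.0, 1.0, 1e-3])   # element volumes, M = diag(vol)

# For theta(nu) = integral of nu over D with a DG0 design field,
# the partial derivative w.r.t. each design variable is the volume:
dtheta = vol.copy()                # derivative D(theta)

step_Rn = -dtheta                  # steepest-descent direction in R^n
step_L2 = -dtheta / vol            # -grad(theta) = -M^{-1} D(theta)

print(step_Rn)   # [-1.    -1.    -0.001]: the tiny element barely moves
print(step_L2)   # [-1. -1. -1.]: mesh-independent update
\end{verbatim}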
The cost function evolutions in Figures \ref{fig:compliance_amr_coarsen} and \ref{fig:compliance_amr_refine} further reflect this difference. The meshes for the optimized designs, illustrated in the Appendix, cf. Figures \ref{fig:compliance_amr_refinement_grid}, \ref{fig:compliance_amr_coarsening_grid}, highlight the mesh independence of the $L^2$ algorithm in contrast to the $\mathbb{R}^n$ algorithm. The cost function evolution for coarsening, cf. Figure \ref{fig:compliance_amr_coarsen}, is not affected by the different refinement strategies even with the $\mathbb{R}^n$ optimization. Most likely, this is because both $A$ and $B$ strategies are applied at similar iteration numbers, after which the designs have almost converged. It is important to highlight that the algorithm in $L^2$ still performs better. \begin{table} \centering \begin{tabular}{|M{0.5cm}|M{5.6cm}|M{5.6cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Strategy A} & \includegraphics[scale=0.2]{compliance_design_A_euclidean_refine_02.png} & \includegraphics[scale=0.2]{compliance_design_A_L2_refine_02.png} \\ \hline \rotatebox{90}{Strategy B} & \includegraphics[scale=0.2]{compliance_design_B_euclidean_refine_02.png} & \includegraphics[scale=0.2]{compliance_design_B_L2_refine_02.png} \\ \hline \end{tabular} \caption{Optimized designs for the compliance problem with AMR refinement only.} \label{fig:compliance_refinement} \end{table} \begin{table} \centering \begin{tabular}{|M{0.5cm}|M{5.6cm}|M{5.6cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Strategy A} & \includegraphics[scale=0.2]{./compliance_design_A_euclidean_coarsen_02.png} & \includegraphics[scale=0.2]{./compliance_design_A_L2_coarsen_02.png} \\ \hline \rotatebox{90}{Strategy B} & \includegraphics[scale=0.2]{./compliance_design_B_euclidean_coarsen_02.png} & \includegraphics[scale=0.2]{./compliance_design_B_L2_coarsen_02.png} \\ \hline \end{tabular} \caption{Optimized designs for the compliance problem with AMR coarsening only.} \label{fig:compliance_coarsening} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{\MyPath/convergence_plots_compliance_coarsen.pdf} \caption{Cost function evolution for the compliance problem with AMR coarsening.} \label{fig:compliance_amr_coarsen} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{\MyPath/convergence_plots_compliance_refine.pdf} \caption{Cost function evolution for the compliance problem with AMR refinement.} \label{fig:compliance_amr_refine} \end{figure} Next we solve a coupled thermal flow problem similar to that in \cite{thermal_flow} to demonstrate the algorithm's application to more complex physical phenomena. The domain in Figure \ref{fig:thermal_flow_domain} is the cross section of a heat exchanger, where the design variable $\nu$ represents the volume fraction of a heat-generating solid material that needs to be distributed to control the temperature and the fluid flow. The goal of the optimization is to maximize the heat generated in the solid while keeping the pressure drop in the flow from the inlet $\Gamma_1$ to the outlet $\Gamma_2$ lower than a fixed value $P_{\text{drop}}$.
As such, the optimization problem reads \begin{figure}[!h] \center \tikzset{>=latex} \begin{tikzpicture}[ every node/.style={inner sep=0pt}, spring/.style = {decorate,decoration={zigzag,amplitude=6pt,segment length=4pt}}, scale=5.0 ] \node (A) at (-0.1, 0.0) {}; \node (B) at (-0.1, -0.1) {}; \node (C) at (0, -0.1) {}; \node (D) at (0.0, -0.5) {}; \node (E) at (1.0, -0.5) {}; \node (F) at (1.0, -0.1) {}; \node (G) at (1.1, -0.1) {}; \node (H) at (1.1, 0.0) {}; \node (J) at (0.0, 0.0) {}; \draw ($(A)$) -- ($(B)$) -- ($(C)$) -- ($(D)$) -- ($(E)$) -- ($(F)$) -- ($(G)$) -- ($(H)$) -- ($(A)$); \dimline[line style = {line width=0.7}, extension start length=-0.1cm, extension end length=-0.1cm]{($(A) + (-0.1, 0)$)}{($(B) + (-0.1, 0.0)$)}{$\frac{L}{10}$}; \dimline[line style = {line width=0.7}, extension start length=0.1cm, extension end length=0.1cm]{($(D) + (-0.2, 0)$)}{($(B) + (-0.1, 0.0)$)}{$\frac{2L}{5}$}; \dimline[line style = {line width=0.7}, extension start length=0.1cm, extension end length=0.1cm]{($(E) + (0.0, -0.1)$)}{($(D) + (0.0, -0.1)$)}{$L$}; \dimline[line style = {line width=0.7}, extension start length=-0.1cm, extension end length=-0.1cm]{($(A) + (0.0, -0.2)$)}{($(J) + (0.0, -0.2)$)}{$\frac{L}{10}$}; \node (GammaD) at ($(A) !.5! (B) + (0.1, 0)$) {$\Gamma_1$}; \node (GammaF) at ($(G) !.5! (H) + (0.1, 0)$) {$\Gamma_2$}; \node (Domain) at (0.5, -0.1) {$D$}; \draw[ultra thick, draw=black, fill=black, opacity=0.4] (0.5, -0.3) ellipse (0.2cm and 0.1cm); \node (Domain) at (0.5, -0.31) {$\nu=1$}; \end{tikzpicture} \caption{Thermal flow domain. $L$=1.} \label{fig:thermal_flow_domain} \end{figure} \newcommand{\VV}[1]{\mathbf{V}_{#1}} \begin{subequations}\label{eq:thermal_flow_problem} \begin{align} \underset{\nu \in V}{\text{max}}~ J(\nu) & = \int_{D} \nu B (1 - T)~dV \,, \\ \text{s.t.} ~\left(\mathbf{w}, p, T \right) \in \VV{} \times Q \times W ~\text{satisfy} ~ \\ F(\nu, \mathbf{w}, p; \mathbf{v}, q) & = 0 \label{eq:flow_eq}\,, \\ a_T(\mathbf{w}; T, m) + c_T(\mathbf{w}; T, m) & = 0 \label{eq:thermal_eq} \,, \\ \text{for all} ~\left(\mathbf{v}, q, m \right) \in \VV{0} \times Q \times W_0 \\ G(\nu) & = \int_{\Gamma_{1}} p ~dA - \int_{\Gamma_{2}} p ~dA \leq P_{\text{drop}} \,, \end{align} \end{subequations} where $(\mathbf{w}, p, T)$ are the flow velocity, pressure and temperature and $(\mathbf{v}, q, m)$ are their admissible counterparts. Equation \eqref{eq:flow_eq} is the weak form of the Navier-Stokes equation wherein \begin{equation} \begin{aligned} F(\nu, \mathbf{w}, p; \mathbf{v}, q) & = \int_{D}\left( \left(\mathbf{w} \cdot {\nabla} \mathbf{w} \right) \cdot \mathbf{v} + \frac{1}{Re} {\nabla} \mathbf{w} : {\nabla} \mathbf{v} + \frac{1}{Da} r(\nu) \mathbf{w} \cdot \mathbf{v} \right) ~dV\\ &+ \int_D \left( p \nabla \cdot \mathbf{v} + q \nabla \cdot \mathbf{w} \right)~dV \,, \\ \end{aligned} \label{eq:stokes_expanded} \end{equation} with Reynolds number $Re=1.0$ and Darcy number $Da=10^{-6}$. The RAMP function \citep{ramp_penal} with $q_{\text{RAMP}} = 20.0$, i.e.\ \begin{align} r(\nu) = \frac{\nu}{1+q_{\text{RAMP}}(1-\nu)} \end{align} is used to obtain discrete 0-1 designs. At the inlet $\Gamma_1$, the Dirichlet condition $\mathbf{w} = \mathbf{w}_1$ is a horizontal parabolic profile with a maximum non-dimensional velocity $W_{\text{max}}=1$. At the outlet $\Gamma_2$, a traction-free condition is imposed. The remaining boundary $\Gamma \setminus(\Gamma_1 \cup \Gamma_2)$ has the no-slip condition $\mathbf{w} = 0$.
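As a quick numerical check of the RAMP interpolation above (our illustration), note that its derivative at $\nu=0$ equals $1/(1+q_{\text{RAMP}})$, i.e.\ it is finite rather than zero; this is precisely the property invoked in the earlier conjecture about void-phase sensitivities.

\begin{verbatim}
import numpy as np

def ramp(nu, q_ramp=20.0):
    # r(nu) = nu / (1 + q_ramp * (1 - nu)); r(0) = 0 and r(1) = 1.
    return nu / (1.0 + q_ramp * (1.0 - nu))

print(ramp(np.array([0.0, 0.5, 1.0])))   # [0.  0.04545455  1.]

# Finite slope at nu = 0; analytically 1/(1 + 20) = 0.047619...
h = 1e-8
print((ramp(h) - ramp(0.0)) / h)
\end{verbatim}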
Equation \eqref{eq:thermal_eq} is the weak form of the advection-diffusion equation wherein \begin{equation} \begin{aligned} a_T(\mathbf{w}; T, m) & = \int_{D} m \mathbf{w} \cdot {\nabla} T + \frac{1}{Pe} {\nabla} T \cdot {\nabla} m - \nu B (1 - T) m ~dV \,. \\ \label{eq:temp_gls} \end{aligned} \end{equation} We assume the same thermal conductivities in the solid and the fluid, and a Peclet number $Pe=10^4$. The heat source in the solid, $\nu B (1 - T)$, is proportional to the difference between a reference temperature of 1 and the local temperature $T$, with a non-dimensional heat generation coefficient $B=0.01$ (more details on the heat source are given in \cite{thermal_flow}). The Galerkin Least Squares (GLS) stabilization term \begin{equation} \begin{aligned} c_T(\mathbf{w}; T, m) & = \int_D \tau_{GLS}\mathcal{L}_T(T) \cdot \mathcal{L}_T(m) ~dV \end{aligned} \end{equation} stabilizes the otherwise highly oscillatory boundary layers due to the fluid velocity field, where \begin{equation} \begin{aligned} \tau_{GLS} = \beta_{GLS} \left( \frac{4 \mathbf{w} \cdot \mathbf{w}}{h^2} + \left(9 \frac{4 }{h^2Pe} \right)^2 \right)^{-0.5} \,, \end{aligned} \end{equation} $h$ is the element cell size and \begin{align} \mathcal{L}_T(T) = \mathbf{w} \cdot {\nabla} T - \frac{1}{Pe} \nabla^2 T - \nu B (1 - T)\, \end{align} is the strong-form residual. We use $\beta_{GLS} = 0.9$ in our examples. We apply a constant temperature $T=0$ on $\Gamma_1$ and adiabatic boundary conditions over all surfaces with the exception of $\Gamma_1$ and $\Gamma_2$. The finite element discretization of $(\mathbf{w}, p, T)$ uses linear Lagrange elements. The cost function $J(\nu)$ aims to maximize the heat generation in the design domain, and the constraint $G(\nu) \leq P_{\text{drop}}=70.0$ limits the pressure drop in the system and serves to regularize the problem, as it imposes an upper bound on the amount of fluid-solid interface, where $\nabla \nu \neq 0$. Optimized designs are shown in Tables \ref{tab:thermal_flow_refinement} and \ref{tab:thermal_flow_coarsening} for the refinement and coarsening strategies, respectively. Again, the designs obtained using the $\mathbb{R}^n$ algorithm differ. The cost function evolutions in Figures \ref{fig:thermal_amr_refine} and \ref{fig:thermal_amr_coarsen} reflect this dependency as well. The meshes for the optimized designs, illustrated in the Appendix, cf. Figures \ref{fig:thermal_flow_amr_refinement_grid} and \ref{fig:thermal_flow_amr_coarsening_grid}, highlight the mesh independence of the $L^2$ algorithm.
\begin{table} \centering \begin{tabular}{|M{0.5cm}|M{5.6cm}|M{5.6cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Strategy A} & \includegraphics[scale=0.2]{./thermal_flow_design_A_euclidean_refine_02.png} & \includegraphics[scale=0.2]{./thermal_flow_design_A_L2_refine_02.png} \\ \hline \rotatebox{90}{Strategy B} & \includegraphics[scale=0.2]{./thermal_flow_design_B_euclidean_refine_02.png} & \includegraphics[scale=0.2]{./thermal_flow_design_B_L2_refine_02.png} \\ \hline \end{tabular} \caption{Optimized designs for the thermal flow problem with AMR refinement only.} \label{tab:thermal_flow_refinement} \end{table} \begin{table} \centering \begin{tabular}{|M{0.5cm}|M{5.6cm}|M{5.6cm}|} \hline & Optimization in $\mathbb{R}^n$ & Optimization in $L^2$ \\ \hline \rotatebox{90}{Strategy A} & \includegraphics[scale=0.2]{./thermal_flow_design_A_euclidean_coarsen_02.png} & \includegraphics[scale=0.2]{./thermal_flow_design_A_L2_coarsen_02.png} \\ \hline \rotatebox{90}{Strategy B} & \includegraphics[scale=0.2]{./thermal_flow_design_B_euclidean_coarsen_02.png} & \includegraphics[scale=0.2]{./thermal_flow_design_B_L2_coarsen_02.png} \\ \hline \end{tabular} \caption{Optimized designs for the thermal flow problem with AMR coarsening only.} \label{tab:thermal_flow_coarsening} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{\MyPath/convergence_plots_thermal_flow_coarsen.pdf} \caption{Cost function evolution for the thermal flow problem with AMR coarsening.} \label{fig:thermal_amr_coarsen} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{\MyPath/convergence_plots_thermal_flow_refine.pdf} \caption{Cost function evolution for the thermal flow problem with AMR refinement.} \label{fig:thermal_amr_refine} \end{figure} \section{Conclusion} \label{sec:conclusions} In this work, we presented the necessary mathematical concepts to understand the relationship between the domain discretization and the NLP algorithm and applied them to the GCMMA algorithm. \hl{Our $L^2$ GCMMA implementation is benchmarked with several problems in topology optimization and is able to obtain mesh-independent designs starting from the same initial designs, while the original $\mathbb{R}^n$ NLP algorithm is not. We first showed how the new algorithm solves ill-conditioned optimization problems where the $\mathbb{R}^n$ algorithm fails. Then we showed its efficiency when solving large scale problems over uniform meshes. Lastly, we illustrated the effectiveness of the $L^2$ algorithm when applying AMR during the optimization for two problems with different physics.} \hl{For future work, }the algorithm can be extended to handle design fields in other common spaces in topology optimization such as $H^1$, i.e.\ for nodal design variables or for B-splines. Furthermore, a rigorous mathematical proof is necessary to ensure that the NLP algorithm is mathematically sound in all corner cases. \section{Replication of results} \label{sec:replication} The scripts used in this article are archived in \cite{firedrake_zenodo_2021_5526481} and require the pyMMAopt library \citep{miguel_salazar_de_troya_2021_4687573}. \section{Acknowledgements} This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The author thanks the Livermore Graduate Scholar Program for its support. On behalf of all authors, the corresponding author states that there is no conflict of interest. LLNL-JRNL-820905.
\bibliographystyle{elsarticle-num}
\section{Introduction} The interaction of heat current with electron spins is at the heart of spin caloritronics~\cite{spincat_review_Bauer}. It leads to thermal spin-transfer torques (STTs) on the magnetization in spin valves, magnetic tunnel junctions, and domain walls when a temperature gradient is applied~\cite{thermally_driven_dwm_fe_w110,thermal_stt_yuan,landau_lifshitz_theory_thermomagnonic_torque,thermal_stt_hatami,thermal_stt_femgofe_jia,evidence_thermal_stt,parameter_space_thermal_stt}. While the thermal STT does not require spin-orbit interaction (SOI), it only exists in noncollinear magnets. In spin valves and magnetic tunnel junctions this noncollinearity arises when the magnetizations of the free and fixed layers are not parallel, while in domain walls it arises from the continuous rotation of magnetization across the wall. In the presence of SOI, electric currents and heat currents can generate torques also in collinear magnets: In ferromagnets with broken inversion symmetry the so-called spin-orbit torque (SOT) acts on the magnetization when an electric current is applied (Figure~1a)~\cite{manchon_zhang_2008,torque_macdonald, chernyshov_2009,current_induced_switching_using_spin_torque_from_spin_hall_buhrman, symmetry_spin_orbit_torques, semi_classical_modelling_stiles,antidamping_kurebayashi,ibcsoit,rashba_review}. The inverse spin-orbit torque (ISOT) consists in the production of an electric current due to magnetization dynamics (Figure~1b)~\cite{invsot, spin_motive_force_hals_brataas,charge_pumping_Ciccarelli}. The application of a temperature gradient results in the thermal spin-orbit torque (TSOT) (Figure~1c)~\cite{fept_guillaume}. TSOT and SOT are related by a Mott-like expression~\cite{mothedmisot}. \begin{figure} \flushright \includegraphics[width=0.82\linewidth]{familysoteffects.png} \caption{\label{figuresotfamily}Family of SOT-related effects in a Mn/W magnetic bilayer with broken structural inversion symmetry. (a) SOT: An applied electric field $\vn{E}$ generates a torque $\vn{\tau}$ on the magnetization. $\hat{\vn{n}}$ is the magnetization direction. (b) ISOT: Magnetization dynamics $\partial \hat{\vn{n}}/\partial t$ drives an electric current $\vn{J}$. (c)~TSOT: The application of a temperature gradient $\vn{\nabla}T$ generates a torque $\vn{\tau}$. (d) ITSOT: Magnetization dynamics drives a heat current $\mathscrbf{J}^{\rm Q}$. } \end{figure} In this work we discuss the inverse effect of TSOT, i.e., the generation of heat current due to magnetization dynamics in ferromagnets with broken inversion symmetry and SOI (Figure~1d). We refer to this effect as the inverse thermal spin-orbit torque (ITSOT). While the SOT is given directly by the linear response of the torque to an applied electric field~\cite{ibcsoit}, expressions for the ITSOT are more difficult to derive, because the energy current obtained from the Kubo formalism also contains a ground-state contribution that does not contribute to the heat current. Analogous difficulties are known from the case of the inverse anomalous Nernst effect, i.e., the generation of a heat current transverse to an applied electric field $\vn{E}$~\cite{ane_niu}. In this case the energy current obtained from the Kubo formalism contains, besides the heat current, also the material-dependent part $-\vn{E}\times\vn{M}^{\rm orb}$ of the Poynting vector, where $\vn{M}^{\rm orb}$ is the orbital magnetization.
This energy magnetization does not contribute to the heat current and needs to be subtracted from the Kubo linear response~\cite{ane_niu,thermoelectric_response_cooper,energy_magnetization_qin_niu_shi}. When inversion symmetry is broken in magnets with SOI, the expansion of the free energy $F$ in terms of the magnetization direction $\hat{\vn{n}}(\vn{r})$ and its gradients contains a term linear in the gradients of magnetization, the so-called Dzyaloshinskii-Moriya interaction (DMI)~\cite{dmi_moriya,dmi_dzyalo}: \begin{equation}\label{eq_first_order_free_energy} F^{\rm DMI}(\vn{r})= \sum_{j} \vn{D}_{j}(\hat{\vn{n}}(\vn{r})) \cdot \left( \hat{\vn{n}}(\vn{r})\times\frac{\partial \hat{\vn{n}}(\vn{r})}{\partial r_{j}} \right), \end{equation} where $\vn{r}$ is the position and the index $j$ runs over the three cartesian directions, i.e., $r_{1}=x, r_{2}=y, r_{3}=z$. The DMI coefficients $\vn{D}_{j}$ can be expressed in terms of mixed Berry phases~\cite{mothedmisot,phase_space_berry}. DMI not only affects the magnetic structure by energetically favoring spirals of a certain handedness but also enters spin caloritronics effects~\cite{magnetization_pumping_dm_magnet, magnon_mediated_dmi_torque}. Here, we will show that DMI gives rise to the ground-state energy current $\mathscr{J}^{\rm DMI}_{j}=-\vn{D}_{j} \cdot \left( \hat{\vn{n}}\times\frac{\partial \hat{\vn{n}}}{\partial t} \right)$ when the magnetization precesses. This DMI energy current needs to be subtracted from the linear response of the energy current in order to obtain the ITSOT heat current. This work is structured as follows. In section~\ref{sec_ground_state_energy_current} we show that magnetization dynamics drives a ground-state energy current associated with DMI and we highlight its formal similarities with the material-dependent part of the Poynting vector. In section~\ref{sec_itsot} we develop the theory of ITSOT. We derive the energy current based on the Kubo linear-response formalism and subtract $\mathscrbf{J}^{\rm DMI}$ in order to extract the heat current. In section~\ref{sec_time_dependent} we show that the expressions of DMI and orbital magnetization can also be derived elegantly by equating the energy currents obtained from linear response theory to $\mathscrbf{J}^{\rm DMI}$ and $-\vn{E}\times\vn{M}^{\rm orb}$, respectively. In section~\ref{section_ab_initio} we present \textit{ab-initio} calculations of TSOT and ITSOT in Mn/W(001) magnetic bilayers. \section{Ground-state energy current associated with the Dzyaloshinskii-Moriya interaction} \label{sec_ground_state_energy_current} To be concrete, we consider a flat cycloidal spin spiral propagating along the $x$ direction. The magnetization direction is given by \begin{equation}\label{eq_spin_spiral_cycloid} \hat{\vn{n}}_{\rm c}(\vn{r})=\hat{\vn{n}}_{\rm c}(x)= \begin{pmatrix} \sin(qx)\\ 0\\ \cos(qx) \end{pmatrix},\\ \end{equation} where $q$ is the spin-spiral wavenumber, i.e., the inverse wavelength of the spin spiral multiplied by $2\pi$. The free energy contribution $F^{\rm DMI}(\vn{r})$ given in \eqref{eq_first_order_free_energy} simplifies for the spin spiral of \eqref{eq_spin_spiral_cycloid} as follows: \begin{equation}\label{eq_first_order_free_energy_cycloidal_along_x} \begin{aligned} F^{\rm DMI}(\vn{r})&=F^{\rm DMI}(x)= \vn{D}_{x}(\hat{\vn{n}}_{\rm c}(x)) \!\cdot\!
\left[ \hat{\vn{n}}_{\rm c}(x)\!\times\!\frac{\partial \hat{\vn{n}}_{\rm c} (x)}{\partial x} \right]=\\ &=q \vn{D}_{x}(\hat{\vn{n}}_{\rm c} (x)) \cdot \hat{\vn{e}}_{y} =q \mathscr{D}_{xy}(\hat{\vn{n}}_{\rm c} (x)), \end{aligned} \end{equation} where $\hat{\vn{e}}_{y}$ is the unit vector pointing in the $y$ direction and we defined $\mathscr{D}_{ij}(\hat{\vn{n}})=\vn{D}_{i}(\hat{\vn{n}})\cdot\hat{\vn{e}}_{j}$. Whether $\mathscr{D}_{xy}$ is nonzero or not depends on crystal symmetry. The tensor $\mathscr{D}_{ij}(\hat{\vn{n}})$ is axial and of second rank, like the SOT torkance tensor~\cite{mothedmisot}. Additionally, it is even under magnetization reversal, i.e., $\mathscr{D}_{ij}(\hat{\vn{n}})=\mathscr{D}_{ij}(-\hat{\vn{n}})$. Therefore, $\mathscr{D}_{ij}(\hat{\vn{n}})$ has the same symmetry properties as the even SOT torkance~\cite{ibcsoit}. According to \eqref{eq_first_order_free_energy_cycloidal_along_x} the cycloidal spiral of \eqref{eq_spin_spiral_cycloid} is affected by DMI if $\mathscr{D}_{xy}$ is nonzero. This is the case e.g.\ for magnetic bilayers such as Mn/W(001) and Co/Pt(111) (the interface normal points in the $z$ direction), where also the component $t_{yx}$ of the even SOT torkance tensor is nonzero~\cite{ibcsoit,mothedmisot, dmi_mnw_ferriani,dmi_copt_thiaville}. \begin{figure} \flushright \includegraphics[width=0.82\linewidth,trim=4cm 0cm 1cm 0cm,clip]{figure1_dmi_cycloidal_domainwall.pdf} \caption{\label{figure1} Illustration of a N\'eel-type domain wall that moves into the negative $x$ direction. Arrows represent the magnetization direction $\hat{\vn{n}}(x,t)$. $\hat{\vn{n}}(x_0,t)$ is highlighted by oval boxes. (a) $\hat{\vn{n}}(x,t_0)=\hat{\vn{n}}_{0}(x)$ is locally collinear at $x_0$ and therefore $F^{\rm DMI}(x_0,t_0)=0$. (b) $\hat{\vn{n}}(x,t_1)=\hat{\vn{n}}_{0}(x-wt_1)$ starts to become noncollinear at $x_0$ and therefore $F^{\rm DMI}(x_0,t_1)\ne 0$. (c) $\hat{\vn{n}}(x,t_2)$ is strongly noncollinear at $x_0$. } \end{figure} We consider now a N\'eel-type domain wall that moves with velocity $w<0$ in the $x$ direction. The magnetization direction at time $t_{0}=0$, which we denote by $\hat{\vn{n}}_{0}(x)$, is illustrated in Figure~\ref{figure1}a. $\hat{\vn{n}}_{0}(x)$ can be interpreted as a modification of $\hat{\vn{n}}_{\rm c}(x)$, \eqref{eq_spin_spiral_cycloid}, where the $q$-vector depends on position: \begin{equation} \hat{\vn{n}}_{0}(x)= \begin{pmatrix} \sin(q(x)x)\\ 0\\ \cos(q(x)x)\\ \end{pmatrix}. \end{equation} Since the domain wall moves with velocity $w$, the magnetization direction $\hat{\vn{n}}(x,t)$ at position $x$ and time $t$ is given by \begin{equation} \hat{\vn{n}}(x,t)=\hat{\vn{n}}_{0}(x-wt). \end{equation} In Figure~\ref{figure1} we discuss the magnetization direction at position $x_0$ at the three times $t_0=0$, $t_1>t_0$ and $t_2>t_1$. At time $t_0=0$ the domain wall is far away from $x_0$. Therefore, the magnetization is collinear at $x_0$ and $F^{\rm DMI}(x_0,t_0)=0$. At time $t_1$ the domain wall starts to arrive at $x_0$. Consequently, the magnetization gradient $\partial \hat{\vn{n}}(x_0,t_1)/\partial x_0$ becomes nonzero and thus $F^{\rm DMI}(x_0,t_1)\neq 0$. Due to the motion of the domain wall, the DMI contribution $F^{\rm DMI}(x,t)$ to the free energy is time dependent: How much DMI free energy is stored at a given position in the magnetic structure is determined by the local gradient of magnetization, which moves together with the magnetic structure.
The partial derivative of $F^{\rm DMI}(x,t)$ with respect to time is given by \begin{gather}\label{eq_dmi_current_continuity_x} \begin{aligned} &\frac{\partial F^{\rm DMI}(x,t)}{\partial t} = \\ &=\! \vn{D}_{x}(\hat{\vn{n}}_0(x\!-\!wt)) \cdot \left[ \hat{\vn{n}}_0(x\!-\!wt) \! \times \! \frac{\partial^2 \hat{\vn{n}}_0(x-wt)}{\partial x\partial t} \right]+ \\&\quad+\! \frac{\partial\vn{D}_{x}(\hat{\vn{n}}_0(x\!-\!wt))}{\partial t} \cdot \left[ \hat{\vn{n}}_0(x\!-\!wt) \! \times \! \frac{\partial \hat{\vn{n}}_0(x-wt)}{\partial x} \right]=\\ &=\! \vn{D}_{x}(\hat{\vn{n}}_0(x\!-\!wt)) \cdot \left[ \hat{\vn{n}}_0(x\!-\!wt) \! \times \! \frac{\partial^2 \hat{\vn{n}}_0(x-wt)}{\partial x\partial t} \right]+\\ &\quad+\! \frac{\partial\vn{D}_{x}(\hat{\vn{n}}_0(x\!-\!wt))}{\partial x} \cdot \left[ \hat{\vn{n}}_0(x\!-\!wt) \! \times \! \frac{\partial \hat{\vn{n}}_0(x-wt)}{\partial t} \right]=\\ &= \frac{\partial}{\partial x} \left\{ \vn{D}_{x}(\hat{\vn{n}}_0(x-wt)) \cdot \left[ \hat{\vn{n}}_0(x-wt) \!\times\! \frac{\partial \hat{\vn{n}}_0(x-wt)}{\partial t} \right] \right\}=\\ &=- \frac{\partial }{\partial x}\mathscr{J}_{x}^{\rm DMI}, \end{aligned}\raisetag{6\baselineskip} \end{gather} where $\mathscr{J}_{x}^{\rm DMI}$ in the last line is the $x$ component of the DMI energy current density \begin{equation}\label{eq_dmi_energy_current} \begin{aligned} \mathscrbf{J}^{\rm DMI} =&-\sum_{ij} \hat{\vn{e}}_{j} \mathscr{D}_{ji}(\hat{\vn{n}}) \left[\hat{\vn{e}}_{i} \cdot \left( \hat{\vn{n}} \times \frac{\partial \hat{\vn{n}}}{\partial t} \right) \right]\\ =&-\mathscrbf{D}(\hat{\vn{n}}) \left( \hat{\vn{n}} \times \frac{\partial \hat{\vn{n}}}{\partial t} \right). \end{aligned} \end{equation} By considering additionally spirals propagating in $y$ and $z$ direction we find that the general form of \eqref{eq_dmi_current_continuity_x} is the continuity equation \begin{equation}\label{eq_dmi_energy_current_continuity} \frac{\partial F^{\rm DMI}}{\partial t} + \vn{\nabla}\cdot \mathscrbf{J}^{\rm DMI} =0 \end{equation} of the DMI energy current $\mathscrbf{J}^{\rm DMI}$. According to \eqref{eq_dmi_energy_current} and \eqref{eq_dmi_energy_current_continuity} the energy current $\mathscrbf{J}^{\rm DMI}$ is driven by magnetization dynamics and its sources and sinks signal the respective decrease and increase of DMI energy density. When we compute the energy current driven by magnetization dynamics in section~\ref{sec_itsot} we therefore need to be aware that this energy current contains $\mathscrbf{J}^{\rm DMI}$ in addition to the ITSOT heat current that we wish to determine. Thus, we need to subtract $\mathscrbf{J}^{\rm DMI}$ from the energy current in order to extract the ITSOT heat current. It is reassuring to verify that the material-dependent part $\mathscrbf{J}^{\rm orb}=-\vn{E}\times\vn{M}^{\rm orb}$ of the Poynting vector, which needs to be subtracted from the energy current to obtain the heat current in the case of the inverse anomalous Nernst effect~\cite{ane_niu}, can be identified by arguments analogous to the above. We sketch this in the following. The energy density due to the interaction between orbital magnetization $\vn{M}^{\rm orb}$ and magnetic field $\vn{B}$ is given by \begin{equation}\label{eq_forb} F^{\rm orb}(\vn{r},t)=-\vn{M}^{\rm orb}(\vn{r},t)\cdot \vn{B}(\vn{r},t). 
\end{equation} We assume that the magnetic field is of the form \begin{equation} \vn{B}(\vn{r},t)=B_{0}(x-wt)\hat{\vn{e}}_{z}, \end{equation} i.e., the magnetic field at time $t$ can be obtained from the magnetic field at time $t_0=0$ by shifting it by $wt$, as illustrated in Figure~\ref{figure2}. Additionally, we assume that the orbital magnetization is of the same form, i.e., $\vn{M}^{\rm orb}_{\phantom{0}}(\vn{r},t)=M^{\rm orb}_{0}(x-wt)\hat{\vn{e}}_{z}$. Consequently, also $F^{\rm orb}_{\phantom{0}}(\vn{r},t)=F^{\rm orb}_{0}(x-wt)$. $\vn{B}(\vn{r},t)$ can be expressed as $\vn{B}(\vn{r},t)=\vn{\nabla}\times \vn{A}(\vn{r},t)$ in terms of the vector potential \begin{equation} \vn{A}(\vn{r},t)=\hat{\vn{e}}_{y}\int_{0}^{x-wt} B_{0}(x')dx'. \end{equation} Due to the motion of the profile of $\vn{B}(\vn{r},t)$ the energy density in \eqref{eq_forb} changes as a function of time. The partial derivative of $F^{\rm orb}_{\phantom{0}}(\vn{r},t)$ with respect to time is \begin{equation} \begin{aligned} \frac{\partial F^{\rm orb}}{\partial t} &=-\frac{\partial\vn{M}^{\rm orb}}{\partial t}\cdot \vn{B} -\vn{M}^{\rm orb}\cdot \frac{\partial\vn{B}}{\partial t}\\ &=w\frac{\partial\vn{M}^{\rm orb}}{\partial x}\cdot [\vn{\nabla}\times\vn{A}] +\vn{M}^{\rm orb}\cdot [\vn{\nabla}\times\vn{E}]\\ &=w\frac{\partial\vn{M}^{\rm orb}}{\partial x}\cdot \hat{\vn{e}}_{z}\frac{\partial A_y}{\partial x} +\vn{M}^{\rm orb}\cdot [\vn{\nabla}\times\vn{E}]\\ &=-\frac{\partial\vn{M}^{\rm orb}}{\partial x}\cdot \hat{\vn{e}}_{z}\frac{\partial A_y}{\partial t} +\vn{M}^{\rm orb}\cdot [\vn{\nabla}\times\vn{E}]\\ &=-\vn{E}\cdot[\vn{\nabla}\times\vn{M}^{\rm orb}] +\vn{M}^{\rm orb}\cdot [\vn{\nabla}\times\vn{E}]\\ &=\vn{\nabla}\cdot[\vn{E}\times\vn{M}^{\rm orb}], \end{aligned} \end{equation} where we used the Maxwell equation $\vn{\nabla}\times\vn{E}+\frac{\partial\vn{B}}{\partial t}=0$ and $\vn{E}=-\frac{\partial \vn{A}}{\partial t}$ valid in Weyl's temporal gauge with scalar potential set to zero. Thus, \begin{equation}\label{eq_conti_orb} \frac{\partial F^{\rm orb}}{\partial t}+\vn{\nabla}\cdot \mathscrbf{J}^{\rm orb}=0 \end{equation} with \begin{equation}\label{eq_curr_orb} \mathscrbf{J}^{\rm orb}=-\vn{E}\times\vn{M}^{\rm orb}, \end{equation} as expected. \begin{figure} \flushright \includegraphics[width=0.82\linewidth,trim=0cm 0cm 6.2cm 0cm,clip]{figure2_ramps_of_b.pdf} \caption{\label{figure2} Illustration of a magnetic field ramp that moves into the negative $x$ direction. Arrows represent the magnetic field $B_{0}(x-wt)\hat{\vn{e}}_{z}$ at position $x$ and time $t$. $B_{0}(x)$ describes a ramp that increases linearly with $x$. The magnetic field at position $x_0$ is highlighted by an oval box. (a) Snapshot at time $t_1$. (b) At time $t_2$ the magnetic field at position $x_0$ has increased because the ramp has moved to the left since $t_1$. Consequently, also the energy density $F_0^{\rm orb}(x_0-wt_2)$ is now different. (c) At time $t_3$ the magnetic field at position $x_0$ has increased further. } \end{figure} In the following we discuss several additional formal analogies and similarities between DMI, classical electrodynamics and orbital magnetization. 
We introduce the tensors $\mathscrbf{C}(\vn{r})$ and $\bar{\mathscrbf{C}}(\vn{r})$ with elements \begin{equation}\label{eq_noco_tensor} \mathscr{C}_{ij}(\vn{r})= \hat{\vn{e}}_{i} \cdot \left[ \hat{\vn{n}}(\vn{r}) \times \frac{\partial \hat{\vn{n}}(\vn{r})}{\partial r_{j}} \right] \end{equation} and \begin{equation} \bar{\mathscr{C}}_{ij}(\vn{r})= \frac{\partial \hat{n}_{i}(\vn{r})}{\partial r_{j}} \end{equation} to quantify the noncollinearity of $\hat{\vn{n}}(\vn{r})$. $\mathscrbf{C}$ and $\bar{\mathscrbf{C}}$ are related through the matrix \begin{equation} \mathscrbf{K}(\hat{\vn{n}})= \begin{pmatrix} 0 &-\hat{n}_3 &\hat{n}_2\\ \hat{n}_3 &0 &-\hat{n}_1\\ -\hat{n}_2 &\hat{n}_1 &0 \end{pmatrix} \end{equation} as $\mathscrbf{C}=\mathscrbf{K}\bar{\mathscrbf{C}}$. The free energy $F^{\rm DMI}(\vn{r})$ can be expressed in terms of $\mathscrbf{C}$ and $\mathscrbf{D}$ as follows: \begin{equation}\label{eq_dmi_free_energy_c_tensor} \begin{aligned} &F^{\rm DMI}(\vn{r})=\sum_{j} \vn{D}_{j}(\vn{r}) \cdot \left[ \hat{\vn{n}}(\vn{r}) \times \frac{\partial \hat{\vn{n}}(\vn{r})}{\partial r_{j}} \right]\\ &\quad=\sum_{ij} \mathscr{D}_{ji}(\vn{r})\hat{\vn{e}}_{i} \cdot \left[ \hat{\vn{n}}(\vn{r}) \times \frac{\partial \hat{\vn{n}}(\vn{r})}{\partial r_{j}} \right]\\ &\quad=\sum_{ij} \mathscr{D}_{ji}(\vn{r}) \mathscr{C}_{ij}(\vn{r}) = {\rm Tr}[\mathscrbf{D}(\vn{r})\mathscrbf{C}(\vn{r})]=\\ &\quad= {\rm Tr}[ \mathscrbf{D}(\vn{r}) \mathscrbf{K}(\hat{\vn{n}}(\vn{r})) \bar{\mathscrbf{C}}(\vn{r})] = {\rm Tr}[ \bar{\mathscrbf{D}}(\vn{r}) \bar{\mathscrbf{C}}(\vn{r})], \\ \end{aligned} \end{equation} where we defined $\bar{\mathscrbf{D}}=\mathscrbf{D}\mathscrbf{K}$. Similarly, $\mathscrbf{J}^{\rm DMI}$ in \eqref{eq_dmi_energy_current} can be expressed in terms of $\bar{\mathscrbf{D}}$ as \begin{equation}\label{eq_dmi_energy_current_bar} \begin{aligned} \mathscrbf{J}^{\rm DMI} =&-\mathscrbf{D} \left( \hat{\vn{n}} \times \frac{\partial \hat{\vn{n}}}{\partial t} \right) = -\bar{\mathscrbf{D}} \frac{\partial \hat{\vn{n}}}{\partial t} . \end{aligned} \end{equation} The energy density $F^{\rm orb}=-\vn{M}^{\rm orb}\cdot(\vn{\nabla}\times\vn{A})$ in \eqref{eq_forb} involves the curl of the vector potential $\vn{A}$, while the material-dependent part of the Poynting vector, i.e., $\mathscrbf{J}^{\rm orb} \!=\! -\vn{E}\!\times\!\vn{M}^{\rm orb}=\frac{\partial\vn{A}}{\partial t}\!\times\!\vn{M}^{\rm orb}$, involves the time-derivative of $\vn{A}$. Similarly, the spatial derivatives $\partial\hat{\vn{n}}/\partial r_{i}$ enter $F^{\rm DMI}$ in \eqref{eq_dmi_free_energy_c_tensor} via the tensor $\bar{\mathscrbf{C}}$ while the temporal derivative $\partial\hat{\vn{n}}/\partial t$ enters $\mathscrbf{J}^{\rm DMI}$ in \eqref{eq_dmi_energy_current_bar}. Thus, in the theory of DMI the magnetization direction $\hat{\vn{n}}$ plays the role of an effective vector potential. The curl of orbital magnetization constitutes a bound current $\vn{J}^{\rm mag}=\vn{\nabla}\times \vn{M}^{\rm orb}$ that does not contribute to electronic transport. Hence it needs to be subtracted from the linear response electric current driven by gradients in temperature or chemical potential in order to obtain the measurable electric current~\cite{ane_niu}. 
Similarly, the spatial derivatives $\tau^{\rm bound}_{j} =\sum_{i}\frac{\partial}{\partial r_{i}}\mathscr{D}_{ij} = \vn{\nabla}\cdot\left[ \mathscrbf{D}\hat{\vn{e}}_{j} \right] $ that result from the presence of gradients in temperature or chemical potential constitute torques that are not measurable and need to be subtracted from the total linear response to temperature or chemical potential gradients in order to obtain the measurable torque~\cite{mothedmisot}. Table~\ref{table_compare_om_dmi} summarizes the formal analogies and similarities between the orbital magnetization and DMI. \renewcommand{\arraystretch}{1.5} \begin{table} \caption{\label{table_compare_om_dmi} Formal analogies between the theories of orbital magnetization (OM) and Dzyaloshinskii-Moriya interaction (DMI). The vector potential $\vn{A}$ is assumed to satisfy Weyl's temporal gauge, hence the scalar potential is set to zero.} \begin{indented} \item[]\begin{tabular}{@{} lll } \br &OM & DMI \\ \mr `vector potential' &$\vn{A}$ &$\hat{\vn{n}}$ \\ \hline `magnetic' field &$\vn{B}=\vn{\nabla}\times\vn{A}$ &$\bar{\mathscr{C}}_{ij}=\frac{\partial \hat{n}_{i}}{\partial r_{j}}$ \\ \hline energy density &$F^{\rm orb} \!=\! -\vn{M}^{\rm orb}\!\cdot\!\vn{B}$ &$F^{\rm DMI}={\rm Tr}[\bar{\mathscrbf{D}}\bar{\mathscrbf{C}}]$ \\ \hline `electric' field &$\vn{E}= -\frac{\partial\vn{A}}{\partial t}$ &$\frac{\partial \hat{\vn{n}}}{\partial t}$ \\ \hline energy current &$ \mathscrbf{J}^{\rm orb} \!\!=\!\! -\vn{M}^{\rm orb} \!\!\times\!\!\frac{\partial \vn{A}}{\partial t}$ &$\mathscrbf{J}^{\rm DMI} \!=\! - \bar{\mathscrbf{D}}\frac{\partial \hat{\vn{n}} }{\partial t}$ \\ \hline bound property & $\vn{J}^{\rm mag}=\vn{\nabla}\times \vn{M}^{\rm orb}$ & $\tau^{\rm bound}_{j} = \vn{\nabla}\cdot\left[ \mathscrbf{D}\hat{\vn{e}}_{j} \right] $ \\ \br \end{tabular} \end{indented} \end{table} \section{Inverse thermal spin-orbit torque (ITSOT)} \label{sec_itsot} In ferromagnets with broken inversion symmetry and SOI, a gradient in temperature $T$ leads to a torque $\vn{\tau}$ on the magnetization, the so-called thermal spin-orbit torque (TSOT)~\cite{mothedmisot,fept_guillaume}: \begin{equation}\label{eq_tsot_define_beta} \vn{\tau}=-\vn{\beta}\vn{\nabla} T. \end{equation} The inverse thermal spin-orbit torque (ITSOT) consists in the generation of heat current by magnetization dynamics in ferromagnets with broken inversion symmetry and SOI. The effect of magnetization dynamics can be described by the time-dependent perturbation $\delta H$ to the Hamiltonian $H$~\cite{ibcsoit} \begin{equation} \delta H= \frac{\sin(\omega t)}{\omega} \left[ \hat{\vn{n}} \times \frac{\partial\hat{\vn{n}}}{\partial t} \right]\cdot\vht{\mathcal{T}}, \end{equation} where $\vht{\mathcal{T}}(\vn{r})=\vn{m}\times \hat{\vn{n}}\Omega^{\rm xc}(\vn{r})$ is the torque operator. $\Omega^{\rm xc}(\vn{r})=\frac{1}{2\mu_{\rm B}}\left[V^{\rm eff}_{\rm minority}(\vn{r})-V^{\rm eff}_{\rm majority}(\vn{r}) \right]$ is the exchange field, i.e., the difference between the potentials of minority and majority electrons. $\vn{m}=-\mu_{\rm B}\vht{\sigma}$ is the spin magnetic moment operator, $\mu_{\rm B}$ is the Bohr magneton and $\vht{\sigma}=(\sigma_x,\sigma_y,\sigma_z)^{\rm T}$ is the vector of Pauli spin matrices.
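As a concrete illustration of the driving term (our example, not part of the original derivation), consider a uniform precession of the magnetization around the $z$ axis with cone angle $\theta$ and angular frequency $\omega$, i.e., $\hat{\vn{n}}(t)=(\sin\theta\cos\omega t,\, \sin\theta\sin\omega t,\, \cos\theta)^{\rm T}$. Then
\begin{equation*}
\hat{\vn{n}}\times\frac{\partial\hat{\vn{n}}}{\partial t}
=\omega\sin\theta
\begin{pmatrix}
-\cos\theta\cos\omega t\\
-\cos\theta\sin\omega t\\
\sin\theta
\end{pmatrix},
\end{equation*}
so the magnitude of the perturbation scales with $\omega\sin\theta$ and vanishes both for static magnetization and for vanishing cone angle.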
The energy current $\mathscrbf{J}^{E}$ driven by magnetization dynamics is thus given by \begin{equation}\label{eq_energy_curr_kubo_mag_dyn2} \mathscrbf{J}^{E}=-\mathscrbf{B}(\hat{\vn{n}}) \left[ \hat{\vn{n}} \times \frac{\partial\hat{\vn{n}}}{\partial t} \right], \end{equation} where the tensor $\mathscrbf{B}$ with elements \begin{equation}\label{eq_kubo_thermal_B} \mathscr{B}_{ij}(\hat{\vn{n}})= \lim_{\omega\to 0}\! \frac{{\rm Im} G_{\mathcal{J}^{E}_{i}\!\!, \mathcal{T}_{j}^{\phantom{\alpha}}}^{\rm R} \!(\hbar\omega,\hat{\vn{n}})}{\hbar\omega} \end{equation} describes the Kubo linear response of the energy current operator \begin{equation}\label{eq_energy_curr_opera} \vn{\mathcal{J}}^{E}=\frac{1}{2V}[(H-\mu)\vn{v}+\vn{v}(H-\mu)] \end{equation} to magnetization dynamics. $\mu$ is the chemical potential, $\vn{v}$ is the velocity operator, and the retarded energy-current--torque correlation function is given by \begin{equation} G_{\mathcal{J}^{E}_{i}\!\!, \mathcal{T}_{j}^{\phantom{\alpha}}}^{\rm R} \!(\hbar\omega,\hat{\vn{n}})= -i\int\limits_{0}^{\infty}dt e^{i\omega t} \left\langle [ \mathcal{J}^{E}_{i}(t),\mathcal{T}^{\phantom{\alpha}}_{j}(0) ]_{-} \right\rangle. \end{equation} In \eqref{eq_kubo_thermal_B} we take the zero-frequency limit $\omega\rightarrow 0$, which is justified when the frequency is small compared to the inverse lifetime of the electronic states; this condition is satisfied for magnetic bilayers at room temperature and frequencies $\omega/(2\pi)$ in the GHz range. Within the independent particle approximation \eqref{eq_kubo_thermal_B} becomes $ \mathscr{B}^{\phantom{I}}_{ij} \!= \mathscr{B}^{\rm I(a)}_{ij} \!+ \mathscr{B}^{\rm I(b)}_{ij} \!+ \mathscr{B}^{\rm II}_{ij} $, with \begin{gather}\label{eq_kubo_linear_response_pumped_energy_current} \begin{aligned} \mathscr{B}^{\rm I(a)\phantom{I}}_{ij} &\!\!\!=\!\phantom{-}\frac{1}{h}\int_{-\infty}^{\infty} \!\!\!d\mathcal{E}\frac{df(\mathcal{E})}{d\mathcal{E}} \,{\rm Tr} \left\langle \mathcal{J}^{E}_{i} G^{\rm R}(\mathcal{E}) \mathcal{T}_{j} G^{\rm A}(\mathcal{E}) \right\rangle \\ \mathscr{B}^{\rm I(b)\phantom{I}}_{ij} &\!\!\!=\!-\frac{1}{h}\int_{-\infty}^{\infty} \!\!\!d\mathcal{E}\frac{df(\mathcal{E})}{d\mathcal{E}} \,{\rm Re} \,{\rm Tr} \left\langle \mathcal{J}^{E}_{i} G^{\rm R}(\mathcal{E}) \mathcal{T}_{j} G^{\rm R}(\mathcal{E}) \right\rangle \\ \mathscr{B}^{\rm II\phantom{(a)}}_{ij} &\!\!\!=\!- \frac{1}{h}\int_{-\infty}^{\infty} d\mathcal{E}f(\mathcal{E}) \,{\rm Re} \,{\rm Tr} \left\langle \mathcal{J}^{E}_{i}G^{\rm R}(\mathcal{E})\mathcal{T}_{j} \frac{dG^{\rm R}(\mathcal{E})}{d\mathcal{E}}\right.\\ &\quad\quad\quad\quad\quad\quad\quad\quad\,-\left. \mathcal{J}^{E}_{i}\frac{dG^{\rm R}(\mathcal{E})}{d\mathcal{E}} \mathcal{T}_{j} G^{\rm R}(\mathcal{E}) \right\rangle, \end{aligned}\raisetag{6.2\baselineskip} \end{gather} where $G^{\rm R}(\mathcal{E})$ and $G^{\rm A}(\mathcal{E})$ are the retarded and advanced single-particle Green functions, respectively. $f(\mathcal{E})$ is the Fermi function. $\mathscrbf{B}$ contains scattering-independent intrinsic contributions and, in the presence of disorder, additional disorder-driven contributions.
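To illustrate how expressions of the type \eqref{eq_kubo_linear_response_pumped_energy_current} are evaluated in practice, the following standalone Python sketch computes the Fermi-surface term $\mathscr{B}^{\rm I(a)}_{ij}$ for a random finite-dimensional toy Hamiltonian with constant broadening $\Gamma$; the matrices $H$, $\mathcal{T}_j$ and $v$ are placeholders and do not represent an actual electronic structure (we set $\hbar=h=1$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, mu, Gamma, kT, V = 8, 0.0, 0.05, 0.025, 1.0   # toy parameters

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H = random_hermitian(N)                 # placeholder Hamiltonian
T_j = random_hermitian(N)               # placeholder torque operator
v = random_hermitian(N)                 # placeholder velocity operator
# energy current operator J^E = ((H - mu) v + v (H - mu)) / (2 V)
JE = ((H - mu * np.eye(N)) @ v + v @ (H - mu * np.eye(N))) / (2 * V)

def GR(E):                              # retarded Green function
    return np.linalg.inv((E + 1j * Gamma) * np.eye(N) - H)

def dfdE(E):                            # derivative of the Fermi function
    return -1.0 / (4.0 * kT * np.cosh((E - mu) / (2.0 * kT)) ** 2)

Es = np.linspace(-8, 8, 4001)
# G^A = (G^R)^dagger for Hermitian H; the trace is real automatically
integrand = [dfdE(E) * np.trace(JE @ GR(E) @ T_j @ GR(E).conj().T).real
             for E in Es]
B_Ia = np.trapz(integrand, Es)          # B^I(a)_ij in units with h = 1
print(B_Ia)
\end{verbatim}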
The intrinsic Berry-curvature contribution is given by \begin{gather}\label{eq_b_intrinsic} \begin{aligned} \mathscr{B}^{\rm int}_{ij} \!&=\!\frac{2\hbar}{\mathcal{N}} \!\sum_{\vn{k}n} \!\!\sum_{m\neq n} \!\!f_{\vn{k}n} \text{Im} \frac{ \langle \psi_{\vn{k}n} |\mathcal{T}_{j}| \psi_{\vn{k}m} \rangle \langle \psi_{\vn{k}m} |\mathcal{J}^{E}_{i}| \psi_{\vn{k}n} \rangle } { (\mathcal{E}_{\vn{k}m}-\mathcal{E}_{\vn{k}n})^2 } \\ &=\frac{1}{\mathcal{N}V} \!\!\sum_{\vn{k}n} f_{\vn{k}n} \left[ A_{\vn{k}nji}-(\mathcal{E}_{\vn{k}n}-\mu)B_{\vn{k}nji} \right], \end{aligned}\raisetag{2\baselineskip} \end{gather} where \begin{equation}\label{eq_akn_kubo} A_{\vn{k}nij}=\hbar\sum_{m\neq n}\text{Im} \left[ \frac{ \langle \psi_{\vn{k}n} |\mathcal{T}_{i}| \psi_{\vn{k}m} \rangle \langle \psi_{\vn{k}m} |v_{j}| \psi_{\vn{k}n} \rangle } { \mathcal{E}_{\vn{k}m}-\mathcal{E}_{\vn{k}n} } \right] \end{equation} and \begin{equation}\label{eq_bkn_kubo} B_{\vn{k}nij} =-2\hbar\sum_{m\neq n}\text{Im} \left[ \frac{ \langle \psi_{\vn{k}n} |\mathcal{T}_{i}| \psi_{\vn{k}m} \rangle \langle \psi_{\vn{k}m} |v_{j}| \psi_{\vn{k}n} \rangle } { (\mathcal{E}_{\vn{k}m}-\mathcal{E}_{\vn{k}n})^2 } \right] \end{equation} and $| \psi_{\vn{k}n} \rangle$ are the Bloch wavefunctions with corresponding band energies $\mathcal{E}_{\vn{k}n}$, $f_{\vn{k}n}=f(\mathcal{E}_{\vn{k}n})$, and $\mathcal{N}$ is the number of $\vn{k}$ points. As discussed in section~\ref{sec_ground_state_energy_current} we subtract $\mathscrbf{J}^{\rm DMI}$ (\eqref{eq_dmi_energy_current}) from $\mathscrbf{J}^{E}$ in order to obtain the heat current $\mathscrbf{J}^{Q}$: \begin{equation}\label{eq_subtract_dmi_for_heat} \mathscrbf{J}^{Q}=\mathscrbf{J}^{E}-\mathscrbf{J}^{\rm DMI} =-\tilde{\vn{\beta}} \left[ \hat{\vn{n}} \times \frac{\partial\hat{\vn{n}}}{\partial t} \right], \end{equation} with \begin{equation}\label{eq_beta_bmind} \tilde{\vn{\beta}}= \mathscrbf{B}- \mathscrbf{D}. \end{equation} Inserting the Berry-curvature expression of DMI~\cite{mothedmisot,phase_space_berry} \begin{equation}\label{eq_dmi_finite_temperature} \mathscr{D}_{ij} \!=\! \frac{1}{\mathcal{N}V} \!\!\sum_{\vn{k}n} \!\left\{ f_{\vn{k}n}A_{\vn{k}nji} \!+\!\frac{1}{\beta} \ln \!\left[ 1\!+\!e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \!\!B_{\vn{k}nji} \!\right\}, \end{equation} we obtain for the intrinsic contribution \begin{equation} \begin{aligned}\label{eq_thermo_beta_int} \tilde{\beta}^{\rm int}_{ij}=& \mathscr{B}^{\rm int}_{ij}- \mathscr{D}^{\phantom{int}}_{ij}=\\ =&\frac{1}{\mathcal{N}V} \sum_{\vn{k}n} \bigg\{ f_{\vn{k}n} \left[ A_{\vn{k}nji}-(\mathcal{E}_{\vn{k}n}-\mu)B_{\vn{k}nji} \right]\\ -& \!\left[ f_{\vn{k}n}A_{\vn{k}nji} \!+\!\frac{1}{\beta} \ln \!\left[ 1\!+\!e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \!\!B_{\vn{k}nji} \!\right]\bigg\}\\ =&-\frac{1}{\mathcal{N}V} \sum_{\vn{k}n} B_{\vn{k}nji} \bigg\{ f_{\vn{k}n} (\mathcal{E}_{\vn{k}n}-\mu) +\\ &\quad\quad\quad\quad+\frac{1}{\beta} \ln \left[ 1+e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \bigg\}, \end{aligned} \end{equation} where $\beta=(k_{\rm B}T)^{-1}$. 
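It is instructive to check numerically that the curly bracket in the last line of \eqref{eq_thermo_beta_int} vanishes for all band energies as $T\rightarrow 0$; this is precisely the behavior exploited in section~\ref{sec_time_dependent}, where the third law of thermodynamics is invoked. A standalone Python sketch:
\begin{verbatim}
import numpy as np

def bracket(E, mu, kT):
    # f(E)(E - mu) + kT ln[1 + exp(-(E - mu)/kT)]
    x = (E - mu) / kT
    f = 0.5 * (1.0 - np.tanh(x / 2.0))   # Fermi function, overflow-safe
    return f * (E - mu) + kT * np.logaddexp(0.0, -x)

E = np.linspace(-1.0, 1.0, 9)
for kT in (0.1, 0.01, 0.001):
    print(kT, np.max(np.abs(bracket(E, 0.0, kT))))
# the maximum equals kT*ln(2), attained at E = mu, and vanishes as T -> 0
\end{verbatim}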
Using \begin{gather} \begin{aligned}\label{eq_convert_log} &f_{\vn{k}n} (\mathcal{E}_{\vn{k}n}-\mu)+\frac{1}{\beta} \ln \left[ 1+e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right]=\\ &=-\int_{-\infty}^{\mu} d \mathcal{E} f'(\mathcal{E}_{\vn{k}n}+\mu-\mathcal{E}) (\mathcal{E}_{\vn{k}n}-\mathcal{E})=\\ &=-\int_{-\infty}^{\mu} \!\!\!\!d \mathcal{E} \int_{-\infty}^{\infty} \!\!\!\!d \mathcal{E}' f'(\mathcal{E}'\!+\!\mu\!-\!\mathcal{E}) (\mathcal{E}'\!-\!\mathcal{E}) \delta(\mathcal{E}'\!-\!\mathcal{E}_{\vn{k}n})=\\ &=- \int_{-\infty}^{\infty} d \mathcal{E}' f'(\mathcal{E}') (\mathcal{E}'-\mu) \Theta(\mathcal{E}'-\mathcal{E}_{\vn{k}n}),\\ \end{aligned}\raisetag{5\baselineskip} \end{gather} where $\Theta$ is the Heaviside unit step function, we can rewrite \eqref{eq_thermo_beta_int} as \begin{equation}\label{eq_beta_itsot} \tilde{\beta}_{ij}^{\rm int}(\hat{\vn{n}})= -\frac{1}{eV}\int_{-\infty}^{\infty} d\mathcal{E} f'(\mathcal{E}) (\mathcal{E}-\mu) t_{ji}^{\rm int}(\hat{\vn{n}},\mathcal{E}). \end{equation} Here, \begin{equation} t_{ij}^{\rm int}(\hat{\vn{n}},\mathcal{E}) =-\frac{e}{\mathcal{N}} \sum_{\vn{k}n} \Theta(\mathcal{E}-\mathcal{E}_{\vn{k}n}) B_{\vn{k}nij} \end{equation} is the intrinsic SOT torkance tensor~\cite{ibcsoit,mothedmisot} at zero temperature as a function of Fermi energy $\mathcal{E}$ and $e=|e|$ is the elementary positive charge. The intrinsic TSOT and ITSOT are even in magnetization, i.e., $\tilde{\beta}_{ij}^{\rm int}(\hat{\vn{n}})=\tilde{\beta}_{ij}^{\rm int}(-\hat{\vn{n}})$. \eqref{eq_kubo_linear_response_pumped_energy_current} contains an additional contribution which is odd in magnetization, i.e., $\tilde{\beta}_{ij}^{\rm odd}(\hat{\vn{n}})=-\tilde{\beta}_{ij}^{\rm odd}(-\hat{\vn{n}})$, and which is given by \begin{equation}\label{eq_odd_itsot} \tilde{\beta}^{\rm odd}_{ij}(\hat{\vn{n}})=\frac{1}{eV}\int_{-\infty}^{\infty} d\mathcal{E} f'(\mathcal{E}) (\mathcal{E}-\mu) t_{ji}^{\rm odd}(\hat{\vn{n}},\mathcal{E}), \end{equation} where $t_{ji}^{\rm odd}(\hat{\vn{n}},\mathcal{E})$ is the odd contribution to the SOT torkance tensor as a function of Fermi energy~\cite{ibcsoit}. The total $\tilde{\beta}_{ij}(\hat{\vn{n}})$ coefficient, i.e., the sum of all contributions, is related to the total torkance $t_{ji}(-\hat{\vn{n}},\mathcal{E})$ for magnetization in $-\hat{\vn{n}}$ direction by \begin{equation}\label{eq_total_itsot} \tilde{\beta}_{ij}(\hat{\vn{n}})=-\frac{1}{eV}\int_{-\infty}^{\infty} d\mathcal{E} f'(\mathcal{E}) (\mathcal{E}-\mu) t_{ji}(-\hat{\vn{n}},\mathcal{E}), \end{equation} which contains \eqref{eq_beta_itsot} and \eqref{eq_odd_itsot} as special cases. It is instructive to verify that the ITSOT described by \eqref{eq_total_itsot} is the Onsager-reciprocal of the TSOT (\eqref{eq_tsot_define_beta}), where~\cite{mothedmisot} \begin{equation}\label{eq_beta_tsot} \beta_{ij}(\hat{\vn{n}})= \frac{1}{e}\int_{-\infty}^{\infty} d\mathcal{E} f'(\mathcal{E}) \frac{(\mathcal{E}-\mu)}{T} t_{ij}(\hat{\vn{n}},\mathcal{E}). 
\end{equation} Comparison of \eqref{eq_total_itsot} and \eqref{eq_beta_tsot} yields \begin{equation}\label{eq_betatilde_from_beta} \vn{\beta}(\hat{\vn{n}})=-\frac{V}{T}[\tilde{\vn{\beta}}(-\hat{\vn{n}})]^{\rm T} \end{equation} and thus \begin{equation} \begin{pmatrix} -\mathscrbf{J}^{Q}\\ \vn{\tau}/V \end{pmatrix} = \begin{pmatrix} T\vn{\lambda}(\hat{\vn{n}}) & \tilde{\vn{\beta}}(\hat{\vn{n}})\\ [\tilde{\vn{\beta}}(-\hat{\vn{n}})]^{\rm T} & -\vn{\Lambda}(\hat{\vn{n}})\\ \end{pmatrix} \begin{pmatrix} \frac{\vn{\nabla}T}{T}\\ \hat{\vn{n}}\times\frac{\partial\hat{\vn{n}}}{\partial t} \end{pmatrix}, \end{equation} where $\vn{\lambda}$ is the thermal conductivity tensor and $\vn{\Lambda}$ describes the Gilbert damping and the gyromagnetic ratio~\cite{invsot}. As expected, the response matrix \begin{equation} \mathscrbf{A}(\hat{\vn{n}})= \begin{pmatrix} T\vn{\lambda}(\hat{\vn{n}}) & \tilde{\vn{\beta}}(\hat{\vn{n}})\\ [\tilde{\vn{\beta}}(-\hat{\vn{n}})]^{\rm T} & -\vn{\Lambda}(\hat{\vn{n}})\\ \end{pmatrix} \end{equation} satisfies the Onsager symmetry $\mathscrbf{A}(\hat{\vn{n}})= [\mathscrbf{A}(-\hat{\vn{n}})]^{\rm T}$. \eqref{eq_total_itsot} and \eqref{eq_subtract_dmi_for_heat} constitute the central result of this section. Together, these two equations provide the recipe to compute the heat current $\mathscrbf{J}^{\rm Q}$ driven by magnetization dynamics $\partial\hat{\vn{n}}/\partial t$. We discuss applications in section~\ref{section_ab_initio}. \section{Using the ground-state energy currents to derive expressions for DMI and orbital magnetization} \label{sec_time_dependent} The expression \eqref{eq_dmi_finite_temperature} for the DMI-spiralization tensor $\mathscrbf{D}$ was derived both from semiclassics~\cite{phase_space_berry} and from static quantum mechanical perturbation theory~\cite{mothedmisot}. Alternatively, the $T=0$ expression of $\mathscrbf{D}$ can be obtained elegantly and easily by invoking the third law of thermodynamics: For $T\rightarrow 0$ the ITSOT must vanish, $\tilde{\vn{\beta}}\rightarrow 0$, because otherwise we could pump heat at zero temperature and thereby violate Nernst's theorem. Hence, $\mathscrbf{D}\rightarrow \mathscrbf{B}$ according to \eqref{eq_beta_bmind}. In other words, at $T=0$ the energy current density $\mathscrbf{J}^{E}$ in \eqref{eq_energy_curr_kubo_mag_dyn2} is identical to the DMI energy current density $\mathscrbf{J}^{\rm DMI}=- \mathscrbf{D} \left( \hat{\vn{n}} \times \frac{\partial \hat{\vn{n}}}{\partial t} \right)$ because the heat current is zero. Thus, at $T=0$ we obtain from \eqref{eq_b_intrinsic} \begin{equation} \mathscr{D}_{ij}=\mathscr{B}^{\rm int}_{ij}=\frac{1}{\mathcal{N}V} \!\!\sum_{\vn{k}n} f_{\vn{k}n} \left[ A_{\vn{k}nji}-(\mathcal{E}_{\vn{k}n}-\mu)B_{\vn{k}nji} \right], \end{equation} which agrees with \eqref{eq_dmi_finite_temperature} at $T=0$. Similarly, we can derive the $T=0$ expression of orbital magnetization from the energy current $\mathscrbf{J}^{\rm orb}=-\vn{E}\times\vn{M}^{\rm orb}$ discussed in \eqref{eq_curr_orb}: For $T\rightarrow 0$ the inverse anomalous Nernst effect (i.e., the generation of a transverse heat current by an applied electric field) has to vanish according to the third law of thermodynamics. Hence, the energy current driven by an applied electric field at $T=0$ does not contain any heat current and is therefore identical to $\mathscrbf{J}^{\rm orb}$.
We introduce the tensor $\mathscrbf{R}$ to describe the linear response of the energy current $\mathscrbf{J}$ to an applied electric field $\vn{E}$, i.e., $\mathscrbf{J}=\mathscrbf{R}\vn{E}$. We describe the effect of the electric field by the vector potential $\vn{A}=-\vn{E}\sin(\omega t)/\omega$ and take the limit $\omega\rightarrow 0$ later. The Hamiltonian density describing the interaction between electric current density $\vn{J}$ and vector potential is $-\vn{J}\cdot \vn{A}$, from which we obtain the time-dependent perturbation \begin{equation} \delta H=-\frac{\sin(\omega t)}{\omega}e\vn{E}\cdot\vn{v}. \end{equation} Introducing the retarded energy-current--velocity correlation function \begin{equation} G_{\mathcal{J}^{E}_{i}\!\!,v_{j}^{\phantom{\alpha}}}^{\rm R} (\hbar\omega) = -i\int\limits_{0}^{\infty}dt e^{i\omega t} \left\langle [ \mathcal{J}^{E}_{i}(t),v^{\phantom{\alpha}}_{j}(0) ]_{-} \right\rangle \end{equation} we can write the elements of the tensor $\mathscrbf{R}$ as \begin{equation} \mathscr{R}_{ij}= e \lim_{\omega\to 0}\! \frac{{\rm Im} G_{\mathcal{J}^{E}_{i}\!\!, v_{j}^{\phantom{\alpha}}}^{\rm R} \!(\hbar\omega)}{\hbar\omega}. \end{equation} This allows us to determine $\mathscrbf{J}^{\rm orb}$ as $\mathscrbf{J}^{\rm orb}=\mathscrbf{R}^{\rm int}\vn{E}$, where the intrinsic Berry-curvature contribution to the response tensor $\mathscrbf{R}$ is given by \begin{equation} \begin{aligned} \mathscr{R}_{ij}^{\rm int}=& -\frac{2e\hbar}{\mathcal{N}} \!\sum_{\vn{k}n} f_{\vn{k}n}\!\! \sum_{m\ne n} \!\text{Im} \frac{ \langle u_{\vn{k}n} | \mathcal{J}^{E}_{i} | u_{\vn{k}m} \rangle \langle u_{\vn{k}m} | v_{j}^{\phantom{i}} | u_{\vn{k}n} \rangle } { (\mathcal{E}_{\vn{k}m}-\mathcal{E}_{\vn{k}n})^2 }\\ =& \frac{1}{\mathcal{N}V}\sum_{\vn{k}n}f_{\vn{k}n} \left[ \mathscr{M}_{\vn{k}nij} - (\mathcal{E}_{\vn{k}n}-\mu) \mathscr{N}_{\vn{k}nij} \right] , \end{aligned} \end{equation} with \begin{equation}\label{eq_om_m_kn_kubo} \mathscr{M}_{\vn{k}nij}=e\hbar\sum_{m\neq n}\text{Im} \frac{ \langle u_{\vn{k}n} |v_{i}| u_{\vn{k}m} \rangle \langle u_{\vn{k}m} |v_{j}| u_{\vn{k}n} \rangle } { \mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m} } \end{equation} and \begin{equation}\label{eq_om_n_kn_kubo} \mathscr{N}_{\vn{k}nij} =2e\hbar\sum_{m\neq n}\text{Im} \frac{ \langle u_{\vn{k}n} |v_{i}| u_{\vn{k}m} \rangle \langle u_{\vn{k}m} |v_{j}| u_{\vn{k}n} \rangle } { (\mathcal{E}_{\vn{k}m}-\mathcal{E}_{\vn{k}n})^2 }. \end{equation} From $\vn{M}^{\rm orb}\times\vn{E}=\mathscrbf{R}^{\rm int}\vn{E}$ we obtain \begin{equation}\label{eq_orb_mag_tdpt} \vn{M}^{\rm orb}= -\frac{1}{2} \hat{\vn{e}}_{k}^{\phantom{i}} \epsilon^{\phantom{ij}}_{kij} \mathscr{R}^{\rm int}_{ij}. \end{equation} It is straightforward to verify that $\vn{M}^{\rm orb}$ given by \eqref{eq_orb_mag_tdpt} agrees with the $T=0$ expressions for orbital magnetization derived from quantum mechanical perturbation theory~\cite{shi_quantum_theory_orbital_mag}, from semiclassics~\cite{ane_niu}, and within the Wannier representation~\cite{om_insulators_mlwfs,om_crystals_mlwfs}. Combining the third law of thermodynamics with the continuity equations \eqref{eq_dmi_energy_current_continuity} and \eqref{eq_conti_orb} thus provides an elegant way to derive expressions for $\mathscrbf{D}$ and $\vn{M}^{\rm orb}$ at $T=0$. We can extend these derivations to $T>0$ if we postulate that the linear response to thermal gradients is described by Mott-like expressions.
In the case of the TSOT this Mott-like expression is \eqref{eq_beta_tsot}, while it is~\cite{ane_niu,ane_weischenberg,thermogalvanomagnetics_ebert} \begin{equation}\label{eq_alpha_ane} \alpha_{xy}=\frac{1}{e} \int_{-\infty}^{\infty} d\mathcal{E} f'(\mathcal{E}) \frac{\mathcal{E}-\mu}{T} \sigma_{xy}(\mathcal{E}) \end{equation} in the case of the anomalous Nernst effect, where $\sigma_{xy}(\mathcal{E})$ is the zero-temperature anomalous Hall conductivity as a function of Fermi energy $\mathcal{E}$ and the anomalous Nernst current due to a temperature gradient in $y$ direction is $j_x=-\alpha_{xy}\partial T/\partial y$. While \eqref{eq_beta_tsot} and \eqref{eq_alpha_ane} were, respectively, derived in the previous section and in \cite{ane_niu}, we now instead consider it an axiom that within the range of validity of the independent particle approximation the linear response to thermal gradients is always described by Mott-like expressions. Thereby, the derivation in the present section becomes independent of the derivation in the preceding section. Applying the Onsager reciprocity principle to \eqref{eq_beta_tsot} and \eqref{eq_alpha_ane} we find that the ITSOT and the inverse anomalous Nernst effect are, respectively, described by \eqref{eq_total_itsot} and by \begin{equation}\label{eq_iane_heat_curr} \mathscr{J}^{Q}_{y}=T \alpha^{\phantom{y}}_{xy} E^{\phantom{y}}_{x}. \end{equation} Employing the general identity \eqref{eq_convert_log} (in contrast to section~\ref{sec_itsot}, we now use it in the reverse direction) we obtain \begin{equation}\label{eq_beta_tilde_intrinsic} \begin{aligned} \tilde{\beta}^{\rm int}_{ij}=&-\frac{1}{\mathcal{N}V} \sum_{\vn{k}n} B_{\vn{k}nji} \bigg\{ f_{\vn{k}n} (\mathcal{E}_{\vn{k}n}-\mu) +\\ &\quad\quad\quad\quad+\frac{1}{\beta} \ln \left[ 1+e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \bigg\} \end{aligned} \end{equation} from \eqref{eq_beta_itsot} and, similarly, \eqref{eq_iane_heat_curr} can be written as \begin{equation}\label{eq_iane_heat_curr_curvature} \begin{aligned} \mathscr{J}^{Q}_{y}&= -\frac{1}{\mathcal{N}V} \sum_{\vn{k}n} \mathscr{N}_{\vn{k}nyx} \bigg\{ f_{\vn{k}n} (\mathcal{E}_{\vn{k}n}-\mu) +\\ &\quad\quad\quad\quad+\frac{1}{\beta} \ln \left[ 1+e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \bigg\}E_{x}^{\phantom{y}}. \end{aligned} \end{equation} The finite-$T$ expressions of $\mathscrbf{D}$ and $\vn{M}^{\rm orb}$ are now easily obtained, respectively, by subtracting the ITSOT heat current given by \eqref{eq_beta_tilde_intrinsic} from the energy current \eqref{eq_b_intrinsic} and by subtracting the heat current \eqref{eq_iane_heat_curr_curvature} from $\mathscr{J}^{\phantom{x}}_{y}=\mathscr{R}^{\rm int}_{yx}E^{\phantom{x}}_x$. This leads to \eqref{eq_dmi_finite_temperature} for the DMI spiralization tensor and to \begin{equation}\label{eq_orb_mag_finite_t} M_{z}^{\rm orb} \! = \! \frac{1}{\mathcal{N}V} \! \sum_{\vn{k}n} \! \left\{ \! f_{\vn{k}n} \mathscr{M}_{\vn{k}nyx} \! + \! \frac{1}{\beta} \mathscr{N}_{\vn{k}nyx} \ln \! \left[ 1 \! + \! e^{-\beta(\mathcal{E}_{\vn{k}n}-\mu)} \right] \! \right\} \end{equation} for the orbital magnetization. \eqref{eq_orb_mag_finite_t} agrees with the finite-$T$ expressions of $M_{z}^{\rm orb}$ derived elsewhere~\cite{shi_quantum_theory_orbital_mag,ane_niu}. \section{Ab-initio calculations} \label{section_ab_initio} We investigate TSOT and ITSOT in a Mn/W(001) magnetic bilayer composed of one monolayer of Mn deposited on 9 layers of W(001).
The ground state of this system is magnetically noncollinear and can be described by the cycloidal spin spiral \eqref{eq_spin_spiral_cycloid}~\cite{dmi_mnw_ferriani}. On phenomenological grounds~\cite{symmetry_considerations_PhysRevB.86.094406, spin_motive_force_hals_brataas} we can expand the torkance as well as the TSOT and ITSOT coefficients locally at a given point in space in terms of $\hat{\vn{n}}$ and $\bar{\mathscrbf{C}}$: \begin{equation} \begin{aligned} t^{\phantom{i}}_{ij}(\hat{\vn{n}},\bar{\mathscrbf{C}})& =\sum_{k}t_{ijk}^{(1,0)} \hat{n}_{k}^{\phantom{i}} + \sum_{kl}t_{ijkl}^{(0,1)}\bar{\mathscr{C}}_{kl}^{\phantom{i}} +\\ &+\sum_{klm}t_{ijklm}^{(1,1)}\hat{n}^{\phantom{i}}_{k} \bar{\mathscr{C}}^{\phantom{i}}_{lm} +\sum_{kl}t_{ijkl}^{(2,0)}\hat{n}^{\phantom{i}}_{k}\hat{n}^{\phantom{i}}_{l} +\cdots. \end{aligned} \end{equation} The coefficients $t_{ijk}^{(1,0)}$, $t_{ijkl}^{(0,1)}$, $t_{ijklm}^{(1,1)}$,\dots in this expansion can be extracted from magnetically collinear calculations. The TSOT and ITSOT coefficients admit analogous expansions of the same form. Here, we consider only $t_{ijk}^{(1,0)}$ and $t_{ijkl}^{(2,0)}$, which give rise to the following contribution to the torque $\vn{\tau}$: \begin{equation} \begin{aligned} \vn{\tau}&= t_{xx}^{\rm odd}(\hat{\vn{e}}_{z}) \hat{\vn{n}}\times (\vn{E}\times \hat{\vn{e}}_{z})+\\ &+ t_{yx}^{\rm even}(\hat{\vn{e}}_{z}) \hat{\vn{n}} \times [ \hat{\vn{n}}\times (\vn{E}\times \hat{\vn{e}}_{z}) ], \end{aligned} \end{equation} where we used that for magnetization direction $\hat{\vn{n}}$ along $z$ it follows from symmetry considerations that $t^{\phantom{x}}_{xx}=t^{\phantom{x}}_{yy}$, $t^{\phantom{x}}_{xy}=-t^{\phantom{x}}_{yx}$, $t^{\rm even}_{xx}=0$ and $t^{\rm odd}_{yx}=0$. The SOT in this system has already been discussed by us~\cite{ibcsoit}. In order to obtain TSOT and ITSOT, we calculate the torkance for the magnetically collinear ferromagnetic state with magnetization direction $\hat{\vn{n}}$ set along $z$ as a function of Fermi energy and use \eqref{eq_beta_tsot} and \eqref{eq_total_itsot} to determine the TSOT and ITSOT coefficients $\vn{\beta}$ and $\tilde{\vn{\beta}}$, respectively. Computational details of the density-functional theory calculation of the electronic structure as well as technical details of the torkance calculation are given in \cite{ibcsoit}. The torkance calculation is performed with the help of Wannier functions~\cite{wannier90,WannierPaper} and a quasiparticle broadening of $\Gamma=25$~meV is applied. \begin{figure} \flushright \includegraphics[width=0.82\linewidth,trim=0cm 0cm 7cm 18cm,clip]{evenandoddtsot_vs_temp.pdf} \caption{\label{figuretsot} Thermal torkance $\beta$ vs.\ temperature of a Mn/W(001) magnetic bilayer for magnetization in $z$ direction. Solid line: Even component $\beta_{yx}^{\rm even}$ of the thermal torkance. Dashed line: Odd component $\beta_{xx}^{\rm odd}$ of the thermal torkance. $\beta$ is plotted in units of $\mu{\rm eV}\,a_{0}/{\rm K}=8.478\times 10^{-36}$~Jm/K, where $a_{0}$ is Bohr's radius. } \end{figure} Due to symmetry it suffices to discuss the TSOT coefficients $\beta_{yx}^{\rm even}$ and $\beta_{xx}^{\rm odd}$, which are shown in Figure~\ref{figuretsot} as a function of temperature. For small temperatures we find $\beta_{ij}\propto T$ as expected from \begin{equation}\label{eq_low_temp_beta} \beta_{ij}\simeq-\frac{\pi^2 k^2_{\rm B}T}{3e} \frac{\partial\, t_{ij}} {\partial\,\mu}, \end{equation} which is obtained from \eqref{eq_beta_tsot} using the Sommerfeld expansion.
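The low-temperature law \eqref{eq_low_temp_beta} is easily verified by inserting a model torkance into \eqref{eq_beta_tsot} and comparing with the Sommerfeld expression; in the following standalone Python sketch the cubic $t(\mathcal{E})$ is an arbitrary placeholder, not the calculated Mn/W(001) torkance (units with $e=k_{\rm B}=1$, so $T$ and $k_{\rm B}T$ coincide):
\begin{verbatim}
import numpy as np

e = 1.0
t = lambda E: 0.3 + 0.8 * E + 0.5 * E**2 + 0.4 * E**3   # model torkance t(E)
dt_dmu = lambda mu: 0.8 + 1.0 * mu + 1.2 * mu**2        # its derivative

def beta_tsot(mu, kT):
    Es = np.linspace(mu - 40 * kT, mu + 40 * kT, 20001)
    fp = -0.25 / kT / np.cosh((Es - mu) / (2 * kT)) ** 2   # f'(E)
    return np.trapz(fp * (Es - mu) / kT * t(Es), Es) / e   # eq. for beta

mu = 0.2
for kT in (0.05, 0.02, 0.01):
    exact = beta_tsot(mu, kT)
    sommerfeld = -np.pi**2 * kT / (3 * e) * dt_dmu(mu)
    print(kT, exact, sommerfeld)        # agreement improves as kT decreases
\end{verbatim}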
Slightly above 100K both $\beta_{yx}^{\rm even}$ and $\beta_{xx}^{\rm odd}$ stop following the linear behavior of the low temperature expansion \eqref{eq_low_temp_beta}: After reaching a maximum both $\beta_{yx}^{\rm even}$ and $\beta_{xx}^{\rm odd}$ decrease and finally change sign. At $T=300$K the thermal torkances are $\beta_{yx}^{\rm even}=5.24\times 10^{-35}$Jm/K and $\beta_{xx}^{\rm odd}=-3.21\times 10^{-36}$Jm/K. Thermal torkances of comparable magnitude have been determined in calculations on FePt/Pt magnetic bilayers~\cite{fept_guillaume}. Using \eqref{eq_betatilde_from_beta} and the volume of the unit cell of $V=1.58\times 10^{-28}$m$^3$ to convert the TSOT coefficients into ITSOT coefficients, we obtain $\tilde{\beta}_{yx}^{\rm even}=-99.49\mu$J/m$^2$ and $\tilde{\beta}_{xx}^{\rm odd}=-6.09\mu$J/m$^2$ at $T=300$K. When the magnetization precesses around the $z$ axis in ferromagnetic resonance (this situation is sketched in Figure~1d) with frequency $\omega$ and cone angle $\theta$ according to \begin{equation} \hat{\vn{n}}(t)= [ \sin(\theta)\cos(\omega t), \sin(\theta)\sin(\omega t), \cos(\theta) ]^{\rm T}, \end{equation} the following ITSOT heat current is obtained from \eqref{eq_subtract_dmi_for_heat} in the limit of small $\theta$: \begin{equation}\label{eq_fmr_heat_curr_z} \begin{aligned} \mathscr{J}_{x}^{Q}&= \omega \theta \left[ \tilde{\beta}_{xx}^{\rm odd}\cos(\omega t) - \tilde{\beta}_{yx}^{\rm even}\sin(\omega t) \right] \\ \mathscr{J}_{y}^{Q}&= \omega \theta \left[ \tilde{\beta}_{yx}^{\rm even}\cos(\omega t) + \tilde{\beta}^{\rm odd}_{xx}\sin(\omega t) \right], \end{aligned} \end{equation} where we made use of $\tilde{\beta}^{\phantom{\rm od}}_{xx}= \tilde{\beta}^{\phantom{\rm od}}_{yy}=\tilde{\beta}^{\rm odd}_{xx}$ and $-\tilde{\beta}^{\phantom{\rm ev}}_{xy}= \tilde{\beta}^{\phantom{\rm ev}}_{yx}=\tilde{\beta}^{\rm even}_{yx}$, which follows from symmetry considerations. Using the ITSOT coefficients $\tilde{\beta}^{\rm even}_{yx}$ and $\tilde{\beta}^{\rm odd}_{xx}$ determined above at $T=300$K we can determine the amplitudes of $\mathscr{J}_{x}^{Q}$ and $\mathscr{J}_{y}^{Q}$. Assuming a cone angle of $1^{\circ}$ and a frequency of $\omega=2\pi\cdot$5GHz we find that the amplitude of the oscillating heat current density $\mathscr{J}_{x}^{Q}$ is \begin{equation} \omega\theta \sqrt{ \left( \tilde{\beta}^{\rm even}_{yx} \right)^2 + \left( \tilde{\beta}^{\rm odd}_{xx} \right)^2 }\approx 55\frac{\rm kW}{{\rm m}^2}. \end{equation} The heat current density $\mathscr{J}_{y}^{Q}$ has the same amplitude. We can use the thermal conductivity of bulk W of $\lambda_{xx}$=174~W/(Km)~\cite{ho_powell_liley} at $T$=300~K to estimate the temperature gradient needed to drive a heat current of this magnitude: (55kW/m$^2$)/$\lambda_{xx}$=316~K/m. The thickness of the Mn/W(001) film is 1.58~nm. The amplitude of the heat current per length flowing in $x$ direction is thus 55~kW/m$^2\cdot$1.58~nm$\approx 87\mu$W/m. These estimates suggest that $\mathscr{J}^{Q}_{\phantom{y}}$ is measurable in ferromagnetic resonance experiments. According to \eqref{eq_fmr_heat_curr_z} the heat current can be made larger by increasing the cone angle. However, in ferromagnetic resonance experiments the cone angle $\theta$ is small. Therefore, we estimate the heat current driven by a flat cycloidal spin spiral that moves with velocity $w$ in $x$ direction. 
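The estimates quoted above can be reproduced with the following short standalone Python calculation, which uses only the numbers given in the text (small deviations from the quoted values stem from rounding):
\begin{verbatim}
import numpy as np

beta_even = -99.49e-6          # ITSOT coefficients at T = 300 K (J/m^2)
beta_odd = -6.09e-6
omega = 2 * np.pi * 5e9        # FMR angular frequency (rad/s)
theta = np.deg2rad(1.0)        # cone angle of 1 degree
d = 1.58e-9                    # Mn/W(001) film thickness (m)
lam_xx = 174.0                 # thermal conductivity of bulk W (W/(K m))

amp = omega * theta * np.hypot(beta_even, beta_odd)
print(amp / 1e3)               # ~55 kW/m^2, amplitude of the heat current
print(amp / lam_xx)            # ~314 K/m (text: 316 K/m from rounded 55 kW/m^2)
print(amp * d / 1e-6)          # ~86 uW/m heat current per length (text: ~87)

q = 2 * np.pi / 2.3e-9         # spin-spiral wave number
print(99.49e-6 * q / 1e3)      # ~272 kW/m^2 for w = 1 m/s (text: ~270)
\end{verbatim}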
The magnetization direction of such a spiral is given by \begin{equation} \hat{\vn{n}}_{\rm c}(\vn{r},t)=\hat{\vn{n}}_{\rm c}(x,t)= \begin{pmatrix} \sin(q(x-wt))\\ 0\\ \cos(q(x-wt)) \end{pmatrix}. \end{equation} With $\hat{\vn{n}}_{\rm c}(\vn{r},t)\times \partial \hat{\vn{n}}_{\rm c}(\vn{r},t)/\partial t=wq\hat{\vn{e}}_{y}$ we get \begin{equation} \mathscr{J}^{\rm Q}_{x}=-\tilde{\beta}^{\rm even}_{xy}wq \end{equation} from \eqref{eq_subtract_dmi_for_heat}, i.e., a constant-in-time heat current in $x$ direction. Using $\tilde{\beta}_{xy}^{\rm even}=99.49\mu$J/m$^2$ determined above and a spin-spiral wavelength of 2.3~nm~\cite{dmi_mnw_ferriani} we obtain a heat current density of $\mathscr{J}^{\rm Q}_{x}=-270$~kW/m$^2$ for a spin spiral moving with a speed of $w=1$~m/s. This estimate suggests that fast domain walls moving at a speed of the order of 100~m/s drive significant heat currents that correspond to temperature gradients of the order of 0.1~K/$\mu$m. \section{Summary} Magnetization dynamics drives heat currents in magnets with broken inversion symmetry and SOI. This effect is the inverse of the thermal spin-orbit torque. We use the Kubo linear-response formalism to derive equations suitable for calculating the inverse thermal spin-orbit torque (ITSOT) from first principles. We find that a ground-state energy current associated with the Dzyaloshinskii-Moriya interaction (DMI) is driven by magnetization dynamics and needs to be subtracted from the linear response of the energy current in order to extract the heat current. We show that the ground-state energy currents obtained from the Kubo linear-response formalism can also be used to derive expressions for DMI and for orbital magnetization. The ITSOT extends the picture of phenomena associated with the coupling of spin to electrical currents and heat currents in magnets with broken inversion symmetry and SOI. Based on \textit{ab-initio} calculations we estimate the heat currents driven by magnetization precession and moving spin-spirals in Mn/W(001) magnetic bilayers. Our estimates suggest that fast domain walls in magnetic bilayers drive significant heat currents. \ack We gratefully acknowledge computing time on the supercomputers of J\"ulich Supercomputing Center and RWTH Aachen University as well as financial support from the programme SPP 1538 Spin Caloric Transport of the Deutsche Forschungsgemeinschaft.\\
\section*{Abstract} We prove a conjecture of Helleseth that claims that for any $n \geq 0$, a pair of binary maximal linear sequences of period $2^{2^n}-1$ cannot have a three-valued cross-correlation function. \section{Introduction} The binary maximal linear sequences of period $2^m-1$ are the sequences of elements in $\GF(2)$ of the form $\{\Tr(\alpha^{d i + t})\}_{i \in {\mathbb Z}}$ where $\alpha$ is a generator of $\GF(2^m)^*$, $\Tr\colon \GF(2^m) \to \GF(2)$ is the absolute trace, and $d$ and $t$ are integers (or integers modulo $2^m-1$) with $\gcd(d,2^m-1)=1$. (See the Introduction of \cite{Helleseth}.) The cross-correlation of any two binary sequences $a=\{a_i\}$ and $b=\{b_i\}$ whose periods are divisors of $2^m-1$ is the function $C_{a,b}(t)=\sum_{i=0}^{2^m-2} (-1)^{a_{i-t}+b_i}$. In this note, we shall take $a=\{a_i\}=\{\Tr(\alpha^{i})\}$ and $b=\{b_i\}=\{\Tr(\alpha^{d i})\}$, where the {\it decimation} $d$ satisfies $\gcd(d,2^m-1)=1$. We call decimations with $d \equiv 1, 2, \ldots, 2^{m-1} \pmod{2^m-1}$ {\it trivial decimations} because $\{\Tr(\alpha^{2^k i})\}$ is the same sequence as $\{\Tr(\alpha^i)\}$. One readily shows that $C_{a,b}(t)$ is the same as $$C_d(t)=\sum_{x \in \GF(2^m)^*} (-1)^{\Tr(\alpha^{-t} x + x^d)}.$$ For a fixed $d$, we are interested in how many different values $C_d(t)$ takes as $t$ varies over ${\mathbb Z}/(2^m-1){\mathbb Z}$. We say that $C_d(t)$ is {\it $v$-valued} to mean that $|\{C_d(t)\colon t\in {\mathbb Z}/(2^m-1){\mathbb Z}\}|=v$. Helleseth gave the following criterion for determining whether $C_d(t)$ is two-valued. \begin{theorem}[Helleseth \cite{Helleseth}, Theorem 3.1(d),(g), Theorem 4.1]\label{HellesethsTheorem} If $d \equiv 1, 2, \ldots, 2^{m-1} \pmod{2^m-1}$, then $C_d(t) \in \{-1, 2^m-1\}$ for all $t$. Otherwise, $C_d(t)$ takes at least three different values. \end{theorem} In the same paper, Helleseth conjectured the following. \begin{conjecture}[Cf.~Helleseth \cite{Helleseth}, Conjecture 5.2]\label{HellesethsConjecture} If $m$ is a power of $2$, $C_d(t)$ is not three-valued. \end{conjecture} In view of Theorem \ref{HellesethsTheorem}, this conjecture says that if $m$ is a power of $2$, then $C_d(t)$ is either two-valued (if $d$ is a trivial decimation) or takes four or more values (if $d$ is nontrivial). We prove this conjecture in this note. Feng \cite{Feng} recently proved the following weaker form of Conjecture \ref{HellesethsConjecture}. \begin{theorem}[Feng \cite{Feng}, Theorem 2]\label{FengsTheorem} If $m$ is a power of $2$ and $C_d(t)=-1$ for some value of $t$, then $C_d(t)$ cannot be three-valued. \end{theorem} We prove Conjecture \ref{HellesethsConjecture} by proving the following. \begin{theorem}\label{OurTheorem} If $C_d(t)$ is three-valued, then $C_d(t)=-1$ for at least one value of $t$. \end{theorem} This, combined with Theorem \ref{FengsTheorem}, immediately implies Conjecture \ref{HellesethsConjecture}. \begin{remark} One should note that our theorem does not assume $m$ is a power of $2$, so it is much more general in scope than what is needed. In fact, one can prove the same theorem for maximal linear sequences derived from fields $\GF(p^m)$ with $p$ odd: this (and more) is done in \cite{Katz}. \end{remark} \section{Proof of Theorem \ref{OurTheorem}} We shall prefer to work in terms of the {\it Walsh transform}, defined as $$W_d(a)=\sum_{x \in \GF(2^m)} (-1)^{\Tr(x^d+ax)},$$ and it is straightforward to show that $$W_d(\alpha^{-t})=1+C_d(t).$$ Thus the values of $W_d$ on $\GF(2^m)^*$ are just the values of $C_d$ shifted by $1$.
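For small fields these objects are easily computed exhaustively. The following standalone Python sketch (illustrative only) lists the cross-correlation spectrum $\{C_d(t)\}$ in $\GF(2^4)$, where $m=4$ is a power of $2$, for the trivial decimation $d=1$ and for the nontrivial decimation $d=7$:
\begin{verbatim}
M = 4                        # m = 4, a power of 2, so the conjecture applies
POLY = 0b10011               # x^4 + x + 1, primitive over GF(2)
Q = 1 << M                   # field size 2^m = 16
ALPHA = 0b10                 # the residue of x, a generator of GF(2^4)^*

def gf_mult(a, b):           # multiply field elements encoded as integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= POLY
    return r

def gf_pow(a, e):            # square-and-multiply; exponents mod 2^m - 1
    r, base, e = 1, a, e % (Q - 1)
    while e:
        if e & 1:
            r = gf_mult(r, base)
        base = gf_mult(base, base)
        e >>= 1
    return r

def trace(x):                # Tr(x) = x + x^2 + ... + x^(2^(m-1)) in {0, 1}
    t = 0
    for _ in range(M):
        t ^= x
        x = gf_mult(x, x)
    return t

def spectrum(d):             # the value set {C_d(t) : t mod 2^m - 1}
    vals = set()
    for t in range(Q - 1):
        c = gf_pow(ALPHA, Q - 1 - t)          # alpha^{-t}
        vals.add(sum((-1) ** trace(gf_mult(c, x) ^ gf_pow(x, d))
                     for x in range(1, Q)))
    return sorted(vals)

print(spectrum(1))   # trivial decimation: [-1, 15]
print(spectrum(7))   # nontrivial: more than three values, as proved here
\end{verbatim}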
In particular, $C_d$ is three-valued if and only if $W_d$ is three-valued on $\GF(2^m)^*$. We need to establish a few well-known facts before proceeding to the proof of Theorem \ref{OurTheorem}. First, we need a simple result which, in rough terms, states that a sequence cannot be perfectly correlated or anti-correlated to a nontrivial decimation of itself. \begin{lemma}\label{Magnitude} If $d\not\equiv 1,2,\ldots,2^{m-1} \pmod{2^m-1}$, then $|W_d(a)| < 2^m$. \end{lemma} \begin{proof} From the definition of $W_d(a)$ as the sum $\sum_{x \in \GF(2^m)} (-1)^{\Tr(x^d+a x)}$ of $2^m$ terms in $\{1,-1\}$, it suffices to prove that the said terms are not all of the same sign. The $x=0$ term is $1$, and so the only way that all the terms can have the same sign is if $$\Tr(x^d+a x)=(x^d+x^{2 d} + \cdots + x^{2^{m-1} d}) + (a x+a^2 x^2+\cdots+a^{2^{m-1}} x^{2^{m-1}})$$ equals $0$ for all $x \in \GF(2^m)$, i.e., if and only if this polynomial is zero modulo $x^{2^m} - x$. Given our assumption on $d$, all the exponents of $x$ that appear in the polynomial as expressed above are distinct modulo $2^m-1$, so this cannot happen. \end{proof} We consider the first few power moments of $W_d$, with the $r$th power moment defined to be $$P_r = \sum_{a \in \GF(2^m)^*} W_d(a)^r,$$ where we use the convention $0^0=1$ in evaluating $P_0$. The power moments of $C_d$ have been calculated by Helleseth, whence it is easy to obtain those of $W_d$. \begin{proposition}[See Helleseth \cite{Helleseth}]\label{Moments} We have \begin{enumerate}[(a)] \item $P_0=2^m-1$, \item $P_1=2^m$,\label{FirstMoment} \item $P_2=2^{2 m}$, and\label{SecondMoment} \item $P_3=2^{2m} |V|$, \end{enumerate} where $V$ is the set of roots of $1+x^d+(1+x)^d$ in $\GF(2^m)$. \end{proposition} From these one can readily deduce the following, which also appears as calculations in \cite{Feng}. \begin{proposition}\label{Counts} Suppose that $W_d(a)$ is three-valued on $\GF(2^m)^*$ with values $A$, $B$, and $C$, and that $W_d(a)=C$ for $N_C$ values of $a \in \GF(2^m)^*$. Then $$N_C = \frac{2^{2 m} - 2^m (A+B) + (2^m-1) A B}{(C-A)(C-B)}$$ and $$2^{2 m} |V| = 2^{2 m} (A+B+C) - 2^m(A B + B C + C A) + (2^m-1) A B C,$$ where $V$ is the set of roots of $1+x^d+(1+x)^d$ in $\GF(2^m)$. \end{proposition} \begin{proof} To get $N_C$, compute $\sum_{a \in \GF(2^m)^*} (W_d(a)-A)(W_d(a)-B)$. On the one hand, $W_d(a) \in \{A,B,C\}$ implies that the sum is $N_C(C-A)(C-B)$. On the other hand, one can also calculate the sum in terms of power moments as $P_2 - (A+B) P_1 + A B P_0$, and then use the values given in Proposition \ref{Moments}. To get $|V|$, one can employ the same approach, this time with the sum $\sum_{a \in \GF(2^m)^*} (W_d(a)-A)(W_d(a)-B)(W_d(a)-C)$: on the one hand, it is zero, and on the other, it can be expressed in terms of $P_0$, $P_1$, $P_2$, and $P_3$. \end{proof} This can be used to prove an interesting result about the $2$-divisibility of the values assumed by $W_d(a)$. \begin{lemma}\label{Divisibility} Suppose that $W_d(a)$ takes precisely three values $A$, $B$, and $C$ for $a \in \GF(2^m)^*$. If all three values are non-zero, then $2^{m+1} \mid A B$. \end{lemma} \begin{proof} From Proposition \ref{Counts} we have \begin{equation}\label{Vequation} 2^{2 m} |V| = 2^{2 m} (A+B+C) - 2^m(A B + B C + C A) + (2^m-1) A B C, \end{equation} where $V$ is the set of roots of $1+x^d+(1+x)^d$ in $\GF(2^m)$. Suppose that $A,B,C\not=0$; then Lemma \ref{Magnitude} shows that $A,B,C\not\equiv 0 \pmod{2^m}$.
(We clearly have a nontrivial decimation by Theorem \ref{HellesethsTheorem} since $W_d$ is three-valued on $\GF(2^m)^*$, and hence $C_d$ is three-valued.) Then the term $(2^m-1) A B C$ is divisible by fewer powers of $2$ than the other terms on the right-hand side of \eqref{Vequation}, so $2^{2 m} |V|$ and $A B C$ have exactly the same power of $2$ in their respective prime factorizations, and so $2^{2 m} | A B C$. Since $C\not\equiv 0 \pmod{2^m}$, this means that $2^{m+1} | A B$. \end{proof} Now we are ready to prove Theorem \ref{OurTheorem}. We assume that $C_d$ is three-valued and that none of these values is $-1$ in order to show a contradiction. Then $W_d(a)$ is three-valued for $a \in \GF(2^m)^*$ with the three nonzero values $A$, $B$, $C$. Note that Proposition \ref{Moments}\eqref{FirstMoment} shows that $$\sum_{a \in \GF(2^m)^*} W_d(a)=2^m,$$ so we cannot have $A,B,C < 0$. Furthermore, by parts \eqref{FirstMoment} and \eqref{SecondMoment} of the same proposition, $$\left(\sum_{a \in \GF(2^m)^*} W_d\right)^2 = 2^{2 m} = \sum_{a \in \GF(2^m)^*} W_d(a)^2,$$ so we cannot have $A,B,C > 0$ (indeed, if all three values were positive, then Lemma \ref{Magnitude} would give $\sum_{a} W_d(a)^2 < 2^m \sum_{a} W_d(a) = 2^{2m}$, a contradiction). Then without loss of generality, we may take $A < 0 < B$ and $C$ not between $A$ and $B$. Then by Proposition \ref{Counts}, the number $N_C$ of $a \in \GF(2^m)^*$ such that $W_d(a)=C$ is $$N_C = \frac{2^{2 m} - 2^m (A+B) + (2^m-1) A B}{(C-A)(C-B)}.$$ Since $C$ is not between $A$ and $B$, the denominator is positive, so $$2^{2 m} - 2^m (A+B) + (2^m-1) A B > 0.$$ We use Lemma \ref{Magnitude} and the fact that $A < 0$ and $B > 0$ to see that $$2^{2 m} - 2^m (-(2^m-1)+1) + (2^m-1) A B > 0,$$ so that $A B > -2^{m+1}$. But by Lemma \ref{Divisibility} and the fact that $A < 0 < B$, we have that $A B \leq -2^{m+1}$, which gives the contradiction that completes the proof of Theorem \ref{OurTheorem}. \section*{Acknowledgements} The author gives warm thanks to Robert Calderbank and Jonathan Jedwab. He thanks Robert Calderbank for introducing him to this problem, for stimulating discussions, and for unflagging encouragement. Jonathan Jedwab has also provided great encouragement, and must also be thanked for drawing the author's attention to \cite{Feng} when it appeared.
\section{Introduction} The sphere packing problem in $\mathbb{R}^n$ is concerned with maximizing the proportion of Euclidean space covered by a set of balls of equal radius and disjoint interiors. We will mostly be concerned with the \emph{lattice} sphere packing problem, where the balls are required to be centered at points on an $n$-dimensional lattice $\Lambda$. The proportion achieved by a particular lattice, called the packing density of $\Lambda$, is then given by $$\Delta(\Lambda):=\frac{\mathrm{Vol}(\mathbb{B}_n(\lambda_1(\Lambda)))}{2^n\mathrm{Vol}(\Lambda)},$$ where $\lambda_1(\Lambda)$ denotes the shortest vector length in $\Lambda$, $\mathbb{B}_n(r)$ a ball of radius $r$ and $\mathrm{Vol}(\Lambda)$ denotes the covolume of the lattice. We also denote by $\Delta_n$ the supremum of lattice packing densities that can be achieved in $n$ dimensions. Its value is only known in a handful of dimensions, see for instance the summary \cite[1.5.]{Splag} as well as \cite{CohnKumar}. The density is achieved by highly symmetric lattices such as root lattices or the Leech lattice. Owing to celebrated results \cite{Hales2005APO,Marynadim8,Marynadim24}, some of these are even known to solve the general sphere packing problem. For arbitrary dimensions, the best known upper and lower bounds on $\Delta_n$ are however exponentially far apart as $n$ increases (see e.g., the survey article \cite{Cohn2016ACB} for more background). In this article we shall be concerned with lower bounds as $n$ increases and with giving effective constructions of lattices approaching these bounds.\par The classical Minkowski--Hlawka theorem \cite{Hlawka1943} states that $\Delta_n\geq 2\frac{\zeta(n)}{2^n}$, a bound which Rogers \cite{RogersExistence(Annals)} later improved by a linear factor to $\Delta_n\geq \frac{cn}{2^n}$ for $c=2/e\approx 0.74$. The constant was subsequently sharpened to $c=1.68$ by Davenport-Rogers \cite{DavenportRogers} and $c=2$ by Ball \cite{Ballbound} for all $n$. More recently, Vance \cite{VanceImprovedBounds} showed, using lattices which are modules over the Hurwitz integers, that one may take $c=24/e\approx 8.83$ and Venkatesh \cite{VenkateshBounds} showed that for $n$ large enough one may take $c=65963$. Moreover, by considering lattices from maximal orders in cyclotomic fields, Venkatesh was able to achieve for infinitely many dimensions the improvement $\Delta_n\geq \frac{n\log\log n}{2^{n+1}}$. The first author \cite{gargava2021lattice} then extended such results to lattices coming from orders $\mathcal{O}$ in arbitrary $\mathbb{Q}$-division algebras. This was achieved by proving a Siegel mean value theorem (see \cite{SiegelMVT,gargava2021lattice}) in this setting and exploiting the additional symmetries of the lattices under the group of finite-order units in $\mathcal{O}^\times$ to obtain dense packings. In particular, new sequences of dimensions such that $\Delta_n\geq \frac{c_1\cdot n(\log\log n)^{c_2}}{2^n}$ for constants $c_1,c_2>0$ are uncovered. Lattices provide an important tool for coding, for instance over the additive white Gaussian noise (AWGN) channel. For such applications it is often desirable to have lattices that are ``good'' in the sense of achieving high packing densities (see e.g., \cite{Loeliger97averagingbounds}). The sphere packing problem is moreover crucially connected to the optimization of code parameters and to energy minimization (see e.g., \cite{Splag,UniversalOpt}).
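For orientation, the following standalone Python snippet (not part of the mathematical development) tabulates the lower bounds just mentioned for a few dimensions; note that Vance's bound requires $4\mid n$ and Venkatesh's bound only holds along special sequences of dimensions:
\begin{verbatim}
import math

def zeta(s, terms=200):                  # partial sum; ample for s >= 8
    return sum(k ** float(-s) for k in range(1, terms))

for n in (8, 24, 64, 128):
    mh = 2 * zeta(n) / 2.0 ** n               # Minkowski--Hlawka
    ball = 2.0 * n / 2.0 ** n                 # Ball, c = 2
    vance = (24 / math.e) * n / 2.0 ** n      # Vance (needs 4 | n)
    venk = n * math.log(math.log(n)) / 2.0 ** (n + 1)  # Venkatesh, special n
    print(n, mh, ball, vance, venk)
\end{verbatim}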
However, despite having Minkowski's lower bound for over a century, producing explicit families of lattices that achieve these asymptotic bounds in less than astronomical running time has proved elusive. Currently known polynomial time algorithms produce lattices whose densities are exponentially worse than these bounds \cite{litsyn1987constructive}. In this paper, we make the currently best known existential results for $\mathbb{Q}$-division algebras effective by exhibiting finite sets of lattices which for large enough dimension must contain a lattice approaching the non-constructive lower bounds stated above. \par Indeed, for orders $\mathcal{O}$ in a $\mathbb{Q}$-division algebra, we consider for suitable primes $p$ and for $t\geq 2$ the reduction map $\phi_p:\mathcal{O}^t\to (\mathcal{O}/p\mathcal{O})^t$ and may identify the quotient with a product of matrix rings over a finite field $\mathbb{F}_q$. The sets of lattices $\mathbb{L}_p$ we consider are then re-scaled pre-images via $\phi_p$ of codes in $(\mathcal{O}/p\mathcal{O})^t$ of a certain fixed $\mathbb{F}_q$-dimension. Our first main result, Theorem \ref{thm:specificaverage}, is in effect a Siegel mean value theorem for these sets of lifts of codes, valid for general finite dimensional $\mathbb{Q}$-division algebras. We refer the reader to Section \ref{sec:three} for detailed statements, whereas some useful preliminary results on lattices from division algebras are established in Section \ref{sec:two}.\par In Section \ref{sec:four}, the extra symmetries of these lattices under finite subgroups of $\mathcal{O}^\times$ are exploited to obtain (see Theorem \ref{thm:improvedbounds}): \begin{theorem}\label{thm:mainintro} Let $A$ be a central simple division algebra over a number field $K$ with ring of integers $\mathcal{O}_K$. Let $\mathcal{O}$ be an $\mathcal{O}_K$-order in $A$. Let $n^2=[A:K]$, $m=[K:\mathbb{Q}]$ and let $t\geq 2$ be a positive integer. Let $G_0$ be a fixed finite subgroup of $\mathcal{O}^\times$. Then there exists a lattice $\Lambda$ in dimension $n^2mt$ achieving $$\Delta(\Lambda)\geq\frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}.$$ Moreover, there exists for any $\varepsilon>0$ an $\mathcal{O}$-lattice $\Lambda_\varepsilon$ in dimension $n^2mt$ of packing density $$\Delta(\Lambda_\varepsilon)\geq(1-\varepsilon)\cdot \frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}$$ which can be constructed. Indeed, $\Lambda_\varepsilon$ is obtained by applying Proposition \ref{prop:productminima} to a suitable sublattice of $\mathcal{O}^t$, namely the pre-image of a code under reduction modulo a prime $\mathfrak{p}$ of $\mathcal{O}_K$ of large enough norm. The code in question is isomorphic to $k$ copies of simple left $\mathcal{O}/\mathfrak{p} \mathcal{O}$-modules for some $nt-t<k<nt$. \end{theorem} Note that Proposition \ref{prop:productminima} mentioned in the theorem is a version of a lemma of Minkowski extended to the division algebra setting. The theorem above is derived from Theorem \ref{thm:specificaverage} by mimicking an approach of Rogers \cite{RogersExistence(Annals)}, later used by Vance \cite{VanceImprovedBounds} and Campello \cite{CampelloRandom}. In this way, we combine nearly all of these existence results and generalize them in one theorem. \par In order to obtain the densest packings asymptotically, one therefore seeks families of orders $\mathcal{O}$ with large finite unit groups $G_0\subset \mathcal{O}^\times$.
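The lifting construction underlying these sets of lattices can be illustrated in the simplest commutative case $\mathcal{O}=\mathbb{Z}$ (and ignoring the re-scaling), where the reduction map is $\phi_p:\mathbb{Z}^n\to\mathbb{F}_p^n$ and the pre-image of a linear code is a lattice (Construction A). A standalone Python sketch with a small placeholder code over $\mathbb{F}_3$:
\begin{verbatim}
import itertools
import math
import numpy as np

p, n = 3, 4
G = np.array([[1, 0, 1, 1],          # generator matrix (systematic form)
              [0, 1, 1, 2]])         # of a code C of F_3-dimension 2 in F_3^4

# basis of the lift {x in Z^4 : x mod 3 in C}: the rows of G together with
# 3*e_3 and 3*e_4 (this works because G = [I | A] is systematic)
B = np.vstack([G, p * np.eye(n, dtype=int)[2:]])
covol = abs(round(np.linalg.det(B)))     # = p^(n - dim C) = 9

# shortest vector by brute force over small coefficients (fine for this toy)
pts = (np.dot(c, B) for c in itertools.product(range(-3, 4), repeat=n)
       if any(c))
lam1 = min(float(np.linalg.norm(v)) for v in pts)
density = math.pi ** (n / 2) / math.gamma(n / 2 + 1) * (lam1 / 2) ** n / covol
print(lam1, density)                     # sqrt(3) and pi^2/32 ~ 0.308
\end{verbatim}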
Building on Amitsur's classification \cite{Amitsur1955FiniteSO} and following \cite{gargava2021lattice}, we give examples of such families of orders with large finite unit groups. For instance, one may consider quaternion algebras over cyclotomic fields and hope to combine the improvements over the Minkowski-Hlawka bounds obtained by Vance and Venkatesh. However, due to a parity condition on the dimension, this is not quite the case and one obtains asymptotic lower bounds $$\Delta_n\geq \frac{3\cdot n(\log\log n)^{7/24}\cdot(1+o(1))}{2^n}.$$ Still, the lower bound obtained exceeds the lower bound from cyclotomic fields $n\log\log n\cdot 2^{-(n+1)}$ in less than astronomical dimensions due to the improved constant. Moreover, one can exhibit lattices from non-commutative rings that achieve the same asymptotic density as the cyclotomic lattices (see Proposition \ref{prop:loglogimprovement}). \par Finally, in Section \ref{sec:five} we establish effectivity by giving lower bounds on the norm of the prime $\mathfrak{p}$ so that packing densities such as in Theorem \ref{thm:mainintro} can be achieved, both by varying the division algebra and its dimension (Theorem \ref{thm:effective}) and by varying the rank of the $\mathcal{O}$-lattices to obtain an effective version of Vance's results (Proposition \ref{prop:effectiveVance}). As an application, we obtain in Proposition \ref{prop:cycloquaternionseffective}: \begin{prop} Let $m_k=\prod_{\substack{p\leq k \text{ prime}\\2\nmid\ord_2p}}p$ and set $n_k:=8\varphi(m_k)$. Then for any $\varepsilon>0$ there is an effective constant $c_\varepsilon$ such that for $k>c_\varepsilon$ a lattice $\Lambda$ in dimension $n_k$ with density $$\Delta(\Lambda)\geq (1-\varepsilon)\frac{24\cdot m_k}{2^{n_k}}$$ can be constructed in $e^{4.5\cdot n_k\log(n_k)(1+o(1))}$ binary operations. This construction leads to the asymptotic density of $\Delta(\Lambda)\geq (1-e^{-n_k})\frac{3\cdot n_k(\log\log n_k)^{7/24}}{2^{n_k}}$. \end{prop} We ought to stress that such effective lower bounds on the density are not the first of their kind but that there is a rather rich history of such results. Rush \cite{Rush1989}, building on work with Sloane \cite{RushnSloane}, recovered the Minkowski-Hlawka bound via coding-theoretic results such as the Gilbert--Varshamov bound and by lifting codes via the Leech-Sloane Construction A (see \cite[Chapter 5]{Splag}). The connection between random coding and such averaging results was made further explicit by Loeliger \cite{Loeliger97averagingbounds}. This leads to families of approximate size $e^{n^2\log n}$ in which to search for lattices achieving the Minkowski-Hlawka bound. Gaborit and Z\'emor \cite{GaboritZemor2007} exploited additional structures to reduce the family size to $e^{n\log n}$. Finally, Moustrou \cite{MoustrouCodes} used a similar approach for cyclotomic lattices to obtain an effective version of Venkatesh's result. This approach was further formalized by Campello \cite{CampelloRandom}, where an example of such results for quaternion algebras is also mentioned. Our work thus owes a lot to these existing constructions. In particular, our approach is chiefly based on Moustrou's and Campello's work \cite{MoustrouCodes,CampelloRandom} and extends the scope of their results to division algebras, allowing symmetries from arbitrarily large non-commutative finite groups. We also note that the utility of codes from division rings is well-studied by coding theorists, see for example \cite{Berhuy2013AnIT, DucoatOggier, Vehkalahti2021}.
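The dimensions $n_k$ appearing in the proposition are easily computed; the following standalone Python sketch lists the primes $p\leq k$ for which the multiplicative order of $2$ modulo $p$ is odd (our reading of the condition $2\nmid\ord_2p$), together with the resulting $m_k$ and $n_k$:
\begin{verbatim}
def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def mult_order(a, p):             # multiplicative order of a modulo p
    o, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        o += 1
    return o

k = 50
primes = [p for p in range(3, k + 1)
          if is_prime(p) and mult_order(2, p) % 2 == 1]
m_k, phi = 1, 1
for p in primes:
    m_k *= p
    phi *= p - 1                  # m_k is squarefree, so phi(m_k) = prod (p-1)
n_k = 8 * phi
print(primes, m_k, n_k)           # [7, 23, 31, 47], 234577, 1457280
\end{verbatim}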
\par We hope that this article provides a useful addition to both the coding-theoretic and mathematical literature. The effective results we arrive at in Section \ref{sec:five} typically have a complexity of $e^{C\cdot n\log n(1+o(1))}$, which is similar to \cite[Theorem 1]{MoustrouCodes}. However, the effective version of Vance's result (see Corollary \ref{cor:computationalVance}) has complexity $e^{1/4\cdot n^2\log n(1+o(1))}$ and it should be similar for other constructions obtained by increasing the $\mathcal{O}$-rank of the lattices.\par The running times correspond to an exhaustive search through all the finitely many lattice candidates and it would be interesting to examine if one can further reduce the complexity of this search. It should be remarked that our results can still be used to quickly generate one of these random lattices in high dimensions, which have prescribed symmetries and are expected to have long shortest vectors. Furthermore, our results hint towards the algebraic structures that one might look for in order to construct explicit families of lattices that are asymptotically ``good''. \section{Preliminaries on division rings}\label{sec:two} In this section, we recall some definitions and results on central simple algebras and in particular division rings. The primary reference is Reiner's book \cite{reiner2003maximal}. Let $\mathcal{O}_K$ denote a Dedekind ring with quotient field $K$ and let $A$ denote a separable $K$-algebra. \begin{definition} An $\mathcal{O}_K$-order in $A$ is a subring $\mathcal{O}$ of $A$ having the same identity element and such that $\mathcal{O}$ is a \emph{full} $\mathcal{O}_K$-lattice in $A$, i.e. $\mathcal{O}$ is a finitely generated $\mathcal{O}_K$-submodule of $A$ such that $K\cdot \mathcal{O}=A$. \end{definition} The existence of $\mathcal{O}_K$-orders and maximal orders is easily shown. From now on let $\mathcal{O}$ denote such an order. We first review some results about ideals in $\mathcal{O}$. Note that we shall typically state results for general $\mathcal{O}_K$-orders, although simply dealing with maximal orders suffices for our applications. \subsection{Prime ideals} \begin{defn} A prime ideal of $\mathcal{O}$ is a proper two-sided ideal $\mathfrak{p}$ in $\mathcal{O}$ such that $K\cdot \mathfrak{p} =A$ and such that for every pair of two-sided ideals $S,T$ in $\mathcal{O}$, we have that $S\cdot T\subset \mathfrak{p}$ implies $S\subset \mathfrak{p}$ or $T\subset \mathfrak{p}$. \end{defn} For a prime $p$ of $\mathcal{O}_K$ we shall denote by $\mathcal{O}_p,A_p$ the localizations at $p$ of the $\mathcal{O}_K$-order $\mathcal{O}$ and of $A$ and by $\hat{\mathcal{O}}_p,\hat{A_p}$ the respective completions. Finally let $\mathrm{rad}(R)$ denote the Jacobson radical of a ring $R$. It is easy to obtain (see \cite[Thm 22.3,22.4]{reiner2003maximal}) the characterization: \begin{theorem}\label{thm:primecorrespondence} The prime ideals of an $\mathcal{O}_K$-order $\mathcal{O}$ coincide with the maximal two-sided ideals of $\mathcal{O}$. If $\mathfrak{p}$ is a prime ideal of $\mathcal{O}$, then $p=\mathfrak{p}\cap \mathcal{O}_K$ is a non-zero prime of $\mathcal{O}_K$, and $\overline{\mathcal{O}}:=\mathcal{O}/\mathfrak{p}$ is a finite dimensional simple algebra over the residue field $\mathcal{O}_K/p$.
Moreover, when $A$ is a central simple $K$-algebra, there is a bijection $\mathfrak{p}\leftrightarrow p$ between the set of primes of $\mathcal{O}$ and of $\mathcal{O}_K$, given by $$p=\mathcal{O}_K\cap \mathfrak{p}\text{ and }\mathfrak{p}=\mathcal{O}\cap \mathrm{rad}(\mathcal{O}_p).$$ \end{theorem} We note that when $A$ is a central simple $K$-algebra, the quotient $\mathcal{O}/\mathfrak{p}$ is a simple Artin ring and hence a ring of matrices over a division ring. In particular, when $\mathcal{O}_K/p$ is a finite field, the quotient $\mathcal{O}/\mathfrak{p}$ is isomorphic to a ring of matrices over a finite field. \par We now summarize the behavior of $\mathcal{O}$ and $A$ under localization as well as the splitting behavior for central simple algebras: \begin{theorem} Let $\mathcal{O}$ be a maximal order in a central simple $K$-algebra $A$. Let $p$ denote a prime of $\mathcal{O}_K$. Then the completion $\hat{A_p}$ is a central simple $\hat{K}_p$-algebra and $\hat{\mathcal{O}}_p$ is a maximal order. Moreover: \begin{enumerate} \item For almost every prime $p$ of $\mathcal{O}_K$, we have that $$\hat{A_p}\cong M_n(\hat{K}_p)$$ with $n^2=[A:K]$ (\emph{split} or \emph{unramified} case). Moreover $p$ is split if and only if the corresponding prime ideal $\mathfrak{p}$ of $\mathcal{O}$ as in Theorem \ref{thm:primecorrespondence} is just $p\mathcal{O}$. \\ \item The order $\Lambda=M_n(\hat{\mathcal{O}}_{K_p})$ is a maximal $\hat{\mathcal{O}}_{K_p}$-order in $M_n(\hat{K}_p)$ having a unique maximal two-sided ideal $\pi_{\hat{K}_p}\Lambda$, where $\pi_{\hat{K}_p}$ is a prime element of the discrete valuation ring $\hat{\mathcal{O}}_{K_p}$. The powers $$(\pi_{\hat{K}_p}\Lambda)^t=\pi_{\hat{K}_p}^t\cdot\Lambda \text{ for }t=0,1,2,\ldots$$ exhaust all the non-zero two-sided ideals of $\Lambda$ and any maximal $\hat{\mathcal{O}}_{K_p}$-order is of the form $u\Lambda u^{-1}$ for $u\in \mathrm{GL}_n(\hat{K}_p)$. \end{enumerate} \end{theorem} \begin{proof} These are well-known results, see e.g. \cite[Theorems 17.3, 32.1]{reiner2003maximal}. \end{proof} In what follows, we will thus use the same notation $\mathfrak{p}$ for the primes of $\mathcal{O}_K$ and the corresponding prime in $\mathcal{O}$ in the split case. \par We can now establish the following lemma which will provide the necessary reduction maps in order to lift codes from characteristic $p$ to division rings: \begin{lemma}\label{lemma:primesexistence} Let $K$ be a number field and $A$ a division algebra with center $K$ so that $[A:K]=n^2$. Let $\mathcal{O}$ be an $\mathcal{O}_K$-order in $A$. Then for all but finitely many primes $\mathfrak{p}$ of $\mathcal{O}_K$, the quotient $\mathcal{O}/\mathfrak{p}\mathcal{O}$ is isomorphic to $M_n(\mathbb{F}_q)$, where $\mathcal{O}_K/\mathfrak{p}\mathcal{O}_K\cong \mathbb{F}_q$. \end{lemma} \begin{proof} This follows from our previous considerations and in particular the fact that only finitely many primes of $K$ are ramified in $A$. Indeed, first assume that $\mathcal{O}$ is a maximal order. Then we know that the completion satisfies $$\hat{A}_\mathfrak{p}=A\otimes_K\hat{K}_\mathfrak{p}\cong M_n(\hat{K}_\mathfrak{p})$$ and moreover that the order $\hat{\mathcal{O}}_\mathfrak{p}$ in the local case is maximal and conjugate to $M_n(\hat{\mathcal{O}}_{K_\mathfrak{p}})$.
In particular, we then have $$\mathcal{O}/\mathfrak{p}\mathcal{O}\cong M_n(\hat{\mathcal{O}}_{K_\mathfrak{p}})/\mathrm{rad}(M_n(\hat{\mathcal{O}}_{K_\mathfrak{p}}))\cong M_n(\hat{\mathcal{O}}_{K_\mathfrak{p}}/\mathfrak{p}\hat{\mathcal{O}}_{K_\mathfrak{p}})\cong M_n(\mathbb{F}_q).$$ When the order is not maximal, the same holds if one additionally avoids primes dividing the index of $\mathcal{O}$ in a maximal order since their localizations will coincide at those primes. \end{proof} If in the above one wishes to ensure that $q=p$ is prime, this can typically be achieved for infinitely many primes by imposing that $p$ splits completely in $\mathcal{O}_K$ and by invoking the \v Cebotarev density theorem.\par \subsection{Lattices from orders} The central simple algebras $A$ are equipped with natural embeddings $A\hookrightarrow A\otimes_\mathbb{Q}\mathbb{R}$. The latter space being a semisimple $\mathbb{R}$-algebra, it can be identified with a product of matrix rings over $\mathbb{R}, \mathbb{C}$ and $\mathbb{H}$ respectively, depending on the signature of $K$ and the splitting behavior of $A$ at infinity. An $\mathcal{O}_K$-order then embeds as a lattice into this space. We establish some properties of these lattices in the next two subsections. \begin{lemma} Any semisimple $\mathbb{R}$-algebra $A$ admits an involution $(\ )^{*} : A \rightarrow A$ such that the following conditions are satisfied. \begin{itemize} \item For any $a,b \in A$, we have $(a b) ^{*} = b^{*} a^{*}$. \item The trace yields a positive definite quadratic form $a \mapsto \T(a^{*} a )$ on $A$, meaning that $\T(a^{*} a)$ is always non-negative and is zero only when $a =0$. Moreover, this quadratic form induces the inner product $\langle x, y\rangle =\T(x^* y)$ on $A$. \end{itemize} \label{le:involution} In particular, when $A$ is a division algebra over $\mathbb{Q}$, such an involution exists on $A \otimes_{\mathbb{Q}} \mathbb{R}$. \end{lemma} \begin{proof} See e.g. \cite[Corollary 35]{gargava2021lattice}. \end{proof} We will denote $A \otimes_{\mathbb{Q}} \mathbb{R}$ by $A_{\mathbb{R}}$. Involutions with the properties as described in Lemma \ref{le:involution} will henceforth be called ``positive involutions''. An element $a \in A_{\mathbb{R}}$ such that $a^*=a$ and $x \mapsto \T(x^* ax)$ is a positive definite real quadratic form on $A_{\mathbb{R}}$ is called symmetric and positive definite. For instance, for any unit $u$ the element $u^*u$ is symmetric positive definite. \par In practice, we will be considering $t\geq 2$ copies of orders $\mathcal{O}$ in division algebras $A$ with center a number field $K$ and our lattices will be $\mathcal{O}^{t} \subseteq A^{t} \hookrightarrow (A_\mathbb{R})^{t}$. We will endow the space $A_\mathbb{R}^t$ with the norm induced by the following quadratic form: \begin{align} A_{\mathbb{R}}^{t} & \rightarrow \mathbb{R} \\ (x_1,x_2,\dots,x_t) & \mapsto \sum_{i=1}^{t} \T(x_{i}^{*}ax_i),\label{eq:eva_norm} \end{align} where $(-)^*$ is a positive involution as defined above and $a \in A_{\mathbb{R}}$ is a symmetric positive definite element to be fixed later. This makes $A_{\mathbb{R}}^t$ into a Euclidean space of real dimension $mn^2t$ and $\mathcal{O}^t$ embeds as a lattice into that space. \subsection{Norm-trace inequality} Recall that for a finite dimensional algebra $A$ over any field $k$ we have norm and trace functions $N_{A/k},T_{A/k}:A \rightarrow k$ given by the determinant and trace of the left multiplication maps respectively.
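These maps can be made concrete for the Hamilton quaternions $A=\left(\frac{-1,-1}{\mathbb{Q}}\right)$, where $[A:K]=4$ with $K=\mathbb{Q}$: the following standalone Python sketch builds the left multiplication matrix on the basis $(1,i,j,k)$ and checks that $N_{A/\mathbb{Q}}$ is the square of the reduced norm and $T_{A/\mathbb{Q}}$ is twice the reduced trace, anticipating the lemma below:
\begin{verbatim}
import numpy as np

def left_mult(a, b, c, d):
    # matrix of left multiplication by x = a + b i + c j + d k on (1, i, j, k)
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]], dtype=float)

a, b, c, d = 2.0, 1.0, -3.0, 0.5
L = left_mult(a, b, c, d)
nrd = a * a + b * b + c * c + d * d    # reduced norm of x
trd = 2 * a                            # reduced trace of x
print(np.isclose(np.linalg.det(L), nrd ** 2))   # N_{A/Q}(x) = nrd(x)^2
print(np.isclose(np.trace(L), 2 * trd))         # T_{A/Q}(x) = 2 trd(x)
\end{verbatim}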
It will be useful to establish some of their properties; for instance we later use the norm-trace inequality to give lower bounds on the Euclidean norm of certain lattice points via Lemma \ref{lemma:lowerboundbadpoints}. \begin{lemma}\label{le:normtrace} Consider a finite dimensional semisimple $\mathbb{R}$-algebra $A_\mathbb{R}$ together with a positive involution $(\ )^{*}$. Let $a \in A_\mathbb{R}$ be a symmetric positive definite element and let $d = \dim_{\mathbb{R}} A_\mathbb{R}$. Then $\N(a) > 0$, $\T(a) > 0$ and \begin{align} \frac{1}{d}\T(a) \ge \N(a)^{\frac{1}{d}}. \end{align} \end{lemma} \begin{proof} See \cite[Lemma 40]{gargava2021lattice}. \end{proof} \begin{corollary} In the same setting as above, we have for any $x \in A_\mathbb{R}$ and for $a \in A_{\mathbb{R}}$ symmetric positive definite that \begin{equation} \frac{1}{d} \T(x^{*} a x ) \ge \N(x)^{\frac{2}{d}} \N(a)^{\frac{1}{d}}. \end{equation} \begin{proof} Note that $x^* a x$ is symmetric and positive definite when $x$ is a unit in $A_\mathbb{R}$. If $x$ is not a unit, then the right-hand side is trivial. \end{proof} \end{corollary} We remark that the inequalities above are sharp, equality being achieved by appropriate scalar matrices. For central simple $K$-algebras $A$, one can furthermore define the ``reduced norm'' and the ``reduced trace'' which we denote by $\nr_{A/K},\mathrm{Trace}_{A/K}:A \rightarrow K$. Definitions and properties can be found in \cite[Section 9]{reiner2003maximal}. The following definition and ensuing lemma can also be found in \cite[9.13-14]{reiner2003maximal}. \begin{definition} Suppose $A$ is a central simple $L$-algebra and $K \subseteq L$ is a subfield such that $[L:K] < \infty$. Then for each $a \in A$, we define the ``relative reduced trace'' $\mathrm{Trace}_{A/K}:A \rightarrow K$ and ``relative reduced norm'' $\nr_{A/K}: A \rightarrow K$ as \begin{align} \mathrm{Trace}_{A/K} = \T_{L/K} \circ \mathrm{Trace}_{A/L},\\ \nr_{A/K} = \N_{L/K} \circ \nr_{A/L}. \end{align} \end{definition} \begin{lemma} The following holds for a central simple $L$-algebra $A$ and a subfield $K \subseteq L$ with $[L:K] < \infty$ for any $a \in A$: \begin{align} \T_{A/K}(a) = \sqrt{[A:L]} \mathrm{Trace}_{A/K}, \\ \N_{A/K}(a) = \nr_{A/K}(a)^{\sqrt{[A:L]}}. \end{align} \end{lemma} We may now establish the following lemmas: \begin{lemma} Let $A$ be a division algebra over $\mathbb{Q}$ whose center is $K$ and $[A:K]= n^{2}$. Let $\mathcal{O} \subseteq A$ be a maximal order in the division algebra. Let $\mathfrak{p}$ be a prime ideal of $\mathcal{O}_K$ for which $A$ splits and let $\mathbb{F}_q = \mathcal{O}_{K}/\mathfrak{p}$ denote the residue field. Then the following diagram commutes: \begin{equation}\label{nrddiagram} \begin{tikzcd} \mathcal{O} \arrow[d, "\phi_p"] \arrow[r, "\nr_{A/K}"] & \mathcal{O}_K \arrow[d, "\pi_\mathfrak{p}"] \\ \mathcal{O}/\mathfrak{p}\mathcal{O}\cong M_n(\mathbb{F}_q) \arrow[r, "\det"] & \mathbb{F}_q, \end{tikzcd} \end{equation} where the vertical maps designate reduction modulo $\mathfrak{p}$. \label{lemma:nrdanddet} \end{lemma} \begin{proof} First note that $\nr_{A/K}(a) \in \mathcal{O}_{K}$ for each $a \in \mathcal{O}$ since $\nr_{A/K}(a)\in K$ and $\mathcal{O}_K$ is integrally closed (see also \cite[(10.1)]{reiner2003maximal}). The reduced norm $\nr_{A/K}(a)$ may be computed as the determinant of the corresponding matrix in $M_n(E)$, where $E$ is a splitting field for $A$ (and is easily seen to be independent of that choice). 
We may in particular choose $E$ to be the $\mathfrak{p}$-adic completion $\hat{K}_\mathfrak{p}$ since by our assumption we have $A\otimes_K \hat{K}_\mathfrak{p}\cong M_n(\hat{K}_\mathfrak{p})$. Therefore, in this case commutativity of the following diagram follows immediately: \begin{equation}\label{nrddiagram2} \begin{tikzcd} M_n(\mathcal{O}_{\hat{K}_\mathfrak{p}}) \arrow[d, "\tilde{\phi_p}"] \arrow[r, "\det"] & \mathcal{O}_{\hat{K}_\mathfrak{p}} \arrow[d, "\pi_\mathfrak{p}"] \\ \mathcal{O}_\mathfrak{p}/\mathfrak{p}\mathcal{O}_\mathfrak{p}\cong M_n(\mathbb{F}_q) \arrow[r, "\det"] & \mathbb{F}_q, \end{tikzcd} \end{equation} where $\mathcal{O}_p$ is the image of $\mathcal{O}$ in $A\otimes_KK_\mathfrak{p}$. Now it is not necessarily the case that $\mathcal{O}_\mathfrak{p}=M_n(\mathcal{O}_{\hat{K}_\mathfrak{p}})$. However, in the complete local case $M_n(\mathcal{O}_{\hat{K}_\mathfrak{p}})$ is a maximal $\mathcal{O}_{\hat{K}_\mathfrak{p}}$-order and moreover all the maximal orders are conjugate (see \cite[Theorem 17.3]{reiner2003maximal}). Since the reduced norm is invariant under conjugation we may assume $\nr_{A/K}$ on $\mathcal{O}$ factors through $M_n(\mathcal{O}_{\hat{K}_\mathfrak{p}})$. Similarly, we can assume $\phi_p$ factors through $M_n(\mathcal{O}_{\hat{K}_\mathfrak{p}})$. The commutativity of diagram \eqref{nrddiagram} thus follows from that of \eqref{nrddiagram2}. \end{proof} \begin{lemma}\label{lemma:lowerboundbadpoints} Let $A$ be a $\mathbb{Q}$-division algebra with center $K$, let $\mathcal{O}$ denote a maximal order and let $\mathfrak{p}$ be a prime of $\mathcal{O}_K$ at which $A$ is split, so that we have a reduction map $$\phi_p:\mathcal{O}\to\mathcal{O}/\mathfrak{p}\mathcal{O}\cong M_n(\mathbb{F}_q),$$ with $n={\sqrt{[A:K]}}$. Let $(\ )^{*}: A_{\mathbb{R}} \rightarrow A_{\mathbb{R}}$ be a positive involution. If $x \in \mathcal{O} \setminus \{ 0\}$ (which we may identify with its image in $A_{\mathbb{R}}$) is such that $\phi_p(x)$ is a non-invertible matrix, then \begin{equation} \|x\| \ge \left( \sqrt{[A:\mathbb{Q}]} \N(a)^{\frac{1}{2[A:\mathbb{Q}]}} \right) q^{\frac{1}{\sqrt{[A:K]}[K:\mathbb{Q}]}} . \end{equation} where $a\in A_{\mathbb{R}}$ is symmetric positive definite and $\|x\|^2:=\T(x^*ax)$ on $A_{\mathbb{R}}$. \end{lemma} \begin{proof} We know from the norm-trace inequality that \begin{align} \tfrac{1}{[A:\mathbb{Q}]} \T(x^{*} x ) \ge \N(x)^{\frac{2}{{[A:\mathbb{Q}]}}} \N(a)^{\frac{1}{[A:\mathbb{Q}]}}. \end{align} Since $\det \circ \phi_p(x) = 0$, we get by Lemma \ref{lemma:nrdanddet} that $\mathfrak{p} \mid \nr_{A/K}(x) $ and hence \begin{align*} &\N_{K/\mathbb{Q}}(\mathfrak{p}) \mid \N_{K/\mathbb{Q}} \circ \nr_{A/K}(x) \\ \Rightarrow &N_{K/\mathbb{Q}}(\mathfrak{p}) \mid \nr_{A/\mathbb{Q}}(x)\\ \Rightarrow &N_{K/\mathbb{Q}}(\mathfrak{p})^{\sqrt{[A:K]}} \mid \N_{A/\mathbb{Q}}(x) \\ \end{align*} Since $\N_{A/\mathbb{Q}}(x)^2 \in \mathbb{Z}_{\ge 0}$ and $x \neq 0$, this gives us a lower bound on $\N_{A/\mathbb{Q}}(x)^2$. Thus we have that \begin{align} \tfrac{1}{[A:\mathbb{Q}]} \T(x^{*}a x ) \ge \N_{K/\mathbb{Q}}(\mathfrak{p})^{\frac{2\sqrt{[A:K]}}{{[A:\mathbb{Q}]}}} \N(a)^{\frac{1}{[A:\mathbb{Q}]}} . \end{align} which proves the claim given that $\N_{K/\mathbb{Q}}(\mathfrak{p})=\vert\mathcal{O}_K/\mathfrak{p}\vert=q$. 
\end{proof} We also record the following result: \begin{lemma}\label{lemma:packingradius} Let $A$ be a central simple division $K$-algebra for $K$ a number field and let $\mathcal{O}$ be a maximal $\mathcal{O}_K$-order which we identify with the corresponding lattice in $A_\mathbb{R}=A\otimes_\mathbb{Q}\mathbb{R}$. Then, with respect to any quadratic form $q_a(x)=\T(x^*ax)$ for a symmetric positive definite element $a \in A_\mathbb{R}$, one may define the shortest vector length $\lambda_{1,q_a}$, Hermite parameter $\gamma_{q_a}$ and covering radius $\tau_{q_a}$, and they are subject to the following: \begin{enumerate} \item The shortest vector length satisfies $\lambda_{1,q_a}(\mathcal{O})\geq \sqrt{[A:\mathbb{Q}]}\cdot \N(a)^{1/(2[A:\mathbb{Q}])}$. \item For any two-sided $\mathcal{O}$-ideal I (by which we mean a full $\mathcal{O}_K$-lattice in $\mathcal{O}$), the Hermite parameter satisfies $$\gamma_{q_a}(I)\geq \frac{[A:\mathbb{Q}]}{d(\mathcal{O}/\mathbb{Z})^{1/[A:\mathbb{Q}]}}$$ \item The covering radius satisfies $$\tau_{q_a}(\mathcal{O})\leq d(\mathcal{O}/\mathbb{Z})^{1/[A:\mathbb{Q}]}\cdot \left(\frac{\sqrt{[A:\mathbb{Q}]}}{2\pi}+\frac{3}{\pi}\right)\cdot \N(a)^{-1/(2[A:\mathbb{Q}])},$$ where $d(\mathcal{O}/\mathbb{Z})$ denotes the discriminant of $\mathcal{O}$ computed with respect to a $\mathbb{Z}$-basis. \end{enumerate} \end{lemma} \begin{proof}The first part follows trivially from the norm-trace inequality. For the second statement, we know $I$ also yields a lattice in $A_\mathbb{R}$ and, following \cite[Section 4]{BAYERFLUCKIGER2006305}, we set $\beta_{I,q_a}(x):=\frac{q_a(x)}{\det_{q_a}(I)^{1/[A:\mathbb{Q}]}}$, where by definition $\det_{q_a}(I)$ is the determinant of a matrix of $q_a$ in a $\mathbb{Z}$-basis of the ideal lattice $I$. Note that by definition $\gamma_{q_a}(I)=\min_{x\in I}\beta_{I,q_a}(x)$. By the norm-trace inequality, we have that $\N_{A/\mathbb{Q}}(x)^2\leq (q_a(x)/[A:\mathbb{Q}])^{[A:\mathbb{Q}]}\cdot \N(a)^{-1}$. Therefore, since the discriminant of $\mathcal{O}$ satisfies $d(\mathcal{O}/\mathbb{Z})=\det_{q_{1}}(\mathcal{O})=\N(a)^{-1}\cdot \det_{q_a}(\mathcal{O})$, we obtain the inequality: $$\N_{A/\mathbb{Q}}(x)^2\leq \left(\frac{\beta_{\mathcal{O},q_a}(x)}{[A:\mathbb{Q}]}\right)^{[A:\mathbb{Q}]}\cdot d(\mathcal{O}/\mathbb{Z})^2.$$ Moreover, letting $\mathcal{N}(I)$ denote the norm of an $\mathcal{O}$-ideal as in \cite[Section 24]{reiner2003maximal} and $\mathcal{N}_\mathbb{Q}(I)$ the number representing the fractional $\mathbb{Z}$-ideal generated by $\N_{A/\mathbb{Q}}(x)$ for $x\in I$ (see \cite[Theorem 24.9]{reiner2003maximal}), we have that $\det_{q_a}(I)=\det_{q_a}(\mathcal{O})\cdot \mathcal{N}_\mathbb{Q}(I)^2$. 
Putting these together we deduce: \begin{equation}\label{eq:betahermitebound} \beta_{I,q_a}(x)=\beta_{\mathcal{O},q_a}(x)\cdot \mathcal{N}_\mathbb{Q}(I)^{-2/[A:\mathbb{Q}]}\geq \frac{[A:\mathbb{Q}]}{d(\mathcal{O}/\mathbb{Z})^{1/[A:\mathbb{Q}]}}\cdot\left(\frac{\N_{A/\mathbb{Q}}(x)}{\mathcal{N}_\mathbb{Q}(I)}\right)^{2/[A:\mathbb{Q}]}\end{equation} Taking the minimum over $x\in I$ of equation \eqref{eq:betahermitebound} yields the result, since for any $x\in I$ we have that $\N_{A/\mathbb{Q}}(x)/\mathcal{N}_\mathbb{Q}(I)\geq 1$.\par For the third statement, we use a transference theorem: indeed, denoting by $\mathcal{O}^\sharp$ the dual lattice, we have that $$\lambda_{1,q_a}(\mathcal{O}^\sharp)\cdot \tau_{q_a}(\mathcal{O})\leq \frac{[A:\mathbb{Q}]}{2\pi}+\frac{3\sqrt{[A:\mathbb{Q}]}}{\pi}$$ by a refinement \cite[(1.9)]{Miller2019KissingNA} of a result of Banaszczyk \cite{Banaszczyk1993}. But under the choice of inner product $\langle x,y\rangle=\T(x^*ay)$ corresponding to $q_a$, the dual lattice $\mathcal{O}^\sharp$ is just $a^{-1}$ times the image under the involution $(-)^*$ of the inverse different $\mathfrak{D}(\mathcal{O}/\mathbb{Z})^{-1}=\{x\in A:\mathrm{Trace}_{A/\mathbb{Q}}(x\cdot \mathcal{O})\subset \mathbb{Z}\}$ and therefore has the same parameters. Given that $a^{-1}\mathfrak{D}(\mathcal{O}/\mathbb{Z})^{-1}$ is a two sided (non-integral) $\mathcal{O}$-ideal, by the second statement we deduce $$\lambda_{1,q_a}(\mathcal{O}^\sharp)=\lambda_{1,q_a}(a^{-1}\mathfrak{D}(\mathcal{O}/\mathbb{Z})^{-1})\geq \sqrt{[A:\mathbb{Q}]} \left(\frac{\det_{q_a}(a^{-1}\mathfrak{D}(\mathcal{O}/\mathbb{Z})^{-1})}{d(\mathcal{O}/\mathbb{Z})}\right)^{1/(2[A:\mathbb{Q}])}.$$ Since $\det_{q_a}(a^{-1}\mathfrak{D}(\mathcal{O}/\mathbb{Z})^{-1})=\det_{q_a}(\mathcal{O}^\sharp)=\det_{q_a}(\mathcal{O})^{-1}$, we obtain that $$\lambda_{1,q_a}(\mathcal{O}^\sharp)\geq\sqrt{[A:\mathbb{Q}]}\cdot d(\mathcal{O}/\mathbb{Z})^{-1/[A:\mathbb{Q}]}\cdot \N(a)^{1/(2[A:\mathbb{Q}])}.$$ The result follows by the transference theorem. \end{proof} We now have the tools to tackle in the next section the first main result of this paper. \section{A general averaging result for lifts}\label{sec:three} Again, let $A$ denote a central simple $K$-algebra for $K$ a number field and assume $A$ is a division ring. We set $n^2=[A:K]$ and $m=[K:\mathbb{Q}]$. Let $\mathcal{O}$ denote an order in $A$. The purpose of this section is to prove a discrete version of a Siegel mean value theorem (see \cite{SiegelMVT,VenkateshBounds,gargava2021lattice}) for $\mathcal{O}$-lattices obtained via lifts of codes in characteristic $p$ for increasingly large primes $p$. In order to facilitate discussion and comparison with previous results, we will state a more abstract and flexible version of the main averaging result and later explain the most convenient choices of parameters. \par Assume we are given for a fixed integer $t\geq 2$ a family of surjective homomorphisms $$\phi_p: \mathcal{O}^t \to \mathcal{R}_p^t,$$ indexed by primes $p$, where $\mathcal{R}_p$ is a finite $\mathbb{F}_p$-algebra of fixed $\mathbb{F}_p$-rank $d_\mathcal{R}$ and where the sizes of $p,\mathcal{R}_p$ are unbounded in the family. We also fix an embedding $i: A\hookrightarrow \mathbb{R}^{n^2m}$ which identifies $\mathcal{O}^t$ with a lattice in $\mathbb{R}^{n^2mt}$. 
Assume we are given subsets $U_p\subset \mathcal{R}_p^t$ and sets of $\mathcal{R}_p$- submodules $\mathcal{C}_k$ of $\mathcal{R}_p^t$ of fixed $\mathbb{F}_p$-dimension $d(k)< t\cdot d_\mathcal{R}$ (which we refer to as generalized $k$-dimensional codes) such that the following two conditions hold: \begin{enumerate} \item ($U$-balancedness): For any fixed $p$ in the family and $x\in U_p$, the number of $k$-dimensional codes in $ \mathcal{C}_k$ containing $x$ is constant (we denote this constant $L_{U_p}$). \item (non-degeneracy): There exists a constant $c_U>0$ and $s>\frac{d_\mathcal{R}\cdot t-\max_k d(k)}{mn^2t}$ such that $$\|i(x)\|\geq c_U\cdot p^s \text{ for any nonzero }x\in \phi_p^{-1}(\mathcal{R}_p^t\setminus U_p).$$ We also require the mild condition that $\lim_{p\to\infty}\frac{\vert U_p\vert}{\vert R_p\vert^t}=1$. \end{enumerate} We can then prove an averaging result for lifts in this general setup. Denote within this setup the set of scaled lattices $$\mathbb{L}_p=\{\beta\phi_p^{-1}(C):C\in \mathcal{C}_k\},$$ where $\beta$ is chosen such that all the lattices in $\mathbb{L}_p$ have volume $V:=\mathrm{Vol}(\mathcal{O}^t)$, ergo $\beta=(\frac{p^{d(k)}}{\vert R_p\vert^t})^{1/n^2mt}$. Finally, we define following \cite[Def 2]{CampelloRandom}: \begin{definition} A function $f:\mathbb{R}^d\to \mathbb{R}$ is called semi-admissible if $f$ is Riemann-integrable and there exist positive constants $b,\delta>0$ such that $$\vert f(\mathbf{x})\vert\leq\frac{b}{(1+\|\mathbf{x}\|)^{d+\delta}}\text{ for all }\mathbf{x}\in \mathbb{R}^d.$$ \end{definition} We then have the following: \begin{theorem}\label{thm:mainaverage} Let $\mathcal{C}_k$ be the set of codes of fixed $\mathbb{F}_p$-dimension $d(k)$ and let $f:\mathbb{R}^{n^2mt}\to \mathbb{R}$ be a semi-admissible function for $t\geq 2$. Under the setup and notations above, assume the $U$-balancedness and non-degeneracy conditions are satisfied. Then, provided $d_\mathcal{R}\cdot t>d(k)>t\cdot(d_\mathcal{R}-s(n^2m))$, we have: $$\lim_{p\to\infty}\mathbb{E}_{\mathbb{L}_p}\left(\sum_{x\in (\beta\phi_p^{-1}(C))'}f(i(x))\right)\leq (\zeta(n^2mt)\cdot V) ^{-1}\cdot \int_{\mathbb{R}^{n^2mt}}f(x)dx,$$ where $p$ ranges over the primes in the family and where for $\Lambda\in \mathbb{L}_p$ we denote by $\Lambda'$ the primitive vectors in $\Lambda$ (=shortest vectors of the lattice on the real line they span). \end{theorem} We note that we set up the non-degeneracy condition so that in the very least the largest dimension $d(k)$ possible for the appropriate notion of $k$-dimensional code which is strictly contained in $\mathcal{R}_p^t$ satisfies the condition $d_\mathcal{R}\cdot t>d(k)>t\cdot(d_\mathcal{R}-s(n^2m))$ and so the result is not vacuous. \begin{proof} We split the expected value of the lattice sum into two parts: First, we show that the expected value of the term $$\sum_{x\in (\beta\phi_p^{-1}(C))', \phi_p(x/\beta)\in (\mathcal{R}_p^t\setminus U_p)}f(i(x))=\sum_{x\in (\phi_p^{-1}(C))', \phi_p(x)\in (\mathcal{R}_p^t\setminus U_p)}f(\beta i(x))$$ tends to zero in absolute value as $p\to \infty$. By the non-degeneracy condition, we can bound \begin{equation}\label{eq:lowerboundnorminthm} \|\beta i(x)\|\geq \beta \cdot c_u\cdot p^s=c_u\cdot p^{(d(k)-d_\mathcal{R}\cdot t)/(mn^2t)}p^s \end{equation} which under our assumption on $d(k)$ becomes arbitrarily large as $p\to \infty$. 
Since $f$ decays rapidly at infinity we get for each individual lattice in $\mathbb{L}_p$ that the sum converges to $0$ as $p\to \infty$ by dominated convergence, and therefore idem for the average.\par The remaining terms can be bounded on average via $U$-balancedness. Indeed, let $g:R_p^t\to \mathbb{R}^+$ denote any function. We have that \begin{align*} \mathbb{E}_\mathcal{C}[\sum_{c\in \mathcal{C}\cap U_p}g(c)]&=\sum_{x\in U_p}\mathbb{E}_\mathcal{C}[g(x)\mathbf{1}_C(x)]\\ &=\sum_{x\in U_p} g(x) \frac{L_{U_p}}{\vert \mathcal{C}\vert}\\ &\leq \sum_{x\in U_p} g(x) \frac{p^{d(k)}}{\vert U_p \vert}, \end{align*} where we use $U$-balancedness as well as a counting argument for the inequality and where the expected value is taken as average over all codes in $\mathcal{C}$. We deduce the inequality $$ \mathbb{E}[\sum_{x\in(\phi_p^{-1}(C))',\phi_p(x)\in U_p}f(\beta i(x))]\leq\frac{p^{d(k)}}{\vert U_p \vert}\sum_{x\in (\mathcal{O}^t)'} f(\beta i(x)).$$ By the M\"obius inversion formula (and exchanging summation and limits by dominated convergence when $f$ does not have bounded support), the latter equals $$\sum_{r=1}^\infty \frac{\mu(r)}{r^{n^2mt}}\sum_{x\in \mathcal{O}^t\setminus \{0\}}\frac{p^{d(k)}}{\vert U_p \vert}r^{n^2mt}f(r\beta i(x)).$$ The result now follows: first, note that we have for fixed $r\in \mathbb{N}$ by approximation of the Riemann integral of $f$ that $$\lim_{p\to\infty}\sum_{x\in \mathcal{O}^t\setminus \{0\}}(r\beta)^{n^2mt}f(r\beta i(x))=V^{-1}\int_{\mathbb{R}^{n^2mt}}f(x)dx$$ since $\beta\to 0^+$ as $p$ becomes large. Moreover, by the second part of the non-degeneracy assumption the ratio $\frac{\vert U_p\vert}{\vert R_p\vert^t}\to 1$, so that $\frac{p^{d(k)}}{\vert U_p \vert}$ approaches $\beta^{n^2mt}$ as $p\to \infty$. Finally, switching the limit in $p$ and summation in $r$ is allowed by dominated convergence, as $f$ decays rapidly. \end{proof} We now turn to examples of such a setup. \begin{example} In the case where $n=1$ and $\mathcal{O}$ is just the ring of integers $\mathcal{O}_K$ of a number field $K$, we can for example take $\phi_p$ to be the reduction map modulo a prime $\mathfrak{p}\vert p$ of $\mathcal{O}_K$ for (the infinitely many) primes $p$ that split completely in $\mathcal{O}_K$. In this case, we get $\mathcal{R}_p=\mathbb{F}_p$ and take $U_p$ to be the complement in $\mathcal{R}^t$ of $(\mathbb{F}_p\setminus\mathbb{F}_p^\times)^t$. With the usual definition of $k$-dimensional codes as free rank $k$ $\mathbb{F}_p$-submodules of $\mathbb{F}_p^t$, the balancedness condition is satisfied (see e.g., \cite[Lemma 1]{Loeliger97averagingbounds}), and moreover the non-degeneracy result is straightforward in this case: nonzero elements $x\in\mathfrak{p} \mathcal{O}_K$ have algebraic norm $$N_{K/\mathbb{Q}}(x)\in p\mathbb{Z}\setminus \{0\},$$ but the norm is just the product of the $n$ embedded coordinates of $i(x)$, so by the geometric/arithmetic mean inequality we obtain $$\|i(x)\|\gg p^{1/n}$$ and non-degeneracy is satisfied. The condition on $k$ in Theorem \ref{thm:mainaverage} simply becomes $k>0$ and this recovers the construction of \cite{CampelloRandom} in the number field case. \end{example} \begin{example}\label{example:firstresult} Let now $n>1$ so that we are in the non-commutative situation. The most straightforward generalization to division algebras $A$ with center $K$ of the number field results is the following: consider primes $\mathfrak{p}\vert p$ of $\mathcal{O}_K$ that split $A$. 
Let $\mathbb{F}_q\cong \mathcal{O}_K/\mathfrak{p}$ denote the residue field. We then have an infinite family of ring isomorphisms $$\phi_p: \mathcal{O}/\mathfrak{p}\mathcal{O}\to M_n(\mathbb{F}_q)$$ for a maximal order $\mathcal{O}$ in $A$. We take $t$ copies of these to build our reduction map. In order to make sure the multiplicative structure is preserved as well, it makes sense to set $\mathcal{R}_p:=M_n(\mathbb{F}_q)$ and to consider codes which are free $\mathcal{R}_p$-submodules of $\mathcal{R}_p^t$ of rank $k$. It also seems natural to define $U_p$ to be the complement in $\mathcal{R}_p^t$ of $$(\mathcal{R}_p^t\setminus U_p):=((M_n(\mathbb{F}_q)\setminus \mathrm{GL}_n(\mathbb{F}_q))^t.$$ It then follows that the $U$-balancedness condition is again satisfied (see the proof of \cite[Lemma 2]{CampelloRandom}). We get a non-degeneracy result from Lemma \ref{lemma:lowerboundbadpoints}: indeed we obtain for a nonzero vector $a\in \mathcal{O}^t$ that maps to $\mathcal{R}_p^t\setminus U_p$ the lower bound \begin{equation}\label{eq:lowerboundnorm} \|i(a)\|\gg q^{\frac{1}{nm}} \end{equation} Moreover, as $\vert \mathrm{GL}_n(\mathbb{F}_q)\vert=\prod_{i=0}^{n-1}(q^n-q^i)$ we also have that $\frac{\vert U_p\vert}{\vert \mathcal{R}_p\vert^t}\to 1$ as $p\to \infty$. Since the largest possible $\mathbb{F}_q$-dimension of a code is $n^2(t-1)$, tracing through the definitions we have non-degeneracy exactly when $t>n$, in which case via Theorem \ref{thm:mainaverage} we obtain the averaging result by pulling back codes which are free $M_n(\mathbb{F}_q)$-modules of rank $k$ for $t>k>t-t/n$. In particular, when $n=2$ we recover in the special case of the quaternion algebra $A=\left(\frac{-1,-1}{\mathbb{Q}}\right)$ the results of \cite[Theorem 3]{CampelloRandom} for the Lipschitz integers as well as for the maximal order of Hurwitz integers. We note that we recover the condition $k>t/2$ appearing in \cite[Theorem 3]{CampelloRandom}. So this does give a generalization to arbitrary $\mathbb{Q}$-division algebras when $t>n$, however it would be more convenient to obtain results that work for arbitrary $t\geq 2$. \end{example} In order to obtain averaging results that do not require the condition $t>n$, one would have to obtain a stronger non-degeneracy lower bound on the norm or relax the condition that the codes be free $\mathcal{R}_p=M_n(\mathbb{F}_q)$-modules. Since the norm-trace inequality and the lower bound are sharp, we focus on the latter. However, when relaxing the definition of codes, one has to be careful to preserve the $U$-balancedness condition and to preserve enough multiplicative structure so that the units $\mathcal{O}^\times$ act on the lattices and we obtain improved bounds on the packing density. This is achieved by taking as $k$-dimensional codes $k$ copies of the simple left $M_n(\mathbb{F}_q)$-module $V=\mathbb{F}_q^n$ for $n\leq k< tn$. This carries enough structure through to the lattices in $\mathbb{L}_p$ for the intended applications. Moreover, the $U$-balancedness result is a special case of the following, which we formulate in representation-theoretic terms: \begin{lemma} Let $k$ be a field. Let $R$ be a f.d. semisimple $k$-algebra and $V$ be a simple (left) $R$-module of finite dimension over $k$. Fix integers $n_1 \le n_2 \le n_3$. 
Consider $V^{\oplus n_3}$ as an $R$-module and consider the sets \begin{align*} U & = \{ v \in V^{\oplus n_3}\ | \ Rv \simeq V^{\oplus n_1}\}, \\ \mathcal{C}_{n_2,n_3} & = \{ C \subseteq V^{\oplus n_3} \ | \ C \text{ is an $R$-submodule, }C \simeq V^{\oplus n_2}\}. \end{align*} Assuming that $U$ is non-empty, for each $u \in U$ there is a bijection \begin{align*} \{ C \in \mathcal{C}_{n_2,n_3} \ | \ u \in C\} \leftrightarrow \mathcal{C}_{n_2-n_1,n_3-n_1} \end{align*} \end{lemma} \begin{proof} Observe that for any $u \in U$ \begin{align*} u \in C \Leftrightarrow &\ R u \subseteq C \subseteq V^{\oplus n_3}\\ \Leftrightarrow & \ \frac{C}{ Ru}\subseteq \frac{ V^{\oplus n_3} }{ Ru } \simeq V^{\oplus (n_3 - n_1)}. \end{align*} Hence, if we identify $V^{\oplus n_3}/Ru$ with $V^{\oplus (n_3 - n_1)}$, the proposed bijection is simply $C \mapsto C/Ru$. \end{proof} \begin{corollary}\label{cor:balancedness} In the previous lemma, if $k$ is a finite field, then the number $\#\{ C \in \mathcal{C}_{n_2,n_3} \ | \ u \in C\}$ is independent of $u$. \end{corollary} Setting $R= M_n(\mathbb{F}_q)$, $V=\mathbb{F}_q^{n}$, $n_3 = nt$, $n_2 = k$ and $n_1 = n$, we deduce our $U$-balancedness condition. We summarize our results in this case as a consequence of Theorem \ref{thm:mainaverage} in easily accessible form: \begin{theorem} \label{thm:specificaverage} Let $A$ be a $\mathbb{Q}$-division algebra whose center is a number field $K$. Let $n=\sqrt{[A:K]}$ and $m=[K:\mathbb{Q}]$. Let $\mathcal{O}$ be an order in $A$ and for an integer $t\geq 2$ we consider an infinite family of surjective reduction maps $$\phi_p:\mathcal{O}^t\to M_n(\mathbb{F}_q)^t$$ as given in each coordinate by Lemma \ref{lemma:primesexistence}. Let $i:A^t\to (A\otimes_\mathbb{Q}\mathbb{R})^t$ denote the coordinate-wise embedding and let $f:\mathbb{R}^{n^2mt}\to \mathbb{R}$ be a semi-admissible function. For a fixed $n\leq k < nt$, set \begin{align*} \mathcal{C}_{k,p} & = \{ C \subseteq M_n(\mathbb{F}_q)^{\oplus t } \ | \ C \text{ is a $M_n(\mathbb{F}_q)$-submodule isomorphic to } (\mathbb{F}_q^{n})^{\oplus k}\}\\ \mathbb{L}_{k,p} & = \{ \beta_{p} \phi_p^{-1}(C) \ | \ C \in \mathcal{C}_{k,p}\}, \end{align*} where the constant $\beta_p$ normalizing the covolume of lattices in $\mathbb{L}_{k,p}$ to $V:=\mathrm{Vol}(\mathcal{O}^t)$ is given by $\beta_p=q^{\frac{nk-n^2t}{n^{2}mt}}$. Then if $(n-1)t<k<nt$, we have that $$\lim_{p\to\infty}\mathbb{E}_{\mathbb{L}_{k,p}}\left(\sum_{x\in (\beta_p\phi_p^{-1}(C))'}f(i(x))\right)\leq (\zeta(n^2mt)\cdot V) ^{-1}\cdot \int_{\mathbb{R}^{n^2mt}}f(x)dx,$$ where the limit is taken over primes in the family and $(\beta_p\phi_p^{-1}(C))'$ denotes the primitive vectors in $\beta_p\phi_p^{-1}(C)$. \end{theorem} \begin{proof} This readily follows from our discussions above as a special case of Theorem \ref{thm:mainaverage}: indeed we set $$ U_{p} = \{ v \in M_n(\mathbb{F}_q)^{\oplus t} \ | \ \dim_{\mathbb{F}_q}\left( M_n(\mathbb{F}_q) v \right) = n^{2} \}.$$ Moreover, we take $R_p=M_n(\mathbb{F}_q)$ and the U-balancedness condition is satisfied by Corollary \ref{cor:balancedness}. Finally, the non-degeneracy condition is again satisfied as in Example \ref{example:firstresult}, since if $a \in \mathcal{O}^{\oplus t} \setminus \{ 0\}$ is such that $\phi_p(a) \notin U_{p}$, then $a$ has to have one coordinate which is non-trivial and is a non-invertible matrix modulo $p$. Thus we obtain the bound in equation \eqref{eq:lowerboundnorm} and obtain non-degeneracy as before. 
Finally, the condition on $d(k)$ in Theorem \ref{thm:mainaverage} then just becomes $tn-t<k<tn$. \end{proof} In particular, we obtain in this way a valid result as soon as $t\geq 2$. \section{Improved bounds}\label{sec:four} Keeping the notation from the previous section, we now show how to leverage the extra symmetries under finite groups $G_0\subset \mathcal{O}^\times$ of the lattices obtained in $\mathbb{L}_p$ in order to obtain sphere packings of density exceeding the Minkowski--Hlawka bound. We first present a result based on the approach in \cite[Corollary 1]{CampelloRandom} which in turn is inspired by Vance \cite{VanceImprovedBounds} and Rogers' \cite{RogersExistence(Annals)} work. \par For a central $K$-division algebra $A$, we recall that $A_{\mathbb{R}}$ denotes the real vector space $A\otimes_{\mathbb{Q}}\mathbb{R}$ of dimension $n^2m$. We also recall that the space $A_{\mathbb{R}}^{t}$ is endowed with a norm coming from the quadratic form as defined in Equation \ref{eq:eva_norm} for some positive definite and symmetric $a \in A_{\mathbb{R}}$. Such a norm will be chosen and fixed permanently in Lemma \ref{lemma:unitaction}. Given a lattice $\Lambda\subset A_{\mathbb{R}}^t$ which is an $\mathcal{O}$-module and such a choice of norm we however first define the \emph{k-th $A$-minimum} $\min_k(\Lambda)$ to be the smallest $r$ such that the closed ball $\mathbb{B}_{A_{\mathbb{R}}}(r)$ of radius $r$ contains $k$ $A_{\mathbb{R}}$-linearly independent lattice vectors (under the left $A_{\mathbb{R}}$-action on $A_{\mathbb{R}}^{t}$). \par In particular, $\min_1(\Lambda)$ is the shortest vector length $\lambda_1(\Lambda)$ in $\Lambda$. We begin by remarking that a lemma of Minkowski \cite{minkowski1910geometrie} which was extended by Vance \cite[Theorem 2.2]{VanceImprovedBounds} holds even more generally: \begin{prop}\label{prop:productminima} Let $t\geq 2$ and $\Lambda$ denote an $\mathcal{O}$-lattice in $A_{\mathbb{R}}^t$. Then $\Lambda$ contains a left $A_{\mathbb{R}}$-module basis $\{v_1,\ldots,v_t\}$ such that $\|v_i\|=\min_i(\Lambda)$. Moreover, if $\mathrm{Vol}(\Lambda)=1$, there exists an $\mathcal{O}$-lattice ${\Lambda'}$ of covolume one in $A_{\mathbb{R}}^t$ such that $$\lambda_1({\Lambda'})=\left(\prod_{i=1}^t\Min_i(\Lambda)\right)^{1/t}.$$ \end{prop} \begin{proof} Essentially the same proof goes through as in \cite[Theorem 2.2]{VanceImprovedBounds} (replacing $4$ by the appropriate dimension $mn^2$). To proceed, we select \begin{align*} v_1 & = \argmin_{v \in \Lambda \setminus \{ 0\}}\ \|i(v)\| \\ v_2 &= \argmin_{v \in \Lambda \setminus A v_1} \|i(v)\|\\ v_3 &= \argmin_{v \in \Lambda \setminus (A v_1 + A v_2)} \|i(v)\|\\ v_4 & = \argmin_{v \in \Lambda \setminus (A v_1 + A v_2 + A v_3)} \|i(v)\| \\ & \vdots \end{align*} and we can argue inductively that they are linearly independent with respect to $A_{\mathbb{R}}$-action and satisfy $\|i(v_i)\| = \min_{i}(\Lambda)$. We now generate vectors $x_1,x_2,\cdots,x_k$ using a Gram-Schmidt process (see Appendix \ref{se:gs_process}) on $v_1,v_2,\cdots,v_k$. Since $\{ v_i\}_{i=1}^{k}$ is free with respect to left-$A_{\mathbb{R}}$ action, we get that $x_1,x_2,\cdots,x_k$ freely generate $A_{\mathbb{R}}^{k}$ as a left-$A_{\mathbb{R}}$ module. Now consider the $\mathbb{R}$-linear map given by \begin{align*} T: y \mapsto \frac{y_i}{\lambda_i} x_1 + \frac{y_2}{ \lambda_2} x_2 + \cdots \frac{y_k}{ \lambda_k} x_k , \\ \text{ for }y = y_1x_1+y_2x_2 + \cdots +y_kx_k \in A_{\mathbb{R}}^{k}. 
\end{align*} Define a lattice $\Lambda'$ by: \begin{align*} \Lambda' = (\lambda_1 \lambda_2 \ldots \lambda_k)^{ {1}/{k}}T(\Lambda). \end{align*} We observe that $\det(\Lambda') = \det(\Lambda)$. For any $y' \in \Lambda' \setminus \{ 0\}$, we can now find $y = y_1 x_1 + \cdots + y_k x_k \in \Lambda$ such that $y' = (\lambda_1\cdots \lambda_k)^{1/k} T ( y)$. Furthermore, there must be a smallest $i_0 \ge 1$ such that $y_{i_0} \neq 0$ and $y_{i_0+1} = y_{i_0+2} = \cdots = y_{k} = 0$. Then $y \in \Lambda \cap ( A_{\mathbb{R}} v_1 + A_{\mathbb{R}} v_2 + \cdots + A_{\mathbb{R}} v_{i_0})$ and $y\not\in\Lambda \cap ( A_{\mathbb{R}} v_1 + \cdots + A_{\mathbb{R}} v_{i_0 - 1} )$. Therefore $\|i(y)\| \ge \lambda_{i_0}$. This implies that \begin{align*} \|i(y')\|^{2} = \left( \lambda_1 \lambda_2 \ldots \lambda_k \right)^{2/k} \sum_{i=1}^{i_0} \left| \frac{i(y_i)}{\lambda_i}\right|^{2} \ge \left( \lambda_1 \lambda_2 \ldots \lambda_k \right)^{2/k} \frac{1}{\lambda_{i_0}^{2}} \sum_{i=1}^{k}\left| {i(y_1)}\right|^{2} \ge (\lambda_1 \ldots \lambda_k)^{2/k}, \end{align*} using orthogonality as in Appendix \ref{se:gs_process}. This lower bound is tight, since we can set $y_1=1_{A_{\mathbb{R}}}$ and $y_i=0$ for $i \ge 2$. \end{proof} \begin{remark}\label{rem:effectiveMinkowski} This version of Minkowski's lemma given above is effective in the sense that for an explicit lattice $\Lambda$, the lattice $\Lambda'$ can also be computed algorithmically. The only important step here is to find the successive minima vectors $v_i$, and then $\Lambda'$ is easily seen to be computable. When $A=\mathbb{Q}$ (so $A_\mathbb{R} = \mathbb{R}$), finding the $v_i$ can be achieved by the so-called {\bf SMP} algorithm, which will have an exponential running time of $O(2^{2t})$. Details can be found in \cite{micciancio2012complexity}. The division algebra case requires only slight modifications of the algorithm and should have a running time of $O(2^{2mn^2t})$. \end{remark} We also record the lemma: \begin{lemma}\label{lemma:unitaction} Let $\mathcal{O}$ denote an order in a $K$-division algebra $A$ and let $G_0\subset \mathcal{O}^*$ denote a finite group. Then $G_0$ acts on vectors in any $\mathcal{O}$-lattice $\Lambda\in \mathbb{L}_p$ obtained as in the construction of Theorem \ref{thm:mainaverage}. Furthermore, we may choose a symmetric positive definite element $a \in A_\mathbb{R}$ such that for all such $\Lambda$ the induced norm satisfies $$\|i(x)\|^2=\sum_{i=1}^{t} \T(x_{i}^{*}ax_i) = \|i(g\cdot x)\|^2 \text{ }\forall g\in G_0, x\in \Lambda.$$ \end{lemma} \begin{proof} The lattices obtained in $\mathbb{L}_p$ via our construction are easily seen to be preserved under the $\mathcal{O}$-action when the morphisms $\phi_p$ preserve the multiplicative structure and the codes in $\mathcal{C}$ we are pulling back are $\phi_p(\mathcal{O})$-modules. Therefore the units act as well. \par For the second part, we may set $$a=\sum_{ g \in G_0} g^* g.$$ One can easily check that the induced quadratic form then has the required $G_0$-invariance. \end{proof} From now on, we may and will assume a norm as in Lemma \ref{lemma:unitaction} has been chosen on $A_\mathbb{R}$. Using the methods from \cite{RogersExistence(Annals),VanceImprovedBounds,CampelloRandom} we apply Theorem \ref{thm:mainaverage} to a specific function $f:\mathbb{R}^{mn^2t}\to \mathbb{R}$ in order to obtain improved bounds. \begin{theorem}\label{thm:improvedbounds} Let $A$ be a central simple division algebra over a number field $K$. 
Let $\mathcal{O}$ be an $\mathcal{O}_K$-order in $A$. Let $n^2=[A:K]$, $m=[K:\mathbb{Q}]$ and let $t\geq 2$ be a positive integer. Let $G_0$ be a fixed finite subgroup of $\mathcal{O}^\times$. Then there exists a lattice $\Lambda$ in dimension $n^2mt$ achieving $$\Delta(\Lambda)\geq\frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}.$$ Moreover, there exists for any $\varepsilon>0$ an $\mathcal{O}$-lattice $\Lambda_\varepsilon$ in dimension $n^2mt$ achieving $$\Delta(\Tilde{\Lambda}_\varepsilon)\geq(1-\varepsilon)\cdot \frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}$$ which can be constructed. Indeed, $\Tilde{\Lambda}_\varepsilon$ is obtained by applying Proposition \ref{prop:productminima} to a suitable sublattice of $\mathcal{O}^t$ obtained as a pre-image via reduction modulo primes $\mathfrak{p}$ of $\mathcal{O}_K$ of large enough norm of a code isomorphic to $k$ copies of simple left $\mathcal{O}/\mathfrak{p} \mathcal{O}$-modules for $nt-t<k<nt$ as in Theorem \ref{thm:specificaverage}. \end{theorem} \begin{proof} We define $f$ to be the radial function $f_r$ of bounded support given by $$f_r(y)=\begin{cases}\frac{1}{mn^2}& \text{ if }0\leq \|y\|<re^{(1-t)/mn^2t} \\ \frac{1}{mn^2t}-\log (\frac{\|y\|}{r})&\text{ if }re^{(1-t)/mn^2t}\leq \|y\|\leq re^{1/mn^2t}\\ 0& \text{ else } \end{cases}$$ This function is indeed semi-admissible and we have that $$\int_{\mathbb{R}^{mn^2t}}f_r(y)dy=V_{mn^2t}\cdot r^{mn^2t}\cdot \frac{e(1-e^{-t})}{mn^2t},$$ where $V_{mn^2t}$ denotes the volume of the unit ball in $mn^2t$-dimensional Euclidean space. For a small $0<\varepsilon<1$ we may find $r\geq 0$ so that $$V_{mn^2t}\cdot r^{mn^2t}\cdot \frac{e(1-e^{-t})}{mn^2t}=(1-\varepsilon)\cdot \frac{\vert G_0\vert \vert \mathrm{Vol}(\mathcal{O}^t)\vert \zeta(mn^2t)}{mn^2}.$$ Taking $\mathbb{L}_p$ and $k$ satisfying the assumptions of Theorem \ref{thm:mainaverage}, we may therefore for $p$ large enough find a lattice $\Lambda=\beta\cdot i(\Lambda_0)\in \mathbb{L}_p$ of volume $\mathrm{Vol}(\mathcal{O}^t)$ such that $$\sum_{y\in \Lambda'}f_r(y)\leq (1-\varepsilon)\frac{\vert G_0\vert}{mn^2}<\frac{\vert G_0\vert}{mn^2}.$$ We now use the fact that the units of finite order $G_0<\mathcal{O}^\times$ act freely on primitive vectors of $\Lambda$ and that $\| i(gv)\|=\|i(v)\|$ for $g\in G_0$ for our choice of norm (see Lemma \ref{lemma:unitaction}). Indeed, letting $\{v_1,\ldots,v_t\}$ be linearly independent vectors achieving the $A$-minima $\|\beta\cdot i(v_j)\|=\min_j(\Lambda)$ as guaranteed by Proposition \ref{prop:productminima}, we then have that $$\sum_{y\in \Lambda'}f_r(y)\geq \sum_{j=1}^t\sum_{g\in G_0}f_r(\beta\cdot i(gv_j))=\vert G_0\vert\sum_{j=1}^t f_r(\beta\cdot i(v_j)). $$ In other words, $\sum_{j=1}^t f_r(\beta\cdot i(v_j))<1/(mn^2)$ so that by definition of $f_r$ we must have \begin{equation}\label{eq:minimaboundone} \min_j(\Lambda)\geq r e^{(1-t)/(mn^2t)} \text{ for all }j. \end{equation} Moreover, it must then be by definition of $f_r$ that \begin{equation}\label{eq:minimaboundtwo} \sum_{j=1}^t \log \left(\frac{\min_j(\Lambda)}{r}\right)>0 \end{equation} and hence $$\left(\prod_{j=1}^t \min_j(\Lambda)\right)^{1/t}>r.$$ From proposition \ref{prop:productminima} we deduce the (constructive) existence of a lattice $\Tilde{\Lambda}$ with volume equal to $\mathrm{Vol}(\Lambda)$ and shortest vector length $\lambda_1(\Tilde{\Lambda})>r$. 
We thus obtain for all such $\varepsilon$ the existence of a lattice $\Tilde{\Lambda}_\varepsilon$ of volume $\mathrm{Vol}(\mathcal{O}^t)$ and packing density $$\Delta(\Tilde{\Lambda}_\varepsilon)\geq(1-\varepsilon)\cdot \frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}.$$ Letting $\varepsilon\to 0$, it thus also follows by Mahler compactness that the sequence $\Tilde{\Lambda}_\varepsilon$ has a converging subsequence (in the quotient topology on $\mathrm{GL}_{mn^2t}(\mathbb{R})/\mathrm{GL}_{mn^2t}(\mathbb{Z})$). Since the packing density is a continuous function with respect to this topology, we also get the existence of a lattice with density $\Delta_t\geq\frac{\vert G_0\vert\zeta(mn^2t)\cdot t}{2^{mn^2t}\cdot e(1-e^{-t})}$. \end{proof} \begin{remarks} \begin{enumerate} \item First note that the zeta factor quickly approaches one as $n^2mt$ increases and thus can be ignored for the purpose of giving asymptotic bounds for large dimensions. \item The lower bounds on the density in Theorem \ref{thm:improvedbounds} have the advantage of producing a factor $t$ in the numerator for lattices constructed from $\mathcal{O}^t$. Via a simpler approach, taking $f$ to be the indicator function of a ball one finds an $\mathcal{O}$- lattice $\Lambda$ which outperforms the bound above (slightly) only when $t=2$. We record this below. \end{enumerate} \end{remarks} \begin{prop}\label{prop:simpleimprovedbounds} With the notations of Theorem \ref{thm:improvedbounds}, there exists a $n^2mt$-dimensional sub-lattice $\Lambda_{\varepsilon}\subset \mathcal{O}^t$ with packing density $$\Delta(\Lambda_{\varepsilon})\geq (1-\varepsilon)\cdot\frac{\vert G_0\vert\zeta(mn^2t)}{2^{mn^2t}}$$ in the set of scaled pre-images of codes $\mathbb{L}_p$ for $p$ large enough. Moreover, there exists a $n^2mt$-dimensional lattice $\Lambda$ with density $\Delta(\Lambda)\geq \frac{\vert G_0\vert\zeta(mn^2t)}{2^{mn^2t}}$ for all $t\geq 2$. \end{prop} \begin{proof} Take $f$ to be the indicator function of a ball of radius $r$ and let $r$ be chosen so that $\mathrm{Vol}(\mathbb{B}(r))=(1-\varepsilon)\vert G_0\vert\zeta(mn^2t)\mathrm{Vol}(\mathcal{O}^t)$. Applying Theorem \ref{thm:specificaverage}, there must be for large enough $p$ a lattice $\Lambda_\varepsilon$ in $\mathbb{L}_p$ (with the notations of the theorem) such that \begin{equation}\label{eq:orbittrick}\vert \mathbb{B}(r)\cap\Lambda_\varepsilon'\vert\leq (1-\varepsilon)\vert G_0\vert.\end{equation} Having arranged for $f$ to be $G_0$-invariant, the left hand side of \eqref{eq:orbittrick} must be a multiple of $\vert G_0\vert$ and thus is forced to equal $0$. This gives a lower bound on the shortest vector leading to the desired packing density for $\Lambda_\varepsilon$, while the second statement follows again by Mahler compactness. \end{proof} Taking $t=2$ is often the most advantageous in view of optimizing the packing density in relation to the dimension of the lattice. Nevertheless, having improved bounds for arbitrary $t\geq 2$ should be useful. \subsection{Classification of finite subgroups of $\mathcal{O}^\times$ and bounds.} In order to examine the density of lattice packings that can be achieved via this method, it is necessary to understand which finite groups $G_0<\mathcal{O}^{\times}$ can occur for $\mathbb{Q}$-division algebras. This classification was completely carried through by Amitsur \cite{Amitsur1955FiniteSO}. As outlined in more detail in \cite[2.2--3]{gargava2021lattice}, we summarize some cases that lead to new, dense packings. 
\par To that end, we recall that recently dense packings were found in special cases of our construction by Venkatesh \cite{VenkateshBounds} when $A=K=\mathbb{Q}(\zeta_m)$ and by Vance \cite{VanceImprovedBounds} when $A=\left(\frac{-1,-1}{\mathbb{Q}}\right)$ by exploiting that the respective $\mathcal{O}$-lattices are invariant under $\mathbb{Z}/m\mathbb{Z}$ and the binary tetrahedral group $\mathfrak{T}^*\cong \mathrm{SL}_2(\mathbb{F}_3)$ of order $24$, respectively. The most spectacular lattice packing densities in the cyclotomic case then occur when maximizing the ratio $\frac{\vert \mathbb{Z}/m\mathbb{Z}\vert}{[\mathbb{Q}(\zeta_m):\mathbb{Q}]}=\frac{m}{\varphi(m)}$ whereas in the Hurwitz lattice case the improved bounds are obtained in dimensions $4t$. The first result is that in the more general context of division algebras we may in some sense combine these two improvements: \begin{prop} Assume $m$ is a positive integer such that $2$ has odd order modulo $m$. Then the algebra $\mathbb{Q}(\zeta_m)\otimes_\mathbb{Q}\left(\frac{-1,-1}{\mathbb{Q}}\right)$ is a division algebra with center $\mathbb{Q}(\zeta_m)$ and has a maximal $\mathbb{Z}[\zeta_m]$-order $\mathcal{O}$ with subgroup $\mathfrak{T}^*\times \mathbb{Z}/m\mathbb{Z}\subset\mathcal{O}^\times$. \end{prop} \begin{proof} See \cite[Theorems 6a, 7]{Amitsur1955FiniteSO}. \end{proof} In particular, we obtain for $m$ satisfying the parity condition above the existence of lattices $\Lambda_m$ in dimension $8\varphi(m)$ satisfying: $$\Delta(\Lambda)\geq \frac{24m\zeta(8\varphi(m))}{2^{8\varphi(m)}}>\frac{24m}{2^{8\varphi(m)}}. $$ via lifting codes as in Theorem \ref{thm:specificaverage} and Proposition \ref{prop:simpleimprovedbounds}. By maximizing the ratio $m/\varphi(m)$ under the additional parity condition, we arrive at: \begin{prop}\label{prop:cycloHurwitz} Using the construction above and letting $$m_k=\prod_{\substack{p\leq k \text{ prime}\\2\nmid\ord_2p}}p,$$ we obtain lattice packings in dimension $8\varphi(m_k)$ of density \begin{equation}\label{eq:cycloquatdensity} \Delta \geq C(\log\log \varphi(m_k))^{7/24}\cdot \frac{24\cdot\varphi(m_k)}{2^{8\varphi(m_k)}} \end{equation} for some fixed constant $C>0$. Moreover, for any $C<1$ the bound in \eqref{eq:cycloquatdensity} is valid for $m_k$ large enough. \end{prop} \begin{proof} This is \cite[Theorem 30]{gargava2021lattice}. \end{proof} Moreover, there are several other ways to obtain dense packings in new dimensions via the division algebra approach. We refer the reader to the discussion in \cite{gargava2021lattice} but simply restate \cite[Prop 64]{gargava2021lattice}: \begin{prop}\label{prop:loglogimprovement} There exists an infinite sequence of dimensions $\{d_n\}$ in which a packing density $$\Delta_{d_n}\geq \frac{1}{2}\log\log d_n \frac{d_n}{2^{d_n}}$$ is achieved and the lattice achieving this packing density is invariant under the action of a non-commutative group. \end{prop} \begin{proof} See \cite[Prop 31]{gargava2021lattice}. The division algebras in question are $\left( \frac{-1,-1}{\mathbb{Q}[\zeta_m + \zeta_m^{-1}]}\right)$ for $m$ a product of primes maximizing $m/\varphi(m)$ under suitable conditions. The extra symmetries here come from the action of the dihedral group with $2m$ elements on lattices in dimension $4\varphi(m)$. 
\end{proof} One of the main advantages in obtaining such lattices via lifts of codes is that, at least in theory, such lattices can be explicitly found by searching a finite set of parameters as opposed to, say, the averaging results in \cite{gargava2021lattice}. We conclude by a discussion of such effectivity questions. \section{Notes on effectivity}\label{sec:five} Our results such as Theorem \ref{thm:specificaverage} imply that dense lattices in dimension $mn^2t$ can be found among pre-images of codes in characteristic $p$ as $p\to\infty$. In this last section we show how large it suffices to take $p$ in order to guarantee a lattice of packing density greater than $(1-\varepsilon)\frac{\vert G_0\vert}{2^{mn^2t}} $ is found, with $G_0<\mathcal{O}^*$ designating the units of finite order in $\mathcal{O}$.\par \subsection{Varying the division ring} We first focus on the case of $t=2$ in Theorem \ref{thm:improvedbounds} when in fact the better bounds are obtained by taking the simpler indicator function $f=\mathds{1}_{\mathbb{B}(r)}$ of a ball of appropriate radius as in Proposition \ref{prop:simpleimprovedbounds}. \par \begin{theorem}\label{thm:effective} Let $A$ denote central simple division $K$-algebras for number fields $K$ and denote $[A:K]=n^2$ and $[K:\mathbb{Q}]=m$. Let $\mathcal{O}$ denote a maximal order in such $A$. Fix $0< \varepsilon< 1$. Assume the prime $\mathfrak{p}\vert p$ in $\mathcal{O}_K$ is chosen large enough with respect to $m,n$ so that the size of the residue field $\vert \mathcal{O}_K/\mathfrak{p}\vert =q$ satisfies: \begin{enumerate} \item we have as $m,n$ increase the relation: $$(n^2m)^{2}\mathrm{Vol}(\mathcal{O})^{2/(mn^2)}\vert G_0\vert^{-1/(mn^2)}=o(q^{1/mn}),$$ \item the ratio $\frac{\vert M_n(\mathbb{F}_q)\vert ^2}{\vert M_n(\mathbb{F}_q)\vert ^2-\vert M_n(\mathbb{F}_q)\setminus\mathrm{GL}_n(\mathbb{F}_q)\vert^2}<(1+\varepsilon/3)$. \end{enumerate} Then there exists an effective constant $C_\varepsilon>0$ such that in dimension $2n^2m>C_\varepsilon$ there exists a lattice $\Lambda\in \mathbb{L}_p$ with packing density $$\Delta(\Lambda)\geq (1-\varepsilon)\frac{\vert G_0\vert }{2^{2n^2m}}.$$ Here $\mathbb{L}_p$ denotes the set of scaled preimages of generalized codes of $\mathbb{F}_q$-dimension $2n^2-n$ via the reduction map $\phi_p:\mathcal{O}^2\to(\mathcal{O}/\mathfrak{p}\mathcal{O})^2$ as in Theorem \ref{thm:specificaverage}. \end{theorem} \begin{proof} Tracing through the proof of Theorem \ref{thm:specificaverage} for $t=2$ and $k=2n-1$ (the only sensible choice), we find that the term $$\sum_{x\in (\phi_p^{-1}(C))', \phi_p(x)\in (\mathcal{R}_p^2\setminus U_p)}f(\beta i(x))$$ is trivial for $f=\mathds{1}_{\mathbb{B}(r)}$ and some $C \in \mathcal{C}_{k,p}$ as soon as \begin{equation} r< \left( \N(a)^{1/2n^2m}\cdot n\sqrt{m} \right) q^{\frac{1}{2nm}}, \end{equation} via Lemma \ref{lemma:lowerboundbadpoints} and \eqref{eq:lowerboundnorminthm}, where $a=\sum_{g\in G_0}g^*g$. Since $\N(g^*g)=1$ for $g\in G_0$, it is easy to give a uniform lower bound $\N(a)^{1/2n^2m}\geq 1$ or even $\N(a)^{1/2n^2m}\geq \sqrt{|G_0|}$ by the Minkowski determinant inequality. In particular, it suffices to ensure the parameter $r$ satisfies \begin{equation}\label{eq:effectivepush} r< \left( n\sqrt{m} \right) q^{\frac{1}{2nm}}. 
\end{equation} The expected value for the remaining terms for fixed characteristic $p$ can then be seen by balancedness to be bounded by: \begin{equation} \mathbb{E}\leq \frac{q^{n(2n-1)}}{q^{2n^2}-(q^{n^2}-\prod_{i=0}^{n-1}(q^n-q^i))^2}\cdot \sum_{x\in (\mathcal{O}^2)'} \mathds{1}_{\mathbb{B}(r)}(\beta_p i(x)) \end{equation} Now by a classical geometry of numbers result (see \cite[Lemma 4]{CampelloRandom} or \cite[Lemma 3 (2)]{MoustrouCodes}) we can bound \begin{equation} \sum_{x\in (\mathcal{O}^2)'} \mathds{1}_{\mathbb{B}(r)}(\beta_p i(x))\leq (r+\beta_p\tau(\mathcal{O}^2))^{2n^2m}\cdot \frac{V_{2n^2m}}{\beta_p^{2n^2m}\mathrm{Vol}(\mathcal{O}^2)}, \end{equation} where $\tau(\mathcal{O}^2)$ denotes the packing radius of $\mathcal{O}^2$ and $V_d$ denotes the volume of the $d$-dimensional unit ball. Writing $S_n(q):=\frac{q^{n(2n-1)}}{q^{2n^2}-(q^{n^2}-\prod_{i=0}^{n-1}(q^n-q^i))^2}\geq \beta_p^{2n^2m}$ we arrive at: \begin{equation}\label{eq:effectiveineq} \mathbb{E}\leq \frac{S_n(q)}{\beta_p^{2n^2m}} r^{2n^2m}\frac{V_{2n^2m}}{\mathrm{Vol}(\mathcal{O}^2)}\cdot \left(1+\frac{\tau(\mathcal{O}^2)\beta_p}{r}\right)^{2n^2m} \end{equation} Observe now that $\frac{S_n(q)}{\beta_p^{2n^2m}}=\frac{\vert M_n(\mathbb{F}_q)\vert ^2}{\vert M_n(\mathbb{F}_q)\vert ^2-\vert M_n(\mathbb{F}_q)\setminus\mathrm{GL}_n(\mathbb{F}_q)\vert^2}$, so that we can assume $q$ is large enough so that $\frac{S_n(q)}{\beta_p^{2n^2m}}<(1+\varepsilon/3)$. \par Moreover, for a radius $r$ that yields the density bound, we should have that the volume of the ball of radius $r$ is around $\vert G_0\vert (1-\varepsilon)\mathrm{Vol}(\mathcal{O}^2)$. By the Stirling formula, we may estimate as the dimension grows $$r\sim \frac{n\sqrt{m}}{\sqrt{\pi e}}(\vert G_0\vert \mathrm{Vol}(\mathcal{O}^2))^{1/2n^2m},$$ which under our assumptions on $q$ satisfies the inequality \eqref{eq:effectivepush} using a trivial bound like $\vert G_0\vert=o((n^2m)^{nm})$. It now suffices to show that under the parameters above, we can bound $\left(1+\frac{\tau(\mathcal{O}^2)\beta_p}{r}\right)^{2n^2m}< (1+\varepsilon/3)$ for large enough dimension, since then we get from the inequality \eqref{eq:effectiveineq} the existence of a lattice in $\mathbb{L}_p$ with the desired lower bound on the pac king density. Recall that $\beta_p=q^{-1/2nm}$ with our parameters and we have from Lemma \ref{lemma:packingradius} that $$\tau(\mathcal{O}^2)=\sqrt{2}\cdot \tau(\mathcal{O})\leq \mathrm{Vol}(\mathcal{O})^{2/n^2m}\cdot (n\sqrt{m}+6)/(\sqrt{2}\pi).$$ We thus have \begin{equation*} \frac{\tau(\mathcal{O}^2)\beta_p}{r}\lesssim \mathrm{Vol}(\mathcal{O})^{1/(mn^2)} \vert G_0\vert^{-1/(2mn^2)} q^{-1/(2mn)} \end{equation*} But under the assumptions of the theorem on $q$, the result now follows since as $mn^2$ goes to infinity the term $2mn^2\cdot \frac{\tau(\mathcal{O}^2)\beta_p}{r}$ becomes arbitrarily small. Assuming $p$ and $q$ chosen large enough for each $n,m$ as in the assumptions of the theorem, we may thus view $\left(1+\frac{\tau(\mathcal{O}^2)\beta_p}{r}\right)^{2n^2m}$ as a function in $n,m$ which approaches $1$ for large enough dimension $n^2m$. This easily yields an effective constant $C_\varepsilon$ guaranteeing $\left(1+\frac{\tau(\mathcal{O}^2)\beta_p}{r}\right)^{2n^2m}<(1+\varepsilon/3)$ for $n^2m>C_\varepsilon$. \end{proof} We may then for instance apply this result to specific families of maximal orders in division rings of increasing $\mathbb{Q}$-dimension. 
One may arrange for the size of the finite units $G_0$ to be known in this family via Amitsur's results (\cite{Amitsur1955FiniteSO}). Moreover, the computation of the volume $\mathrm{Vol}(\mathcal{O})$ reduces to a computation of $\sqrt{d(\mathcal{O}/\mathbb{Z})}$, since the $\mathbb{Z}$-discriminant $d(\mathcal{O}/\mathbb{Z})$ can be defined as the ideal generated by $\{\det(\mathrm{Trace}_{A/\mathbb{Q}}x_ix_j)_{1\leq i,j\leq [A:\mathbb{Q}]}\}$ for $x_i\in \mathcal{O}$ a $\mathbb{Z}$-basis. Equivalently it is the norm of the $\mathbb{Q}$-algebra different $\N_{A/\mathbb{Q}}(\mathfrak{D}(\mathcal{O}/\mathbb{Z}))$. We may then write (\cite[Ex 25.1]{reiner2003maximal}): $$d(\mathcal{O}/\mathbb{Z})=\N_{K/\mathbb{Q}}(d(\mathcal{O}/\mathcal{O}_K))\cdot d(\mathcal{O}_K/\mathbb{Z})^{n^2},$$ where $d(\mathcal{O}/\mathcal{O}_K)$ is just the regular discriminant of the central simple $K$-algebra $A$ and $d(\mathcal{O}_K/\mathbb{Z})$ is the discriminant of the central number field. In particular, when there is some control of the ramification behavior of $A/K$ and we have some upper bounds for the discriminant of $K$, the conditions of Theorem \ref{thm:effective} become entirely explicit. \begin{example} When $K=\mathbb{Q}(\zeta_m)$ is a cyclotomic field, one has that \begin{equation}\label{eq:discriminanteffective} d(\mathcal{O}_K/\mathbb{Z})=\frac{m^{\varphi(m)}}{\prod_{l\in \mathbb{P}, l\mid m}l^{\varphi(m)/(l-1)}}.\end{equation} When $n=1$, by considering cyclotomic fields $\mathbb{Q}(\zeta_m)$ Moustrou thus finds via a version of Theorem \ref{thm:effective} in this case effective dense lattices in dimensions $2\varphi(m)$ for large enough $m$ and shows a suitable $q$ can be found in time $O(m^3\log(m))^{\varphi(m)}$, see \cite[Theorem 1, Prop 3.1]{MoustrouCodes}. \end{example} \begin{example} Similarly, fixing $n=2$, varying $K=\mathbb{Q}(\zeta_m)$ and considering the quaternion algebra over $K$ $$A=\left(\frac{-1,-1}{\mathbb{Q}(\zeta_m)}\right)$$ when $2$ has odd order modulo $m$, we can use that $d(\mathcal{O}_K/\mathbb{Z})\leq m^{\varphi(m)}$ and that the discriminant of the Hurwitz integers $d(\mathcal{H}/\mathbb{Z})=2$ to obtain an effective version of Proposition \ref{prop:cycloHurwitz} via Theorem \ref{thm:effective}. We record this below. \end{example} \begin{prop}\label{prop:cycloquaternionseffective} Let $m_k=\prod_{\substack{p\leq k \text{ prime}\\2\nmid\ord_2p}}p$ and set $n_k:=8\varphi(m_k)$. Then for any $\varepsilon>0$ there is an effective constant $c_\varepsilon$ such that for $k>c_\varepsilon$ a lattice $\Lambda$ in dimension $n_k$ with density $$\Delta(\Lambda)\geq (1-\varepsilon)\frac{24\cdot m_k}{2^{n_k}}$$ can be constructed in $e^{4.5\cdot n_k\log(n_k)(1+o(1))}$ binary operations. This construction leads to the asymptotic density of $$\Delta(\Lambda)\geq (1-e^{-n_k})\frac{3\cdot n_k(\log\log n_k)^{7/24}}{2^{n_k}}$$ in dimension $n_k$. \end{prop} \begin{proof} We consider the quaternion algebras $A_k=\left(\frac{-1,-1}{\mathbb{Q}(\zeta_{m_k})}\right)$ and exhibit a large enough residue field size $q$ so that the conditions of Theorem \ref{thm:effective} are satisfied. From the discriminant relation \eqref{eq:discriminanteffective} we obtain that the first condition amounts to $$(m_k\varphi(m_k)^2)^{2\varphi(m_k)}=o(q).$$ It is convenient to as in \cite[Prop 3.1.]{MoustrouCodes} search for large primes which split completely in $\mathbb{Q}(\zeta_{m_k})$, and this happens when $p\equiv 1\mod m_k$. 
Using an effective version of the \v Cebotarev density theorem (see e.g., \cite{effectiveCebo}), one sees that an interval of size, say, $(1/m_k)(m_k)^{6\varphi(m_k)}$ around $m_k^{6\varphi(m_k)}$ must contain such a prime for large enough $m_k$. Such a prime then satisfies $(m_k\varphi(m_k)^2)^{2\varphi(m_k)}=o(p)$ for our choice of $m_k$. Finding a suitable prime $p$ can thus be done in at most around $e^{3/4 n\log n}$ steps. Moreover, for such primes, we have that $$\left\vert \frac{\vert M_2(\mathbb{F}_p)\vert ^2}{\vert M_2(\mathbb{F}_p)\vert ^2-\vert M_2(\mathbb{F}_p)\setminus\mathrm{GL}_2(\mathbb{F}_p)\vert^2}-1 \right\vert=o(e^{-n_k}),$$ which deals with the second condition of Theorem \ref{thm:effective}. The time estimate for enumerating the lattice family is then of $e^{4.5\cdot n_k\log(n_k)(1+o(1))}$ binary operations since the number of codes we consider in Theorem \ref{thm:effective} here amounts to $O(p^6)$. The costs of the remaining computations, such as computing the packing density of lattices, are also exponential in the dimension, but being of cost $2^{O(n_k)}$ do not contribute to the main term of the estimate. We thus obtain an effective version of the bounds in Proposition \ref{prop:cycloHurwitz}, as claimed. \end{proof} The density lower bounds above outperform the best known effective lower bounds on the density from cyclotomic fields up to dimensions around $1.98\cdot 10^{46}$ because of the improved constant coming from the size of the binary tetrahedral group ( see also \cite[Fig. 1]{gargava2021lattice}). \begin{example} Consider the case when $k=2$ and $m_k=7 \cdot 23=161$. A suitable prime $p$ is then for instance $$p= (161 \cdot \varphi(161)^2)^{2 \varphi(161)} + 223147$$ of size about $10^{1072}$ and satisfying $p\equiv 1\mod 161$. Using this prime in Theorem \ref{thm:effective} yields a lattice $\Lambda$ in $8\cdot \varphi(161)=1056$ dimensions such that $$\Delta(\Lambda)\geq (1-\varepsilon)\frac{3864 }{2^{1056}}, $$ for $\varepsilon \le 10^{-1000}$ in about $10^{10^4}$ bit-operations. Even with the best computers, it is currently far out of reach to enumerate a basis for such a lattice. Nevertheless, this gives a very explicit construction of such a random lattice that achieves a good packing. \end{example} Finally, we note that similarly an effective version of Proposition \ref{prop:loglogimprovement} can be obtained. It seems plausible for such results that as long as only the degree of the center of the division algebras over $\mathbb{Q}$ is unbounded as in Proposition \ref{prop:cycloquaternionseffective}, the family size of $n$-dimensional lattices to be searched should be $e^{C\cdot n\log(n)(1+o(1))}$ for some constant $C>0$. \subsection{Varying the rank $t$} Finally, we remark that one also obtains effective good asymptotic lattices from our constructions by fixing the division ring $A$ and maximal order $\mathcal{O}$ and instead varying the rank of the $\mathcal{O}$-lattices as in Vance's construction \cite{VanceImprovedBounds}. In particular, one obtains an effective version of Vance's construction which we record here. The general case is handled in the same way and is left to the reader. 
\par \begin{prop}\label{prop:effectiveVance} For any $0<\varepsilon<1$, there exists a lattice in $\mathbb{H}^t$ which is a free rank $t$ module over the ring of Hurwitz integers $\mathcal{H}$, whose geometric mean of the quaternionic minima satisfies $$\left(\prod_{j=1}^t \min_j(\Lambda)\right)^{1/t}>r,$$ where $r$ is defined by $\mathrm{Vol}(\mathbb{B}(r))=(1-\varepsilon)\frac{24t \mathrm{Vol}(\mathcal{H}^t)\cdot \zeta(4t)}{e(1-e^{-t})}$, and which, provided the odd prime $p$ satisfies $t^2=o(p)$ and $t$ is large enough, lies in the set of (rescaled) lifts $$\mathbb{L}_p=\{p^{\frac{1-t}{2t}}\phi_p^{-1}(C):C\in \mathcal{C}_{t+1}\},$$ where $\phi_p:\mathcal{H}^t\to(\mathcal{H}/p\mathcal{H})^t\cong M_2(\mathbb{F}_p)^t$ is the reduction map and $\mathcal{C}_{t+1}$ is the set of left $M_2(\mathbb{F}_p)$-submodules of $M_2(\mathbb{F}_p)^t$ isomorphic to $t+1$ copies of the simple left module $\mathbb{F}_p^2$. \end{prop} \begin{proof} Consider the proof of Theorem \ref{thm:improvedbounds}. Then for any $t\geq 2$ the support of the radial function $f_r(y)$ is contained in the ball of radius $re^{1/mn^2t}=re^{1/4t}$. Choose $r$ such that \begin{equation}\label{eq:roftsize} \mathrm{Vol}(\mathbb{B}(r))=(1-\varepsilon)\frac{24t \mathrm{Vol}(\mathcal{H}^t)\cdot \zeta(4t)}{e(1-e^{-t})}. \end{equation} Via Stirling's formula, we get from \eqref{eq:roftsize} and the discriminant $d(\mathcal{H})=2$ the asymptotic \begin{equation}\label{eq:rapproxsize} r\sim t^{1/(4t)+1/2}\cdot \frac{2^{5/8}}{\sqrt{\pi e}}. \end{equation} First consider any $t<k<2t$. Pulling back codes of $\mathbb{F}_p$-dimension $2k$ as in Theorem \ref{thm:specificaverage}, we see that, in order to lift the averaging result, the support of $f$ has to be contained in the ball of radius $2p^{1/4}$, so that we arrive at the condition $$e^{(1+\ln(t)-2t)/(4t)}\cdot \frac{2^{-3/8}}{\sqrt{\pi} }\sqrt{t}<p^{1/4}.$$ Thus for $t\geq 2$ it in particular suffices to take $p\geq t^2$. Note that here any odd prime $p$ is unramified and can be used in the construction. Inspecting the proof of Theorem \ref{thm:mainaverage} (and ignoring for simplicity the M\"obius inversion step, since the zeta factor quickly approaches $1$ for large $t$), we have that \begin{align*} \mathbb{E}&\leq \frac{p^{d(k)}}{\vert U_p\vert\cdot \beta_p^{4t}}\cdot\sum_{x\in\mathcal{O}^t\setminus \{0\}}\beta_p^{4t}f_r(\beta_p i(x))\\ &=\frac{p^{4t}}{p^{4t}-(p^3+p^2-p)^t}\cdot\sum_{x\in\mathcal{O}^t\setminus \{0\}}\beta_p^{4t}f_r(\beta_p i(x)) \end{align*} as $\beta_p=p^{k/2t-1}$, and it therefore remains to bound the difference \begin{equation} \Delta(p,t)=\left |\beta_p^{4t}\cdot\sum_{x\in\mathcal{O}^t\setminus \{0\}}f_r(\beta_pi(x))-V^{-1}\int_{\mathbb{R}^{4t}}f_r(x)dx\right|, \end{equation} recalling here that $f_r$ is the radial function given by $$f_r(y)=\begin{cases}\frac{1}{4}& \text{ if }0\leq \|y\|<re^{(1-t)/4t} \\ \frac{1}{4t}-\log (\frac{\|y\|}{r})&\text{ if }re^{(1-t)/4t}\leq \|y\|\leq re^{1/4t}\\ 0& \text{ else. } \end{cases}$$ We note that in particular $f_r$ has derivative bounded by $C_r=e^{1/4}/r$. Tiling the support of $f_r$ by Vorono\"i cells of diameter equal to the packing radius $\tau(\beta\mathcal{O}^t)$, we can bound the error in approximating the Riemann integral via the lattice sum on each individual cell by $\mathrm{Vol}(\mathcal{O}^t)\beta_p^{4t}\cdot C_r\cdot 2\tau(\beta\mathcal{O}^t)$.
For large enough $p,t$, we may estimate that the support of $f_r$ is covered by $\beta^{-4t}e\cdot \frac{\mathrm{Vol}(\mathbb{B}(r))}{\mathrm{Vol}(\mathcal{O}^t)}$ cells (with an error that is $o(\mathrm{Vol}(\mathcal{O}^t))$ as $t\to \infty$), so that we arrive at the total error estimate \begin{equation} \Delta(p,t)\lesssim 2eC_r\cdot \beta_p\frac{\mathrm{Vol}(\mathbb{B}(r))}{\mathrm{Vol}(\mathcal{O}^t)}\cdot \sqrt{t}\cdot \tau(\mathcal{O}). \end{equation} We therefore obtain, for $r$ satisfying \eqref{eq:roftsize} and as $p,t$ become large, the bound $$\Delta(p,t)=O(t)\cdot p^{\frac{k-2t}{2t}}.$$ It now seems most convenient to take $k=t+1$, and we see that in particular the condition $t^2=o(p)$ suffices to guarantee, for any given $\varepsilon>0$, that $\Delta(p,t)<\varepsilon$ for large enough rank $t$. We have thus shown that for any $\varepsilon$ we can find $t$ large enough so that, under our assumptions on $p$, there exists $\Lambda\in \mathbb{L}_p$ with $\sum_{y\in\Lambda'}f_r(y)\leq (1-\varepsilon)\cdot 6$. The result now follows as in the proof of Theorem \ref{thm:improvedbounds}. \end{proof} We therefore conclude: \begin{corollary}\label{cor:computationalVance} Given any $0<\varepsilon<1$, for large enough $t$ a lattice $\tilde{\Lambda}$ in dimension $4t$ whose packing density satisfies $$\Delta(\tilde{\Lambda})\geq (1-\varepsilon)\cdot \frac{24t\zeta(4t)}{2^{4t}\cdot e(1-e^{-t})}$$ can be constructed with $e^{4t^2\log (t)(1+o(1))}$ bit operations. \end{corollary} \begin{proof} Given $\varepsilon$ and large enough $t$, it is easy to find a large enough prime $p$ so that the construction of Proposition \ref{prop:effectiveVance} applies. The corresponding family of lattices has $\vert \mathcal{C}_{t+1}\vert$ elements in the notation of Proposition \ref{prop:effectiveVance}. They are generated by vectors with coefficients polynomial in $t$. The cost of computing their density or shortest vector, as well as of computing their successive quaternionic minima (see Remark \ref{rem:effectiveMinkowski}), is thus small compared to the cost of enumerating the family, and by applying the effective version of Proposition \ref{prop:productminima} we find the desired lattice. \end{proof} \section*{Acknowledgements} We would like to thank Gauthier Leterrier, Martin Stoller and Maryna Viazovska for helpful conversations on the topics of this paper. We thank Matthew De Courcy-Ireland for insightful comments on a previous version of this manuscript. Nihar Gargava was funded by the Swiss National Science Foundation (SNSF), Project funding (Div. I-III), ``Optimal configurations in multidimensional spaces'', 184927. \newpage \begin{appendix} \section{Gram-Schmidt process for real semisimple algebras with positive involutions} \label{se:gs_process} Let $A$ be a real semisimple algebra and let $(\ )^{*}: A\rightarrow A$ be a positive involution. \subsection{Orthogonality in $A^k$} Let $k \ge 1$. Fix a positive definite symmetric element $a \in A$. Let us define \begin{align*} \langle \ , \ \rangle_{A} : A^{k} \times A^{k} & \rightarrow A \\ \langle x,y\rangle_{A} & = \sum_{i=1}^{k} x_{i} a y_{i}^*. \end{align*} This form is $\mathbb{R}$-bilinear but not necessarily $A$-bilinear, since $A$ may not be commutative. We can, however, define a real positive definite symmetric bilinear form on $A^{k}$ given by \begin{align*} \langle x,y\rangle_{\mathbb{R}} = \T\left( \langle x, y\rangle_{A}\right). \end{align*} We assume that $A^{k}$ carries this real inner product. \begin{lemma} The following properties are satisfied by $\langle \ , \ \rangle_{A}$.
\begin{enumerate} \item For $x,y \in A^{k}$, we have \begin{align*} \langle x,y\rangle^{*}_{A} = \langle y,x\rangle_{A}. \end{align*} \item For $x \in A^{k} \setminus \{ 0\}$, $\langle x,x\rangle_{A}$ is symmetric and positive definite in $A$, and hence is a unit in $A$. \item For $x,y \in A^{k}$ and $\alpha \in A$, we have \begin{align*} \langle \alpha x,y\rangle_{A} & = \alpha\langle x,y\rangle_{A},\\ \langle x, \alpha y \rangle_{A} & = \langle x ,y\rangle_{A} \alpha^{*}. \end{align*} \item Suppose $x \in A^{k}\setminus \{ 0\}$. Then there exists some $b \in A$ such that $\langle bx,bx\rangle_{A} = 1_{A}$. \end{enumerate} \end{lemma} \begin{proof} All of them are straightforward verifications. For the last one, we must find a $b \in A$ such that $\langle x,x\rangle_{A}^{-1} = b^{*}b$. See \cite[Corollary 46]{gargava2021lattice}. \end{proof} \begin{definition} We say that two vectors in $A^{k}$ are orthogonal if the above product between them is $0$. We call a set of vectors $\{ x_1,x_2,\cdots,x_m\} \subseteq A^{k}$ orthonormal if $\langle x_i,x_{j}\rangle_{A} = \delta_{ij} 1_{A}$. \end{definition} Note that if $\langle x,y\rangle_{A}= 0$, then $\langle \alpha_1x,\alpha_2y\rangle_{A}=0$ for all $\alpha_1,\alpha_2 \in A$ and $x,y \in A^{k}$. \begin{prop} Suppose $x_1,x_2,\cdots,x_m \in A^{k}$ is an orthonormal set of vectors. Then $m \le k$ and $x_1,x_2,\cdots,x_m$ are free under the left action of $A$. If $m=k$, then the vectors $x_1,\cdots,x_k$ freely generate $A^{k}$ as a left $A$-module. \end{prop} \begin{proof} Observe that if $a_1,a_2,\cdots, a_m \in A$ are such that \begin{align*} & a_1 x_1 + a_2 x_2+ \dots + a_m x_m = 0 , \end{align*} then we can just evaluate \begin{align*} \langle a_1 x_1 + a_2 x_2+ \dots + a_m x_m ,x_i \rangle_{A} = a_i \langle x_i, x_i\rangle_{A} = a_i 1_{A} = 0. \end{align*} Hence, we have that each $a_i=0$. Using this, we get that \begin{align*} A^{m} & \rightarrow A^{k} \\ (a_1,a_2,\ldots,a_m) & \mapsto a_1x_1 + \cdots + a_m x_m \end{align*} is an injective $\mathbb{R}$-linear map. Hence, by dimensional constraints, $m \le k$, and $m=k$ implies that it is an isomorphism. \end{proof} \ \begin{remark} If we were to define \begin{align*} \langle \ , \ \rangle_{A} : A^{k} \times A^{k} & \rightarrow A \\ \langle x,y\rangle & = \sum_{i=1}^{k} x_{i}^{*} y_{i}, \end{align*} we would reach the same proposition as above, but for right actions instead of left ones. \end{remark} \begin{corollary} Suppose $x_1,x_2,\cdots,x_k \in A^{k}$ is an orthonormal set of vectors. Then for any $v = a_1 x_1+ a_2 x_2 + \dots + a_k x_k$ with $\{ a_{i}\}_{i=1}^{k} \subseteq A$, we have \begin{align*} \langle v,v\rangle_{\mathbb{R}} = \T(\langle v,v\rangle_{A}) = \sum_{i=1}^{k} \T(a_{i}^{*}a a_{i}). \end{align*} \end{corollary} \subsection{Gram-Schmidt algorithm} The following algorithm is an analogue of the Gram-Schmidt process. Suppose $v_1,v_2,\cdots,v_k \in A^{k}$ are vectors that freely generate $A^{k}$ as a left $A$-module. We claim that, using these vectors, it is possible to create an orthonormal set of basis vectors $x_1,x_2,\cdots,x_k$. First, define for $u,v \in A^{k}$, \begin{align*} \pr(u, v)= \begin{cases} {\langle v,u\rangle_{A}}\langle u,u\rangle_{A}^{-1} u & \text{ if }u \neq 0, \\ 0 & \text{if } u = 0. \end{cases} \end{align*} This has the property that $\langle \pr(u,v),u\rangle_{A} = \langle v,u\rangle_{A}$, so that $v-\pr(u,v)$ is orthogonal to $u$. Generate vectors $x_1',x_2',\cdots,x_k'$ as follows.
\begin{align*} x_1'& = {v_1}, \\ x_{2}'& = v_{2} - \pr(x_1',v_2), \\ x_{3}'& = v_{3} - \pr(x_1',v_3) - \pr(x_2',v_3),\\ x_{4}'& = v_{4} - \pr(x_1',v_4) - \pr(x_2',v_4) - \pr(x_3',v_4),\\ &\ \ \vdots \end{align*} One can prove that $\langle x_i',x_j'\rangle_{A} =0$ for $i > j$ by induction, ordering the pairs $(i,j)$ as $(2,1),(3,1),\cdots$, $(k,1),(3,2),(4,2),\cdots$, $(k,2),(4,3),\cdots$. Now choose $b_i$ such that $x_i = b_i x'_i$ satisfies $\langle x_i,x_i\rangle_{A} = 1_{A}$. Hence, we are done. \begin{definition} Given a real semisimple algebra $A$ with a positive involution $(\ )^{*}$, we call the above method of generating $x_1,x_2,\cdots,x_m$ from vectors $v_1,v_2,\cdots,v_m$ the Gram-Schmidt algorithm. If $\{ v_i\}_{i=1}^{m}$ are free under left $A$-multiplication, then so are $\{ x_i\}_{i=1}^{m}$. \end{definition} \end{appendix}
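To make the algorithm of the appendix concrete, here is a minimal numerical sketch (in Python with NumPy; purely illustrative and not part of the formal development) for $A=\mathbb{H}$, the Hamilton quaternions, with the conjugation involution and $a=1_A$, so that $\langle x,y\rangle_A=\sum_i x_i y_i^*$; the random test basis is generically free over $\mathbb{H}$:

\begin{verbatim}
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as length-4 arrays (a, b, c, d)
    a, b, c, d = p
    e, f, g, h = q
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def inner_A(x, y):
    # <x, y>_A = sum_i x_i y_i^* for x, y in H^k (arrays of shape (k, 4))
    s = np.zeros(4)
    for xi, yi in zip(x, y):
        s = s + qmul(xi, qconj(yi))
    return s

def gram_schmidt(vs):
    # Left-H-orthonormalize a free basis of H^k as in the appendix
    xs = []
    for v in vs:
        w = v.copy()
        for x in xs:
            c = inner_A(w, x)  # <w, x>_A <x, x>_A^{-1}, with <x, x>_A = 1 already
            w = w - np.array([qmul(c, xj) for xj in x])
        n = inner_A(w, w)[0]   # <w, w>_A is a positive real multiple of 1_H
        xs.append(w / np.sqrt(n))  # b = n^{-1/2} satisfies b <w, w>_A b^* = 1_H
    return xs

rng = np.random.default_rng(0)
vs = rng.standard_normal((3, 3, 4))  # generically a free left H-basis of H^3
xs = gram_schmidt(vs)
for i in range(3):
    for j in range(3):
        print(i, j, np.round(inner_A(xs[i], xs[j]), 10))  # ~ delta_ij * 1_H
\end{verbatim}

Since each $x_j$ is normalized as soon as it is produced, the projection coefficient $\langle w,x_j\rangle_A\langle x_j,x_j\rangle_A^{-1}$ reduces to $\langle w,x_j\rangle_A$, which is what the sketch uses.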
\section{Related Works} \label{sec2} \paragraph{\fcircle[fill=black]{3pt} Neural Ordinary Differential Equations} As pointed out in \citep{chen2018neural}, the NODEs can be regarded as the continuous version of the ResNets having an infinite number of layers \citep{he2016deep}. The residual block of the ResNets is mathematically written as $\mathbf{z}_{t+1} = \mathbf{z}_t + f(\mathbf{z}_t, \theta_t)$, where $\mathbf{z}_t$ is the feature at the $t$-th layer, and $f(\cdot, \cdot)$ is a dimension-preserving nonlinear function parametrized by a neural network with $\theta_t$, the parameter vector to be learned. Notably, such a transformation can be viewed as a special case of the following discrete-time equation: \begin{equation} \label{eq1} \frac{\mathbf{z}_{t+1} - \mathbf{z}_t }{\Delta t} = f(\mathbf{z}_t, \theta_t) \end{equation} with $\Delta t=1$. In other words, as $\Delta t$ in \eqref{eq1} is taken as an infinitesimal increment, the ResNets can be regarded as the Euler discretization of the NODEs, which read: \begin{equation} \label{eq2} \frac{d\mathbf{z}(t) }{d t} = f(\mathbf{z}(t), \theta). \end{equation} Here, the shared parameter vector $\theta$, which unifies the vectors $\theta_t$ of all layers in Eq.~\eqref{eq1}, is injected into the vector field across the finite time horizon, to achieve parameter efficiency of the NODEs. As such, the NODEs can be used to approximate some unknown function $F: \mathbf{x}\mapsto F(\mathbf{x})$. Specifically, the approximation is achieved in the following manner: one constructs a flow of the NODEs starting from the initial state $\mathbf{z}(0)=\mathbf{x}$ and ending at the final state $\mathbf{z}(T)$ with $\mathbf{z}(T) \approx F(\mathbf{x})$. Thus, a standard framework of the NODEs, which takes the input as its initial state and the feature representation as the final state, is formulated as: \begin{equation} \label{eq3} \left\{ \begin{aligned} \mathbf{z}(T) & = \mathbf{z}(0) + \int_0^T f(\mathbf{z}(t), \theta) dt \\ & = \mbox{ODESolve}(\mathbf{z}(0), f,0,T,\theta), \\ \mathbf{z}(0) & = \mbox{input}, \end{aligned} \right. \end{equation} where $T$ is the final time and the solution of the above ODE can be numerically obtained by a standard ODE solver using adaptive schemes. Indeed, a supervised learning task can be formulated as: \begin{equation} \label{eq4} \begin{array}{c} \min_{\theta} L(\mathbf{z}(T)),\\ \mbox{~s.t. Eq.~(\ref{eq2}) holds for any~} t \in[0, T], \end{array} \end{equation} where $L(\cdot)$ is a predefined loss function. To optimize the loss function in \eqref{eq4}, we need to calculate the gradient with respect to the parameter vector. This calculation can be implemented with memory cost of order $\mathcal{O}(1)$ by employing the adjoint sensitivity method \citep{chen2018neural, pontryagin1962mathematical} as: \begin{equation} \frac{dL}{d\theta} = -\int_{T}^0 \mathbf{a}(t)^{\top} \frac{\partial f(\mathbf{z}(t),\theta)}{\partial \theta} dt, \end{equation} where $\mathbf{a}(t):=\frac{\partial L}{\partial \mathbf{z}(t)}$ is called the \textit{adjoint}, representing the gradient of the loss with respect to the hidden state $\mathbf{z}(t)$ at each time point $t$. \noindent \paragraph{\fcircle[fill=black]{3pt} Variants of NODEs} As shown in \citep{dupont2019augmented}, there are still some typical classes of functions that the NODEs cannot represent.
For instance, the \textit{reflections}, defined by $g_{1{\rm d}}:\mathbb{R}\rightarrow \mathbb{R}$ with $g_{1{\rm d}}(1)=-1$ and $g_{1{\rm d}}(-1)=1$, and the \textit{concentric annuli}, defined by $g_{2{\rm d}}:\mathbb{R}^2\rightarrow \mathbb{R}$ with \begin{equation} \label{eq6} g_{2{\rm d}}(\mathbf{x}) = \left\{ \begin{array}{ll} -1, & \mbox{if~} \|\mathbf{x}\|\leq r_1, \\ 1, & \mbox{if~} r_2\leq \|\mathbf{x}\|\leq r_3, \end{array} \right. \end{equation} where $\|\cdot\|$ is the $L_2$ norm, and $0<r_1<r_2<r_3$. Such successful constructions of the two counterexamples are attributed to the fact that the feature mapping from the input (i.e., the initial state) to the features (i.e., the final state) by the NODEs is a homeomorphism. Thus, the features always preserve the topology of the input domain, which mathematically results in the impossibility of separating the two connected regions in \eqref{eq6}. A few practical strategies have been timely proposed to address this problem. For example, proposed creatively in \citep{dupont2019augmented} was an augmentation of the input domain into a higher-dimensional space, which makes it possible for more complicated dynamics to emerge in the Augmented NODEs. Very recently, articulated in \citep{zhu2021neural} was a novel framework of the NDDEs to address this issue without augmentation. Actually, such a framework was inspired by a broader class of functional differential equations, named delay differential equations (DDEs), where a time delay is introduced \citep{erneux2009applied}. For example, a simple form of the NDDEs reads: \begin{equation} \label{eqNDDEold} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} &= f(\mathbf{z}(t-\tau), \theta), ~t\in[0, T],\\ \mathbf{z}(t)&=\phi(t)=\mathbf{x}, ~t\in[-\tau, 0], \end{aligned} \right. \end{equation} where $\tau$ is the delay effect and $\phi(t)$ is the initial function. Hereafter, we assume $\phi(t)$ to be a constant function, i.e., $\phi(t)\equiv \mathbf{x}$ with input $\mathbf{x}$. Due to the infinite-dimensional nature of the NDDEs, crossing orbits can exist in the lower-dimensional phase space. More significantly, as demonstrated in \cite{zhu2021neural}, the NDDEs have the capability of universal approximation with $T=\tau$ in~\eqref{eqNDDEold}. \noindent \paragraph{\fcircle[fill=black]{3pt} Control theory} Training a continuous-depth neural network can be regarded as a task of solving an optimal control problem with a predefined loss function, where the parameters in the network act as the controller \citep{pontryagin1962mathematical, chen2018neural, weinan2019mean}. Thus, developing a new sort of continuous-depth neural network is intrinsically equivalent to designing an effective controller. Such a controller could be in open-loop or closed-loop form. Therefore, from a viewpoint of control, all the existing continuous-depth neural networks can be addressed as control problems; however, these problems require different forms of controllers. Specifically, when we consider the continuous-depth neural network $\frac{dx(t)}{dt} = f(x(t), u(t), t)$, $u(t)$ is regarded as a controller. For example, $u(t)$ treated as constant parameters yields the network frameworks proposed in \citep{chen2018neural}, $u(t)$ as a data-driven controller yields a framework in \citep{massaroli2020dissecting}, and other forms of $u(t)$ bring more fruitful network structures \citep{chalvidal2020go, li2020scalable, kidger2020neural,zhu2021neural}.
Here, the mission of this work is to design a delayed feedback controller that renders a continuous-depth neural network more effective in coping with synthetic and/or real-world datasets. \section{Neural Piecewise-Constant Delay Differential Equations} \label{sec3} In this section, we propose a new framework of continuous-depth neural networks with delay (i.e., the NPCDDEs) by an articulated integration of some tools from machine learning and dynamical systems: the NDDEs and the piecewise-constant DDEs \citep{1988A, cooke1991survey, 1992On}. We first transform the delay of the NDDEs in \eqref{eqNDDEold} into a form of piecewise-constant delay \citep{1988A, cooke1991survey, 1992On}, so that we have \begin{equation} \label{eqNDDE} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} &= f(\mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \theta), t\in[0, T],\\ \mathbf{z}(0)&=\mathbf{x}, \end{aligned} \right. \end{equation} where the final time $T=n\tau$ and $n$ is supposed to be a positive integer hereafter. We note that the NPCDDEs in \eqref{eqNDDE} with $T=\tau$ are exactly the NDDEs in~\eqref{eqNDDEold}, which, as mentioned before, possess the capability of universal approximation. As the vector field of the NPCDDEs in \eqref{eqNDDE} is constant in each interval $[k\tau, k\tau+\tau]$ for $k=0,1,\ldots,n-1$, the simple NPCDDEs in \eqref{eqNDDE} can be treated as a discrete-time dynamical system: \begin{equation} \mathbf{z}(k+1) = \mathbf{z}(k) + \tau f(\mathbf{z}(k),\theta) := \hat{F}(\mathbf{z}(k),\theta). \end{equation} Actually, this iterative property of dynamical systems enables the NPCDDEs in \eqref{eqNDDE} to learn functions with specific \textit{structures} more effectively. For example, suppose that the map $F(x)=c^2x$ with a large real number $c>0$ is to be learned and that the vector field is set as \begin{equation} \label{linearmodel} f(\mathbf{z}( \left\lfloor \frac{t}{\tau} \right\rfloor \tau), \theta) := a \mathbf{z}( \left\lfloor \frac{t}{\tau} \right\rfloor \tau) + b \end{equation} with $\tau=1$ and the initial parameters $a=b=0$ before training. Then, we only use $T=2\tau$ as the final time for the NPCDDEs in \eqref{eqNDDE} and require $x(\tau)$ to realize the linear function $x\mapsto cx$ with the smaller coefficient $c$ (or, equivalently, require $f$ to learn $x\mapsto (c-1)x$). As such, the feature $x(T)\approx c\,x(\tau)\approx c^2x$ naturally approximates the above-set function $F(x)$, because $F(x)$ can be simply represented as two iterations of the function $\hat{F}(x)=cx$, i.e., $\hat{F}\circ \hat{F} (x)=F(x)$. We experimentally show this structural representation power in Fig.~\ref{figpoly}, where the training loss of the NPCDDEs in \eqref{eqNDDE} with $T=2\tau$ decreases faster than that with only $T=\tau$. \begin{figure}[htb] \begin{center} \centerline{\includegraphics[width=0.47\textwidth]{ax05.pdf}} \caption{The training processes for fitting the function $F(x)=16x$ using the NPCDDEs in \eqref{eqNDDE}, respectively, with the final times $T=\tau$ and $T=2\tau$.
The training losses (left), and the evolution of the two parameters $a$ (middle) and $b$ (right), as defined in \eqref{linearmodel}, during the training processes.} \label{figpoly} \end{center} \vskip -0.3in \end{figure} Given the above example, the following question arises naturally: For any given function $x\mapsto F(x)$, does there exist a function $x\mapsto \hat{F}(x)$ such that the \textit{functional} equation \begin{equation} \label{functional_eq} \hat{F}\circ \hat{F} (x)=F(x) \end{equation} holds? Unfortunately, the answer is no, which is rigorously stated in the following proposition. \begin{prop} \label{eq:x^2} \citep{Radovanovic2007FunctionalE} There does not exist any function $f:\mathbb{R}\rightarrow \mathbb{R}$ such that $f(f(x)) = x^2-2$ for all $x \in\mathbb{R}$. \end{prop} As shown in Proposition \ref{eq:x^2}, although the iterative property of the NPCDDEs in \eqref{eqNDDE} allows the effective learning of functions with certain structures, the solution of the functional equation \eqref{functional_eq} does not always exist. This thus implies that \eqref{eqNDDE} cannot represent a wide class of functions \citep{rice1980f, 2011Solution}. To further elaborate this point, we use $T=\tau$ and $T=2\tau$, respectively, for the NPCDDEs in \eqref{eqNDDE} to model the function $g_{2{\rm d}}(\mathbf{x})$ as defined in \eqref{eq6}. Clearly, Fig.~\ref{figcircle1} shows that the training processes for fitting the concentric annuli using \eqref{eqNDDE} with the two delays are different. Contrary to the preceding example, the training loss of the one with $T=\tau$ decreases much faster than that of the one with $T=2\tau$. In order to carry the capability of universal approximation over from the NDDEs to the current framework, we modify the NPCDDEs in \eqref{eqNDDE} by adding a skip connection from the time $0$ to the final time $2\tau$ in the following manner: \begin{equation} \label{eqDNDDE} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} &= f(\mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \mathbf{z}(\left\lfloor \frac{t-\tau}{\tau} \right\rfloor \tau), \theta), ~t\in[0, 2\tau],\\ \mathbf{z}(-\tau)&=\mathbf{z}(0)=\mathbf{x}. \end{aligned} \right. \end{equation} As can be seen from Fig.~\ref{figcircle1}, the training loss of the modified NPCDDEs in \eqref{eqDNDDE} decreases markedly faster than that of the NPCDDEs in \eqref{eqNDDE} with $T=2\tau$ and that of the NODEs. Also, it is slightly faster than that of the one with $T=\tau$. Moreover, the dynamical behaviors of the feature spaces during the training processes using different neural frameworks are shown in Fig.~\ref{figcircle2}. In particular, the NPCDDEs in \eqref{eqDNDDE} are the first among these models to separate the two clusters, at the $3$rd training epoch, which is beyond the ability of the baselines. \begin{figure}[htb] \begin{center} \centerline{\includegraphics[width=0.45\textwidth]{circle_loss05.pdf}} \caption{The training processes for fitting the function $g_{2{\rm d}}(\mathbf{x})$. (a) The training losses, respectively, using the NODEs, the NPCDDEs in \eqref{eqNDDE} with $n=1$ and $\tau=1$, the NPCDDEs in \eqref{eqNDDE} with $n=2$ and $\tau=0.5$, and the special NPCDDEs in \eqref{eqDNDDE} with $\tau=0.5$. (b) A part of the training dataset for visualization. The flows mapping from the initial states to the target states, respectively, by the NODEs (c), the NPCDDEs in \eqref{eqNDDE} with $n=2$ and $\tau=0.5$ (d), the NPCDDEs in \eqref{eqNDDE} with $n=1$ and $\tau=1$ (e), and the special NPCDDEs in \eqref{eqDNDDE} with $\tau=0.5$ (f).
The red (resp. blue) points and the yellow (resp. cyan) points are the initial states and the final states of all the flows, respectively.} \label{figcircle1} \end{center} \vskip -0.3in \end{figure} \begin{figure}[htb] \begin{center} \centerline{\includegraphics[width=0.45\textwidth]{circletrain04.pdf}} \caption{The dynamical behaviors of the feature spaces during the training processes (in total $6$ epochs, from the left column to the right column) for fitting $g_{2{\rm d}}(\mathbf{x})$ using different models: the NODEs (the top row), the NPCDDEs in \eqref{eqNDDE} with $n=1$ and $\tau=1$ (the second row), the NPCDDEs in \eqref{eqNDDE} with $n=2$ and $\tau=0.5$ (the third row), and the special NPCDDEs in \eqref{eqDNDDE} with $\tau=0.5$ (the bottom row).} \label{figcircle2} \end{center} \vskip -0.3in \end{figure} \begin{figure*}[t] \begin{center} \centerline{\includegraphics[width=0.98\textwidth]{Dense_NDDE05.pdf}} \caption{Sketches of different kinds of continuous-depth neural networks, including the NODEs, the NDDEs, and our newly proposed framework, the NPCDDEs. Specifically, $\phi(t)\equiv \mathbf{z}(0)$, as a constant function, is the initial function for the NDDEs. For the NPCDDEs in \eqref{eqDNDDEgenral}, at each time point in the interval $[k\tau, k\tau+\tau]$, the time dependencies are unaltered, different from the dynamical delay in the NDDEs.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure*} More importantly, the following theorem demonstrates that the NPCDDEs in \eqref{eqDNDDE} are universal approximators; its proof is provided in the supplementary material. \begin{thm} \label{thm1} (Universal approximation of the NPCDDEs in \eqref{eqDNDDE}) Consider the NPCDDEs in \eqref{eqDNDDE} of dimension $n$. If, for any given function $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$, there exists a neural network $g(\mathbf{x}, \theta)$ that can approximate the map $G(\mathbf{x}) = \frac{1}{2\tau} [F(\mathbf{x}) - \mathbf{x}]$, then the NPCDDEs can learn the map $\mathbf{x} \mapsto F(\mathbf{x})$. In other words, we have $\mathbf{z}(T)\approx F(\mathbf{x})$ provided that both the initial states $\mathbf{z}(-\tau)$ and $\mathbf{z}(0)$ are set as $\mathbf{x}$, the input. \end{thm} Notice that, for the NPCDDEs in \eqref{eqNDDE} and the modified NPCDDEs in \eqref{eqDNDDE}, the vector fields remain constant over each period of length $\tau$. More generally, we can extend these models by adding the dependency on the current state, enlarging the final time, and introducing more skip connections from previous times to the current time. As such, a more generic framework of the NPCDDEs reads: \begin{equation} \label{eqDNDDEgenral} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} = & f(\mathbf{z}(t), \mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \mathbf{z}(\left\lfloor \frac{t-\tau}{\tau} \right\rfloor \tau), ..., \\ &\mathbf{z}(\left\lfloor \frac{t-n\tau}{\tau} \right\rfloor \tau), \theta), t\in[0, T],\\ \mathbf{z}(-n\tau)=&\cdots=\mathbf{z}(-\tau)=\mathbf{z}(0)=\mathbf{x}, \end{aligned} \right. \end{equation} where $T=n\tau$ with $n$ being a positive integer. Analogous to the proof of Theorem \ref{thm1}, the universal approximation property of the NPCDDEs in \eqref{eqDNDDEgenral} can be validated (see Proposition~\ref{prop2}). \begin{prop} \label{prop2} The NPCDDEs in \eqref{eqDNDDEgenral} have the capability of universal approximation.
\end{prop} To further improve the modeling capability of the NPCDDEs, we propose an extension of the NPCDDEs without parameter sharing, which reads: \begin{equation} \label{eqDNDDEgenralunshared} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} = & f(\mathbf{z}(t), \mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \mathbf{z}(\left\lfloor \frac{t-\tau}{\tau} \right\rfloor \tau), ..., \\ &\mathbf{z}(0), \theta_k), t\in[k\tau, k\tau+\tau],\\ \mathbf{z}(0)=&\mathbf{x}, \end{aligned} \right. \end{equation} where $\theta_k$ is the parameter vector used in the time interval $[k\tau, k\tau+\tau]$ for $k=0,1,...,n-1$. For simplicity, we name such a model the unshared NPCDDEs (UNPCDDEs). This is as in the ResNets~\eqref{eq1}, a typical neural network, where the parameters of each layer are independent of those of the other layers. Moreover, the gradients of the loss with respect to the parameters of the UNPCDDEs in \eqref{eqDNDDEgenralunshared} are given in Theorem \ref{thm2}, whose proof is provided in the supplementary material. Setting $\theta_k\equiv \theta$ in Theorem \ref{thm2} straightforwardly enables us to compute the gradients of the NPCDDEs in \eqref{eqDNDDEgenral}. \begin{thm} \label{thm2} (Backward gradients of the UNPCDDEs in \eqref{eqDNDDEgenralunshared}) Consider the loss function $L(\mathbf{z}(T))$ with the final time $T=n\tau$. Then, we have \begin{equation} \label{parasgrad} \frac{d L}{d \theta_k} = \int_{k\tau+\tau}^{k\tau} -\mathbf{a}(t)^{\top} \frac{\partial f}{\partial \theta_k} d t, \end{equation} where the dynamics of the adjoint can be specified as: \begin{equation} \label{adjoint} \left\{ \begin{aligned} \frac{d \mathbf{a}(t)}{d t} & = -\mathbf{a}(t)^{\top} \frac{\partial f}{\partial \mathbf{z}(t)}, ~t\in[k\tau, k\tau+\tau]\\ \mathbf{a}(l\tau)& =\mathbf{a}(l\tau) + \int_{k\tau+\tau}^{k\tau} -\mathbf{a}(t)^{\top} \frac{\partial f}{\partial \mathbf{z}(l\tau)}dt, \\ & ~l=0,1,\cdots,k,\\ \end{aligned} \right. \end{equation} where the backward initial condition is $\mathbf{a}(T) = \frac{\partial L(\mathbf{z}(T))}{\partial \mathbf{z}(T)}$ and $k=n-1,n-2,\cdots,0$. \end{thm} We note that in \eqref{adjoint}, due to the skip connections, analogous to the DenseNets \citep{huang2017densely}, the gradients are accumulated from multiple paths through the reversed skip connections in the backward direction, which likely allows the parameters to be optimized sufficiently. Additionally, if the loss function depends on the states at different time points, viz., the new loss function $L(\mathbf{z}(t_0), \mathbf{z}(t_1),..., \mathbf{z}(t_N))$, we need to update the adjoint state instantly in the backward direction by adding the partial derivative of the loss at each observational time point, viz., $\mathbf{a}(t_i) =\mathbf{a}(t_i)+\frac{\partial L}{\partial \mathbf{z}(t_i)}$. For the specific tasks of classification and regression, refer to the section of \textbf{Experiments}. \section{Major Properties of NPCDDEs} \label{sec4} The NPCDDEs in \eqref{eqDNDDEgenral} and the UNPCDDEs in \eqref{eqDNDDEgenralunshared} generalize the ResNets and the NODEs as well. Also, they have strong connections with the Augmented NODEs. Moreover, the discontinuous nature of the NPCDDEs enables us to model complex dynamics beyond the NODEs, the Augmented NODEs, and the NDDEs. Lastly, the NPCDDEs are shown to enjoy advantages in computation over the NDDEs. In the sequel, we discuss these properties.
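Before doing so, we give a minimal computational sketch of the forward pass underlying Theorem \ref{thm2} (in Python with PyTorch). The small MLP vector field, the Euler discretization, the step count, and the padding of the delayed arguments with $\mathbf{z}(0)=\mathbf{x}$ are our own illustrative assumptions, not the architectures used in the experiments; reverse-mode automatic differentiation then returns the gradients $dL/d\theta_k$ that Theorem \ref{thm2} expresses via the adjoint dynamics.

\begin{verbatim}
import torch

n, tau, steps, dim = 3, 1.0, 50, 2
# One vector field f(., theta_k) per interval [k*tau, (k+1)*tau); its input is
# z(t) concatenated with the n+1 delayed states, frozen on the interval.
fs = [torch.nn.Sequential(torch.nn.Linear((n + 2) * dim, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, dim)) for _ in range(n)]

def unpcdde_forward(x):
    z = x
    past = [x]                  # past[m] = z(m * tau); z(0) = x
    h = tau / steps
    for k in range(n):
        # delayed arguments z((k - j) * tau), j = 0..n, padded with z(0) = x
        frozen = [past[max(k - j, 0)] for j in range(n + 1)]
        for _ in range(steps):  # Euler steps; delays are constant here
            z = z + h * fs[k](torch.cat([z] + frozen, dim=-1))
        past.append(z)          # record z((k + 1) * tau)
    return z

x = torch.randn(8, dim)
loss = unpcdde_forward(x).pow(2).sum()
loss.backward()                 # populates dL/dtheta_k for every interval k
print([p.grad.norm().item() for p in fs[0].parameters()])
\end{verbatim}

Note that the stored states $past[m]$ stay connected to the computational graph, so the gradients accumulate over the multiple backward paths created by the skip connections, mirroring the update of $\mathbf{a}(l\tau)$ in \eqref{adjoint}.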
\noindent \paragraph{\fcircle[fill=black]{3pt} Both the ResNets and the NODEs are special cases of the UNPCDDEs in \eqref{eqDNDDEgenralunshared}.} We emphasize that any dimension-preserving neural networks (multi-layer residual blocks) are special cases of the UNPCDDEs. Actually, one can enforce $\mathbf{z}(t), \mathbf{z}(\left\lfloor \frac{t-\tau}{\tau} \right\rfloor \tau), \mathbf{z}(\left\lfloor \frac{t-2\tau}{\tau} \right\rfloor \tau), \cdots, \mathbf{z}(0)$ to be dummy variables in the vector field of \eqref{eqDNDDEgenralunshared} by assigning the weights connected to these variables to be zero, except for the variable $\mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau)$. Moreover, letting $\tau=1$ results in the very simple unshared NPCDDEs: \begin{equation} \label{eqDNDDEgenralsimple} \frac{d\mathbf{z}(t) }{d t} = f(\mathbf{z}(k), \theta_k), ~t\in[k, k+1],~\mathbf{z}(0)=\mathbf{x}. \end{equation} Since the vector field of \eqref{eqDNDDEgenralsimple} remains constant in each interval $[k, k+1]$, we have \begin{equation} \label{eqDNDDEgenralsimple_res} \mathbf{z}(k+1) = \mathbf{z}(k) + f(\mathbf{z}(k), \theta_k),~ \mathbf{z}(0)=\mathbf{x}, \end{equation} which is exactly the form of the ResNets in \eqref{eq1}. In addition, if we treat $\mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \mathbf{z}(\left\lfloor \frac{t-\tau}{\tau} \right\rfloor \tau), ..., \mathbf{z}(0)$ as dummy variables in the vector field of \eqref{eqDNDDEgenralunshared} and set $\theta_k\equiv \theta$, the UNPCDDEs in \eqref{eqDNDDEgenralunshared} indeed become the typical NODEs. Interestingly, though the NODEs are inspired by the ResNets, they are not equivalent to each other because of the limited modeling capability of the NODEs; the UNPCDDEs in \eqref{eqDNDDEgenralunshared} provide a more general framework encompassing the two. \noindent \paragraph{\fcircle[fill=black]{3pt} Connection to Augmented NODEs} The NPCDDEs in \eqref{eqDNDDEgenral} can be viewed as a particular form of the Augmented NODEs: \begin{equation} \label{eqDNDDEgenralaug} \left\{ \begin{aligned} \frac{d\mathbf{z}(t) }{d t} &= f(\mathbf{z}(t), \mathbf{z}_0(t), \mathbf{z}_1(t), ..., \mathbf{z}_n(t), \theta), t\in[0, T],\\ \frac{d\mathbf{z}_0(t)}{d t} &= \mathbf{0}, \mathbf{z}_0(t) =\mathbf{z}(\left\lfloor \frac{t}{\tau} \right\rfloor \tau), \\ &\cdots\\ \frac{d\mathbf{z}_n(t) }{d t} &= \mathbf{0}, \mathbf{z}_n(t) =\mathbf{z}(\left\lfloor \frac{t-n\tau}{\tau} \right\rfloor \tau), \\ \mathbf{z}(-n\tau)&=\cdots=\mathbf{z}(-\tau)=\mathbf{z}(0)=\mathbf{x}. \end{aligned} \right. \end{equation} Hence, we can apply the framework of the NODEs to cope with the NPCDDEs by solving the Augmented NODEs in \eqref{eqDNDDEgenralaug}. It is worthwhile to emphasize that the Augmented NODEs in \eqref{eqDNDDEgenralaug} are not trivially equivalent to the traditional Augmented NODEs developed in \citep{dupont2019augmented}. In fact, the dynamics of $\mathbf{z}_i(t)$ in \eqref{eqDNDDEgenralaug} are piecewise-constant (but $\mathbf{z}(t)$ is continuous) and thus \textit{discontinuous} at each time instant $k\tau$, while the traditional Augmented NODEs still belong to the framework of the NODEs, whose dynamics evolve continuously. The benefits of discontinuity are specified in the following.
\noindent \paragraph{\fcircle[fill=black]{3pt} Discontinuity of the piecewise-constant delay(s)} Notice that $\lfloor \cdot \rfloor$ used in the piecewise-constant delay(s) is a discontinuous function, which makes the first-order derivative of the solution discontinuous at each key time point (i.e., at integer multiples of the time delay). This characteristic overcomes a huge limitation, the homeomorphism (continuity) property of the trajectories produced by the NODEs, and thus enhances the flexibility of the NPCDDEs in handling a wide range of complex dynamics (e.g., jumping derivatives and chaos evolving in a lower-dimensional space). We will validate this advantage in the section of \textbf{Experiments}. Additionally, the simple Euler scheme for the ODEs in \eqref{eq1} is actually a special PCDDE: $\frac{d\mathbf{z}(t) }{d t} = f(\mathbf{z}(\lfloor \frac{t}{\Delta t}\rfloor \Delta t))$ \citep{cooke1991survey}. Based on this discontinuous nature, the approximation of the DDEs by the PCDDEs has been validated in \citep{cooke1991survey}. Finally, such discontinuous settings can be seen as typical forms of the discontinuous control strategies that are frequently used in control problems \citep{evans1983introduction, lewis2012optimal}. Actually, discontinuous control strategies can bring benefits in time and energy consumption \citep{sun2017closed}. \noindent \paragraph{\fcircle[fill=black]{3pt} Computation advantages of NPCDDEs over NDDEs} For solving the conventional NDDEs in \eqref{eqNDDEold}, we need to recompute the delayed states in time using an appropriate ODE solver \citep{zhu2021neural}, which requires $\mathcal{O}(n)$ memory and $\mathcal{O}(nK)$ computation, where $K$ is the adaptive depth of the ODE solver. On the contrary, for the NPCDDEs in \eqref{eqDNDDEgenral} and the UNPCDDEs in \eqref{eqDNDDEgenralunshared}, the delays are piecewise-constant, and thus such recomputation is not needed. As a result, for the NPCDDEs (or the UNPCDDEs), the memory and computational costs are approximately of orders $\mathcal{O}(n)$ and $\mathcal{O}(K)$, respectively. Thus, the computational cost of the NPCDDEs is lower than that of the NDDEs. \begin{figure*}[htb] \vskip 0.0in \begin{center} \includegraphics[width=15cm]{Population1d02.pdf} \end{center} \caption{The training losses and the test losses for the piecewise-constant delay population dynamics \eqref{PCDDE1d} with the growth parameter $a=2.0$ (the oscillation regime, top) and $a=3.2$ (the chaos regime, bottom), respectively, by using the NPCDDEs, the NDDEs, the NODEs, and the Augmented NODEs (where the augmented dimension equals $1$). The panels in the first column depict the training losses. The panels from the second column to the fifth column depict the test losses over the time intervals, respectively, with the lengths $1$, $2$, $5$, and $10$.} \label{P_1d_fig} \vskip -0.1in \end{figure*} \section{Experiments} \label{sec5} \subsection{Population Dynamics: One-Dimensional PCDDE} \label{sec:toy_exp} We consider a 1-d PCDDE, which reads: \begin{equation} \label{PCDDE1d} \frac{dx(t)}{dt} = a x(t) (1-x(\lfloor t \rfloor)), ~x(0)=x_0\geq 0, \end{equation} where the growth parameter $a>0$ \citep{1988A, cooke1991survey}. The above PCDDE~\eqref{PCDDE1d} is analogous to the well-known first-order nonlinear logistic differential equation in one dimension, which describes the growth dynamics of a single population and can be written as: \begin{equation} \label{logeq} \frac{dx(t)}{dt} = a x(t) (1-x(t)), ~x(0)=x_0\geq 0.
\end{equation} Clearly, replacing the term $1-x(t)$ in the vector field of \eqref{logeq} by the term $1-x(\lfloor t \rfloor)$ results in the vector field of \eqref{PCDDE1d}. For each given $a>0$ and $x_0\geq 0$, if we consider the state $x(t)$ at the integer time instants $t=0,1,2,\cdots$, the corresponding discrete sequence, $x(0), x(1), x(2),\cdots$, satisfies the following discrete dynamical system: \begin{equation} \label{1dmapppp} x(t+1) = x(t) e^{a(1-x(t))}, ~t=0,1,2,\cdots. \end{equation} Thus, we study the function \begin{equation} \label{1dmap} f_a(x) = x e^{a(1-x)}, ~x\in[0,\infty). \end{equation} Direct computation indicates that the function $f_a(\cdot)$ in \eqref{1dmap} is a $C^1$-unimodal map on $[0,\infty)$, attaining its maximal value $f_a(x^*)$ at $x^*=\frac{1}{a}$. Thus, $[0, \frac{1}{a}]$ is a strictly increasing regime of this function, while $[\frac{1}{a}, \infty)$ is a strictly decreasing regime. As pointed out in \citep{1988A, cooke1991survey}, the discrete dynamical system \eqref{1dmapppp} can exhibit complex dynamics, including chaos. More precisely, at $a^*=3.11670...$, the solution of \eqref{1dmapppp} with the initial value $x(0)=x_0=\frac{1}{a^*}$ is periodic and asymptotically stable with period three, so that $f_{a^*}\circ f_{a^*} \circ f_{a^*} (x_0)=x_0$. This further implies that the map with the adjustable parameter $a$ admits period-doubling bifurcations and thus has chaotic dynamics, according to the well-known Sharkovskii theorem \citep{li1975period, 1988A, cooke1991survey}. Moreover, since the discrete dynamical system \eqref{1dmapppp} can be regarded as the system obtained by sampling the original PCDDE \eqref{PCDDE1d} at integer time instants, this PCDDE exhibits chaotic dynamics as well for $a$ in the vicinity of $a^*$. We thereby test the NODEs, the NDDEs, the NPCDDEs, and the Augmented NODEs on the piecewise-constant delay population dynamics \eqref{PCDDE1d}, respectively, with $a=2.0$ and $a=3.2$, which correspond to the two regimes of oscillation and chaos. Moreover, as can be seen from Fig.~\ref{P_1d_fig}, the training losses and the test losses of the NPCDDEs decrease significantly, compared to those of the other models. Additionally, in the oscillation regime, the losses of the NPCDDEs approach a very low level in both the training and test stages, while in the chaos regime, the NPCDDEs can achieve accurate short-term prediction. Naturally, it is hard to achieve long-term prediction because of the sensitive dependence on initial conditions in a chaotic system. Here, for training, we produce $100$ time series from different initial states in the time interval $[0, 3]$ with $0.1$ as the sampling period. Then, still with $0.1$ as the sampling period, we use the final states of the training data as the initial states for the $100$ test time series in the next time interval $[3, 13]$. More specific configurations for our numerical experiments are provided in the supplementary material. \subsection{Image datasets} \label{secimage} We conduct experiments on several image datasets, including MNIST, CIFAR10, and SVHN, by using the (unshared) NPCDDEs and the other baselines. In the experiments, we follow the setup in \citep{zhu2021neural}. For a fair comparison, we construct all models without augmenting the input space, and for the NDDEs, we assume that the initial function is constant (i.e., the initial function $\phi(t)=\mbox{input}$ for $t\leq 0$), which is different from the initial function used for the NDDEs in \citep{zhu2021neural}.
We note that our models are orthogonal to these models, since one can also augment the input space and model the initial state as the feature of an NODE within the framework of the NPCDDEs. Additionally, the number of parameters is almost the same for all models ($84$k params for MNIST, $107$k params for CIFAR10 and SVHN). Notably, the vector fields of all the models are parameterized with convolutional architectures \citep{dupont2019augmented, zhu2021neural}, where the arguments appearing in the vector fields are concatenated and then fed into the convolutional neural networks (CNNs). For example, for the NDDEs, the vector field is $f(\mbox{concat}(\mathbf{z}(t), \mathbf{z}(t-\tau)), \theta)$, where $\mbox{concat}(\cdot,\cdot)$ is a concatenation operator for two tensors along the channel dimension. Moreover, the initial states for these models are simply the images from the datasets. It is observed that our models outperform the baselines on these datasets. The detailed test accuracies are shown in Tab.~\ref{table}. For the specific training configurations of all the models and more experiments with augmentation \cite{dupont2019augmented}, please refer to the supplementary material. \begin{table}[t] \vskip -0.1in \caption{The test accuracies with their standard deviations over 5 realizations of different models on the image datasets. In the first column, the integer $i$ in NPCDDE$i$ or UNPCDDE$i$ means that $n=i$ for the NPCDDEs in \eqref{eqDNDDEgenral} or for the UNPCDDEs in \eqref{eqDNDDEgenralunshared}. The results for the NODEs and the NDDEs are reported in \citep{zhu2021neural}. The final time $T$ for all models is set to $1$. } \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{llll} \hline\hline \multicolumn{1}{c}{\bf ~} &\multicolumn{1}{c}{CIFAR10} &\multicolumn{1}{c}{MNIST} &\multicolumn{1}{c}{SVHN} \\ \hline NODE &$53.92\%\pm0.67$ &$96.21\%\pm0.66$ &$80.66\%\pm0.56$\\ NDDE &$55.69\%\pm0.39$ &$96.22\%\pm0.55$ &$81.49\%\pm0.09$\\ NPCDDE2 (ours) &$56.03\%\pm0.25$ &$97.32\%\pm0.30$ &$82.63\%\pm0.36$\\ UNPCDDE2 (ours) &$56.22\%\pm0.42$ &$97.43\%\pm0.13$ &$82.99\%\pm0.23$\\ NPCDDE3 (ours) &$56.34\%\pm0.51$ &$97.34\%\pm0.10$ &$82.38\%\pm0.35$\\ UNPCDDE3 (ours) &$56.09\%\pm0.37$ &$97.52\%\pm0.14$ &$83.19\%\pm0.32$\\ NPCDDE5 (ours) &$56.59\%\pm0.44$ &$97.40\%\pm0.19$ &$82.62\%\pm0.69$\\ UNPCDDE5 (ours) &${\bf56.73\%\pm0.54}$ &${\bf 97.69\%\pm0.13}$ &${\bf 83.45\%\pm0.38}$\\ \hline\hline \end{tabular} } \end{center} \label{table} \vskip -0.2in \end{table} \section{Discussion} As shown above, the NPCDDEs achieve good performance not only on the 1-d PCDDE example but also on the image datasets. However, the NPCDDEs are not a perfect framework and still have some limitations. Here, we suggest several directions for future study, including: 1) for an NPCDDE, seeking a good strategy to determine the number of skip connections and the specific value of each delay for different tasks, 2) applying the NPCDDEs to other suitable real-world datasets, such as time series with piecewise-constant delay effects, 3) providing more analytical results for the NPCDDEs to guarantee stability and robustness, and 4) leveraging the optimal control theory \cite{pontryagin1962mathematical} for dynamical systems to further promote the performance of neural networks. \section{Conclusion} In this article, we have articulated a framework of the NPCDDEs, which is mainly inspired by several previous frameworks, including the NODEs, the NDDEs, and the PCDDEs.
The NPCDDEs possess not only the provable capability of universal approximation but also an outstanding power of nonlinear representation. Also, we have derived the backward gradients along with the adjoint dynamics for the NPCDDEs. We have emphasized that both the ResNets and the NODEs are special cases of the NPCDDEs, and that the NPCDDEs thus form a more general framework than the existing models. Finally, we have demonstrated that the NPCDDEs outperform several existing frameworks on representative image datasets (MNIST, CIFAR10, and SVHN). All these suggest that integrating elements of dynamical systems with different kinds of neural networks is indeed beneficial to creating and promoting frameworks of deep learning with continuous-depth structures. \section{Acknowledgments} We thank the anonymous reviewers for their valuable and constructive comments that helped us to improve the work. Q.Z. is supported by the STCSM (No. 21511100200). W.L. is supported by the National Key R\&D Program of China (No. 2018YFC0116600), by the National Natural Science Foundation of China (Nos. 11925103 and 61773125), and by the STCSM (Nos. 19511132000, 19511101404, and 2021SHZDZX0103).
\section{Introduction} Spontaneous emission is one of the most important features of atoms, and so far mechanisms such as vacuum fluctuations \cite{Welton48, CPP83}, radiation reaction \cite{Ackerhalt73}, or a combination of them \cite{Milonni88} have been put forward to explain why spontaneous emission occurs. The ambiguity in physical interpretation arises because of the freedom in the choice of ordering of commuting operators of the atom and field in a Heisenberg picture approach to the problem. The controversy was resolved when Dalibard, Dupont-Roc and Cohen-Tannoudji (DDC) \cite{Dalibard82,Dalibard84} proposed a formalism which distinctively separates the contributions of vacuum fluctuations and radiation reaction by demanding a symmetric operator ordering of atom and field variables. The DDC formalism has recently been generalized to study the spontaneous excitation of uniformly accelerated atoms in interaction with vacuum fluctuations of scalar and electromagnetic fields in a flat spacetime~\cite{Audretsch94,H. Yu,ZYL06,YuZ06,ZYu07}, and these studies show that when an atom is accelerated, the delicate balance between vacuum fluctuations and radiation reaction that ensures the ground-state atom's stability in vacuum is altered, making transitions to excited states possible for ground-state atoms even in vacuum. Inspired by an equivalence-principle-type argument, i.e., that the same accelerated atoms are seen by comoving observers as static ones in a uniform ``gravitational field'', one may wonder what happens if an atom is held static in a curved spacetime, such as that of a black hole, for example. Do static atoms spontaneously excite outside a black hole? And if they do, will the excitation rate be what one expects assuming the existence of Hawking radiation from black holes? Answers to these questions may reveal the relationship between the Hawking radiation and the spontaneous excitation of atoms outside a black hole, and thus provide an alternative derivation of Hawking radiation. When we move to study the spontaneous excitation of static atoms interacting with vacuum fluctuations of quantum fields in a curved spacetime, a delicate issue then arises as to how the vacuum state of the quantum fields is determined. Normally, a vacuum state is associated with the non-occupation of positive frequency modes. However, the positive frequency of field modes is defined with respect to the time coordinate. Therefore, to define positive frequency, one has to first specify a definition of time. In a spherically symmetric black hole background, one definition is the Schwarzschild time, $t$, and it is a natural definition of time in the exterior region. The vacuum state, defined by requiring normal modes to be positive frequency with respect to the Killing vector $\partial/ \partial t$ with respect to which the exterior region is static, is called the Boulware vacuum. Other possibilities that have been proposed are the Unruh vacuum~\cite{Unruh} and the Hartle-Hawking vacuum~\cite{Hartle-Hawking}. The Unruh vacuum is defined by taking modes that are incoming from $\mathscr{J}^-$ to be positive frequency with respect to $\partial/ \partial t$, while those that emanate from the past horizon are taken to be positive frequency with respect to the Kruskal coordinate $\bar u$, the canonical affine parameter on the past horizon.
The Hartle-Hawking vacuum, on the other hand, is defined by taking the incoming modes to be positive frequency with respect to $\bar v$, the canonical affine parameter on the future horizon, and the outgoing modes to be positive frequency with respect to $\bar u$. The calculations of the values of physical observables, such as the expectation values of the energy-momentum tensor and the response rate of an Unruh detector in these vacuum states, have yielded the following physical understanding: (i) The Boulware vacuum corresponds to our familiar concept of a vacuum state at large radii, but is problematic in the sense that the expectation value of the energy-momentum tensor, evaluated in a free-falling frame, diverges at the horizon. (ii) The Unruh vacuum is the vacuum state that best approximates the state that would obtain following the gravitational collapse of a massive body, since in the spatially asymptotic region it corresponds to an outgoing flux of black-body radiation at the Hawking temperature. (iii) The Hartle-Hawking state, however, does not correspond to our usual notion of a vacuum, as it has thermal radiation incoming to the black hole from infinity and describes a black hole in equilibrium with a sea of thermal radiation. In the current paper, we would like to apply the DDC formalism to study the spontaneous excitation of a static two-level atom outside a 4-dimensional Schwarzschild black hole in interaction with massless quantum scalar fields in all the above three vacuum states, aiming to answer the question of whether a static atom outside a black hole spontaneously excites. We also hope to gain more insights into the physical meaning of the vacuum states proposed so far in the black hole spacetime, as well as to reveal the relationship between the Hawking radiation and the spontaneous excitation of atoms. Let us note that recently we have already studied the spontaneous excitation of a static two-level atom interacting with massless scalar fields in both the Unruh vacuum and the Hartle-Hawking vacuum outside a 1+1 dimensional Schwarzschild black hole and found that the atom spontaneously excites as if there were thermal radiation at the Hawking temperature emanating from the black hole~\cite{YuZhou07}. \section{General formalism} Let us consider a two-level atom in interaction with a quantum real massless scalar field outside a Schwarzschild black hole. The metric of the spacetime can be written in terms of the Schwarzschild coordinates as \begin{equation} ds^2= -\bigg(1-{2M\over r}\bigg)\;dt^2+\bigg(1-{2M\over r}\bigg)^{-1}\;dr^2+r^2\,(d\theta^2+\sin^2\theta\,d\varphi^2)\;, \end{equation} where $M$ is the mass of the black hole. Without loss of generality, we assume a pointlike two-level atom on a stationary space-time trajectory $x(\tau)$, where $\tau$ denotes the proper time on the trajectory. The stationarity of the trajectory guarantees the existence of stationary atomic states, $|+ \rangle$ and $|- \rangle$, with energies $\pm{1\/2}\omega_0$ and a level spacing $\omega_0$. The atom's Hamiltonian which controls the time evolution with respect to $\tau$ is given, in Dicke's notation \cite{Dicke}, by \begin{equation} H_A (\tau) =\omega_0 R_3 (\tau)\;, \label{atom's Hamiltonian} \end{equation} where $R_3 = {1\/2} |+ \rangle \langle + | - {1\/2}| - \rangle \langle - |$ is the pseudospin operator commonly used in the description of two-level atoms \cite{Dicke}.
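As a purely illustrative numerical check of these operators (in Python with NumPy; the matrix representation and the basis ordering $\{|+\rangle,|-\rangle\}$ are our own conventions and not part of the derivation), one can verify the eigenvalues $\pm\omega_0/2$ of the atomic Hamiltonian above and the standard pseudospin algebra:

\begin{verbatim}
import numpy as np

# Basis ordering (illustrative convention): |+> = (1, 0), |-> = (0, 1)
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

R3 = 0.5 * (np.outer(plus, plus) - np.outer(minus, minus))
Rp = np.outer(plus, minus)            # R_+ = |+><-|
Rm = np.outer(minus, plus)            # R_- = |-><+|
R2 = 0.5j * (Rm - Rp)                 # R_2 = (i/2)(R_- - R_+)

omega0 = 1.0
HA = omega0 * R3                      # H_A = omega0 * R_3
print(np.linalg.eigvalsh(HA))                  # -> [-0.5, 0.5]
print(np.allclose(R3 @ Rp - Rp @ R3, Rp))      # [R_3, R_+] = R_+
print(np.allclose(R3 @ Rm - Rm @ R3, -Rm))     # [R_3, R_-] = -R_-
print(np.allclose(R2.conj().T, R2))            # R_2 is Hermitian
\end{verbatim}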
The free Hamiltonian of the quantum scalar field that governs its time evolution with respect to $\tau$ is \begin{equation} H_F (\tau) = \int d^3 k\, \omega_{\vec k} \,a^\dagger_{\vec k}\, a_{\vec k}\, {dt\/d \tau}\;. \label{free Hamiltonian} \end{equation} Here $a^\dagger_{\vec k}$, $a_{\vec k}$ are the creation and annihilation operators with momentum ${\vec k}$. The interaction between the atom and the quantum field is assumed to be described by a Hamiltonian~\cite{Audretsch94} \begin{equation} H_I (\tau) = \mu\, \,R_2 (\tau)\,\phi ( x(\tau))\;, \label{interaction Hamiltonian} \end{equation} where $\mu$ is a coupling constant which we assume to be small, $R_2 = {1\/2} i ( R_- - R_+)$, and $R_+ = |+ \rangle \langle - |$, $R_- = |- \rangle \langle +|$. The coupling is effective only on the trajectory $x(\tau)$ of the atom. We can now write down the Heisenberg equations of motion for the atom and field observables. The field is always assumed to be in its vacuum state $|0 \rangle$. We will separately discuss the two physical mechanisms that contribute to the rate of change of atomic observables: the contribution of vacuum fluctuations and that of radiation reaction. For this purpose, we can split the solution $\phi$ of the Heisenberg equations into two parts: a free or vacuum part $\phi^f$, which is present even in the absence of coupling, and a source part $\phi^s$, which represents the field generated by the interaction between the atom and the field. Following DDC \cite{Dalibard82,Dalibard84}, we choose a symmetric ordering between atom and field variables and consider the effects of $\phi^f$ and $\phi^s$ separately in the Heisenberg equations of an arbitrary atomic observable $G$. Then, we obtain the individual contributions of vacuum fluctuations and radiation reaction to the rate of change of $G$. Since we are interested in the spontaneous excitation of the atom, we will concentrate on the mean atomic excitation energy $\langle H_A(\tau) \rangle$. The contributions of vacuum fluctuations (vf) and radiation reaction (rr) to the rate of change of $\langle H_A \rangle$ can be written as (cf. Refs.~\cite{Dalibard82,Dalibard84,Audretsch94}) \begin{eqnarray} \left\langle {d H_A (\tau) \/ d\tau} \right\rangle_{vf} &=& 2 i\, \mu^2 \int_{\tau_0}^\tau d \tau' \, C^F(x(\tau),x(\tau')) {d\/ d \tau} \chi^A(\tau,\tau')\;, \label{general form of vf}\\ \left\langle {d H_A (\tau) \/ d\tau} \right\rangle_{rr} &=& 2 i\, \mu^2 \int_{\tau_0}^\tau d \tau' \, \chi^F(x(\tau),x(\tau')) {d\/ d \tau} C^A(\tau,\tau')\;, \label{general form of rr} \end{eqnarray} with $| \rangle = |a,0 \rangle$ representing the atom in the state $|a\rangle$ and the field in the vacuum state $|0 \rangle$. Here the statistical functions of the atom, $C^{A}(\tau,\tau')$ and $\chi^A(\tau,\tau')$, are defined as \begin{eqnarray} C^{A}(\tau,\tau') &=& {1\/2} \langle a| \{ R_2^f (\tau), R_2^f (\tau')\} | a \rangle\;,\label{general form of Ca} \\ \chi^A(\tau,\tau') &=& {1\/2} \langle a| [ R_2^f (\tau), R_2^f (\tau')] | a \rangle \;,\label{general form of Xa} \end{eqnarray} and those of the field as \begin{eqnarray} C^{F}(x(\tau),x(\tau')) &=& {1\/2}{\langle} 0| \{ \phi^f (x(\tau)), \phi^f(x(\tau')) \} | 0 \rangle\;, \label{general form of Cf}\\ \chi^F(x(\tau),x(\tau')) &=& {1\/2}{\langle} 0| [ \phi^f(x(\tau)),\phi^f (x(\tau'))] | 0 \rangle\;. \label{general form of Xf} \end{eqnarray} $C^A$ is called the symmetric correlation function of the atom in the state $|a\rangle$, and $\chi^A$ its linear susceptibility.
$C^F$ and $\chi^F$ are the Hadamard function and the Pauli-Jordan or Schwinger function of the field, respectively. The explicit forms of the statistical functions of the atom are given by
\begin{eqnarray}
C^{A}(\tau,\tau')&=&{1\over2} \sum_b|\langle a | R_2^f (0) | b \rangle |^2 \left( e^{i \omega_{ab}(\tau - \tau')} + e^{-i \omega_{ab} (\tau - \tau')} \right)\;, \label{explicit form of Ca}\\
\chi^A(\tau,\tau') & =& {1\over2}\sum_b |\langle a | R_2^f (0) | b \rangle |^2 \left(e^{i \omega_{ab}(\tau - \tau')} - e^{-i \omega_{ab}(\tau - \tau')} \right)\;, \label{explicit form of Xa}
\end{eqnarray}
where $\omega_{ab}= \omega_a-\omega_b$ and the sum runs over a complete set of atomic states.
\section{Spontaneous excitation of static atoms outside a black hole}
In the exterior region of the Schwarzschild black hole, a complete set of normalized basis functions for the massless scalar field that satisfy the Klein-Gordon equation is given by
\begin{eqnarray}
\overrightarrow{u}_{\omega lm}=(4\pi\omega)^{-\frac{1}{2}}e^{-i\omega t}\overrightarrow{R}_l(\omega|r)Y_{lm}(\theta,\varphi)\;,
\end{eqnarray}
\begin{eqnarray}
\overleftarrow{u}_{\omega lm}=(4\pi\omega)^{-\frac{1}{2}}e^{-i\omega t}\overleftarrow{R}_l(\omega|r)Y_{lm}(\theta,\varphi)\;,
\end{eqnarray}
where $Y_{lm}(\theta,\varphi)$ are the spherical harmonics and the radial functions have the following asymptotic forms~\cite{Dewitt75}
\begin{equation}
\label{asymp1}
\overrightarrow{R}_l(\omega|r)\sim\left\{
\begin{aligned}
&r^{-1}e^{i\omega r_\ast}+\overrightarrow{A}_l(\omega)r^{-1}e^{-i\omega r_\ast},\;\;r \rightarrow 2M\;,\cr
& {B}_l(\omega)r^{-1}e^{i\omega r_\ast},\;\;\quad\quad \quad\quad \;\;\;\;r \rightarrow\infty\;,\cr
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{asymp2}
\overleftarrow{R}_l(\omega|r)\sim\left\{
\begin{aligned}
&{B}_l(\omega)r^{-1}e^{-i\omega r_\ast},\;\;\quad\quad \quad\quad \;\;\;\;r \rightarrow2M\;,\cr
&r^{-1}e^{-i\omega r_\ast}+\overleftarrow{A}_l(\omega)r^{-1}e^{i\omega r_\ast},\;\;r \rightarrow \infty\;,
\end{aligned}
\right.
\end{equation}
with
\begin{eqnarray}
r_\ast=r+2M\ln\bigg(\frac{r}{2M}-1\bigg)\;,
\end{eqnarray}
being the Regge-Wheeler tortoise coordinate. The physical interpretation of these modes is that $\overrightarrow u$ represents modes emerging from the past horizon and $\overleftarrow u$ those coming in from infinity. With the basics of the scalar field modes given above, we now apply the formalism outlined in the preceding section to examine the spontaneous excitation of static atoms in the three vacuum states of the quantum scalar field.
\paragraph{Boulware vacuum.}
The Boulware vacuum is defined by requiring normal modes to be positive frequency with respect to the Killing vector $\partial/ \partial t$.
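Before quoting the result, it is worth recalling the standard intermediate step (stated here for convenience): expanding the field in the above modes and letting the corresponding annihilation operators annihilate the Boulware state $|0_B\rangle$, the Wightman function takes the mode-sum form
\begin{equation}
D_B^+(x,x')=\langle 0_B|\phi(x)\phi(x')|0_B\rangle
=\sum_{lm}\int_0^{\infty}d\omega\,
\bigl[\,\overrightarrow{u}_{\omega lm}(x)\,\overrightarrow{u}^{\,\ast}_{\omega lm}(x')
+\overleftarrow{u}_{\omega lm}(x)\,\overleftarrow{u}^{\,\ast}_{\omega lm}(x')\,\bigr]\;,
\end{equation}
which, evaluated at two points on the static trajectory (fixed $r$, $\theta$, $\varphi$), reduces to the expression below.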
One can show that the Wightman function for massless scalar fields in this vacuum state is given by~\cite{Fulling77,Candelas80}
\begin{eqnarray}
D_B^+(x,x')\,=\frac{1}{4\pi}\sum_{lm}|Y_{lm}(\theta,\varphi)|^2\, \int_{0}^{+\infty}\frac{d\omega}{\omega}\, e^{-i\omega\Delta t}\biggl[\,|\overrightarrow{R}_l(\omega|\,r)|^2 +|\overleftarrow{R}_l(\omega|\,r)|^2\biggr]\;,
\end{eqnarray}
and the corresponding Hadamard function and Pauli-Jordan or Schwinger function of the field are respectively
\begin{eqnarray}
C^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\,\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2 \int_0^{+\infty}\frac{d\omega}{\omega}\, \biggl(e^{\frac{i\omega\Delta\tau}{\sqrt{1-2M/r}}}+e^{-\frac{i\omega\Delta\tau} {\sqrt{1-2M/r}}}\biggr)\times\nonumber\\&&\biggl[|\overrightarrow{R}_l(\omega|\,r)|^2 +|\overleftarrow{R}_l(\omega|\,r)|^2\biggr]\;,
\end{eqnarray}
and
\begin{eqnarray}
\chi^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\,\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2 \int_0^{+\infty}\frac{d\omega}{\omega}\, \biggl(e^{-\frac{i\omega\Delta\tau}{\sqrt{1-2M/r}}}-e^{\frac{i\omega\Delta\tau}{\sqrt{1-2M/r}}} \biggr)\times\nonumber\\&&\biggl[|\overrightarrow{R}_l(\omega|\,r)|^2 +|\overleftarrow{R}_l(\omega|\,r)|^2\biggr]\;,
\end{eqnarray}
where use has been made of
\begin{equation}
\Delta\tau=\Delta\,t \,\sqrt{1-\frac{2M}{r}}\;.
\end{equation}
Substituting the above results into Eqs.~(\ref{general form of vf}) and (\ref{general form of rr}), extending the integration range for $\tau$ to infinity for sufficiently long times $\tau-\tau_0$, and performing the double integration, we obtain the contribution of the vacuum fluctuations to the rate of change of the mean atomic energy for an atom held static at a distance $r$ from the black hole
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{vf}&=& -\,\frac{\mu^2}{4\pi}\,\biggl[\;\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,P\,(\,\omega_{ab}\,,r)\nonumber\\&&\;\quad\quad-\sum_{\omega_a<\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2 P\,(-\,\omega_{ab}\,,r)\biggr]\;,
\end{eqnarray}
and that of radiation reaction
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{rr}&=& -\,\frac{\mu^2}{4\pi}\,\biggl[\;\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,P\,(\,\omega_{ab}\,,r)\nonumber\\&&\;\quad\quad+\sum_{\omega_a<\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2 P\,(-\,\omega_{ab}\,,r)\biggr]\;.
\end{eqnarray}
Here we have defined
\begin{eqnarray}
P(\omega_{ab},r)= \overrightarrow{P}(\omega_{ab},r) + \overleftarrow{P}(\omega_{ab},r)\;,
\end{eqnarray}
\begin{eqnarray}
\label{rightP}
\overrightarrow{P}(\omega_{ab},r)&=&{\pi \over \omega_{ab}^2}\sum_{lm}|Y_{lm}(\theta,\varphi)|^2\,\biggl|\overrightarrow R_l\biggl(\omega_{ab}\sqrt{1-\frac{2M}{r}}\;\bigg|\;r\biggr)\biggr|^2\nonumber\\
&=&\,\frac{1}{\omega_{ab}^2}\sum_{l=0}^{\infty}\frac{2l+1}{4}\;\biggl|\overrightarrow R_l\biggl(\omega_{ab}\sqrt{1-\frac{2M}{r}}\;\bigg|\;r\biggr)\biggr|^2\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{leftP}
\overleftarrow{P}(\omega_{ab},r)&=&{\pi \over \omega_{ab}^2}\sum_{lm}|Y_{lm}(\theta,\varphi)|^2\,\biggl|\overleftarrow R_l\biggl(\omega_{ab}\sqrt{1-\frac{2M}{r}}\;\bigg|\;r\biggr)\biggr|^2\nonumber\\
&=&\,\frac{1}{\omega_{ab}^2}\sum_{l=0}^{\infty}\frac{2l+1}{4}\;\biggl|\overleftarrow R_l\biggl(\omega_{ab}\sqrt{1-\frac{2M}{r}}\;\bigg|\;r\biggr)\biggr|^2\;.
\end{eqnarray}
The following property of the spherical harmonics
\begin{equation}
\sum^l_{m=-l}|\,Y_{lm}(\,\theta,\varphi\,)\,|^2= {2l+1 \over 4\pi}\;.
\end{equation}
has been utilized in Eqs.~(\ref{rightP}) and (\ref{leftP}). Adding up the two contributions, we obtain the total rate of change of the mean atomic energy
\begin{eqnarray}
\label{BoulwareRate}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}\,= -\,\frac{\mu^2}{2\pi}\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,P\,(\,\omega_{ab}\,,r)\;.
\end{eqnarray}
It follows that for a static atom in the ground state $(\omega_a<\omega_b)$, the contribution of the vacuum fluctuations and that of radiation reaction exactly cancel, since each term in $\left\langle {d H_A (\tau) \over d\tau} \right\rangle_{vf}$ is canceled exactly by the corresponding term in $\left\langle {d H_A (\tau) \over d\tau} \right\rangle_{rr}$. Therefore, although both contributions to the rate of change of the mean atomic energy are modified by the presence of the factor $P(\omega_{ab}, r )$ as compared to the Minkowski vacuum case~\cite{Audretsch94}, the balance between them remains, and a static ground-state atom in the Boulware vacuum is still stable. It should be pointed out, however, that the spontaneous emission rate of a static atom outside a Schwarzschild black hole in the Boulware vacuum is different from that of an inertial atom in the Minkowski vacuum in an unbounded flat space because of the presence of the factor $P(\omega_{ab}, r )$ in Eq.~(\ref{BoulwareRate}). In this sense, the Boulware vacuum is not equivalent to the usual Minkowski vacuum. However, a comparison of Eq.~(\ref{BoulwareRate}) with Eq.~(23) in Ref.~\cite{H. Yu}, which gives the rate of change of the mean atomic energy for an inertial atom in a flat space with a reflecting boundary, shows that the two rates are quite similar, and the appearance of $P(\omega_{ab}, r )$ in Eq.~(\ref{BoulwareRate}) can be understood as a result of the backscattering of the vacuum field modes off the spacetime curvature of the black hole, in much the same way as the reflection of the field modes at the reflecting boundary in a flat spacetime. In order to gain more understanding, let us now analyze the behavior of $P(\omega_{ab}, r )$ both in the asymptotic region and at the event horizon. Using the following asymptotic properties of the radial functions
\begin{equation}
\label{asymp3}
\sum_{l=0}^\infty\,(2l+1)\,|\overrightarrow{R}_l(\,\omega\,|r\,)\,|^2\sim\left\{
\begin{aligned}
&\frac{4\omega^2}{1-\frac{2M}{r}}\;,\;\;\;\quad\quad\quad\quad\quad\quad\quad r\rightarrow2M\;,\cr
&\frac{1}{r^2} \sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\omega)\,|^2\;,\quad\;r\rightarrow\infty \;,\cr
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{asymp4}
\sum_{l=0}^\infty\,(2l+1)\,|\overleftarrow{R}_l(\,\omega\,|r\,)\,|^2\sim\left\{
\begin{aligned}
&\frac{1}{4M^2}\sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\omega)\,|^2,\quad\;r\rightarrow2M\;,\cr
&4\omega^2,\;\;\;\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad r\rightarrow\infty \;,\cr
\end{aligned}
\right.
\end{equation}
we obtain
\begin{equation}
\label{asymp rightP}
\overrightarrow{P}(\,\omega_{ab},r)\sim\left\{
\begin{aligned}
&1\;,\;\;\;\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\;r\rightarrow2M\;,\cr
&\frac{1}{4r^2\omega_{ab}^2} \,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})|^2\;,\;\;\; r\rightarrow\infty \;,\cr
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{asymp leftP}
\overleftarrow{P}(\,\omega_{ab},r)\sim\left\{
\begin{aligned}
&\frac{1}{16M^2\omega_{ab}^2} \sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\,0\,)|^2\;,\;\;\;r\rightarrow2M\;,\cr
&1\;,\;\;\;\;\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\;\;r\rightarrow\infty\;,\cr
\end{aligned}
\right.
\end{equation}
and this leads to
\begin{equation}
\label{asymp_P}
P(\omega_{ab}, r)\sim\left\{
\begin{aligned}
&1+\frac{1}{16M^2\omega_{ab}^2} \sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\,0\,)\,|^2\;,\;\;r\rightarrow2M\;,\cr
&1+\frac{1}{4r^2\omega_{ab}^2} \,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})\,|^2\;,\;\; r\rightarrow\infty \;.\cr
\end{aligned}
\right.
\end{equation}
So, when\,$r\rightarrow\infty$, we have
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}\approx -\,\frac{\mu^2}{2\pi}\sum_{\omega_a>\omega_b}\, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[1+\frac{1}{4r^2\omega_{ab}^2} \,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})|^2\biggr]\;,
\end{eqnarray}
and when $r\rightarrow2M$,
\begin{eqnarray}
\label{BRateEH}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}\approx -\,\frac{\mu^2}{2\pi}\sum_{\omega_a>\omega_b}\, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[1+\frac{1}{16M^2\omega_{ab}^2} \sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\,0\,)\,|^2\;\biggr]\;.
\end{eqnarray}
These asymptotic forms tell us that the rate of change of the mean atomic energy for a static atom outside a Schwarzschild black hole interacting with massless scalar fields in the Boulware vacuum is enhanced as compared to the case of an inertial atom in the Minkowski vacuum in an unbounded flat space; it reduces to the Minkowski-vacuum result at infinity and remains well-behaved at the event horizon. This normal behavior of the rate of change of the mean atomic energy near the horizon is in sharp contrast to the response rate of an Unruh detector~\cite{Candelas80}.
\paragraph{Unruh vacuum.}
For the Unruh vacuum, the Wightman function for the massless scalar fields is given by~\cite{Fulling77,Candelas80}
\begin{eqnarray}
D_U^+(x,x')\,&=&\frac{1}{4\pi}\sum_{lm}|Y_{lm}(\theta,\varphi)|^2\, \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\times\nonumber\\&& \biggl[\,\frac{e^{-i\omega\Delta t}}{1-e^{-2\pi\,\omega/\kappa}}\, |\overrightarrow{R}_l(\omega|\,r)|^2 +\theta(\omega)\,e^{-i\omega\Delta t}|\overleftarrow{R}_l(\omega|\,r)|^2\biggr]\;,
\end{eqnarray}
where $\kappa=1/4M$ is the surface gravity of the black hole. Then the statistical functions of the scalar field readily follow
\begin{eqnarray}
C^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\,\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2 \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\, \biggl(e^{\frac{i\omega\Delta\tau}{\sqrt{1-2M/r}}}+e^{-\frac{i\omega\Delta\tau} {\sqrt{1-2M/r}}}\biggr)\times\nonumber\\&&\biggl(\frac{|\overrightarrow{R}_l(\omega|\,r)|^2} {{1-e^{-2\pi\,\omega/\kappa}}} +\theta(\omega)|\overleftarrow{R}_l(\omega|\,r)|^2\biggr)\;,
\end{eqnarray}
\begin{eqnarray}
\chi^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2\, \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\,\biggl(e^{-\frac{i\omega\Delta\tau} {\sqrt{1-2M/r}}}-e^{\frac{i\omega\Delta\tau}{\sqrt{1-2M/r}}}\biggr)\,\times \nonumber\\&&\biggl[\frac{|\overrightarrow{R}_l(\omega|\,r)|^2}{{1-e^{-2\pi\,\omega/\kappa}}}+ \theta(\omega)|\overleftarrow{R}_l(\omega|\,r)|^2\biggr]\;.
\end{eqnarray}
Similarly, we can compute the contributions of vacuum fluctuations and radiation reaction to the rate of change of the mean atomic energy to get
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{vf}&=&-\,\frac{\mu^2}{4\pi}\, \biggl\{\sum_{\omega_a>\omega_b}\,\omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2\, \biggl[\,\biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1}\biggr) \overrightarrow{P}(\omega_{ab},r)\nonumber\\&&\;\;\;\;\quad\quad\quad\quad+\, \frac{\overrightarrow{P}(-\,\omega_{ab},r)}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1}\,+ \overleftarrow{P}(\omega_{ab},r)\,\biggr]\nonumber\\&&\;\;\quad\quad- \sum_{\omega_a<\omega_b}\omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2 \biggl[\biggl(\,1+\frac{1}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1}\biggr) \overrightarrow{P}(-\,\omega_{ab},r)\nonumber\\&&\,\;\;\;\;\quad\quad\quad\quad+\, \frac{\overrightarrow{P}(\omega_{ab},r)}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1} +\overleftarrow{P}(-\,\omega_{ab},r)\,\biggr]\biggr\}\;,
\end{eqnarray}
and
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{rr}&=&-\,\frac{\mu^2}{4\pi}\, \biggl\{\sum_{\omega_a>\omega_b}\omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2\, \biggl[\,\biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1}\biggr) \overrightarrow{P}(\omega_{ab},r)\nonumber\\&&\;\;\;\;\quad\quad\quad\quad-\, \frac{\overrightarrow{P}(-\,\omega_{ab},r)}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1}+ \overleftarrow{P}(\omega_{ab},r)\,\biggr]\nonumber\\&&\;\;\quad\quad+\sum_{\omega_a<\omega_b} \omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2 \biggl[\,\biggl(\,1+\frac{1}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1} \biggr)\overrightarrow{P}(-\,\omega_{ab},r)\nonumber\\&&\,\;\;\;\;\quad\quad\quad\quad-\, \frac{\overrightarrow{P}(\omega_{ab},r)}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1} +\overleftarrow{P}(-\,\omega_{ab},r)\,\biggr]\,\biggr\}\;,
\end{eqnarray}
where we have defined
\begin{equation}
\kappa_r=\frac{\kappa}{\sqrt{1-\frac{2M}{r}}}\;.
\end{equation}
From the above results, one can see that both contributions are altered by the appearance of thermal terms, as compared to the case of the Boulware vacuum. If we add up the two contributions, we find the total rate
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}&=&-\,\frac{\mu^2}{2\pi}\, \biggl\{\sum_{\omega_a>\omega_b}\omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[\, \biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1}\biggr)\, \overrightarrow{P}(\omega_{ab},r)+\,\overleftarrow{P}(\omega_{ab},r)\,\biggr]\nonumber\\&& \;\;\quad\quad-\sum_{\omega_a<\omega_b}\omega_{ab}^2|\langle\,a|R_2^f(0)|b\rangle|^2\,\, \frac{\overrightarrow{P}(\omega_{ab},r)}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1}\, \,\biggr\}\;.
\end{eqnarray}
This reveals that the delicate balance between the vacuum fluctuations and radiation reaction that ensures the stability of ground-state atoms held static at a radial distance $r$ from the black hole in the Boulware vacuum no longer exists. There is a positive contribution from the second term (the $\omega_{a}< \omega_{b}$ term); therefore, transitions of ground-state atoms to excited states can spontaneously occur in the Unruh vacuum outside the black hole.
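To make the role of these thermal factors concrete, the following minimal numerical sketch (ours, in Python, with geometrized units $G=c=\hbar=k_B=1$; the function names are illustrative) evaluates $\kappa_r$, the associated proper temperature, and the Planck factor that multiplies the excitation term:
\begin{verbatim}
import math

def kappa_r(M, r):
    # kappa_r = kappa / sqrt(1 - 2M/r), with surface gravity kappa = 1/(4M)
    return (1.0 / (4.0 * M)) / math.sqrt(1.0 - 2.0 * M / r)

def proper_temperature(M, r):
    # T = kappa_r / (2 pi); tends to T_H = 1/(8 pi M) as r -> infinity
    return kappa_r(M, r) / (2.0 * math.pi)

def planck_factor(omega, M, r):
    # 1/(exp(2 pi omega / kappa_r) - 1), the factor multiplying the
    # excitation (omega_a < omega_b) term in the total rate above
    return 1.0 / math.expm1(2.0 * math.pi * omega / kappa_r(M, r))

M, omega0 = 1.0, 0.05
for r in (2.001 * M, 3.0 * M, 10.0 * M, 1000.0 * M):
    print(r, proper_temperature(M, r) * 8.0 * math.pi * M,
          planck_factor(omega0, M, r))
\end{verbatim}
The printed ratio $T/T_H$ grows without bound as $r\rightarrow2M$ and tends to one far from the hole, anticipating the Tolman relation discussed next.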
When the atom is held close to the event horizon, i.e., when\;$r\rightarrow2M$, the total rate becomes
\begin{eqnarray}
\label{URateEH}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}&\approx&-\,\frac{\mu^2}{2\pi}\, \biggl\{\sum_{\omega_a>\omega_b}|\langle a|R_2^f(0)|b\rangle|^2\omega_{ab}^2\,\times\nonumber\\&& \quad\quad\quad\biggl[\,\biggl(1+\frac{1}{16\,M^2\omega_{ab}^2}\,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,0\,)|^2 \biggr)+ \frac{1}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1} \biggr]\nonumber\\&& \quad\quad\;-\sum_{\omega_a<\omega_b}|\langle\,a|R_2^f(0)|b\rangle|^2\, \omega_{ab}^2\,\frac{1}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1} \biggr\}\;.
\end{eqnarray}
In comparison to Eq.~(\ref{BRateEH}), the corresponding result in the Boulware vacuum case, one sees the appearance of thermal terms, which may be considered as resulting from the contribution of thermal radiation emanating from the black hole at a temperature
\begin{eqnarray}
T={\kappa_r\over 2\pi}=\frac{\kappa}{2\pi}\frac{1}{\sqrt{1-\frac{2M}{r}}} =(g_{00})^{-1/2}\,T_{H}\;,
\end{eqnarray}
where $T_H=\kappa/2\pi$ is the usual Hawking temperature of the black hole. Actually, this is the well-known Tolman relation~\cite{Tolman}, which gives the proper temperature as measured by a local observer. Notice that $T$ is always larger than the Hawking temperature, reduces to it only at infinity, and diverges as the event horizon is approached. This can be understood as a consequence of the fact that the atom must accelerate relative to the local free-falling frame in order to remain at a fixed distance from the black hole, and this acceleration, which blows up at the horizon, gives rise to an additional thermal effect. If the atom is far away from the black hole in the asymptotic region, that is, when\,$r\rightarrow\infty$, one then finds
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}&\approx&-\,{\mu^2\over2\pi}\, \biggl\{\sum_{\omega_a>\omega_b}|\langle a|R_2^f(0)|b\rangle|^2\omega_{ab}^2\,\biggl[\,1+f(\omega_{ab}, r) + \frac{f(\omega_{ab}, r)}{e^{(2\pi\,\omega_{ab})/\kappa_r}-1} \,\biggr] \nonumber\\&&\;\quad\quad-\sum_{\omega_a<\omega_b}|\langle\,a|R_2^f(0)|b\rangle|^2\, \omega_{ab}^2\,\frac{f(\omega_{ab}, r)}{e^{(2\pi\,|\,\omega_{ab}|)/\kappa_r}-1}\, \biggr\}\;,
\end{eqnarray}
where
\begin{eqnarray}
f(\omega_{ab}, r)=\frac{1}{4\,r^2\omega_{ab}^2}\,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})|^2\;.
\end{eqnarray}
The appearance of $f(\omega_{ab}, r)$ in the thermal terms can now be envisaged as a result of the backscattering of the outgoing thermal flux from the event horizon off the spacetime curvature. The backscattering results in the depletion of part of the outgoing flux, and the influence of the thermal flux becomes weaker as the atom is placed farther away.
\paragraph{Hartle-Hawking vacuum.}
Let us now turn briefly to the case of the Hartle-Hawking vacuum.
The Wightman function for the massless scalar fields now becomes~\cite{Fulling77,Candelas80}
\begin{eqnarray}
D_H^+(x,x')\,=\frac{1}{4\pi}\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2\ \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\,\biggl[\,\frac{e^{-{i\omega\Delta t} }}{1-e^{-2\pi\,\omega/\kappa}}\,|\overrightarrow{R}_l(\omega|r)|^2 +\frac{e^{\,i\omega\Delta t}}{e^{2\pi\,\omega/\kappa}-1}\,|\overleftarrow{R}_l(\omega|r)|^2\biggr]\;,\nonumber\\
\end{eqnarray}
which leads to the statistical functions of the scalar field in the Hartle-Hawking vacuum as follows
\begin{eqnarray}
C^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2\, \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\,\biggl(e^{\frac{i\omega\Delta\tau} {\sqrt{1-{2M}/{r}}}}+e^{-\frac{i\omega\Delta\tau}{\sqrt{1-{2M}/{r}}}}\biggr)\, \nonumber\\&&\times\,\biggl(\frac{|\overrightarrow{R}_l(\omega|r)|^2}{{1-e^{-{2\pi\,\omega}/{\kappa}}}} +\frac{|\overleftarrow{R}_l(\omega|r)|^2}{{e^{{2\pi\,\omega}/{\kappa}}-1}}\biggr)\;,
\end{eqnarray}
and
\begin{eqnarray}
\chi^F(x\,(\tau),x\,(\tau')\,)&=&\frac{1}{8\pi}\sum_{lm}\,|Y_{lm}(\theta,\varphi)|^2\, \int_{-\infty}^{+\infty}\frac{d\omega}{\omega}\,\biggl(e^{\frac{i\omega\Delta\tau} {\sqrt{1-{2M}/{r}}}}-e^{-\frac{i\omega\Delta\tau}{\sqrt{1-{2M}/{r}}}}\biggr)\, \nonumber\\&&\times\,\biggl(\frac{|\overleftarrow{R}_l(\omega|r)|^2}{{e^{{2\pi\,\omega}/{\kappa}}-1}} -\frac{|\overrightarrow{R}_l(\omega|r)|^2}{{1-e^{-{2\pi\,\omega}/{\kappa}}}}\biggr)\;.
\end{eqnarray}
By using the above results and Eqs.~(\ref{general form of vf}) and (\ref{general form of rr}), the contribution of the vacuum fluctuations to the rate of change of the mean atomic energy can be found for an atom held static at a distance $r$ from the black hole
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{vf}&=&-\,\frac{\mu^2}{4\pi}\,\biggl\{\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[\;\;\frac{P(-\,\omega_{ab},r)}{e^{(2\pi\,\omega_{ab})/{\kappa_r}}-1} \nonumber\\ &&\quad\quad\quad\quad\quad\;\;\;\;+\biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/{\kappa_r}}-1}\biggr) P(\omega_{ab},r)\,\biggr] \nonumber\\&&-\sum_{\omega_a<\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[\;\; \frac{P\,(\omega_{ab}, r)}{e^{({2\pi\,|\,\omega_{ab}|})/{\kappa_r}}-1} \nonumber\\ &&\quad\quad\quad\quad\quad\;\;\;\;+\,\biggl(\,1+ \frac{1}{e^{({2\pi\,|\,\omega_{ab}|})/{\kappa_r}}-1}\biggr)\,P\,(-\,\omega_{ab}, r)\,\biggr]\,\biggr\} \;, \nonumber\\
\end{eqnarray}
and that of radiation reaction
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{rr}&=&-\,\frac{\mu^2}{4\pi}\,\biggl\{\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[\;\;-\frac{P(-\,\omega_{ab},r)}{e^{(2\pi\,\omega_{ab})/{\kappa_r}}-1} \nonumber\\ &&\quad\quad\quad\quad\quad\;\;\;\;+\biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/{\kappa_r}}-1}\biggr) P(\omega_{ab},r)\,\biggr] \nonumber\\&&-\sum_{\omega_a<\omega_b}\, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,\biggl[\;\; \frac{P\,(\omega_{ab}, r)}{e^{({2\pi\,|\,\omega_{ab}|})/{\kappa_r}}-1} \nonumber\\ &&\quad\quad\quad\quad\quad\;\;\;\;-\,\biggl(\,1+ \frac{1}{e^{({2\pi\,|\,\omega_{ab}|})/{\kappa_r}}-1}\biggr)\,P\,(-\,\omega_{ab}, r)\,\biggr]\,\biggr\} \;.\nonumber\\
\end{eqnarray}
Consequently, the total rate of change of the mean atomic energy follows
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}&=&-\,\frac{\mu^2}{2\pi}\,\biggl[\sum_{\omega_a>\omega_b} \, \omega_{ab}^2\,|\langle
a|R_2^f(0)|b\rangle|^2\, P\,(\omega_{ab},r )\;\biggl(1+\frac{1}{e^{(2\pi\,\omega_{ab})/{\kappa_r}}-1}\biggr)\, \nonumber\\&&-\sum_{\omega_a<\omega_b}\, \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\,P(\omega_{ab}, r)\, \frac{1}{e^{({2\pi\,|\,\omega_{ab}|})/{\kappa_r}}-1}\,\biggr]\;.
\end{eqnarray}
Once again, owing to the presence of the $\omega_a<\omega_b$ term, transitions of static atoms from the ground state to excited states can occur spontaneously in the Hartle-Hawking vacuum in the exterior region of the black hole. In the spatial asymptotic region, the total rate can be written as
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}\,&\approx&-\,\frac{\mu^2}{2\pi}\,\biggl[\sum_{\omega_a>\omega_b} \omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\times\nonumber\\&&\quad\;\quad\;\quad\;\quad\; \biggl(1+\frac{1}{4r^2\omega_{ab}^2}\,\sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})\,|^2\biggr)\,\biggl(\,1+\frac{1}{e^{(2\pi\omega_{ab})/\kappa}-1} \biggr) \nonumber\\&&\quad\;\quad-\sum_{\omega_a<\omega_b}\omega_{ab}^2\,|\langle a|R_2^f(0)|b\rangle|^2\, \biggl(1+\frac{1}{4r^2\omega_{ab}^2}\, \sum_{l=0}^\infty\,(2l+1)\,|\,B_l\,(\,\omega_{ab})\,|^2\biggr)\,\times\nonumber\\&&\quad\;\quad\;\quad\;\quad\;\frac{1}{e^{(2\pi|\omega_{ab}|)/\kappa}-1} \,\biggr]\;.
\end{eqnarray}
For an atom at spatial infinity ($r\rightarrow \infty$), $P(\omega_{ab}, r)\rightarrow 1$, the temperature as perceived by the atom, $T$, approaches $T_H$, and the total rate of change of the mean atomic energy becomes what one would get if the atom were immersed in a thermal bath at the temperature $T_H$. Therefore, a static atom in the spatial asymptotic region outside the black hole would spontaneously excite as if in a thermal bath of radiation at the Hawking temperature. This is consistent with our understanding, gained from the calculations of the expectation values of the energy-momentum tensor~\cite{Candelas80}, that the Hartle-Hawking vacuum is not a state that is empty at infinity but corresponds instead to a thermal distribution of (Minkowski-type) quanta at the Hawking temperature, and therefore describes a black hole in equilibrium with an infinite sea of black-body radiation. On the other hand, when the atom is held near the event horizon, i.e., when $r\rightarrow 2M$, we have
\begin{eqnarray}
\biggl\langle\frac{dH_A(\tau)}{d\tau}\biggr\rangle_{tot}\,&\approx&-\,\frac{\mu^2}{2\pi}\,\biggl[\sum_{\omega_a>\omega_b} \omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2\times\nonumber\\&&\quad\;\quad\;\quad\;\quad\; \biggl(\,1+\frac{1}{16M^2\omega_{ab}^2}\sum_{l=0}^\infty(2l+1)\,|\,{B}_l\,(\,0\,)\, |^2\biggr)\,\biggl(\,1+\frac{1}{e^{(2\pi\omega_{ab})/\kappa_r}-1}\biggr)\,\nonumber\\&&\quad\;\quad-\sum_{\omega_a<\omega_b}\omega_{ab}^2|\langle a|R_2^f(0)|b\rangle|^2 \,\biggl(\,1+\frac{1}{16M^2\omega_{ab}^2}\sum_{l=0}^\infty(2l+1)\, |\,{B}_l\,(\,0\,)\,|^2\biggr)\,\times\nonumber\\&&\quad\;\quad\;\quad\;\quad\; \frac{1}{e^{(2\pi|\omega_{ab}|)/\kappa_r}-1}\,\biggr]\;.
\end{eqnarray}
Here one can see that, close to the horizon, in addition to the contribution that can be accounted for by the outgoing thermal radiation emanating from the horizon (refer to Eq.~(\ref{URateEH})), there is another contribution (the thermal term multiplied by the term containing ${B}_l$) that can be regarded as resulting from the incoming radiation from the sea of thermal radiation at infinity, radiation that is, however, deflected by the spacetime geometry.
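A compact way to summarise these results (a one-line consequence of the total rate above, spelled out here for clarity; the notation $\Gamma_{\uparrow}$, $\Gamma_{\downarrow}$ is ours) is the ratio of the excitation and de-excitation coefficients for a pair of levels separated by $\omega_0$. Since the backscattering factor $P(\omega_0,r)$ multiplies both terms, it cancels in the ratio,
\begin{equation}
\frac{\Gamma_{\uparrow}}{\Gamma_{\downarrow}}
=\frac{\bigl[e^{2\pi\omega_0/\kappa_r}-1\bigr]^{-1}}
{1+\bigl[e^{2\pi\omega_0/\kappa_r}-1\bigr]^{-1}}
=e^{-2\pi\omega_0/\kappa_r}\;,
\end{equation}
which is just detailed balance at the proper temperature $T=\kappa_r/2\pi$: the backscattering distorts the overall time scales of the transitions, but not the equilibrium population ratio towards which the atom is driven.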
Notice that the difference between the rate of change of the mean atomic energy in the Unruh vacuum and that in the Hartle-Hawking vacuum is not a simple factor of 2, as it is in the 1+1 dimensional case~\cite{YuZhou07}. The reason is that in the four-dimensional case there is backscattering off the spacetime curvature, so that the outgoing thermal radiation from the event horizon cannot travel through the spacetime unaffected, and neither can the incoming thermal radiation from infinity.
\section{Summary}
Using the DDC formalism, we have studied the spontaneous excitation of a two-level atom held static outside a Schwarzschild black hole and in interaction with a massless scalar field in the Boulware, Unruh, and Hartle-Hawking vacua, respectively, and calculated the contributions of the vacuum fluctuations and radiation reaction to the rate of change of the mean atomic energy. In the Boulware vacuum case, spontaneous excitation cannot occur, so that ground-state atoms are stable. However, the spontaneous emission rate for excited atoms in the Boulware vacuum is not the same as that in the usual Minkowski vacuum, but very similar to that in the vacuum of a flat spacetime with a reflecting boundary. A noteworthy feature here is that the rate of change of the mean atomic energy is well-behaved at the event horizon, in sharp contrast to the response rate of an Unruh detector~\cite{Candelas80}. For both the Unruh vacuum and the Hartle-Hawking vacuum, our results show that an atom held static at a radial distance $r$ from a Schwarzschild black hole would spontaneously excite. In the Unruh vacuum, it spontaneously excites as if there were an outgoing thermal flux of radiation (albeit backscattered by the spacetime geometry) at a temperature characterized by the Tolman relation. In the Hartle-Hawking vacuum, the spontaneous excitation occurs as if the atom were in a thermal bath of radiation at a proper temperature which reduces to the Hawking temperature in the spatial asymptotic region, except for a frequency-response distortion caused by the backscattering of the field modes off the spacetime curvature.
\begin{acknowledgments}
This work was supported in part by the National Natural Science Foundation of China under Grant No.~10575035 and the Program for New Century Excellent Talents in University (NCET, No.~04-0784).
\end{acknowledgments}
\section{Introduction}
\label{S:1}
How genetic variation contributes to phenotypic variation is an essential question that must be answered to understand the evolutionary process. The characterisation of the genotype-phenotype (GP) relationship is a formidable theoretical and experimental challenge, and an expensive task which suffers from severe practical limitations. Computational approaches have been recurrently used to make predictions of phenotypes from genotypes and to uncover the statistical features of that relationship. Advances notwithstanding, an apparently insurmountable problem remains: the astronomically large size of the space of genotypes. The space of possible phenotypic change and the probabilities of such change are directly determined by the architecture of the GP map; quantifying this map will allow a better characterisation of how the space of phenotypes is explored and help answer important questions about the probability of evolutionary rescue or innovation under endogenous or exogenous changes. Progress in our understanding of GP maps at various levels is of relevance for different scientific communities with interests that range from evolutionary theory to molecular design through the genomic bases of disease aetiology. An understanding of how RNA, DNA or amino acid sequences map onto molecular function could be of great importance for more fundamental approaches in synthetic biology, biotechnology, and systems chemistry. In a broader ecological context, the way in which generic properties of the GP map shape adaptation has rarely been explored. As of today, the overarching question of whether organismal phenotypes can be predicted from microscopic properties of genotype spaces remains open. In this review, we discuss the state of the art of genotype-to-organism research and future research avenues in the field. The review is structured into four major parts. The first part is constituted by this introduction and Section~\ref{sec:variation}, which puts in perspective how relevant the generation of variation is in the evolutionary process, and introduces important biases arising from the inherent structure of genotype spaces. The second part comprises sections~\ref{sec:models} to \ref{sec:evolutionOFgpmaps}, where we discuss conceptual approaches to the static properties of GP maps and their dynamical consequences, as well as the evolution of GP maps themselves. The field is broad and several aspects have been addressed in previous reviews, so we only briefly summarise topics dealt with elsewhere. Therefore, we will succinctly present computational GP maps and only recapitulate, taking an integrative and explanatory viewpoint, the topological properties of the space of genotypes \cite{reidys:1997,stadler:2006,wagner:2011,ahnert:2017,aguirre:2018,nichol:2019}. Section~\ref{sec:models} constitutes a synthetic overview of GP map models, including paradigmatic examples such as RNA folding, more recent multi-level models such as toyLIFE, and a summary of artificial life examples. Readers familiar with those models can safely skip that section. Those models endow genotype spaces with topological properties that are briefly reviewed in the introduction of Section~\ref{sec:UnivTopology}, which is mostly devoted to discussing possible roots for generic properties of a broad class of GP maps. Attention is subsequently devoted to population dynamics on genotype spaces, which has been a less explored topic.
Section~\ref{sec:dynamics} describes transient and equilibrium dynamical features of evolutionary processes. First, it delves into the effects of recombination and mutation bias, and into phenotypic transitions caused by the hierarchical, networked structure of genotype spaces. Then, we discuss a mean-field description that incorporates the essentials of GP map topology to clarify major dynamical features. The section finishes with a derivation of equilibrium properties in the context of statistical mechanics and some applied examples. Section~\ref{sec:evolutionOFgpmaps} discusses the evolution of GP maps themselves by means of two illustrative examples: a scenario where a multifunctional quasispecies emerges and a model of virtual cells incorporating the evolution of genome size. The third part, sections~\ref{sec:empirical} and \ref{sec:cancer}, is devoted to empirical GP maps and to biological applications, and mostly presents topics under development. Section~\ref{sec:empirical} examines the most recent achievements regarding the experimental characterisation of GP and genotype-to-function maps in molecules and simple organisms, and the different possibilities that current and future techniques might allow. It includes a formal discussion of how phenotypes can be inferred from genotypic data and fitness assays, and a discussion of the intimate relationship between fitness landscapes and GP maps. Section~\ref{sec:cancer} exemplifies how concepts and techniques originating in quantitative studies of the GP map can enlighten useful approaches to diseases with a genetic component. The fourth and last part presents a mostly self-contained overview of open questions and difficulties that the field faces, as well as some possible avenues for further progress, in Section~\ref{sec:perspectives}. The paper closes with an outlook in Section~\ref{sec:GOmap}, where we reflect on the feasibility of characterising the genotype-to-organism map, and on plausible epistemological difficulties in comprehending the organisation and complexity of full organisms.
\section{GP maps and the importance of variation}
\label{sec:variation}
Darwinian evolution requires heritable phenotypic variation, upon which natural selection acts. Much of traditional evolutionary theory has focused on the role of natural selection, while the study of variation has been much less developed. There are a number of reasons for this difference. Firstly, there is an influential tradition, stemming from the early days of the modern synthesis, that any meaningful change over evolutionary time is ultimately caused by natural selection. One argument in favour of this thesis comes from the simple observation that a heritable phenotype with higher fitness will, over the generations, exponentially out-compete other phenotypes with lower fitness in the same population. Thus, differences in the rate at which mutations arrive will be swamped by the effect of fitness differences (there are much more sophisticated versions of this argument). Another argument, which is often more implicitly than explicitly made, is that a large part of variation is \textit{isotropic}---in other words, it is not biased in one direction or another. Stephen Jay Gould, who was critical of this viewpoint, expresses it as follows: ``\textit{variation becomes raw material only, an isotropic sphere of potential about the modal form of a species \ldots [only] natural selection \ldots can manufacture substantial, directional change}'' \cite{gould:2002}.
Whether evolutionary trends must primarily be explained by natural selection, or whether anisotropic (biased) variation also plays a key role, is a complex question. While the arguments have moved on considerably since the critique of Gould, especially with the rise of evo-devo \cite{love:2015}, they are far from being settled \cite{laland:2014,stoltzfus:2018}. Ever since the modern synthesis, directed variation has been deemed anathema because it evokes the Lamarckian view that variation arises to facilitate adaptation. However, as the analysis of GP maps reveals, these maps are a major source of anisotropic variation, even if this variation is not necessarily biased in the most beneficial way for the organism. The second reason why our understanding of variation is relatively underdeveloped is that working out the exact role played by the arrival of variation in evolutionary history is difficult, because in nature we typically only observe the final outcomes of an evolutionary process. It is hard to know what variation may have arisen in the past but was not fixed, or what variation could potentially have arisen, but did not. For example, even when all potential variation is isotropic, the non-lethal variation may well be anisotropic, depending on the environment. In this context, the study of GP maps is critical, because they provide access to the way that changes in genotypes, brought on by various kinds of mutations, are translated into phenotypic variation for the biological system that the map describes. They allow us to ask important \textit{counterfactual} questions, such as: what is the full spectrum of variation that could potentially arise? Working out how variation affects evolutionary outcomes depends on an understanding of such counterfactuals. A final issue for understanding variation comes from the unfathomable vastness of genotype spaces, whose size grows exponentially with genome length, rapidly leading to hyperastronomical numbers of possibilities \cite{louis:2016}. If these spaces are so unimaginably vast, then it might seem natural to conclude, as many have done, that the variation that appears in evolutionary history is largely contingent upon accidents of history, and unlikely to be repeated (see Ref.~\cite{louis:2016} for a discussion). This problem of hyperastronomically large spaces means that only relatively simple GP maps allow global questions about the full spectrum of possible variation to be addressed. Nevertheless, important progress in this direction has been made through the use of GP maps that can be computationally explored and, more recently, through the development of quantitative approaches to shared generic properties. Among the latter, one of the most striking properties is a strong \textit{bias} in the number of genotypes mapping to a phenotype \cite{greenbury:2016,ahnert:2017}. This raises the question: can this bias, which often extends over many orders of magnitude, affect evolutionary outcomes? Indeed, phenotypic bias, among other non-trivial properties of GP maps, does severely affect not only our understanding of how variation arises through random mutations, but also any accurate representation---be it metaphorical or formal---of evolutionary dynamics at large.
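To make the notion of bias concrete before turning to specific models, the following minimal Python sketch (a toy construction of ours for illustration only, not one of the published GP maps reviewed below) exhaustively enumerates a small genotype space under a simple ``stop-codon'' mapping rule and counts the genotypes per phenotype:
\begin{verbatim}
from collections import Counter
from itertools import product

L = 16  # toy genome length, small enough for exhaustive enumeration

def phenotype(g):
    # The phenotype is the prefix up to and including the first
    # occurrence of '11'; downstream sites are non-coding. Genotypes
    # without a stop are lumped into a single 'unfolded' phenotype.
    for i in range(len(g) - 1):
        if g[i] == '1' and g[i + 1] == '1':
            return g[:i + 2]
    return 'unfolded'

counts = Counter(phenotype(''.join(g)) for g in product('01', repeat=L))
sizes = sorted(counts.values(), reverse=True)
print(len(counts), 'phenotypes; largest neutral set:', sizes[0],
      '; smallest:', sizes[-1])
\end{verbatim}
Even this trivially simple rule produces neutral set sizes that span more than four orders of magnitude at $L=16$: phenotypes encoded near the start of the sequence leave most sites free to mutate neutrally, while long phenotypes are encoded by a handful of genotypes. Biologically realistic maps, as discussed below, display the same phenomenon on a far larger scale.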
\section{Models of the GP map}
\label{sec:models}
Maynard Smith introduced the notion of a mapping from a genetic space to a molecular structure---and with it the idea of a network linking viable genotypes---as a resolution of an evolutionary paradox pointed out by Salisbury \cite{maynard-smith:1970}. In brief, Salisbury noted \cite{salisbury:1969} that the number of possible amino acid sequences exceeds by many orders of magnitude the number of proteins that have ever existed on Earth since the origin of life, and concluded from this fact that functionally effective proteins have a vanishingly small chance of arising by mutation. As a way out of this dilemma, Maynard Smith suggested that networks of functional proteins are essential to navigate the space of genotypes, to produce a sequence of adaptive improvements, and to explore new regions that, eventually, secure evolutionary innovation \cite{huynen:1996b}. Formally, the space of genotypes can be defined as a network where nodes represent genotypes, with any two nodes linked if they are mutually accessible through a single point mutation \cite{schuster:1994}. A \emph{neutral network} is therefore an ensemble of connected genotypes with the same fitness, including those with identical phenotypes. The empirical existence of such networks and their role in providing access to new phenotypes \cite{fontana:1998} was unequivocally demonstrated \cite{koelle:2006,schultes:2000} four decades after Maynard Smith's conjecture. Many studies have aimed at probing the statistical structure of the GP relationship, thus relying on the computational exploration of GP maps. Models of RNA secondary structure \cite{hofacker:1994,schuster:1994}, protein secondary structures \cite{lipman:1991,irback:2002}, gene regulatory networks \cite{wagner:2011,payne:2014a}, metabolic networks \cite{barve:2013,hosseini:2015}, protein complexes \cite{johnston:2011,greenbury:2014}, artificial life \cite{ofria:2004}, or multilevel maps such as toyLIFE, which includes protein structure, regulatory, and metabolic networks \cite{arias:2014,catalan:2018}, have been explored through the years. Computational frameworks often rely on building complete GP maps from exhaustive enumeration of genotypes (or sparse GP maps from large samples) in models with simple genotype-to-phenotype rules such as the ones above. To study global properties of a GP map, such as phenotype frequencies, a large number of genotype-phenotype pairs have to be evaluated. With notable exceptions \cite{aguilar:2017,rowe:2009,jimenez:2013}, some of which will be discussed in section~\ref{sec:empirical} of this paper, the exhaustive study of GP maps represents an enormous challenge that has been restricted to systems where the phenotype can be found computationally from the genotypic information. For the sake of simplicity, most computational GP maps assign a unique phenotype to each genotype, in a many-to-one representation. Some maps also take into account environmental factors such as temperature, which modify GP mapping rules and, therefore, include phenotypic plasticity in a streamlined fashion \cite{wagner:2014}. Other implementations also consider phenotypic promiscuity \cite{jensen:1976,aharoni:2005}, that is, the possibility that each sequence maps to more than one phenotype under fixed environmental variables.
However, many-to-many GP maps entail an exponentially increasing cost in computation time, so they have rarely been explored in depth (for exceptions see \cite{ancel:2000,barve:2013,wagner:2014,espinosa-soto:2011,deboer:2012,deboer:2014,payne:2014c,rezazadegan:2018,diaz-uriarte:2018}). Creating complete computational frameworks for GP models is a challenge---building complete GP maps for sequences as long as functional molecules in realistic environments is beyond our current computational power. Nevertheless, progress has been steady and significant. For example, and despite the freedom inherent to any definition of phenotype, many generalities have emerged from studying these models, and theoretical arguments to explain some of them have been developed \cite{greenbury:2015,manrubia:2017,garcia-martin:2018}. These studies have led to a relatively sound understanding of the conditions that are behind different phenotype abundances, their relationship with robustness, and the topology of neutral networks \cite{greenbury:2016,aguirre:2009,aguirre:2011}. In this section, we begin by briefly summarising a variety of GP maps that have been computationally studied to date. Some attention is devoted to RNA, a model for which we examine in perspective some of the important lessons learnt and discuss possible future contributions to GP map research. There is a substantial body of literature available, including comprehensive reviews \cite{schuster:2006}, that we do not even attempt to summarise here. We finish this part by discussing the GP maps of artificial life systems. While we focus here on sequence-to-structure and sequence-to-function maps, we would like to highlight that genotype-phe\-no\-type and genotype-fitness maps have also been studied in the context of development \cite{salazar-ciudad:2010,salazar-ciudad:2013,hagolani:2021}.
\subsection{One-level GP models}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.0cm]{figure1.pdf}
\end{center}
\caption{Some examples of simple GP maps. For each model, and from left to right, we depict an example phenotype and some of the genotypes in its neutral network (mutations that do not change the phenotype are highlighted in red). (a) RNA sequence-to-structure is the paradigmatic GP map. Mutations that conserve the secondary structure appear in loops with a higher likelihood than in stems. (b) The HP model, in both compact and non-compact realisations, has been studied as a model for protein folding. (c) toyLIFE is a minimal model with several levels \cite{arias:2014}. Sequences of the HP type are read and translated to proteins that interact through analogous HP rules to break down metabolites. (d) Fibonacci's model \cite{greenbury:2015} relies on the separation between constrained and unconstrained positions in sequences to derive some formal properties of simple GP maps. (e) A generalisation of the idea of position-dependent constraints \cite{manrubia:2017} provides a formal understanding of the ubiquitous lognormal distribution of neutral set sizes. (f) A polyomino model used to capture the essentials of quaternary protein structure \cite{greenbury:2014}. (g) Dawkins' biomorphs are defined by genotypes with few parameters that define the generative rules of the structure \cite{dawkins:2003}. Figure modified from Ref.~\cite{aguirre:2018}.}
\label{fig:GPmodels}
\end{figure}
Over the past three decades, the GP maps of several simple biological model systems have been studied in great detail.
Figure~\ref{fig:GPmodels} summarises the essentials of some of the GP maps we will be discussing. Two classical examples are RNA secondary structure \cite{hofacker:1994,schuster:1994} and the HP model of protein folding \cite{lipman:1991,li:1996}. The HP model represents proteins on a regular lattice as self-avoiding chains of hydrophobic (H) or polar (P) beads. In its compact version the chains are forced to fold into rectangular configurations that leave no empty sites, while in the non-compact version all possible self-avoiding walks in the lattice are considered. The phenotype is defined as the minimum-energy configuration of a given sequence, calculated from a contact potential between neighbouring (but not adjacent along the backbone) beads. Because RNA and HP models are relatively tractable, properties such as the distribution of the number of genotypes per phenotype \cite{stich:2008,louis:2016,shahrezaei:1999}, the phenotypic robustness and evolvability \cite{wagner:2007,holzgrafe:2011} (see Box~\ref{box:definitions}) or the topological structure of neutral networks \cite{aguirre:2011} could be systematically studied and compared \cite{ferrada:2012}. Given the pivotal role proteins play in cellular processes, the protein sequence-to-structure map, of which the HP model constitutes the simplest realisation, is of great general interest \cite{shakhnovich:2006,ciliberti:2007b,chen:2008}. The protein sequence-to-structure map has also been studied using more realistic, multi-parametric contact potentials \cite{mirny:1996,buchler:1999,li:2002,bastolla:2003} and coarse-grained models at different levels, such as the Polyomino model \cite{johnston:2011,greenbury:2014} for protein complexes. Some inferences about local and global properties of the protein sequence-to-structure-to-function GP map have also been made from experimental data \cite{sarkisyan:2016,ferrada:2008,ferrada:2010}, and estimates of neutral set sizes (NSSs) have been obtained from structural data \cite{england:2003}. Breakthroughs in the computational prediction of protein structure from amino acid sequence have recently been achieved by deep learning with artificial neural networks. The AlphaFold 2 system by DeepMind \cite{callaway:2020} outperformed 100 other teams in the 2020 Critical Assessment of Structure Prediction challenge (CASP14), with prediction accuracy rivaling that of experimental structure determination. Enormous resources are required for each evaluation, however, taking days of computation time for a single protein sequence. If subsequent development can maintain the accuracy while allowing exponential speedups, these computational systems should be able to open up entirely new investigations of the GP map for proteins.
\begin{figure}
\begin{infobox}[frametitle= Box~\ref{box:definitions}a. Definitions]
\begin{description}[leftmargin=5mm]\setlength{\itemsep}{0pt}
\item[\it Function] Function is a contentious term \cite{graur:2013,kellis:2014} that is used to mean many things. In this review we are mostly referring to properties of proteins, such as stability, catalytic activity, and binding affinity.
\item[\it Genetic correlations] A GP map has this property if two sequences differing at a single site are more likely to generate the same phenotype than two arbitrary sequences \cite{greenbury:2016}.
\item[\it Genotype network] A set of mutually connected genotypes that have the same phenotype.
This term is usually employed as a synonym of \emph{neutral network}, although in some contexts a genotype network need not be neutral---for instance, in the case of GP maps with both a categorical phenotype (e.g.\ molecular structure) and a quantitative fitness (e.g.\ thermodynamic stability of the structure).
\item[\it Genotypic evolvability] Total number of distinct alternative phenotypes that can be reached through point mutations from a single genotype \cite{wagner:2007}.
\item[\it Genotypic robustness] Number of point mutations that do not change the phenotype of a specific genotype. It is analogous to the neutrality of a genotype.
\item[\it Navigability] Ability to navigate throughout genotype space via neutral mutations.
\item[\it Neutral network] A set of mutationally connected genotypes that have the same fitness, including those that have the same phenotype. Often, it refers to the largest connected component of a neutral set.
\item[\it Neutral set] A set of genotypes which have the same fitness, including those that have the same phenotype. The \textit{neutral set size} is therefore the number of genotypes that map to a given phenotype.
\item[\it Organism] Any individual entity that embodies the properties of life, like a cell, an animal, or a plant. It is a synonym for ``life form''. By extension, it also applies to artificial life forms.
\item[\it Phenotype] A property which is encoded in the genotype and is biologically relevant, for example a molecular structure. Though abstract, this broad definition allows a variety of models to be treated with the same terminology.
\item[\it Phenotypic robustness] Average genotypic robustness of all genotypes in a neutral network \cite{wagner:2007}.
\item[\it Phenotypic evolvability] Total number of distinct alternative phenotypes that can be reached through point mutations from a phenotype's neutral network \cite{wagner:2007}.
\item[\it Plasticity] Quality of a genotype leading to the production of more than one phenotype depending on the environment \cite{rezazadegan:2018}.
\item[\it Promiscuity] Quality of a genotype leading to the production of more than one phenotype in the same environment.
\item[\it Quasispecies] Population structure with a large number of variant genomes related by mutations. Quasispecies typically arise under high mutation rates, as possible mutants change in relative frequency while replication and selection proceed \cite{domingo:2019}.
\item[\it Shape-space-covering] A GP map has the shape-space-covering property if, given a phenotype, only a small radius around a sequence encoding that phenotype needs to be explored in order to find the most common phenotypes \cite{schuster:1994}.
\item[\it Versatility] A quantitative measure of the rescaled robustness of a specific sequence position \cite{garcia-martin:2018}.
\end{description}
\label{box:definitions}
\end{infobox}
\end{figure}
\begin{figure}
\begin{infobox}[frametitle= Box~\ref{box:definitions}b.
Acronyms]
\begin{description}\setlength{\itemsep}{0pt}
\item[\rm CPMs] Cancer progression models
\item[\rm DAG] Directed acyclic graph
\item[\rm FACS] Fluorescence-activated cell sorting
\item[\rm FPGA] Field-programmable gate array
\item[\rm GP] Genotype-to-phenotype
\item[\rm MAVEs] Multiplexed assays for variant effects
\item[\rm MFE] Minimum free energy
\item[\rm MPRAs] Massively parallel reporter assays
\item[\rm NSS] Neutral set size
\item[\rm OLS] Oligo(nucleotide) library synthesis
\item[\rm SCRaMbLE] Synthetic Chromosome Recombination and Modification by LoxP-mediated Evolution
\end{description}
\end{infobox}
\end{figure}
A number of models work at levels above sequences. Simple gene regulatory networks act as effective genotypes in models that map them onto phenotypes defined as the steady-state gene expression pattern \cite{wagner:2003,ciliberti:2007}. A metabolic genotype is defined as all chemical reactions an organism can catalyse via enzymes encoded in its genome; the phenotype is defined as viability in minimal chemical environments that differ in their sole carbon sources \cite{matias-rodrigues:2009,samal:2010}. Those two models share the property that most genotypes do not map to any functional phenotype---it has been put forward that such a restrictive relationship may stem from a minimisation of the cost incurred by maintaining a complex functional network \cite{leclerc:2008}. However, genotype spaces where function is sparse still contain large neutral networks that percolate that space and guarantee phenotypic innovation without loss of function \cite{ciliberti:2007,matias-rodrigues:2009,barve:2013}. There are compact \cite{catalan:2018} and non-compact \cite{holzgrafe:2011} versions of the HP model with an overwhelming majority of non-functional genotypes where neutral networks are very small and mostly disconnected; therefore, innovation is severely hindered, if not plainly impossible, in those one-level maps. However, that lack of navigability turns out to be irrelevant if additional, higher levels are taken into account.
\subsection{Multi-level GP models}
\label{sec:multilevel}
Most computational GP maps studied to date, including those discussed in the previous section, only include one level (or scale) of description, mapping genotypes of different kinds to their corresponding phenotypes (see, however, \cite{serohijos:2014}). But even the simplest organisms include more than one level: RNAs and proteins perform enzymatic and regulatory reactions that in turn affect the availability of other molecules inside and outside the cell. If the study of one-level GP maps has led to great changes in our understanding of evolutionary theory, it stands to reason that studying multi-level GP maps will yield equally important insights. It has been shown that multilevel models endowed with biophysically realistic interaction rules lead to the emergence of complex fitness landscapes that permit multiple, equally successful, evolutionary pathways \cite{heo:2008,heo:2011} or the growth of organismal population size when protein-based, functional genotypes are discovered through evolution \cite{zeldovich:2007_PLoSCB}. Recent proposals for multilevel models are the model of RNA-based virtual cells discussed in Section~\ref{sec:VirtualCells}, a model of developmental spatial patterning \cite{khatri:2009,khatri:2019} (see Section~\ref{sec:SMevol}), and toyLIFE \cite{arias:2014,catalan:2018}.
toyLIFE is a multi-level model that includes genes, proteins and metabolites, as well as their regulatory and metabolic interactions. toyGenes consist of binary sequences (the genotype) that are first mapped to HP-like proteins. None of these proteins can be obtained from any other through single-point mutations. Proteins interact among themselves, with the genome, and with metabolites. The phenotype is defined by the set of metabolites that a given sequence is able to catabolise. In its three-gene version, the phenotype is mostly defined through the first two genes, which admit very few mutations, while the third gene is essentially free to mutate, thus restoring evolvability to the system. Additionally, the existence of promiscuous sequences further enhances navigability when environmental factors such as temperature are considered \cite{catalan:2017tesis}. Promiscuity was recognised long ago as a key property in adaptive processes \cite{jensen:1976} that, as yet, has not been explored in most GP maps. One of the most interesting results to come out of an early exploration of toyLIFE's metabolic GP map is that adding levels of complexity to a phenotypic definition actually increases robustness \cite{catalan:2018}: proteins can change and become non-functional, and regulatory functions can be altered, while the overall metabolic function remains constant. This suggests that the potential for cells to evolve toward new evolutionary challenges has been significantly underestimated in the past.
\subsection{RNA}
\label{subsec:RNA}
RNA is the most paradigmatic model for studying GP relationships and constructing GP maps \cite{schuster:1994,fontana:1993,schultes:2005,wagner:2005,smit:2006,cowperthwaite:2008,jorg:2008,stich:2011,aguirre:2011,schaper:2014,dingle:2015,garcia-martin:2018}. Two major breakthroughs behind its popularity were the development of empirically based energy models---of which the most widespread is the Turner nearest-neighbour energy model \cite{mathews:1999}---and two fast dynamic programming algorithms to determine the minimum free energy (MFE) secondary structure \cite{zuker:1981} and to compute the partition function \cite{mccaskill:1990} of a sequence. In general, a sequence can fold into a number of secondary structures, and the energy models and dynamic programming algorithms have made it possible to select low-energy structures \cite{wuchty:1999}, quantify their free energies \cite{lorenz:2011}, and use this to define a GP map in several ways. One GP map definition considers a single structure per sequence, usually the minimum-free-energy structure \cite{schuster:1994}. This leads to a many-to-one GP map, where each sequence maps to a single structure, but each structure can be generated by a number of different sequences. An alternative definition allows several low-free-energy structures per sequence, which leads to a more complex many-to-many relationship. Together, these different studies defined a range of formal measures to quantify some of the key features of GP relationships, such as plasticity, evolvability, robustness and modularity \cite{ancel:2000}. The results obtained with RNA through the years have served as inspiration and as a guide to our intuition when faced with other GP maps.
\subsubsection{Phenotypic bias in RNA}
\label{sec:RNABias}
We will start by reviewing results from the commonly studied many-to-one GP map, where the focus is solely on the predicted minimum-free-energy structure of each sequence.
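As a concrete illustration of how this many-to-one map is explored in practice, the following minimal Python sketch (ours, purely illustrative; it assumes the ViennaRNA Python bindings, \texttt{import RNA}, are installed, and the sequence length and sample size are arbitrary choices) estimates phenotype frequencies by random sampling of genotypes:
\begin{verbatim}
import random
from collections import Counter

import RNA  # ViennaRNA Python bindings (Turner energy model)

random.seed(1)
L, N = 30, 20000  # sequence length and number of sampled genotypes

counts = Counter()
for _ in range(N):
    seq = ''.join(random.choice('ACGU') for _ in range(L))
    structure, mfe = RNA.fold(seq)  # phenotype: MFE secondary structure
    counts[structure] += 1

print(len(counts), 'distinct structures in', N, 'random sequences')
for structure, n in counts.most_common(5):
    print(structure, 'estimated frequency:', n / N)
\end{verbatim}
Rank-ordering the resulting counts already displays the heavy-tailed phenotype-frequency distributions discussed below: a few structures absorb a large share of the samples, while most sampled structures appear only once.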
The largest exhaustive enumeration performed for RNA sequences, of length $L=20$, revealed a difference of ten orders of magnitude between the number of genotypes mapping to the rarest and to the most frequent secondary structure phenotypes \cite{schaper:2014}. Approximate calculations of NSSs for longer sequences \cite{dingle:2015,garcia-martin:2018} show that this spread grows rapidly with increasing length. For example, for $L=100$ this difference is expected to be over 50 orders of magnitude: these maps are extremely biased. In an important study \cite{jorg:2008}, the NSSs for longer RNAs were calculated using a sampling technique. When comparing to structures in the fRNAdb database for functional non-coding RNA (ncRNA) \cite{kin:2007}, they found, for systems of lengths $L=30$ to $L=50$, that the natural RNA secondary structures were typically among those with larger NSS. These results suggested that the strong bias in the GP map was reflected in the secondary structures found in nature. Another interesting set of studies compared structural features (e.g.\ distributions of stack and loop sizes) of natural secondary structures and those obtained when randomly sampling over sequences. They found that many of these features are quite similar \cite{fontana:1993}, and that natural and random RNA share strong similarities in the sequence nucleotide composition of secondary structure motifs such as stems, loops, and bulges \cite{smit:2006}. Why should random sampling over sequences generate distributions that are so similar to natural RNA, where natural selection would normally be thought to play an important role? The study of much larger datasets of natural RNA from the fRNAdb database---and for lengths ranging from $L=20$ to $L=126$---demonstrated that the distributions of various structural features, and also properties such as the genotypic robustness, are very close to those obtained by random sampling over genotypes \cite{dingle:2015}. Furthermore, the distribution of NSS for natural RNA was found to closely follow the NSS distribution that arises upon random sampling of genotypes. If one were instead to randomly sample over phenotypes, very significant differences from random genotype sampling (and natural RNA) would be found. By working out these counterfactuals it was therefore possible to demonstrate that the way in which variation arises through a GP map is dramatically different from the naive expectation that all potential variation is equally likely. The close agreement of the distributions found in nature and those found by random sampling of genotypes via the GP map is very surprising given that natural selection is expected to be an important factor in the process that allows a particular functional RNA to fix in a population. The fact that its effect is not really visible for the properties above, at least when compared to a null model of random sampling of genotypes, would appear to be strong evidence for the importance of anisotropic variation in determining evolutionary outcomes. However, before this conclusion can be drawn, it is important to remember that evolution does not proceed by random sampling of genotypes. Instead, it typically starts with a particular genotype and phenotype, and alters it via mutations that in turn generate new phenotypes that are either fixed or disappear over the generations in evolving populations.
Given the hyper-astronomically large size of these spaces, it is not clear that such a local search should be at all similar to the results of random sampling of genotypes, which is a global property that does not depend on the starting point in genotype space. Still, a counterexample among natural RNAs where selection seems to have played a visible role is that of viroids. Viroids are small, non-coding, circular RNA molecules that infect plants \cite{diener:1971}. Viroids have compact secondary structures that constrain their evolution \cite{elena:2009} and whose preservation seems essential to avoid degradation and inactivation \cite{diserio:2017}, and to minimise the effect of deleterious mutations \cite{sanjuan:2006:MBEI,sanjuan:2006:MBEII}. Viroids bear a number of paired nucleotides well above random expectations \cite{cuesta:2017}, such that the estimated NSSs of typical viroids are significantly below those of random sequences. For example, a typical structure for a circular RNA of length 399 has an average of 230 paired nucleotides and about $10^{91}$ compatible sequences. However, the largest known viroid is {\it Chrysanthemum chlorotic mottle viroid}, which matches that length, but has 280 paired nucleotides and an NSS of about $10^{72}$ genotypes \cite{catalan:2019a}. \subsubsection{Promiscuity in RNA} Beyond the many-to-one GP map, many-to-many GP maps that take into account the MFE structure and suboptimal structures in the Boltzmann ensemble have been studied \cite{ancel:2000, wagner:2014, rezazadegan:2018}. Suboptimal structures can be included according to several criteria: either all structures that fall within a fixed free energy range of the MFE structure are considered \cite{ancel:2000, wagner:2014}, or only those structures that have the same free energy as the MFE structure up to the energy resolution of the computational model \cite{rezazadegan:2018}. Three results stand out. First, a link was found between the suboptimal phenotypes of a sequence in the many-to-many GP map and the phenotypes in the mutational neighbourhood of the same sequence in the corresponding many-to-one GP map \cite{ancel:2000}. Second, genotypes with low promiscuity were shown to have MFE structures with higher modularity \cite{ancel:2000}. Finally, it was found that evolving populations encounter a higher number of phenotypes if suboptimal phenotypes are included \cite{wagner:2014}. Altogether, these observations point to the important adaptive role of molecular promiscuity by supplying alternative phenotypes in the absence of mutations, and so redefining the fitness landscape \cite{aguirre:2018}. \subsubsection{Hints from RNA inverse folding algorithms} The characterisation of functional phenotypes by designing sequences that fold into a given RNA secondary structure has been much less explored than the direct fold of given sequences. Finding sequences that yield a particular secondary structure is known as the RNA {\it inverse folding} problem. This is an NP-complete problem even for the MFE structure \cite{schnall-levin:2008}, hence a very demanding computational task. As a consequence, most approaches are based on local search algorithms \cite{churkin:2017}. In practice, RNA inverse folding algorithms are mostly intended for synthetic design, though they have occasionally been used to investigate GP relationships \cite{wagner:2007,borenstein:2006}.
However, their use is controversial due to the intrinsic bias of the underlying local search algorithms \cite{szollosi:2009}, which are by construction incomplete and therefore produce biased samples over multiple runs. This caveat notwithstanding, there are some inverse folding methodologies that appear more suitable for this purpose. The first method is a {\em soft inverse folding} approach which implements a dynamic programming algorithm to compute the RNA {\em dual partition function} \cite{garcia-martin:2016b}. This partition function is defined as the sum of Boltzmann factors $\sum_{\sigma}\exp(-E(\sigma,\Sigma)/T)$, where the sum runs over the RNA nucleotide sequences $\sigma$ compatible with a target structure $\Sigma$, $E(\sigma,\Sigma)$ is the energy of sequence $\sigma$ folded into $\Sigma$, and $T$ is the absolute temperature (in units of energy). To calculate this partition function, an energy-weighted sampling is performed from the low-energy ensemble of sequences that are compatible with the given secondary structure. While this approach is not particularly practical for synthetic design, it provides insights into molecular evolution. This theoretical abstraction and the measures derived from it, such as the {\em expected dual energy}, can provide useful information about general properties of the phenotypes without exploring the whole genotype space. Computational analyses based on the nearest neighbour energy model over all the RNA sequences in the Rfam database \cite{kalvari:2018} indicate that natural RNAs fold into secondary structures with energy higher than expected for sequences with the same length and GC content. Possible explanations for this observation are either that functional RNAs are not under evolutionary pressure to be highly thermodynamically stable or that sequence requirements prevent reaching minimum folding energies. On the other hand, experimental studies confirm that even random sequences frequently acquire compact folds similar to those of natural RNAs. Empirical observations further indicate that natural selection could be a determining factor in achieving unique, stable tertiary folds---i.e. without major competing phenotypes---under natural conditions \cite{schultes:2005}. In addition, the controlled bias in this sampling methodology provides a delimited context to evaluate the properties that characterise a functional RNA with respect to sequences with similar structure. Simulations using this approach indicate that bacterial ncRNAs are more plastic and less robust than other sequences with similar structure \cite{garcia-martin:2016b}. Although the samples returned by this algorithm are representative of the low energy ensemble of sequences of the given structure, the MFE structure of individual sequences is not necessarily the target structure. However, the proportions of alternative MFE structures among the sampled sequences give the distribution of {\em competing phenotypes} in the low energy ensemble of the target structure, which can in turn be interpreted as an estimate of the structures that are likely to coexist with that phenotype in a many-to-many GP map. Similar algorithms for computing and sampling from the {\em RNA dual partition function} with additional constraints have been developed and used to determine the neutral path between sequences in the same phenotype \cite{barrett:2018}. The second methodology is complete inverse folding based on constraint programming \cite{garcia-martin:2013}.
The constraint programming paradigm avoids exploring the whole sequence space when structural, sequence or environmental restrictions are included. These restrictions comprise, among many others, GC content, sequence motifs, multiple local and global structures and folding temperatures. Rather than slowing down the search, each constraint increases the speed of this algorithm. This algorithm can potentially retrieve all sequences that meet the requirements or conclude that no solution exists. In practice, the running time depends on the sequence space defined by the given constraints. These features make it appropriate for the study of genotype-phenotype-function relationships of moderately small functional RNAs with known moieties, or of regulatory RNA elements like riboswitches and thermoswitches. Some examples of the performance of complete inverse folding based on constraint programming are the computationally-based suggestion that the conserved GUH motif (where H stands for any nucleotide except G) in the hammerhead ribozyme type III cleavage site of {\it Peach latent mosaic viroid} is due to structural, rather than functional, requirements \cite{dotu:2014}, or that natural thermoswitches do not seem to be optimised to maximise the probability difference between the active and inactive structures at the corresponding folding temperatures \cite{garcia-martin:2016a}. \subsection{Artificial life} Evolutionary processes have not only been studied in biology, but also in man-made systems. Some models were designed to simulate biological evolution computationally and mimic biological properties. A widely used example is the digital model of a biological organism called Avida \cite{ofria:2004}. Avida organisms are pieces of code which can self-replicate and evolve towards optimal usage of computational resources. Richard Dawkins introduced a different form of artificial life to study evolution: biomorphs \cite{dawkins:2003} are two-dimensional stick figures produced recursively from a genotype, which consists of nine integers. These biomorphs resemble abstract animal or plant shapes. Lindenmayer systems are another famous recursive model which can produce plant-like figures \cite{lindenmayer:1968a,lindenmayer:1968b}. These model systems are abstractions of biological organisms, but they all imitate properties of biological systems: the recursive branching rules in Lindenmayer's systems and later in Dawkins' biomorphs were inspired by plant development, whereas Avida digital organisms have a metabolism and compete, just like bacteria \cite{ofria:2004,lindenmayer:1968b,dawkins:2003}. However, evolutionary principles have been applied even more generally: the study of programmable electronic hardware has been addressed using the GP framework \cite{raman:2011}. Circuit configurations were treated as genotypes and the function which a circuit computes as the corresponding phenotype. Here we will focus on results for four artificial life models: Avida organisms \cite{fortuna:2017}, biomorphs \cite{dawkins:2003,martin:2020}, the 2PD0L model \cite{lehre:2005,lehre:2007}, which is based on Lindenmayer's systems, and FPGAs \cite{raman:2011}, a type of programmable electronic circuit. These studies have focused on different properties, which makes a direct and quantitative comparison difficult.
However, similarities between these artificial life GP maps and molecular sequence-to-structure GP maps exist \cite{fortuna:2017,lehre:2005,lehre:2007,raman:2011}: first, in three of these four systems the number of genotypes mapping to a given phenotype was estimated and found to vary significantly between phenotypes \cite{fortuna:2017,raman:2011,martin:2020}. For the fourth model a related quantity, the neutral set diameter, was also found to differ between phenotypes \cite{lehre:2007}. Such a heterogeneity, or phenotypic bias, in the distribution of genotypes over phenotypes has long been observed in molecular structure GP maps \cite{schuster:1994,li:1996}. Second, a high degree of genotypic robustness was observed, which enables the formation of neutral networks \cite{fortuna:2017,raman:2011,lehre:2007,martin:2020}. This property was also first found in molecular structure GP maps \cite{lipman:1991} and is referred to as genetic correlations \cite{greenbury:2016}. A third shared property follows from the vastly different NSS: the probability of transitioning from a larger to a chosen smaller neutral set by point mutations is much smaller than that in the reverse direction. This asymmetry is known from molecular structure GP maps \cite{fontana:1998b} and has been confirmed for two of the artificial life GP maps: Avida \cite{fortuna:2017} and the 2PD0L model \cite{lehre:2005}. In addition to these shared properties, there are points in which the various artificial life systems differ. In Avida, a high fraction of genotypes is considered inviable because the organisms are unable to reproduce \cite{fortuna:2017}, whereas in the biomorphs system all genotypes produce well-defined drawings and all stick figures are viable until an external decision is made about the fitness of specific shapes. In molecular GP maps the fraction of viable genotypes also depends on the system: in studies of model proteins, a large fraction of genotypes does not fold into a unique structure and is considered unstable, whereas for RNA secondary structure a minimum free energy structure is found for most sequences \cite{ferrada:2012}. Further comparisons could be made once quantities defined for GP maps, such as phenotypic robustness and evolvability, NSS, and mean-field mutation probabilities, are evaluated consistently for all of these and further artificial life models. Commonalities between artificial life and molecular structure GP maps dominate the picture at present, but future research may also identify differences between these two groups of models. \section{The universal topology of genotype spaces} \label{sec:UnivTopology} Some of the results highlighted in the former section hint at the possibility that any sensible GP map (and, by extension, artificial life system) is characterised by a generic set of structural properties that appear repeatedly, with small quantitative variations, regardless of the specifics of each map. Extensive research performed in recent years has confirmed this possibility to an unexpected degree. Some of the commonalities documented are navigability, as reflected in the ubiquitous existence of large neutral networks for common phenotypes that span the whole space of genotypes; a negative correlation between genotypic evolvability and genotypic robustness; a positive correlation between phenotypic evolvability and phenotypic robustness; a linear growth of phenotypic robustness with the logarithm of the NSS; and a near-lognormal distribution of the latter.
There are recent and comprehensive reviews of the properties measured and shared by different GP maps \cite{reidys:1997,stadler:2006,wagner:2011,ahnert:2017,aguirre:2018,nichol:2019}. In the following sections, we discuss new views on the plausible roots of this seemingly universal class of GP maps. \subsection{Possible roots of universality in GP maps} \label{sec:PossibleRoots} The question obviously arises: Why are structural properties of GP maps unaltered by the details of the mapping? Part of the answer must lie in the topology of the very high dimensional spaces governing the relationship between genotypes and phenotypes. Our intuitions often fail us here because these spaces are highly interconnected. Although their volumes grow exponentially with sequence length, distances grow only linearly. For example, if one made a single copy of every RNA of length $L=79$, the molecules would weigh more than the Earth \cite{louis:2016}. Yet none of those strands is more than 79 point mutations away from any other. One way this interconnection manifests itself is through the property of shape-space covering, a term first introduced for GP maps in the RNA context \cite{schuster:1994}, and borrowed from its original use in immunology \cite{perelson:1979}. It captures the fact that many phenotypes are only a handful of mutations away from one another. While this property has been best studied in the secondary structure RNA GP map, it has also been shown to be present in the HP model \cite{bornberg-bauer:1999,ferrada:2012}, toyLIFE \cite{catalan:2018}, the polyominoes \cite{greenbury:2014}, and a model of gene expression \cite{khatri:2009} (where it is described as ergodicity of phenotypic exploration). Shape-space covering suggests that no matter where you start, many other phenotypes are in principle close by in terms of Hamming distance. In the cases above, this holds even if the search begins in an arbitrary genotype. In GP maps where function is sparse in genotype space \cite{ciliberti:2007,matias-rodrigues:2009,barve:2013}, phenotypes are still close to each other, but links are established through a limited number of genotypes that might take a long time to find through random walks on the neutral network. \subsection{Constrained and unconstrained sequence positions. Formalising neutrality and evolvability} \label{sec:ConstrainedPositions} The intuitions above have received quantitative support from analytically tractable, streamlined GP maps which aim to capture the essentials of generic GP map features. These models, the results attained and the clues they provide are summarised in this section, which might appear slightly technical to the unfamiliar reader but clarifies possible constructive principles of evolutionarily apt GP maps. Highly simplified, abstract GP maps can reproduce many of the generic properties discussed \cite{greenbury:2015,weiss:2018}. These simplified maps hint at two major possible causes underlying structural universality: (i) the partition of sequence regions into constrained and unconstrained parts and (ii) non-local interdependence of sequence positions with regard to their constraints (as sketched in Fig.~\ref{fig:GPmodels} (d,e)). Let us illustrate how to derive general results with a simple example. Consider a sequence of length $L$ whose first $\ell$ positions are fully constrained (changing any of those positions amounts to changing the phenotype) and the remaining $L-\ell$ positions are neutral (changes do not affect phenotype).
If every position in the sequence admits $k$ possible values ($k=2$ for a binary alphabet, formed for example by the symbols $\{0, 1\}$, and $k=4$ for quaternary alphabets, such as $\{$A, C, G, U$\}$), then there are $k^{\ell}$ different phenotypes for every value of $\ell$, each of size $k^{L-\ell}$. Using a rank-ordering of the sizes of phenotypes and some simple algebra, it is easy to conclude that the probability $p(S)$ that a phenotype has size $S$ is $p(S) \propto S^{-\alpha}$, with $\alpha=2$ \cite{manrubia:2017}. If the restriction of a position being fully constrained or neutral is relaxed, different values of the exponent $\alpha$ can be obtained. The exponent also changes if a stop codon (equivalent to considering an $\ell$-dependent amount of lethal mutations) is introduced \cite{greenbury:2015}. \begin{figure}[ht] \begin{center} \vspace{-0.3cm} \includegraphics[width=6cm,clip]{figure2.png} \vspace{-1cm} \end{center} \caption{Predicted and measured properties of RNA phenotypes. (a) Two-dimensional histogram, with both axes and counts on logarithmic scales, of the estimated versus actual abundance of phenotypes of four-letter RNA of length $L=16$, with abundances estimated through versatility as defined in the main text \cite{garcia-martin:2018}; (b) NSSs of natural RNAs of length $L=100$ from the ncRNA database, estimated using the method of \cite{jorg:2008} \cite{dingle:2015}. Random evolutionary search is highly skewed towards the largest phenotypes, as evidenced by the predicted shape of the full, lognormal distribution (solid curve): phenotypes of small and typical sizes are not found in nature.} \label{fig:Versatility} \end{figure} Interestingly, the shape of the distribution $p(S)$ changes to a lognormal function if, in the examples above, constrained sites can be arbitrarily distributed along the sequence. In general, the positions of a sequence are neither constrained nor neutral, but versatile in varying degrees. Let us define the {\it versatility} $v_i$ of position $i$ in a sequence as the average number of alphabet letters at that site that do not modify the phenotype. This extends the ideas above and provides a simple estimation of neutral set size $S$, as $S = v_1 v_2 v_3 \dots v_L$. This estimated value has been shown to be a very good approximation to the NSS in several GP models such as RNA, HP and toyLIFE \cite{garcia-martin:2018} (Fig. \ref{fig:Versatility}a). In all those cases and several others, the distribution $p(S)$ is compatible with a lognormal, which can be analytically derived under very generic assumptions in the case of RNA \cite{manrubia:2017} (Fig. \ref{fig:Versatility}b). Moreover, the results suggest that this approximation can be extrapolated to larger system sizes. Additional properties, such as genotypic and phenotypic robustness, can be analytically obtained in such effective models \cite{greenbury:2015}, which constitute a sound first step towards deriving a formal theory of genotype spaces and their universal properties. Generic biological sequences display the characteristics above in almost every biological context: exons and introns correspond to constrained and unconstrained regions, as do genes and noncoding intergenic sequences. Start and stop codons, as well as interactions between transcription factors and their targets, are examples of how the constraints on one sequence region depend on another region. As a result it is likely that the same GP map properties we observe in abstract model systems also hold for much more complex and biologically realistic phenotypes.
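The versatility estimate is straightforward to implement. The following minimal sketch applies it to the fully constrained/neutral toy map introduced above, an assumed test case for which the product $v_1 v_2 \dots v_L$ recovers the NSS exactly; in realistic GP maps the $v_i$ would instead be estimated from samples of the neutral set.

\begin{verbatim}
# Minimal sketch of the versatility estimate S ~ v_1 v_2 ... v_L on a
# toy GP map in which the first ELL positions are fully constrained
# (the phenotype is the constrained prefix) and the rest are neutral.
from itertools import product

ALPHABET = "01"           # k = 2
L, ELL = 8, 3
TARGET = "010"            # phenotype under study

def phenotype(seq):
    return seq[:ELL]      # toy map: only the first ELL positions matter

neutral_set = ["".join(s) for s in product(ALPHABET, repeat=L)
               if phenotype("".join(s)) == TARGET]

def versatility(i):
    # Average number of letters at site i (the current one included)
    # that leave the phenotype unchanged, averaged over the neutral set.
    total = sum(sum(phenotype(seq[:i] + a + seq[i + 1:]) == TARGET
                    for a in ALPHABET)
                for seq in neutral_set)
    return total / len(neutral_set)

estimate = 1.0
for i in range(L):
    estimate *= versatility(i)

print("true NSS:", len(neutral_set))      # 2**(L - ELL) = 32
print("versatility estimate:", estimate)  # exact for this toy map
\end{verbatim}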
The challenge in these more complex GP maps, however, is the vast size of the genotype space. A protein of 300 residues has a sequence space of size 20$^{300}$. Approaches that can estimate the structural properties of a GP map from relatively small samples are therefore essential. Knowledge of these properties is not just interesting for the study of GP maps, but also has potentially useful applications \cite{dingle:2015}. Being able to measure properties such as the phenotypic robustness, evolvability, and neutral network size of phenotypes in more complex GP maps would therefore provide a powerful methodological tool for the prediction of evolutionary pathways. The division of sequences into constrained and unconstrained regions is also likely to make prediction of structural GP map properties from local samples easier. This is because such a division implies that many sequence positions are largely independent of each other with regard to their phenotypic effect. While important interdependencies remain, which particularly affect evolvability, the fact that interdependent sequence positions are likely to constitute a relatively small fraction of the total sequence means that a sampling approach is feasible for the purpose of estimating neutral network sizes and phenotypic robustness. \section{Evolutionary dynamics on genotype spaces} \label{sec:dynamics} In the previous sections we have discussed the static properties of genotype spaces, their plausible universality and some basic principles that may underlie their topology. Such findings are relevant by themselves, but a further aim is to uncover the consequences of genotype space architecture in evolutionary dynamics. Evolution can be pictured as navigation on the space of all possible genotypes \cite{maynard-smith:1970}, and GP maps describe the way different phenotypes are organised in such a space \cite{alberch:1991}. This organisation and the intrinsic structure of GP maps affect, among other things, the ability to find genotypes and phenotypes in evolutionary searches \cite{schaper:2014,cowperthwaite:2008}, as well as the rate of adaptation \cite{draghi:2010,manrubia:2015}. Early studies of dynamics on neutral networks quantified the trend of populations to maximise genotypic robustness by demonstrating that mutation-selection equilibrium is solely determined by the network topology \cite{nimwegen:1999}. Still, the time to reach equilibrium is an inverse function of the mutation rate \cite{aguirre:2009}. Neutral networks in GP maps are assortative, at least in the few instances where this property could be quantified \cite{aguirre:2011}: the neutrality of genotypes one mutation away from each other is positively correlated. As a result, the dynamics is naturally canalised towards maximally connected regions \cite{ancel:2000}, leading to an acceleration in the rate of accumulation of neutral mutations with time \cite{manrubia:2015}. In more recent analyses, attention has turned towards the effect of phenotypic bias in adaptation, as we have already discussed by means of enlightening studies with RNA. The question has also been investigated using a modified version of toyLIFE to model pattern-formation in regulatory networks \cite{catalan:2017tesis,catalan:2020}, aimed at finding out how evolution chooses between two {\em a priori} equally fit phenotypes.
It turns out that evolutionary dynamics at the phenotypic level cannot be well described by a Markovian process between phenotypes \cite{manrubia:2015}, because of the nontrivial topology of each phenotype's neutral network \cite{aguirre:2018}. As a matter of fact, the escape time from one phenotype does not follow an exponential distribution, as most evolutionary models assume. This is one instance of the so-called phenotypic entrapment \cite{manrubia:2015}, in which the trend of populations to become trapped in increasingly robust regions of a phenotype neutral network results in a long-tailed distribution of escape times: either the population escapes very fast, or takes a very long time to do so. Accounts of evolution on neutral networks driven by point mutations and the corresponding mathematical formalism can be found elsewhere \cite{wilke:2001BMB,reidys:2001,aguirre:2018}, though some essentials will also be described here. In this section we will mainly discuss the effects of a largely disregarded but essential evolutionary mechanism (recombination) and how mutational bias affects isotropic searches. We continue with evolutionary dynamics on genotype and phenotype networks defined by point mutations, where, if we make the ergodic assumption that all typical phenotypes are locally accessible, we are led in a natural manner to the formulation of the statistical mechanics of phenotypic evolution. We close by discussing a number of applications where these ergodic assumptions are most appropriate. The approaches in this section differ in the formalism used (complex networks at large, mean-field effective models and statistical mechanics) but all converge on the main emerging lesson: the size of a phenotype plays a role in evolution comparable to that of fitness. Quantification of their relative weight through formal approaches might eventually settle the false dichotomy between neutralism and adaptationism. \subsection{Robustness and recombination} \label{sec:Recombination} Genotypic robustness is a property of the GP map that quantifies to what extent functional genotypes can be maintained in the presence of random mutations \cite{visser:2003,lenski:2006,masel:2010,wagner:2005}. Specifically, consider a genotype encoded by a sequence $\sigma$ of length $L$ that admits a total of $(k-1)L$ single point mutations (recall that $k$ is the size of the alphabet). Genotypes are classified as either viable (functional) or lethal (non-functional). Then the genotypic robustness $r_\sigma$ of a viable genotype is defined as the fraction of mutations that maintain viability \cite{chen:2009}, $r_\sigma = n_\sigma/[(k-1)L]$, where $n_\sigma$ is the number of viable mutational neighbours. The population-averaged robustness is correspondingly defined as \begin{equation} \label{Krug_robustness} r = \sum_{\sigma \in V} r_\sigma \nu_\sigma^\ast, \end{equation} where $V$ denotes the set of viable genotypes and $\nu_\sigma^\ast$ is the stationary frequency of genotype $\sigma$. Two limiting cases are of particular interest. If the product of population size $N$ and mutation rate $U$ per individual and generation is small, $N U \ll 1$, the population is monomorphic and performs a random walk on the network of viable states. The stationary frequency distribution is then uniform and (\ref{Krug_robustness}) reduces to \begin{equation} \label{Krug_uniform} r_0 = \vert V \vert^{-1} \sum_{\sigma \in V} r_\sigma, \end{equation} where $\vert V \vert$ is the number of viable genotypes.
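These definitions are easy to explore numerically. The minimal sketch below assumes a percolation-type toy landscape of the kind shown in Fig.~\ref{JK-Fig2} (each binary genotype viable with a fixed probability) and computes the genotypic robustness $r_\sigma$ and its uniform average $r_0$.

\begin{verbatim}
# Minimal sketch: genotypic robustness r_sigma and the uniform
# (random-walk) average r_0 in the monomorphic limit NU << 1, on a
# percolation-type landscape: binary sequences (k = 2), each genotype
# viable independently with probability P_VIABLE.
import random
from itertools import product

random.seed(1)
L, P_VIABLE = 8, 0.2
genotypes = ["".join(g) for g in product("01", repeat=L)]
viable = {g for g in genotypes if random.random() < P_VIABLE}

def point_mutants(g):
    for i in range(L):
        yield g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:]

def robustness(g):
    # Fraction of the (k-1)L = L point mutants that remain viable.
    return sum(m in viable for m in point_mutants(g)) / L

r0 = sum(robustness(g) for g in viable) / len(viable)
print("viable genotypes:", len(viable))
print("uniform robustness r_0 =", round(r0, 3))
\end{verbatim}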
On the other hand, when $NU \gg 1$, the stationary frequency distribution is determined by mutation-selection balance and can be shown to be given by the leading eigenvector of the adjacency matrix of the network of viable genotypes \cite{bornberg-bauer:1999,nimwegen:1999}; see also Section~\ref{sec:PhenotypicTransitions}. The population robustness $r$ is related to the corresponding eigenvalue and exceeds the uniform robustness $r_0$ whenever the network is inhomogeneous. This implies that selection in large populations increases robustness by focusing the population in highly connected regions of the network. Numerical studies of recombining populations on various types of genotype networks have indicated that recombination enhances the focusing effect of selection and thus substantially increases genotypic robustness \cite{azevedo:2006,hu:2014,huynen:1994,singhal:2019,szollosi:2008,xia:2002}. Recently, a systematic and largely analytic investigation of the relationship between recombination and genotypic robustness within the framework of deterministic mutation-selection-recombination models has been presented \cite{klug:2019}. As a simple but informative example, consider the space of binary sequences $\{0,1\}^L$ endowed with a `mesa' landscape where genotypes carrying up to $\eta$ 1's are viable and all others are lethal \cite{wolff:2009}. The genotypes on the brink of the mesa carry exactly $\eta$ 1's and have robustness $r_\sigma = \eta/L$, whereas all others have robustness $r_\sigma = 1$. Combinatorial considerations show that the uniform robustness $r_0 \approx 2\eta/L$ for large $L$ and $\eta < L/2$, reflecting the fact that a large fraction of genotypes are located at the brink for purely entropic reasons. The maximal robustness that can be achieved through selection alone is \cite{wolff:2009} \begin{equation} \label{Krug_kL} r \approx 2 \sqrt{\frac{\eta}{L}\left( 1 - \frac{\eta}{L} \right)} \;\; \textrm{for} \;\; \eta < L/2, \end{equation} which exceeds $r_0$ but is small compared to unity when $\eta \ll L$. Thus selection only partly counteracts the entropic outward pressure and, as a consequence, a large part of the population is still located near the brink under mutation-selection balance. By contrast, in the presence of recombination $r \to 1$ for small mutation rates, because the contracting property of recombination efficiently transfers the population to the interior of the mesa where all genotypes are surrounded by viable mutants. \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{figure3.pdf} \end{center} \caption{\label{JK-Fig2} Genotype network generated by assigning viable genotypes at random with probability $p=0.2$ to binary sequences of length $L=8$. The largest connected component of viable genotypes is shown in the centre of each panel, and smaller components and isolated nodes are arranged in a ring surrounding the central component. (a) Network structure visualised by the recombination weight $\omega_\sigma$. Node areas are proportional to $\omega_\sigma^6$ and the recombination centre is marked in purple. (b) Stationary frequency distribution of a non-recombining population. Node areas are proportional to the stationary frequency $\nu_\sigma^\ast$ of the respective genotype, and the edge width between neighbouring genotypes $\sigma$, $\tau$ is proportional to $\max[\nu_\sigma^\ast, \nu_\tau^\ast]$. (c) Same as (b) for a recombining population.
Note that the population is much more strongly concentrated on the recombination centre than in panel (b). In panels (b) and (c) the mutation rate per site is $\mu = U/L = 0.001$. Courtesy of Alexander Klug.} \end{figure*} Simulations on different types of random genotype networks show that the massive enhancement of robustness found for the mesa landscape is generic, and typically a recombination rate on the order of the mutation rate suffices to achieve this effect. It is not obvious that the focusing of the population towards the centre of its genotypic range by recombination should generally increase robustness in this case, because viable and lethal genotypes are randomly interspersed in the network. To rationalise the observed increase in robustness it is useful to quantify the likelihood that a genotype $\sigma$ is created by recombination through its \textit{recombination weight} $\omega_\sigma$ defined by \begin{equation} \label{Krug_recombination_weight} \omega_\sigma = \frac{1}{\vert G \vert} \sum_{\kappa \in V, \tau \in V} R_{\sigma \vert \kappa \tau}. \end{equation} Here $R_{\sigma \vert \kappa \tau}$ denotes the probability that $\sigma$ is generated by crossover from $\kappa$ and $\tau$, and $\vert G \vert$ is the total number of genotypes. The normalisation ensures that $\omega_\sigma \in [0,1]$, and the recombination weights sum to $\sum_\sigma \omega_\sigma = \vert V \vert^2/\vert G \vert$. The genotype that maximises $\omega_\sigma$ is called the \textit{recombination centre} of the network and provides a good predictor for the point of concentration of the recombining population in the limit $U \to 0$ (see Fig.~\ref{JK-Fig2} for an example). Moreover, for two classes of random, percolation-type genotype networks and one empirical fitness landscape, the recombination weight $\omega_\sigma$ was found to be positively correlated with the genotypic robustness $r_\sigma$. If this correlation were a generic feature of GP maps, it would constitute a mechanistic explanation for how recombination acts to enhance genotypic robustness. Future work should therefore elucidate the conditions on the topology of the genotype network required for such a correlation to be present. It is not difficult to construct counterexamples where the recombination centre has low robustness, e.g., by placing a hole of lethal genotypes at the centre of a mesa landscape. Only the investigation of specific, biophysically motivated GP maps such as RNA secondary structures or lattice proteins will clarify whether or not such instances are statistically relevant. More broadly, it appears that a common perspective on recombination, robustness and evolvability \cite{masel:2010,lenski:2006,wagner:2005} may help to develop and test novel hypotheses about the evolutionary origins of these important biological phenomena. \subsection{Mutation bias} Some regions of genotype space exhibit biases in the mutations they undergo. For instance, GC-rich regions have more G$\leftrightarrow$C transversion (purine-to-pyrimidine or pyrimidine-to-purine) mutations than transitions (pyrimidine-to-pyrimidine or purine-to-purine mutations). This may interact with biases in the generation of genetic variation, because some mutations occur more frequently than others. For instance, the rate of A$\leftrightarrow$G transitions exceeds the rate of T$\leftrightarrow$C transitions in transcribed human genes, whereas there is no significant difference in non-transcribed regions \cite{green:2003}.
Furthermore, CpG dinucleotides---regions of DNA where a C is directly followed by a G---are considered ``hot spots'' for G$\rightarrow$A and C$\rightarrow$T transition mutations \cite{nachman:2000}. Other forms of mutation bias such as deletion bias and strand-specific bias have been reported in bacterial genomes \cite{paul:2013,mira:2002}. Under certain population genetics conditions, mutation bias can be an orienting factor in adaptive evolution \cite{mccandlish:2014,stoltzfus:2017}, and several experimental evolution studies indicate that mutation bias can influence trajectories of adaptive protein evolution \cite{lozovsky:2009,rokyta:2005}. It is possible to get a better understanding of how such mutation biases affect the outcomes and mutational trajectories of adaptive evolution by studying their impact on the navigability of GP maps. Instead of the classic depiction of a GP map in which all the possible mutations are equally likely to occur, one could consider regions of genotype space that are differentially prone to distinct kinds of mutations. Ultimately this would affect the probability of traversing different edges in the genotype network and, therefore, its navigability. In this context, a {\it mutation bias weight} could be formally defined and introduced into a more general formulation of genotype networks, by biasing the accessibility of different genotypes. Understanding the potential evolutionary implications of mutation-biased GP maps could provide us with valuable information about the nature of the systems they represent. For example, if a bias towards certain kinds of mutations enhances the ability to find the adaptive peaks of a certain GP map, a testable prediction could be that adaptive genotypes are more likely to evolve in regions of the genome that are prone to that particular kind of mutation. Moreover, integrating mutation bias into the study of GP maps can change properties such as robustness and evolvability \cite{cano:2020,sella:2005}. Both robustness and evolvability are based on the structure of genotypic neighbourhoods, and this structure can change if mutation bias is considered. For instance, a genotype might seem highly robust when most of its neighbours in the genotype space map onto the same phenotype. However, if there is a sufficiently high mutation bias towards mutations that do not preserve that phenotype, robustness would be diminished. The same principle can apply to evolvability. \subsection{Phenotypic transitions as competitions between networks} \label{sec:PhenotypicTransitions} \begin{figure*} \begin{infobox}[frametitle= Box~\ref{box:networks}: Genotype spaces as networks of networks] {Populations evolve in steadily changing environments where the impact of internal and external perturbations can rarely be considered in full. Often, nonlinear responses to small external changes hinder predictability, as weak perturbations might trigger critical transitions that strongly influence the fate of whole ecosystems \cite{may:1977,scheffer:2001}. Complex network theory and the tools associated with it offer a powerful framework to tackle this type of dynamical system, since a multitude of natural systems can be modelled as nodes (agents) connected by links (interactions). While network science has largely focused on single networks, in the last decade the study of dynamical properties on networks of networks or, in a more general way, on multilayer networks \cite{kivela:2014}, has attracted wide attention \cite{gao:2011,quill:2012}.
One important motivation has been the finding that robustness, synchronisation or cooperation lead to different behaviour when studied in isolated or in interconnected networks \cite{buldyrev:2010,aguirre:2014,gomezgardenes:2012,iranzo:2016}. However, the main reason for this change of perspective has been the realisation that many natural systems, beyond displaying a network-like organisation, are also made of interacting and competing networks at very different scales, from the molecular level to supranational organisations \cite{buldu:2019}. The extent to which network science can foster our knowledge and comprehension of the evolution and adaptation of heterogeneous populations in an ever-changing biosphere is a relevant open question. In particular, the {\it theory of competing networks} can be used to analyse the evolutionary dynamics of populations in a space of genotypes that can be regarded as a network of networks \cite{yubero:2017}. From this viewpoint, population evolution is described as a competition for resources of a certain kind, where the competitors are whole networks instead of independent nodes \cite{aguirre:2013}. } \label{box:networks} \end{infobox} \end{figure*} Formal studies of the way the structure and navigability of GP maps affect evolutionary dynamics can provide insights into the mechanisms underlying adaptive evolution, robustness and the emergence of phenotypic innovations. In the previous two sections, it has been shown that links between genotypes in a genotype network are weighted: microscopic mechanisms such as recombination and mutation bias modify the likelihood of transitions between pairs of genotypes. Constant link weights of a generic transition matrix {\bf M} correctly describe mutation bias, but cannot account for the effects of recombination, since in the latter case they depend on the abundances of each genotype $\nu_\sigma$ in a nonlinear way, and in general are a time-dependent quantity. The simultaneous consideration of point mutations and recombination in a network framework remains a topic for future studies. In the following, we summarise a mutation-selection evolutionary process on a network of genotypes subject only to point mutations using tools from complex network theory. Consider a vector $\vec n(t)$ whose components are the population of individuals at each node at time $t$ (upon normalisation, each component $n_{\sigma}(t)$ yields the genotype frequency $\nu_{\sigma}(t)$). Then, \begin{equation} \label{eq:M*n} \vec n(t+1)=\textbf{M} \vec n(t) \end{equation} represents the dynamics of the population, where {\bf M} is a transition matrix with information on the fitness of each genotype, on the mutation and replication process, and on the weighted topology of the network. $\vec n(t)$ describes the distribution, at each time $t$, of the population of sequences on the space of genotypes. As already stated, mutation-selection equilibrium is independent of the initial state and given by the eigenvector $\vec u_1$ associated with the largest eigenvalue $\lambda_1$ of {\bf M}. Furthermore, $\lambda_1$ yields the growth rate of the population at equilibrium, and $\vec u_1$ is also a measure (known as eigenvector centrality) of the topological importance of a node in a network \cite{newman:2010}.
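As an illustration, the equilibrium can be obtained by simple power iteration. The sketch below builds {\bf M} for an assumed toy viability landscape (all viable genotypes equally fit, offspring mutating to lethal genotypes lost) and evaluates both $\lambda_1$ and the population robustness $r$ of Eq.~(\ref{Krug_robustness}) at equilibrium; parameter values are illustrative assumptions.

\begin{verbatim}
# Minimal sketch: mutation-selection equilibrium n(t+1) = M n(t) on a
# random network of viable genotypes, via power iteration. Mutation
# flips one bit at rate MU per site; mutants landing on lethal
# genotypes are lost from the population.
import numpy as np

L, MU, P_VIABLE = 8, 0.001, 0.2
rng = np.random.default_rng(1)
viable = [g for g in range(2 ** L) if rng.random() < P_VIABLE]
index = {g: i for i, g in enumerate(viable)}
n = len(viable)

M = np.zeros((n, n))
for g in viable:
    M[index[g], index[g]] = 1.0 - L * MU   # replication, no mutation
    for site in range(L):
        m = g ^ (1 << site)                # single point mutation
        if m in index:
            M[index[m], index[g]] = MU     # viable mutant survives

v = np.ones(n) / n
for _ in range(5000):                      # power iteration
    w = M @ v
    lam, v = w.sum(), w / w.sum()

r_sigma = np.array([sum((g ^ (1 << s)) in index for s in range(L)) / L
                    for g in viable])
print("asymptotic growth rate lambda_1 ~", round(lam, 6))
print("population robustness r =", round(float(r_sigma @ v), 3))
\end{verbatim}

On inhomogeneous networks the equilibrium robustness $r$ computed this way exceeds the uniform average $r_0$ of the earlier sketch, in line with the focusing effect of selection discussed above.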
In the context of the theory of competing networks, any dynamics that takes place on networks interconnected through a limited number of links (networks of networks) can often be characterised as a competition where the contenders are whole networks, and where eigenvector centrality represents the resource that the agents compete for (see Box~\ref{box:networks}). The final outcome of such a struggle for centrality strongly depends on the internal structure of the competing networks and on the links connecting them \cite{aguirre:2013}. On the other hand, it has been shown \cite{aguirre:2015} that even when environmental perturbations are weak, populations may suffer critical transitions in their genomic composition when the fraction of lethal mutations (i.e.~of zero-fitness genotypes) is sufficiently high---of the order of that observed in natural populations \cite{eyre-walker:2007}. A recent analysis of these results suggested that the space of genotypes can be regarded as a network of networks in ``competition'' to attract population \cite{yubero:2017}, and that knowledge of the topology of the space of genotypes provides a certain capability to predict the future evolutionary dynamics of the population under study. In fitness landscapes with a large fraction of lethal genotypes (as could be the case for the non-compact HP model, GP maps for gene regulatory networks, or models for metabolism), the space of genotypes is formed by many subnetworks connected through narrow adaptive pathways. This topology induces drastic transitions of the population from one subnetwork to another, occasionally causing the extinction of the population. The key topological element underlying sudden genomic shifts is the high heterogeneity in the network describing and linking viable genotypes. This topology can arise under a significant fraction of lethal mutations (or non-viable genotypes), but the same phenomenon is observed in rugged fitness landscapes. \subsubsection{An empirical test of the theory: transition forecast} \label{sec:empiricaltest} It is highly likely that large molecular populations able to evolve fast, such as RNA viruses, can provide an empirical test of this predicted critical behaviour. The enormous advances in high-throughput sequencing allow for a very precise description of the populations at the molecular level, and in particular of the abundances of the coexisting genotypes. This information might be used to build the space of sequences associated with a population that evolves in a changing environment, and thus a proxy of the network of genotypes where the population evolves. Applying the theory of competing networks, it is conceivable that the eigenvalues of the different subnetworks and the centrality of the connector nodes would provide valuable information on how environmental variability affects the sharpness of the transitions and on the chances that the population could survive. The combination of tools from complex network theory and the last decades' research on state shifts in the biosphere \cite{barnosky:2012,brook:2013} might eventually lead to a prediction of the time left until the transition occurs. This prediction is important because, once a tipping point takes place, it becomes very difficult, if not impossible, to return to the previous state.
At present, a wide variety of early warning signals for state shifts has already been characterised, but none of them yields precise information about the time left before the tipping point is reached \cite{scheffer:2009,scheffer:2012}. However, calculations of the minimal distance between the first and second eigenvalues associated with the transition matrix {\bf M} could be used to obtain a first estimation of the time to the transition \cite{aguirre:2015}. In an evolving population, the relative abundances of the different genotypes could be used as an approximation of the eigenvector $\vec{u}_1$; a measure of the growth rate of the population at equilibrium could yield the largest eigenvalue $\lambda_1$, and $\lambda_2$ might be estimated by quantifying how resilient the population is to external perturbations \cite{dai:2012}. A sufficiently precise measurement of these quantities would represent a very fruitful connection between actual evolving populations and a dynamical description of possible sudden evolutionary transitions. On a related note, regarding the space of genotypes as a network of networks entails a more coarse-grained, effective model where each genotype network can be considered as a single node, and where the dynamics can be simplified to account only for changes in the phenotype. Links in this higher-level description would have weights proportional to the number of within- and between-phenotype links. In contrast with the description at the genotype level, though, transitions between phenotypes are no longer symmetrical \cite{fontana:2002,cowperthwaite:2008}, nor is the dynamics describing these transitions Markovian any more \cite{huynen:1996,manrubia:2015}. \subsection{A mean-field description of phenotype networks} The qualitative properties of a high-dimensional evolutionary search are inherent to navigable GP maps and very likely responsible for some of the generic features described in Section~\ref{sec:UnivTopology}. Despite all caveats that the complex dynamics at the genotype level may raise due to its non-Markovian nature \cite{huynen:1996,manrubia:2015}, the high dimensionality of genotype spaces helps us understand why a simple mean field model \cite{schaper:2014}, which averages over much of the local structure of a neutral set, succeeds in capturing some of those generic, dynamical properties. The model works with $\phi_{\xi \chi}$, the probability that a point mutation of a genotype mapping to phenotype $\xi$ generates a genotype mapping to phenotype $\chi$, averaged over all genotypes that map to $\xi$. By measuring the $\phi_{\xi \chi}$, a weighted network between all the phenotypes can be defined, with $\phi_{\xi \chi}$ as the weights. This allows for a much simpler dynamics that ignores the individual genotypes, and so analytic results can be derived for many properties in dynamical regimes ranging from the monomorphic to the fully polymorphic limits. Interestingly, for RNA, as well as for a number of other GP maps \cite{greenbury:2016}, it was found that, to a good first approximation, if $\xi \neq \chi$ then \begin{equation} \phi_{\xi\chi} \approx f_\chi, \end{equation} where $f_\chi$ is the global frequency of phenotype $\chi$, i.e., the fraction of genotypes that map to $\chi$. Since the $f_\chi$ range over many orders of magnitude, so do the $\phi_{\xi\chi}$. In contrast to the case where $\xi\neq\chi$, the robustness of phenotype $\xi$ is $\phi_{\xi\xi} \propto \log(f_\xi)$, and so varies much less with NSS.
This property of the robustness is critical for neutral exploration. The mean field model predicts that for many different starting phenotypes $\xi$, the probability that a different phenotype $\chi$ will appear as potential variation will scale as $f_\chi$. For several GP maps, this simplified model does an excellent job of predicting the rates at which variation arises in full GP map simulations. Since NSS, or equivalently $f_\chi$, varies over many orders of magnitude, this argument predicts that, to first order, the rate at which variation arises will also vary over many orders of magnitude. Therefore, even though the set of physically possible variations may be very large, only a tiny fraction of the most frequent phenotypes will ever be presented to natural selection. This \textit{arrival of the frequent} effect \cite{schaper:2014} is therefore very strong. Fundamentally it is a non-steady state effect, since the longer an evolutionary run proceeds, the more likely it becomes that potential variation with lower $f_\chi$ appears. The arrival of the frequent differs from the {\em survival of the flattest} \cite{wilke:2001Nat}, which describes the situation where a peak of lower fitness but larger NSS can nevertheless dominate over a higher fitness peak with a lower NSS. The latter effect can be analysed in a steady-state framework, whereas the former effect cannot. Let us return in this context to the question of why so many structural features, as well as the genotypic robustness of RNA secondary structures, are so accurately predicted by a null model that ignores selection entirely. The arguments above suggest that even in the more complex situation of RNA evolution in nature, variation will nevertheless, to a good first approximation, arise with a probability proportional to its NSS. Since this rate varies by so many orders of magnitude, this arrival of the frequent effect determines what natural selection can work with, and so tends to dominate over local fitness effects. Rare phenotypes will have almost no bearing on evolutionary dynamics: they will hardly be found by a population searching for an adaptive solution and, if they are found, they will be quickly lost due to mutations. This is akin to an entropic effect in statistical physics: dynamics tend to favour macrostates with a larger set of microstates. In other words, natural selection can only act on variation that has been pre-sculpted by the GP map. For the case of RNA described in Section~\ref{sec:RNABias}, it appears that natural selection mainly works by further refining parts of the sequence. This picture of the primacy of variation stands in sharp contrast to more traditional arguments about the importance of natural selection as an ultimate explanation of any evolutionary trends. It also raises many open questions. Are there other GP maps for which we can see such dramatic effects in nature? There are certainly conditions where this primacy of variation is incorrect. But how, and when, does this GP-map-based picture of pre-sculpted variation break down? The exceptional case of viroids, discussed in Section~\ref{subsec:RNA}, might be one such example, and may provide clues to answering the latter question.
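The mean-field relation $\phi_{\xi\chi} \approx f_\chi$ can also be probed directly by sampling. The following sketch, again assuming the ViennaRNA Python bindings and intended as an illustration rather than a reproduction of the cited analyses, estimates both sides of the relation for the most common phenotype of short random RNAs.

\begin{verbatim}
# Minimal sketch: test the mean-field relation phi_{xi,chi} ~ f_chi
# by random sampling. Assumes the ViennaRNA Python bindings.
import random
from collections import Counter

import RNA

random.seed(1)
L, N = 15, 20000
BASES = "ACGU"

def random_sequence():
    return "".join(random.choice(BASES) for _ in range(L))

# Global phenotype frequencies f_chi from random genotype sampling.
counts = Counter(RNA.fold(random_sequence())[0] for _ in range(N))
f = {p: c / N for p, c in counts.items()}

# Transitions out of the most common phenotype xi via one point mutation.
xi = counts.most_common(1)[0][0]
transitions = Counter()
sampled = 0
while sampled < N:
    seq = random_sequence()
    if RNA.fold(seq)[0] != xi:
        continue                  # rejection: keep only xi-genotypes
    sampled += 1
    i = random.randrange(L)
    mutant = seq[:i] + random.choice(BASES.replace(seq[i], "")) \
             + seq[i + 1:]
    transitions[RNA.fold(mutant)[0]] += 1

for chi, c in transitions.most_common(6):
    if chi != xi:
        print(f"phi = {c / N:.4f}   f_chi = {f.get(chi, 0.0):.4f}")
\end{verbatim}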
\subsection{Equilibrium properties and statistical mechanical analogies in the weak mutation regime} \label{sec:SMevol} The broad question of optimisation in evolution, such as the existence of a Lyapunov function describing a general dynamics and approach to equilibrium, was first addressed by Iwasa \cite{iwasa:1988} in his definition of ``free fitness'', in analogy to the free energy of statistical mechanics, and then later rediscovered for the particular case of the weak mutation regime \cite{sella:2005} and in the context of the evolution of quantitative traits \cite{barton:2009,barton:2009a}. The key insight is that, at finite population size, not fitness itself but a combination of fitness and Shannon entropy (weighted by $1/N_e$, where $N_e$ is the effective population size) is optimised over the evolutionary degrees of freedom of interest. This insight is consistent with the mean-field description reviewed in the previous section, where it has been shown that phenotypic bias is at least as relevant as phenotype fitness in evolutionary dynamics. From a statistical mechanics viewpoint, and in the weak mutation regime ($N_eU\ll1$, $N_eU\ln(N_es)\ll1$, where $s$ is the gain in fitness brought about by a mutation), populations are approximately monomorphic and the degrees of freedom of interest are the different alleles, codons or genotypes, which are fixed, or not, in the population. Evolution can then be described by a Markov process in which populations effectively jump sequentially between adjacent genotypes by a substitutional process \cite{mccandlish:2014}; in equilibrium (assuming uniform mutation and the fixation probability given by the diffusion approximation), the probability of occupation is given by the Boltzmann distribution \begin{equation}\label{Eq:GenotypeBoltzmann} p_\sigma = e^{2N_eF_\sigma}/Z, \end{equation} where $F_\sigma$ is the fitness of genotype, allele, or codon $\sigma$, and $Z$ is the partition function. It is clear that $N_e$ plays the role of inverse temperature, such that fitness dominates at large population sizes (low temperature) and genetic drift at small population sizes (high temperature). Many of the calculational tools of statistical mechanics and generating functions then carry over to evolutionary problems \cite{barton:2009a} under usual ergodic assumptions. The statistical mechanical analogy finds particular use in understanding the evolution of phenotypes arising from GP maps. Here, selection acts on phenotypes, but mutation and variation arise at the level of genotypes. Keeping in mind the many-to-one nature of most GP maps and phenotypic bias, the Boltzmann distribution of genotypes can be recast in terms of a Boltzmann distribution of phenotypes \cite{khatri:2009,khatri:2015}; as each genotype mapping to a given phenotype must by definition have the same fitness, the probability of each phenotype is the Boltzmann factor ($e^{2N_eF(\xi)}$) weighted by the degeneracy of each phenotype $\Omega(\xi)=k^L f_\xi$, giving \begin{equation}\label{Eq:PhenotypicBoltzmann} p(\xi) = e^{2N_e\Phi(\xi)}/Z, \end{equation} where \begin{equation}\label{Eq:FreeFitness} \Phi(\xi) = F(\xi)+\frac{S(\xi)}{2N_e} \end{equation} is the free fitness of phenotypes, $S(\xi)=\ln(\Omega(\xi))$ being the Boltzmann or \textit{sequence entropy} of phenotypes. We see that for small populations phenotypes with larger sequence entropy are favoured by genetic drift in evolution.
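A toy example makes this trade-off explicit. In the sketch below, with assumed illustrative numbers, a fit but rare phenotype competes with an unfit but highly degenerate one; the equilibrium of Eq.~(\ref{Eq:PhenotypicBoltzmann}) tips from the degenerate phenotype to the fit one as $N_e$ grows.

\begin{verbatim}
# Minimal sketch: the phenotypic Boltzmann distribution
# p(xi) ~ Omega(xi) exp(2 Ne F(xi)) for a toy map with a fit-but-rare
# phenotype competing against an unfit-but-highly-degenerate one.
import math

phenotypes = {
    "fit_rare":     {"F": 0.010, "Omega": 1e2},  # assumed toy values
    "unfit_common": {"F": 0.000, "Omega": 1e6},
}

def equilibrium(Ne):
    weights = {name: p["Omega"] * math.exp(2 * Ne * p["F"])
               for name, p in phenotypes.items()}
    Z = sum(weights.values())                    # partition function
    return {name: w / Z for name, w in weights.items()}

# Small Ne (high 'temperature'): sequence entropy wins.
# Large Ne (low 'temperature'): fitness wins.
for Ne in (100, 1000):
    dist = equilibrium(Ne)
    print(Ne, {name: round(v, 4) for name, v in dist.items()})
\end{verbatim}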
\subsubsection{Statistical mechanics of the evolution of transcription factor-DNA binding} The ideas above first found formal application in simple biophysical models of transcription factor DNA binding \cite{berg:2004,mustonen:2005,mustonen:2008}, where the degeneracy of binding affinities can be exactly quantified under simplifying assumptions. It is typically found that, for a given transcription factor, the amino acids at the binding interface have a strong preference to bind a single nucleotide; it is then mismatched nucleotides that control the binding affinity, as these are strongly destabilising: hydrogen bonding is disrupted at the interface, and hydrogen bonds with water molecules are lost. A simple model of transcription factor binding assumes that binding between protein and DNA can be reduced to either a quaternary or a binary alphabet, where the binding energy $E$ is proportional to the number of mismatches, or Hamming distance, $h$: $E=\epsilon h$. The degeneracy function is then related to the binomial coefficient \begin{equation}\label{Eq:BinomialDegeneracy} \Omega(h) \propto \binom{L}{h}(k-1)^h. \end{equation} This simple combinatorial argument shows that there is a huge degeneracy, or phenotypic bias, towards poor binding in this genotype-phenotype map. This methodology has been used to infer the effective genome-wide fitness landscape for transcription factor DNA binding in {\it Escherichia coli} and yeast \cite{mustonen:2005,mustonen:2008,haldane:2014}, suggesting that on average binding is under stabilising selection, with fitness decreasing monotonically with decreasing binding affinity. This simple model of transcription factor DNA binding suggests that smaller populations bear a significantly greater drift load under stabilising selection than would be predicted if we assumed evolution based on phenotypes only \cite{khatri:2015,khatri:2015a}; while selection pushes populations to larger binding affinities, there is an opposing sequence-entropic pressure towards poorer binding. In equilibrium, these opposing tendencies are balanced, and it is the free fitness that is maximised, not fitness. This effect of sequence entropy on drift load is significantly greater than would be expected for a trait under stabilising selection in a treatment that ignores degeneracy (see Box~\ref{box:quadraticlandscape}). \begin{figure*} \begin{infobox}[frametitle= Box~\ref{box:quadraticlandscape}: Quadratic free fitness landscape] {We examine a quadratic free fitness landscape, which is a simple description for the statistical mechanics of transcription factor DNA binding. For moderately large $L$, we can make the standard Gaussian approximation of the binomial distribution in the degeneracy function \eqref{Eq:BinomialDegeneracy}, such that the sequence entropy function (up to a constant) is approximately quadratic: \begin{equation}\label{Eq:QuadraticSequenceEntropy} S(h) \approx -\frac{k}{2\langle h \rangle}(h-\langle h\rangle)^2, \end{equation} where $\langle h\rangle = (k-1)L/k$ is the mean of the binomial distribution ($\langle h\rangle=L/2$ for binary, and $\langle h\rangle=3L/4$ for quaternary alphabets).
Further, if we assume that the fitness landscape of binding affinities is quadratic, of the form $F(h)=-\frac{1}{2}\kappa_Fh^2$, with its maximum for the best binders ($h=0$), then from Eq.~\eqref{Eq:FreeFitness} the free fitness of binding energies/Hamming distance $h$ is itself quadratic, with new curvature $\kappa=\kappa_F+\frac{k}{2\langle h\rangle N_e}$ and with shifted maximum $h^*=\frac{k}{2N_e\kappa}$: \begin{equation}\label{Eq:FreeFitnessTF} \Phi(h) = -\frac{1}{2}\kappa(h-h^*)^2. \end{equation} This new maximum is shifted to poorer binding affinities and represents the balance between selection and sequence entropy: \[ \frac{\mathrm{d}\Phi}{\mathrm{d} h} = \frac{\mathrm{d} F}{\mathrm{d} h} +\frac{1}{2N_e}\frac{\mathrm{d} S}{\mathrm{d} h} =0. \] It is instructive to note that the drift load for this simple GP map varies as $D\sim N_e^{-1}$, a far stronger dependence on population size than if we considered evolution on only a phenotypic landscape $F(h)$, which would give $D\sim N_e^{-1/2}$; this significant difference arises because the sequence-entropic pressure causes the peak of the phenotypic distribution to shift, whereas ignoring it would simply give rise to a broadening of the distribution. } \label{box:quadraticlandscape} \end{infobox} \end{figure*} \subsubsection{Evolution of genotypic divergence and reproductive isolation} \begin{figure*}[htbp] \includegraphics[width=0.65\textwidth]{figure4.jpg} \caption{Divergence for a simple GP map of transcription factor DNA binding. After a geographic split, a single ancestral species diverges into two independent lineages. Within simulations, the fitness of various hybrid combinations of loci from each lineage can be calculated and the number of inviable combinations (incompatibilities) recorded. Numerical results show that incompatibilities arise more quickly in smaller populations \cite{khatri:2015a}.} \label{fig:GPmap} \end{figure*} One consequence of this significantly larger drift load is the prediction that, in (allopatrically) diverging populations, reproductive isolation arises more quickly because common ancestors already have more maladapted transcription factor-binding site pairs on average \cite{tulchinsky:2014,khatri:2015,khatri:2015a} (Fig.~\ref{fig:GPmap}); if the common ancestor has a binding affinity closer to being deleterious (but kept in check by stabilising selection), then in hybrids Dobzhansky-Muller incompatibilities \cite{dobzhansky:1936,muller:1942,bateson:2009}, which are incompatible combinations of transcription factors and DNA binding sites, arise more quickly after divergence. In particular, this mechanism is broadly consistent with trends seen in field data \cite{fitzpatrick:2004,stelkens:2010,cooper:1997} and with diversification rates in phylogenetic trees \cite{coyne:1998,barraclough:2001,nee:2001}, and so gives a robust explanation of how stabilising selection can give rise to this population-size effect in speciation, without requiring passage through fitness valleys as in models based on the founder effect \cite{lande:1979,lande:1985,barton:1984,barton:1987}. This model also predicts that those transcription factor-DNA binding site pairs which are under weaker selection across a genome would, for the same reason, contribute more strongly to reproductive isolation, as the balance between selection and sequence entropy would be shifted to give common ancestors with weaker binding on average \cite{khatri:2015a}.
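Returning to Box~\ref{box:quadraticlandscape}, its prediction for the shifted maximum is easy to check numerically without the Gaussian approximation. The following sketch (Python; all parameter values are illustrative) computes the exact equilibrium distribution over Hamming classes, $p(h)\propto \binom{L}{h}(k-1)^h e^{2N_eF(h)}$, and shows the most likely Hamming distance moving from near the entropy maximum $\langle h\rangle$ at small $N_e$ towards the fitness maximum $h=0$ at large $N_e$:
\begin{verbatim}
import math

L, k = 10, 4       # illustrative binding-site length and alphabet size
kappa_F = 0.1      # curvature of the fitness landscape F(h) = -kappa_F*h^2/2

def equilibrium(Ne):
    # Exact p(h) ~ binom(L,h)*(k-1)^h * exp(2*Ne*F(h)), no approximation
    w = [math.comb(L, h) * (k - 1)**h * math.exp(-Ne * kappa_F * h * h)
         for h in range(L + 1)]
    Z = sum(w)
    return [wi / Z for wi in w]

for Ne in (1, 10, 100, 1000):
    p = equilibrium(Ne)
    h_peak = max(range(L + 1), key=lambda h: p[h])
    h_mean = sum(h * p[h] for h in range(L + 1))
    print(f"Ne={Ne:5d}  peak h*={h_peak}  mean h={h_mean:.2f}")
\end{verbatim}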
\subsubsection{Marginal stability of compact proteins} Equilibrium statistical mechanical ideas also have the potential to explain the observed marginal stability of compact proteins. Various databases show that proteins have stabilities (measured through free-energy differences) of order $\Delta G\sim-10$ kcal/mol, corresponding to only a few hydrogen bonds \cite{zeldovich:2007}, when their potential maximum stability could be orders of magnitude greater. Although adaptive explanations related to the necessity of protein flexibility have been suggested \cite{zavodszky:1998}, a more straightforward explanation is offered in light of the free fitness of phenotypes: for a given chain length there are many more sequences that give poor protein stability, and this sequence-entropy pressure balances the tendency of natural selection to choose proteins of higher stability. Simulations and theory of the evolution of protein folding \cite{taverna:2002,bloom:2007,zeldovich:2007,goldstein:2011,serohijos:2012} reproduce this marginal stability, with the particular property that the distribution of $\Delta\Delta G$, the change caused by mutations in the stability of a protein, is roughly independent of $\Delta G$, the stability of the protein. The marginal stability of proteins shows an interesting behaviour as a function of population size $N_e$; as we might expect, as the population size is increased, selection dominates genetic drift, and simulations \cite{goldstein:2011,wylie:2011} show that the average stability scales as \begin{equation} \langle\Delta G\rangle\sim -k_BT\ln(N_e) \label{eq:scaling} \end{equation} in the weak mutation limit ($\mu N_e\ll 1$). This result can be obtained from simple scaling arguments \cite{wylie:2011}, and it has been shown theoretically that, under global selection against misfolding, a broader scaling relationship between protein folding stability, protein cellular abundance, and effective population size holds \cite{serohijos:2013}. Eq.~(\ref{eq:scaling}) can also be shown to arise from reasoning similar to that in Box~\ref{box:quadraticlandscape}, as the result of a balance between selection and sequence-entropy-enhanced genetic drift at a given population size $N_e$ \cite{khatriUnpub:2019}. \subsubsection{Free fitness, universality and developmental system drift} The question naturally arises: how universal or general is this effect in the genetic divergence of populations under stabilising selection? In developmental biology it is commonly found that closely related species with similar organismal-level phenotypes, such as body plans, have nonetheless diverged in the regulatory networks that control this patterning \cite{matute:2010,verster:2014,wotton:2015}. This cryptic variation is known as ``developmental system drift'' \cite{true:2001,haag:2014} and is a potential source of hybrid incompatibilities and ultimately reproductive isolation, as previously explored using simple gene regulatory networks \cite{johnson:2000,johnson:2001}. However, in analogy to transcription factor DNA binding, if the GP map of developmental patterning has large degeneracy or phenotypic bias, we may also expect to find a rapid increase in the rate at which hybrid incompatibilities arise as the population size decreases.
To explore this, a previously studied multi-level GP map for developmental spatial patterning \cite{khatri:2009} was used, in which gene regulation is manifested by multiple transcription factor DNA binding interactions, each described by a Hamming distance model as described above. Importantly, the GP map has the property that stabilising selection acts to maintain a body patterning phenotype, while the underlying genotypic degrees of freedom and molecular (binding energy) phenotypes can drift. In addition, the GP map has the essential property, which permits an equilibrium analysis \cite{khatri:2009}, of being ergodic for small population sizes; this is in itself surprising, given that its state space is many orders of magnitude larger than can be explored on any realistic or relevant evolutionary timescale \cite{mcleish:2015}. This ergodic property is closely related to the idea of shape-space covering, whereby the high dimensionality of such maps means that many phenotypes are potentially accessible from each other by only a few mutations. The main result is that in this more complex GP map, reproductive isolation also arises more quickly for small populations \cite{khatri:2019}, which is related to the strong phenotypic bias. In addition, analogous to transcription factor DNA binding, it is also found that the molecular binding energy phenotypes---which underlie the organismal-level patterning phenotype---that are under weakest selection are most likely to give rise to the earliest hybrid incompatibilities. Altogether, these results point to a universal picture for understanding divergence between populations and the role of population size for strongly conserved traits: high-fitness phenotypes tend also to be highly specified, which conversely means that low-fitness phenotypes will have a large relative degeneracy or phenotypic bias. The balance between fitness and sequence entropy, embodied by the maximum of free fitness, will therefore be a strong feature of the equilibrium probability distribution of strongly conserved phenotypes under stabilising selection. For simple biophysical traits like transcription factor DNA binding or protein stability, this is clearly true, since there will always be only a few sequences that give maximum affinity or stability; however, it has been found to hold even in a more complex GP map for developmental system drift. It is likely that the sequence entropy constraints of transcription factor DNA binding somehow propagate up to determine the sequence entropy of the organismal-level patterning phenotype. The open questions are: How universal is this phenomenon? Will far more complex GP maps also show this behaviour? Will such maps maintain their property of ergodicity? And is there a broad theoretical framework that can address these questions without the complex and computationally intensive simulations needed so far? Beyond an equilibrium analysis, there is the open question of dynamics and adaptation in GP maps \cite{manrubia:2015,khatri:2015,schaper:2014,nourmohammad:2013}, as well as of extending this formalism to the strong mutation regime, yet still at finite population size (as compared to the infinite-population-size, quasispecies regime \cite{iwasa:1988,barton:2009,nourmohammad:2013,khatri:2018}).
\section{GP maps as evolving objects}\label{sec:evolutionOFgpmaps} Important new insights into quantitative features of adaptation have been obtained by studying evolutionary processes with realistic, highly nonlinear GP maps, as presented in previous sections. However, the concept of a predefined GP map on which evolutionary processes occur is not realistic. Not only should we expect the phenotype-to-fitness relation to vary due to environmental fluctuations---changing the fitness landscape into a seascape \cite{mustonen:2009}---but the GP map itself is subject to evolution. Indeed, one might argue that in the long term what occurs is the evolution \emph{of} the GP map, rather than a simple adaptation on a sort of preexisting genotype space. By evolution of the GP map we mean two things: first, that the assignment of phenotypes to genotypes is a dynamic process that depends on context. As a consequence, the same genotype can present very different phenotypes during the course of evolution. And, second, that the dimensionality of the map changes during the course of evolution \cite{zeldovich:2007_PLoSCB}. Indeed, duplications, deletions and large-scale chromosomal rearrangements, among others, are very frequent and often related to the acquisition of new or different phenotypic features \cite{kent:2003}. In the following we will explore two computational models where the GP map itself is allowed to evolve, each demonstrating one of the features of GP map evolution mentioned above. We focus our discussion on the evolution of the mutational neighbourhood, which determines the phenotypes that are accessible from an evolved (evolving) genotype. As we will see, evolved populations in these two models fine-tune their mutational neighbourhoods so that adaptive phenotypes arise more frequently as a result of mutation. It appears that the explicit consideration of the mutational neighbourhood determined by the evolution of the GP map is essential for understanding not only long- and short-term evolution, but also the functioning of present-day organisms. \subsection{Evolution of a multifunctional quasispecies in an RNA world model} \label{sec:Quasispecies} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figure5.png} \caption{The region of the genotype space selected by a population within an RNA world model is highly special as compared to controls. (a) Secondary structure of a replicase ($+$ strand) and its $-$ strand, which is optimised for maximum replication rate. (b) Functional classes in the 1-mutation-away mutational neighbourhood. Black: replicases; yellow: parasites; green: helpers; red: stallers; gray: junk; blue: unclassified. From left to right, pie charts correspond to a purely evolved replicase, a replicase optimised for maximum replication rate, and an average random replicase, respectively. Strong reduction of replicases and parasites and strong over-representation of helpers convey robustness to high mutation rates. (c) 1- to 10-mutations-away neighbourhoods ($x$-axis of each functional type) of the evolved GP map: at larger mutational (and therewith spatial) distances, frequencies of helper mutants decline drastically, whereas the frequency of stallers increases, thereby preferentially helping the ancestor and stalling others, including parasites.} \label{fig:evolvedRNAmap} \end{figure*} The RNA World model \cite{gilbert:1986} envisages a plausible scenario for the origin and early evolution of life.
Understanding how the RNA World could have arisen involves explaining how diverse molecular function might emerge in the absence of faithful replication. Interestingly, it has been suggested that phenotypic bias could have played a main role in solving this problem \cite{briones:2009}. Evolution and selection become possible only once the replication machinery is in place. Broadly, two alternative approaches have been used extensively for studying the evolution of the RNA world: those that study evolution on the RNA-sequence-to-secondary-structure GP map, and those that study the impact of spatial pattern formation on what is selected. While the former class of models studies the RNA world using the GP map with predefined fitness criteria, the latter explores the eco-evolutionary dynamics of replicator interactions without a predefined fitness. These two approaches have been combined \cite{takeuchi:2008,colizzi:2014} in a case study of the evolution of the qualitative, emergent functional properties of mutational neighbourhoods. In this model, RNA sequences are embedded in a 2D grid and interact with their closest neighbours by complementary base pairing, forming complexes. If one of the molecules $X$ folds into a structure with predefined motifs and binds to a molecule $Y$, replication can occur and the complementary strand of the molecule $Y$ is formed. No fitness is explicitly defined, and it therefore arises as an emergent property of the population. Because of the spatial embedding, the interactions that occur are shaped by emergent spatial structures. Such emergent spatial structures constitute a new level of selection and deeply affect the evolutionary outcome of replicators (as has been shown in previous examples related to the RNA world \cite{boerlijst:1991,takeuchi:2009}). At all mutation rates studied, replicases rapidly evolve symmetry breaking between the complementary RNA strands, with one strand having replicase functionality and the complementary strand evolving an optimal template function---i.e.\ optimally binding the replicase. This symmetry breaking is also seen in non-GP-map-based toy models \cite{takeuchi:2017,dunk:2017}. The stationary phenotypic composition of the population, however, strongly depends on the mutation rate. At high mutation rates, only one highly polymorphic quasispecies of replicases exists, whereas at lower mutation rates multiple quasispecies coexist. At intermediate mutation rates, there is coexistence between replicases and parasites, RNA molecules that act as templates for the replicases but have no catalytic function themselves. At lower mutation rates, two different replicase-parasite communities coexist. Finally, at the lowest mutation rates these communities compete with each other, sometimes going extinct. \begin{figure} \begin{infobox}[frametitle= Box~\ref{box:mutationalclasses}: Emergent functional classes in an RNA world model] {In the RNA world model described here \cite{takeuchi:2008,colizzi:2014}, molecular phenotypes were determined for pairs of complementary sequences ($+$ and $-$ strands), based on specific structure motifs. Under evolution, the following phenotypes emerge: \\ {\bf Self-replicases} can replicate other molecules as well as themselves. \\ {\bf Parasites} are RNA sequences that only work as templates and have no replicase ability. \\ {\bf Helpers} can replicate other molecules but cannot be replicated.
\\ {\bf Stallers} can engage molecules in complexes, but can neither replicate them nor be replicated. \\ {\bf Junk} molecules cannot form complexes and are therefore mostly inert. } \label{box:mutationalclasses} \end{infobox} \end{figure} Let us now focus on the mutational neighbourhood of the replicases that evolve at the highest sustainable mutation rates. The functional composition (see Box~\ref{box:mutationalclasses}) of the mutational neighbourhood of such evolved replicases is compared to two controls, i.e.\ a replicase that has been optimised for its replication rate, and randomly sampled replicases (Fig.~\ref{fig:evolvedRNAmap}b). In the mutational neighbourhood of the evolved replicases, replicases are scarce, parasites are missing, helpers are over-represented, and non-viable stallers occur at above-average frequency. In contrast, the controls have many replicases and parasites, and much fewer helpers (Fig.~\ref{fig:evolvedRNAmap}b). The advantages of the multifunctional organisation of the mutational neighbourhood can be understood as follows. Non-viable mutants tend to be spatially close to their ancestor. Thus the helpers, in the close mutational neighbourhood, tend to help their ancestor and siblings rather than others. The non-viable helpers are essential for survival: if they are eliminated, the whole system goes extinct. This advantage of helpers holds only because there are no parasites in the mutational---and therefore spatial---neighbourhood. In contrast, stallers are detrimental for the system, but less so for their ancestor, for whose survival they are essential. This is because there are fewer stallers in the close neighbourhood than farther away (Fig.~\ref{fig:evolvedRNAmap}b and c), and they therefore hinder others more than the ancestor. In particular, they stall parasites if these emerge farther away. Indeed, if stallers are killed, parasites invade the system, forming the two-species system characteristic of lower mutation rates. In this scenario, functions were not pre-conceived, but emerged. Because the implemented GP map is actually the classical RNA GP map, and only point mutations are considered, the evolutionary dynamics could be seen as evolution \emph{on} this fixed GP map. However, the GP map described in terms of these functions, and their structural implementation, fits better in the conceptualisation of evolution \emph{of} the GP map, where some phenotypes (and thus functions, like helpers or stallers) evolve not as separate lineages, but as mutants in the evolved mutational neighbourhood of a replicase. \subsection{Evolution of genome size and evolvability in virtual cells} \label{sec:VirtualCells} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figure6.png} \caption{Virtual cell and evolutionary dynamics. (a) Scheme of a virtual cell. (b) Common ancestor through time. The red line is the average fitness in three standard environments; the shaded grey area depicts genome size, showing initial genome inflation followed by streamlining (gene loss).
(c) Mutational neighbourhood of the ancestor in various time periods, colour coded according to the inset: during evolution the mutational neighbourhood changes from the initial fitness-effect distribution (shaded grey area) to a more pronounced ``U-shape'', with peaks at neutral mutations (right side) and strongly deleterious mutations (left side).} \label{fig:virtualcell} \end{figure*} Now we explore {\it virtual cells} (Fig.~\ref{fig:virtualcell}a), a second model where we allow the dimensionality of the GP map to change while fixing a fitness function \cite{cuypers:2012,cuypers:2014,cuypers:2017}. The system consists of a genome with genes coding for enzymes, pumps and transcription factors, as well as transcription factor binding sites. The transcribed genes form a simple metabolic network, which pumps in resources and transforms them into energy and building blocks. The external resource fluctuates, and fitness is homeostasis: the energy carrier and the internal resource have to be close to a preset value. Average homeostasis over a cell's lifetime determines its fitness at replication. Mutations include changes in parameter values as well as gene duplications, deletions and large chromosomal rearrangements. Thus, genome size is variable. Figure~\ref{fig:virtualcell}b summarises the dynamics of evolved virtual cells at different stages. Early in evolution, genome size expands dramatically and shortly afterwards declines sharply. Although this transient is not generic, it occurs in those evolutionary runs which later reach high fitness. Interestingly, genome expansion does not entail an immediate fitness benefit, since there is no difference, in this time frame, between runs in which the common ancestor does and does not expand its genome. Subsequently, gene loss dominates the evolutionary dynamics most of the time, and often conveys increases in fitness. The mutational neighbourhood, represented here as the distribution of mutant fitness effects, has a characteristic ``U-shape'' (Fig.~\ref{fig:virtualcell}c), which becomes more pronounced during evolution. The fraction of neutral mutants remains the same despite an overall fitness increase, whereas the fraction of lethal mutations increases and the fraction of slightly deleterious mutations decreases. Through this process, populations become highly evolvable. After a drastic environmental change (here implemented as a change in basic parameters) it sometimes takes only a few minor mutations to recover from nearly zero fitness to a value comparable to that prior to the environmental change; in other cases, a relatively fast recovery of fitness is mediated by genome expansion. After repeated environmental switches, evolvability through a few mutations becomes common. Such fast evolvability turns out to be easier to evolve than regulatory mechanisms to adapt to changing environments \cite{cuypers:2017}. These results are consistent with experimental reports. Phylogenetic reconstructions of long-term evolution show surprisingly large genome sizes of common ancestors (LUCA and LECA) and evolutionary dynamics dominated by gene loss \cite{koonin:2007}. The U-shaped mutational neighbourhood has been observed in yeast \cite{wloch:2001} and viruses \cite{sanjuan:2004}. Lastly, fast adaptation to environmental changes mediated by few mutations or by genome expansion is well documented in many evolutionary experiments, for instance in yeast \cite{yona:2012}.
Antibiotic production in {\it Streptomyces} is carried out by highly unfit mutants \cite{zhang:2019}, an evolutionary signature resembling the multifunctional quasispecies described in the RNA example. It is remarkable that all these surprising evolutionary signatures emerge in a minimal cell model, suggesting that they are generic features of Darwinian evolution, provided genome organisation and the GP map are allowed to evolve. \section{Empirical genotype-to-phenotype and genotype-to-function maps} \label{sec:empirical} Technological advances are facilitating the experimental characterisation of GP maps at ever-increasing resolution and scale \cite{devisser:2014,payne:2019}. The phenotypes of such maps include the activity or binding specificity of macromolecules such as RNA and proteins \cite{olson:2014,pitt:2010,sarkisyan:2016,diss:2018}, the exonic composition of transcripts \cite{julien:2016}, the spatiotemporal gene expression pattern of regulatory circuits \cite{schaerli:2014,schaerli:2018}, as well as the function and flux of metabolic pathways \cite{bassalo:2018}. In some cases, it is even possible to measure organismal fitness {\it en masse} \cite{li:2016,puchta:2016,venkataram:2016,rotem:2018}. When combined with a mapping from phenotype to fitness, biophysical GP maps provide a principled approach to constructing a fitness landscape over the space of genotypes. In situations where an empirical genotype-fitness map is available but a mechanistic understanding of its structure is lacking, one may instead try to infer the hidden phenotypic level from the genotype-fitness data. Ideally the inferred phenotypes can be interpreted biologically, but even when this is not the case, the introduction of an intermediate phenotypic layer helps to organise the high-dimensional genotypic data set and to reduce its complexity. In this section, we begin with GP maps that have been empirically characterised. Sometimes, the quantity that is experimentally accessible is fitness, and not phenotype. We discuss how empirical data of that kind can be used to infer the structure of fitness landscapes and some properties of the phenotypic level. Then, we delve into the characterisation of GP and genotype-to-function maps in virus populations, discussing as well the implications of those maps for evolutionary dynamics under constant and variable environments. Next we address the inference of intermediate phenotypes from genotype-fitness data and, finally, we discuss approaches to the experimental characterisation of GP maps that may be relevant to synthetic biologists. \subsection{Empirical GP maps} There are three main approaches to constructing empirical GP maps \cite{devisser:2014,payne:2019}: (i) a combinatorially complete map is constructed using all possible combinations of a small set of mutations, such as those that occurred along an adaptive trajectory in a laboratory evolution experiment or in natural history \cite{weinreich:2006}; (ii) a deep mutational scan assays the phenotypes of all single mutants, as well as many double and triple mutants, of a single wild-type genotype \cite{fowler:2014}; and (iii) an exhaustively enumerated map is constructed from all possible genotypes---something which is only possible for very small genotype spaces \cite{rowe:2009,jimenez:2013,payne:2014b}. In some cases, such as with antibody repertoires \cite{adams:2019,miho:2019} or viral populations \cite{hinkley:2011,acevedo:2014}, a fourth method of construction is possible.
Specifically, one can directly construct a small portion of an empirical GP map by collecting a large number of genotypes with a particular phenotype from nature (e.g., the ability of an antibody to bind an antigen). Below, we describe a recent example from each of the three main categories, highlighting the biological insights gained from the construction and analysis of such maps. \subsubsection{A combinatorially complete map} Alternative splicing is a key step of post-transcriptional gene regulation, and exonic mutations that affect splicing are commonly implicated in disease \cite{daguenet:2015}. All possible combinations of the mutations that occurred in the evolution of exon 6 of the human {\it FAS} gene since the last common ancestor of humans and lemurs have been analysed \cite{baeza:2019}. A total of 3,072 genotypes were assayed for the percentage of transcript isoforms that included the exon. This phenotype of ``percentage spliced-in'' varied from 0\% to 100\% among the 3,072 genotypes, indicating that, in combination, these mutations are capable of producing the full range of exon inclusion levels. Importantly, the phenotypic change induced by a mutation depended non-monotonically upon the phenotype of the genotype in which the mutation was introduced, such that mutations to genotypes near the full-exclusion or full-inclusion phenotypic bounds had the smallest effects, whereas mutations to genotypes with intermediate inclusion levels had the largest effects. The resulting biological insight is that the evolution of an alternative exon from a constitutive exon requires several mutations, because mutation effect sizes are smallest when the exon is near full inclusion. This observation led to the mathematical derivation of a scaling law that applies to this and possibly other GP maps, and that may aid in the development of drugs aimed at targeting splicing for therapeutic benefit, by helping to predict drug-sensitive splicing events. Another example of a combinatorially complete map will be explored more fully in Section~\ref{sec:viruslandscape}. \subsubsection{A deep mutational scan assay} Amino acid metabolism is fundamental to life, and is driven by complex metabolic and regulatory pathways. A deep mutational scan of nineteen genes involved in four pathways that affect lysine flux in {\it E. coli} was performed \cite{bassalo:2018}. The resulting GP map consisted of 16,300 genotypes, each of which was assayed for its resistance to a lysine analogue that induces protein misfolding and reduces cell growth. The phenotype was therefore growth in the presence of the analogue. Several resistance-conferring mutations were identified, including mutations in transporters, regulators, and biosynthetic genes. For example, such mutations were often observed in a lysine transporter called LysP. These were relatively evenly distributed across the gene, suggesting that loss-of-function mutations were a common evolutionary path toward abrogated transport of the lysine analogue. More generally, this study represents a proof of concept that deep mutational scanning experiments can be scaled up from individual macromolecules to regulatory and metabolic pathways. \subsubsection{An exhaustively enumerated map} \label{sec:EEM} Binding of regulatory proteins to DNA and RNA molecules is central to transcriptional and post-transcriptional gene regulation, respectively.
The robustness and evolvability of these two layers of gene regulation have been studied via a comparative analysis of two empirical GP maps \cite{payne:2018}. At the transcriptional level, interactions between DNA and transcription factors were considered, where a genotype was a short DNA sequence (a transcription factor binding site) whose phenotype was its molecular capacity to bind a transcription factor. At the post-transcriptional level, interactions between RNA and RNA binding proteins were analysed, where a genotype was a short RNA sequence (an RNA binding protein binding site) whose phenotype was the capacity to bind an RNA-binding protein. Though robustness at both layers of gene regulation was comparable, there were marked differences in evolvability, suggestive of qualitatively different architectural features in the two GP maps. Specifically, the genotype networks of binding sites for RNA binding proteins were separated by more mutations than the genotype networks of binding sites for transcription factors, rendering mutations to the binding sites of RNA binding proteins less likely to bring forth phenotypic variation than mutations to the binding sites of transcription factors. These observations are consistent with the rapid turnover of transcription factor binding sites among closely related species, as well as with the relatively high conservation of binding sites for RNA binding proteins. This comparative analysis may therefore help to explain why transcriptional regulation is more commonly implicated in evolutionary adaptations and innovations than post-transcriptional regulation mediated by RNA binding proteins. \subsection{Empirical fitness landscapes and adaptive dynamics of viral populations}\label{sec:viruslandscape} The topography of fitness landscapes is key to understanding evolutionary dynamics, and recent studies have focused on epistasis as a measure of landscape ruggedness (see Box~\ref{box:epistasis}). Two different experimental approaches have been taken to characterise the ruggedness of fitness landscapes through epistasis: a first, simpler approach is to analyse the epistasis among random pairs of mutations \cite{elena:1997,bonhoeffer:2004,sanjuan:2004,lalic:2013}, while a more exhaustive approach relies on reconstructing a combinatorial fitness landscape that includes all possible combinations among a set of $m$ mutations \cite{devisser:2014}. Usually, these $m$ mutations have been observed during experimental evolution and adaptation of populations to novel environments. Such empirical landscapes have been characterised for bacteria \cite{lunzer:2005,weinreich:2006,poelwijk:2007,dawid:2010,chou:2011,khan:2011}, protozoa \cite{lozovsky:2009}, fungi \cite{devisser:2009,hall:2010}, and human immunodeficiency virus type-1 (HIV-1) \cite{dasilva:2010,hinkley:2011,kouyos:2012,dasilva:2014}. \begin{figure} \begin{infobox}[frametitle= Box~\ref{box:epistasis}: Epistasis and fitness landscapes] { \begin{center} \includegraphics[width=0.5\textwidth]{figure7_Box.png} \end{center} \label{fig:epistasis} Epistasis means that the phenotypic effects of a mutation depend on the genetic background (genetic sequence) in which it occurs \cite{poelwijk:2007}. While the concept applies to any phenotypic trait, in the evolutionary context epistasis for fitness is of primary importance, and we will focus on this case in what follows.
The degree of epistasis, $\epsilon_{xy}$, between a pair of mutations $x$ and $y$ can be estimated as $\epsilon_{xy} = W_{00}W_{xy} - W_{x0}W_{0y}$, where $W_{00}$ is the fitness of the non-mutated genotype, $W_{xy}$ the experimentally determined fitness of the double mutant, and $W_{x0}$ and $W_{0y}$ the measured fitnesses of each single mutant. Under a multiplicative fitness effect model, $W_{x0}W_{0y}/W_{00}$ represents the expected fitness value of the double mutant and, therefore, $\epsilon_{xy}$ quantifies the deviation from this null hypothesis. The sign of $\epsilon_{xy}$ corresponds to the sign of epistasis. \\[2mm] Magnitude epistasis causes deviations from the multiplicative model, but the landscape remains monotonic; sign epistasis means that the sign of the fitness effect of at least one of the mutations in the pair changes in the presence of the other mutation; reciprocal sign epistasis occurs when both mutations change the sign of their fitness effect when combined, so that both potential adaptive pathways connecting the non-mutated ancestor with the double mutant must necessarily cross a fitness valley. Epistasis thus determines the ruggedness of a fitness landscape \cite{wright:1932,weinreich:2005,poelwijk:2011} and therefore the accessibility of adaptive pathways \cite{schaper:2011}. If there is either magnitude epistasis or no epistasis at all, fitness landscapes are smooth and single-peaked, and evolving populations can reach the global maximum. In the case of sign epistasis, only a fraction of the total paths to the optimum are accessible. Reciprocal sign epistasis is a necessary but not sufficient condition for rugged landscapes with multiple local optima \cite{poelwijk:2011}, a situation in which an evolving population might get stuck on suboptimal peaks. Most studies on epistasis have focused on pairwise epistasis, ignoring interactions among more than two mutations. However, higher-order epistasis appears in almost every published combinatorial fitness landscape \cite{weinreich:2013:COGD}, so the topographical features of fitness landscapes seem to depend on all orders of epistasis. } \label{box:epistasis} \end{infobox} \end{figure} In what follows, we review work focusing on the topography of an RNA virus fitness landscape. We begin with an investigation of how prevalent different epistasis types are (see Box~\ref{box:epistasis}), and then continue with the influence of landscape topography on the evolutionary potential of a virus population. Finally, we discuss the relevance of the environment for viral evolution, through analyses of landscapes on different host species. \subsubsection{Description of epistasis among random pairs of mutations} The analysis of the effects of mutations on fitness provides information about the degree of ruggedness of the landscape at a coarse-grained level. In a study with {\it Tobacco etch potyvirus} (TEV) \cite{lalic:2012}, 20 single-nucleotide substitution mutations randomly scattered along the RNA genome of the virus were analysed. These mutations were deleterious when evaluated in {\it Nicotiana tabacum}, its natural host, through competition experiments against a reference TEV strain \cite{carrasco:2007}. Those single mutations were randomly combined to yield 53 double mutants, whose fitness was also measured in {\it N.~tabacum}. Twenty combinations rendered $\epsilon_{xy}$ values significantly deviating from the null expectation, 11 of which were positive and 9 negative (see Box~\ref{box:epistasis}).
Interestingly, these nine cases were all examples of synthetic lethality, that is, each single mutation was deleterious but viable, while their combination was lethal. This represents an extreme case of negative epistasis. Previous studies with other RNA viruses found epistatic interactions comparable in type and sign \cite{bonhoeffer:2004,sanjuan:2004,sanjuan:2006:JGV}. How can we explain positive epistasis in the small and compact genomes of RNA viruses? Given the lack of genetic and functional redundancy and, in many cases, overlapping genes and multifunctional proteins, a small number of mutations can produce a strong deleterious effect. But, as mutations accumulate, they affect the same function with increasing probability and thus their marginal contribution to fitness diminishes. Hence, the observed fitness is above the expected multiplicative value. In other words, epistasis is positive. \subsubsection{Description of a combinatorial landscape and higher-order epistasis} \begin{figure*}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{figure7.png} \end{center} \caption{Snapshot of an empirical fitness landscape constructed with combinations of mutations observed during experimental adaptation of tobacco etch potyvirus (TEV) to its new experimental host {\it Arabidopsis thaliana}. Each string of dots represents a genotype. Black dots represent a mutation in the corresponding locus, while grey dots correspond to the wild-type allele at that locus. Green lines stand for mutations with beneficial effect, red lines for deleterious mutations and orange lines for neutral mutations in the corresponding genetic background. Lines link genotypes which are one mutation away. The global optimum for this landscape corresponds to the 01001000 genotype. Data have been processed with MAGELLAN \cite{brouillet:2015}, which qualitatively orders genotypes along the $x$-axis according to the number of mutations.} \label{fig:LandscapeTEV} \end{figure*} TEV was evolved in a novel host, {\it Arabidopsis thaliana}, until it achieved high fitness \cite{agudelo-romero:2008}. The consensus genome of this adapted strain had only six mutations, three of which were nonsynonymous. The fitness effect of five of these mutations (the sixth one had to be discarded) was individually evaluated: two were significantly beneficial (one synonymous and one nonsynonymous), one was neutral (nonsynonymous), one deleterious (synonymous), and one lethal (nonsynonymous) \cite{agudelo-romero:2008}. All $2^5 = 32$ possible genotypes that result from combining the five observed mutations were created in order to generate a complete five-site landscape (Fig.~\ref{fig:LandscapeTEV}), which showed abundant epistasis and was therefore rugged and devoid of neutrality. The pervasiveness of higher-order epistatic interactions in all empirically characterised combinatorial landscapes \cite{weinreich:2013:COGD} prompted their study in the small TEV combinatorial landscape \cite{lalic:2015}. Using the Walsh-transform method \cite{weinreich:2013:COGD,poelwijk:2016}, higher-order epistatic interactions were found to be as important as pairwise interactions for fully understanding the topographical properties of adaptive landscapes.
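To illustrate how the Walsh transform separates interaction orders, the following sketch (Python; the toy log-fitness values are invented for illustration) decomposes a complete three-locus binary landscape into Walsh coefficients. Nonzero coefficients on subsets of three loci diagnose higher-order epistasis, while each two-locus coefficient is proportional, on the log-fitness scale, to the pairwise epistasis $\epsilon_{xy}$ of Box~\ref{box:epistasis}:
\begin{verbatim}
def walsh_coefficients(W, L):
    """Walsh decomposition of a complete binary landscape.
    W[g] is the log-fitness of genotype g encoded as a bitmask."""
    N = 1 << L
    return {S: sum(W[g] * (-1)**bin(S & g).count("1")
                   for g in range(N)) / N
            for S in range(N)}

L = 3
# Toy log-fitness: +1 per mutation plus a +2 three-way interaction at 111.
W = [0, 1, 1, 2, 1, 2, 2, 5]
for S, c in sorted(walsh_coefficients(W, L).items()):
    if abs(c) > 1e-12:
        print(f"loci subset {S:03b} (order {bin(S).count('1')}): {c:+.3f}")
\end{verbatim}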
Interestingly, and despite previous reports claiming that pervasive epistasis results in predictable evolutionary dynamics \cite{devisser:2014}, repeated evolutionary experiments starting from different genotypes of the TEV virus resulted in different evolutionary endpoints, and populations were able to escape local optima, moving efficiently in this highly rugged landscape, with new mutations appearing in the course of evolution \cite{cervera:2016:PRSB}. This result suggests that evolutionary predictions based on extrapolations from non-exhaustive fitness landscapes have to be taken with care, as evolving populations are often able to find new, previously undescribed mutations that introduce new evolutionary dimensions. \subsubsection{Effect of host species on the topography} Another quite common observation in evolutionary experiments with RNA viruses, as well as in natural populations, is the existence of pleiotropic fitness costs across different hosts \cite{bedhomme:2015}---beneficial mutational effects in one host may become deleterious in an alternative host. These negative fitness effects limit the host range of viruses to closely related species that share most of the molecular targets needed for the virus to complete its infectious cycle. The concept of pleiotropy can be explored in terms of changes in the topography of fitness landscapes across hosts. The fitness of TEV single and double mutants was measured in four different susceptible hosts that differed in their degree of genetic relatedness \cite{lalic:2013}: the natural host {\it N.~tabacum}, {\it Datura stramonium} (in the same botanical family, Solanaceae), {\it Helianthus annuus} (an Asteraceae phylogenetically related to the Solanaceae---both are Asterids), and {\it Spinacia oleracea} (an Amaranthaceae). Both the sign and the magnitude of epistasis changed across hosts: epistasis was positive ($\epsilon_{xy} > 0$) only in the natural host, and it diminished as the host species' relatedness to {\it N.~tabacum} decreased. The topography of the combinatorial fitness landscape was more rugged in {\it N.~tabacum} when evaluated with the TEV strain adapted to {\it A.~thaliana} \cite{cervera:2016:JV}. Though the global optimum was the same in both landscapes, it was less accessible in {\it N.~tabacum} given the greater magnitude of reciprocal sign epistasis in its vicinity. Altogether, these results suggest that the topography of the adaptive fitness landscape for an RNA virus is strongly dependent on the environment (host species), though some general properties, such as the existence of lethal genotypes, minimal or null neutrality and high ruggedness, remain. In this light, novel frameworks that explicitly account for the effects of environmental change on the properties of landscapes, such as seascapes or adaptive multiscapes \cite{catalan:2017}, should provide a better picture for interpreting these experiments and may even offer some predictive power. \subsection{Inferring phenotypes from genotype-fitness maps} \label{inferring-phenots} Massively parallel empirical studies that examine a large ensemble of genotypes often yield information on their biological activity, or their overall fitness, while the identification of the phenotypes involved remains difficult. In such cases it is possible to infer a phenotypic level, ideally endowed with a biological meaning, from data analogous to that of the previous sections.
The key assumption underlying these formal approaches is that the mutational effects on the unobserved phenotypic traits are additive, such that any epistatic interaction for fitness arises from the nonlinearity of the phenotype-fitness map \cite{domingo:2019b}. In the simplest case of a one-dimensional trait that maps monotonically to fitness, the trait variable has been referred to as the \textit{fitness potential} \cite{kondrashov:2001,milkman:1978} and the nonlinearity of the phenotype-fitness map as \textit{global epistasis} \cite{otwinowski:2018}. Because a monotonic phenotype-fitness map preserves the rank ordering of genotypes with respect to fitness, it can account for magnitude epistasis, but not for sign epistasis \cite{weinreich:2005}. Sign epistasis can, however, arise from non-monotonic one-dimensional phenotype-fitness maps. Consider a fitness function $f(x)$ with a single phenotypic optimum at $x_\mathrm{opt} > 0$, with the wild-type trait value located at $x=0$. A mutation that increases the trait value by an amount $\Delta x < x_\mathrm{opt}$ is then beneficial on the wild-type background but deleterious on a background with trait value $x \geq x_\mathrm{opt}$ that overshoots the optimum. In an experimental study of the ssDNA bacteriophage ID11, it was found that this scenario explains the pairwise epistatic interactions between 9 individually beneficial mutations rather well \cite{rokyta:2011}. In this case, the phenotype-fitness map was taken to be a gamma function with 4 parameters, and the unknown phenotype was parametrised by the 9 single mutational effects. The joint inference of the phenotypic effects and the phenotype-fitness map thus required 13 parameters to be estimated from the fitness values of 9 single and 18 double mutants. The range of epistatic interaction patterns that can be generated from a one-dimensional phenotypic trait subject to a single-peaked phenotype-fitness map is obviously limited. In particular, any evolutionary trajectory composed of mutations that are individually beneficial on the wild-type background can display at most one fitness maximum. This criterion was used in a recent study of the combined resistance effects of synonymous mutations in the antibiotic resistance enzyme TEM-1 $\beta$-lactamase challenged by cefotaxime to conclude that the phenotype underlying these effects is most likely multidimensional \cite{zwart:2018}. Multidimensional phenotypes allow for more versatile interaction structures but also require more parameters to be inferred from data. In a study of non-synonymous resistance mutations in TEM-1, a two-dimensional phenotype combined with a sigmoidal phenotype-fitness map was found to provide a good description of the measured resistance values \cite{schenk:2013}. In this case one of the phenotypes was taken to be protein stability, which was determined computationally, whereas the second phenotype was inferred along the lines of the ID11 bacteriophage experiment described above \cite{rokyta:2011}. A similar approach has been applied to the fitness landscape of a norovirus escaping a neutralising antibody, where the folding stability and binding affinity of the capsid protein were mapped to the probability of infection \cite{rotem:2018}. Importantly, in two-dimensional phenotype-fitness maps sign epistasis can emerge even in the absence of a phenotypic optimum \cite{manhart:2015,schenk:2013}.
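The overshoot mechanism described above is simple to demonstrate numerically. In the following sketch (Python; the Gaussian map and effect sizes are illustrative choices, not those inferred for ID11), two mutations with additive phenotypic effects are each beneficial on the wild type, yet the first becomes deleterious on the background carrying the second, because the combined trait value overshoots the optimum:
\begin{verbatim}
import math

def fitness(x, x_opt=1.0):
    # Illustrative single-peaked phenotype-fitness map
    return math.exp(-(x - x_opt) ** 2)

wt = 0.0
dx_a, dx_b = 0.6, 0.9   # additive phenotypic effects of mutations a and b
for background, label in [(wt, "wild type"), (wt + dx_b, "b background")]:
    effect = fitness(background + dx_a) - fitness(background)
    print(f"effect of mutation a on {label}: {effect:+.3f}")
\end{verbatim}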
\begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{figure8.jpeg} \end{center} \caption{\label{JK-Fig1} Illustration of Fisher's geometric model for a two-dimensional phenotype and a single-peaked phenotype-fitness map. Three phenotypic mutations originating from the wild type (marked in red) combine additively, giving rise to a distorted three-dimensional cube in the phenotype plane. As a consequence of the nonlinear mapping to fitness, two of the double mutants (marked in green) become local fitness maxima in the induced genotypic landscape. Courtesy of Sungmin Hwang.} \end{figure} The models described so far can be viewed as variants of the \textit{geometric model} devised by Ronald Fisher to argue that the adaptation of complex phenotypes must proceed in small steps \cite{blanquart:2014,fisher:1930,tenaillon:2014}. Originally, Fisher's geometric model (FGM) did not include the assumption of additive phenotypes, which was introduced later in a study of pairwise epistasis between mutations in \textit{Escherichia coli} and vesicular stomatitis virus \cite{martin:2007}. In its modern formulation, the model is based on a set of $d$ real-valued traits forming a vector $\vec{x} = (x_1,x_2,\cdots,x_d)$ in the $d$-dimensional Euclidean space $\mathbb{R}^d$ and a nonlinear phenotype-fitness function $f(\vec{x})$ with a single optimum that is conventionally located at the origin $\vec{x}=0$ (Fig.~\ref{JK-Fig1}). The genotype is described by a sequence $\tau = (\tau_1, \tau_2,\cdots,\tau_L)$ of $L$ symbols $\tau_i$ drawn from the allele set $\{0,1,\cdots,k-1\}$, where $\tau_i = 0$ denotes the wild type allele and in most cases a binary alphabet with $k=2$ has been considered. The additive GP map takes the form \cite{hwang:2017} \begin{equation} \label{JK:FGM} \vec{x}(\tau) = \vec{x}_0 + \sum_{a=1}^{k-1} \sum_{i=1}^L \delta_{\tau_i,a} \vec{v}_{i,a}, \end{equation} where $\vec{x}_0$ is the wild type phenotype and the vector $\vec{v}_{i,a} \in \mathbb{R}^d$ describes the phenotypic effect of the mutation $0 \to a$ at the $i$'th genetic locus. The genotype-fitness map is then obtained as $F(\tau) = f[\vec{x}(\tau)]$. In applications of FGM to experimental data, the mutational effects $\vec{v}_{i,a}$ are usually treated as random vectors drawn from a multivariate Gaussian distribution. Rather than inferring specific phenotypes, such analyses yield gross statistical features of the phenotypic landscape, such as the number of phenotypic traits $d$, the distance of the wild type phenotype from the optimum $\vert \vec{x_0} \vert$, and the variance of phenotypic mutational effects \cite{blanquart:2016,martin:2007,schoustra:2016,weinreich:2013:E}. Current high-throughput sequencing methods are capable of measuring fitness and other functional phenotypes for hundreds of thousands of genotype sequences in a single experiment, and methods based on the inference of unobserved additive traits provide an important tool for organising and interpreting the resulting data sets. A recent large-scale analysis of the fitness landscape of a segment of the \emph{His3} gene in yeast built out of amino acid substitutions from extant species made use of a deep learning approach to infer the additive phenotype and its nonlinear mapping to fitness \cite{pokusaeva:2019}. 
Remarkably, large parts of the data were well described by a one-dimensional fitness potential combined with a sigmoidal phenotype-fitness map, suggesting that much of the observed complexity of epistatic interactions could potentially be explained in terms of thermodynamic considerations \cite{otwinowski:2018}. Combining such data-driven inference methods with biophysical modeling and functional information appears to be a promising route towards a deeper understanding of the relation between genotype, phenotype and fitness on the molecular level \cite{bershtein:2017}. \subsection{Synthetic biology approaches to characterising GP maps} A major goal in the field of synthetic biology is the re-purposing of biological components and systems to create living cells with new, designed functionalities. So what is the link between synthetic biology and GP maps? Faced with the challenge of understanding the function of biological parts and using this insight to rationally engineer cells, synthetic biologists frequently assemble large numbers of genetic designs (genotypes) and measure key aspects of the resultant cellular phenotypes. In doing so, novel methods for characterising GP maps have been developed (Fig.~\ref{fig:paps}). Key to many of these are two capabilities. First, it is necessary to be able to construct large numbers of diverse genotypes (referred to as {\it libraries}) in a structured way. For example, assembling many genetic circuits simultaneously, each one containing a different combination of functional DNA parts (e.g. protein coding genes or regulatory elements like promoters, ribosome binding sites and terminators) \cite{brophy:2014,gasperini:2016,appleton:2017}. Second, it should be possible to test these designs \textit{en masse}. To support both requirements, high-throughput, pooled DNA assembly and sequencing methods have been developed to measure the phenotype of every genetic circuit design (genotype) across huge libraries, effectively creating a detailed GP map. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figure9.pdf} \caption{ Synthetic biology methodologies can support the construction of detailed GP maps. Diverse sets of genotypes in cells can be generated using combinatorial DNA assembly \cite{woodruff:2017,gorochowski:2014,plesa:2018}, chip-based DNA synthesis \cite{kosuri:2014}, or systems to induce structural DNA rearrangements, e.g. Synthetic Chromosome Recombination and Modification by LoxP-mediated Evolution (SCRaMbLE) \cite{blount:2018}. Pooled libraries of cells can then be sorted into physically separated groups based on a parameter of interest, e.g. fluorescence of cells using fluorescence-activated cell sorting (FACS). Barcoded sequencing libraries can be generated from cells in each group and deep-sequencing of DNA/RNA performed to measure a wide range of phenotypic properties \cite{cambray:2018,gorochowski:2019,gorochowski:2017,kosuri:2013,johns:2018,gasperini:2016,liszczak:2018}. The inclusion of external standards during the sequencing allows for the conversion of relative phenotypic measures into absolutes units that are comparable across contexts \cite{gorochowski:2018}. Genetic diagrams drawn using Synthetic Biology Open Language Visual notation \cite{der:2017}. Image courtesy of Thomas E. Gorochowski. } \label{fig:paps} \end{figure*} Genotype libraries can be constructed in many ways, each with their own advantages and pitfalls. 
Perhaps the simplest and most transferable protocol involves pooled synthesis of a large library of pre-defined DNA parts, insertion of each part into a circular plasmid backbone that enables self-replication in cells, and transformation of the resulting plasmid library into the host cell of interest. This method can be used in combination with oligo(nucleotide) library synthesis (OLS) \cite{kosuri:2014} for generation of the DNA part library. Whilst limitations include genotype length (up to 200 nucleotides) and accuracy (an error rate of 1 in 200 nucleotides), the resulting genotype libraries give access to regions of genotype space distant from one another, with the latest OLS-derived study characterising 244,000 sequences simultaneously \cite{cambray:2018}. Other approaches for constructing libraries of genotypes include multiplexed DNA assembly \cite{plesa:2018,woodruff:2017,hughes:2017} and site-specific incorporation of random genetic diversity \cite{komura:2018,patwardhan:2009,cozens:2018,holmqvist:2013}. The latter approach was recently used to characterise millions of promoter variants \cite{deboer:2020}. Multiplexed measurement of many different phenotypes of the constructed genotype library is possible \cite{cambray:2018,gorochowski:2018}, though it must be ensured that each genotype contains a unique nucleotide-encoded barcode, to enable sequencing reads to be matched to the correct genotype \cite{church:1988}. Sequenceable phenotypes such as DNA or RNA abundance \cite{cambray:2018,johns:2018,kosuri:2013,patwardhan:2009} can be studied directly, in absolute units \cite{gorochowski:2019,gorochowski:2017}. Non-sequenceable phenotypes can be measured too, by sorting phenotypes into groups and then appending a unique barcode sequence to genotypes in each group (Fig.~\ref{fig:paps}). In this way, genotypes are mapped to phenotype categories. A detailed framework for the design of pooled sequencing experiments (Multiplexed Assays for Variant Effects, MAVEs, or Massively Parallel Reporter Assays, MPRAs) is available \cite{gasperini:2016}. Much of the focus to date has been on using these methods to characterise genetic part function, measuring the behaviour of parts taken from distantly related species or designed \textit{in silico}. However, characterisation of mutationally-connected genotype networks elucidates structural properties of GP maps for phenotypes which have not previously been characterised empirically at such a scale. The resulting data can enable the construction of new \textit{in silico} models for predicting phenotypes from genotypes \cite{payne:2018,cuperus:2017} (section~\ref{sec:EEM}), a goal common to both synthetic biology and GP map researchers. Indeed, the synergy goes both ways: GP map studies highlight important principles which are only beginning to be considered by synthetic biologists during genetic circuit design. A clear example is genotypic robustness to mutations \cite{payne:2018}, which may prove important for genetic circuit longevity \cite{bull:2017,sleight:2013}. New science and technology bring new questions: the ecological implications of microorganisms containing mutationally robust synthetic sequences have yet to be considered. Expansion of this approach to study GP maps for different genotypes and phenotypes lies ahead.
Innovations in nucleic acid sequencing are beginning to open up high-throughput characterisation of new types of phenotype in detail without sorting, such as epigenetic signatures or protein concentrations \cite{yus:2017,sze:2017,liszczak:2018}. The advent of long-read sequencing \cite{dijk:2018} heralds high-throughput characterisation of GP maps for whole-cell genotypes. This is becoming possible with methods for high-throughput genome modification \cite{chari:2017}, such as SCRaMbLE, which uses recombination for \textit{in vivo} combinatorial genomic rearrangement \cite{blount:2018}. Pooled DNA assembly and sequencing is by no means the final solution for synthetic biologists or GP map researchers: crucially, it is limited in the number and length of genotypes that can be assembled, and to phenotypes that can be inferred from sequencing data or for which high-throughput sorting methods (e.g.~FACS) exist. Nonetheless, this approach offers a significant increase in the size of GP maps that can be studied empirically and highlights the potential for mutually beneficial collaboration across these two emerging areas of biological research. \begin{figure}[ht] \begin{center} \includegraphics[width=8.5cm]{figure10.pdf} \end{center} \caption{Cancer progression models. (a) Main steps in the analysis of patient data. On the right, the DAG of restrictions shows genes in the nodes; an arrow from gene $i$ to gene $j$ indicates that a mutation in gene $i$ must occur before a mutation in gene $j$ can occur and, thus, indicates a direct dependency of a mutation in gene $j$ on a mutation in gene $i$. The absence of an arrow between two genes means that there are no direct dependencies between the two genes. According to this DAG, a mutation in the fourth gene can only be observed if both the second and third genes are mutated, but mutations in the first, second, and third gene do not have any dependencies among themselves. (b) Genotypes that fulfil the restrictions encoded in the DAG of restrictions: these are the accessible genotypes under the DAG. Genotypes are shown as sequences of 0s and 1s, where ``1100'' means a genotype with the first and second genes mutated. (c) Fitness graph or graph of mutational paths between accessible genotypes; nodes are genotypes (not genes) and arrows point toward mutational neighbours of higher fitness (thus, two genotypes connected by an arrow differ in one mutation that increases fitness \cite{crona:2013,devisser:2014}). Under CPMs, each new driver mutation with its dependencies satisfied increases fitness; therefore, all accessible genotypes that differ by exactly one mutation are connected in the fitness graph and the genotype with all driver genes mutated is the single fitness maximum. The fitness graph shows all the paths of tumor progression that start from the ``0000'' genotype and end in the fitness maximum. Figures modified from \cite{diaz-uriarte:2019,diaz-uriarte:2015}.} \label{fig:cpm} \end{figure} \section{Consequences of GP maps for models of tumour evolution and cancer progression} \label{sec:cancer} Epistatic interactions between genetic alterations can constrain the order of accumulation of mutations during cancer progression (e.g. in colorectal cancer, mutations in the \emph{APC} gene are an early event that generally precedes mutations in the \emph{KRAS} gene \cite{gerstung:2011}).
Cancer progression models (CPMs) have been developed to try to identify these restrictions during tumour progression using cross-sectional mutation data \cite{beerenwinkel:2015,beerenwinkel:2016}. CPMs take as input a cross-sectional sample from a population of cancer patients: each individual or patient provides a single observation, the cancer genotype in that patient. Thus, the input for CPMs is a matrix of individuals or patients by alteration events, where each entry in the matrix is binary coded as mutated/not-mutated or altered/not-altered (Fig.~\ref{fig:cpm}). The output of a CPM is a directed acyclic graph (DAG) that encodes the inferred restrictions (which are in fact sign epistasis relationships \cite{crona:2013,diaz-uriarte:2018}). In these DAGs, an edge between nodes $i$ and $j$ is to be interpreted as a direct dependence of an alteration of event $j$ on an alteration of event $i$; $j$ should never be observed altered unless $i$ is also altered. CPMs regard different patients as replicate evolutionary experiments, assume that the cancer cells in all patients are under the same genetic constraints \cite{beerenwinkel:2016,beerenwinkel:2015,gerstung:2011}, and ignore back mutations in the alteration of driver events. Thus, CPMs implicitly encode all the possible mutational paths or trajectories of tumour progression (Fig.~\ref{fig:cpm}) \cite{diaz-uriarte:2019}, and some methods (e.g., CBN) provide estimates of the probabilities of the different paths of tumour progression \cite{diaz-uriarte:2019,hosseini:2019}. As in other domains, such as predicting antibiotic resistance, even small increases in our capacity to predict disease progression would be valuable for diagnostic, prognostic, and treatment purposes \cite{toprak:2012}; this renders CPMs a potentially useful tool in precision medicine. (Note that the focus here is on predicting mutational paths, but see also Section \ref{sec:empiricaltest} for transition forecasts, a different objective and approach when predicting evolution using GP maps.) Several CPM methods have been developed, including oncogenetic trees (OT) \cite{szabo:2008,desper:1999}, conjunctive Bayesian networks (CBN) \cite{gerstung:2009,gerstung:2011,montazeri:2016}, and CAncer PRogression Inference (CAPRI) \cite{ramazzotti:2015,caravagna:2016}. The methods differ in their model fitting procedures and in the types of restrictions they can represent. For example, OT can only return trees, where a mutation in a given gene has a direct dependence on only one other gene mutation; this is in contrast to CAPRI and CBN, where a mutation in a gene can depend on mutations in two or more different genes, and thus CAPRI and CBN return as output DAGs where some nodes can have multiple parents, as shown in Fig.~\ref{fig:cpm}. All CPMs focus on ``driver alterations'', i.e. those believed to actually drive, through selection, cancer progression (in contrast to so-called passenger mutations or hitchhikers). The types of alterations studied in CPMs range from changes in genes and pathways to gains and losses of chromosomal regions \cite{gerstung:2011,caravagna:2016,desper:1999}. CPMs model sign epistasis, but they cannot model reciprocal sign epistasis (see section~\ref{box:epistasis}) \cite{diaz-uriarte:2018}, and thus CPMs effectively consider fitness landscapes with a single global peak.
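To make the encoding of restrictions concrete, the following Python sketch enumerates the genotypes that are accessible under the DAG of Fig.~\ref{fig:cpm} (genes 1--3 with no dependencies, gene 4 requiring both gene 2 AND gene 3, as in a conjunctive model). The representation of the DAG as a dictionary of parent sets is an illustrative choice, not the data structure used by any particular CPM implementation.

\begin{verbatim}
from itertools import product

# DAG of restrictions: genes 1-3 have no dependencies; a mutation
# in gene 4 requires genes 2 AND 3 to be mutated (AND semantics).
parents = {1: set(), 2: set(), 3: set(), 4: {2, 3}}

def accessible(genotype):
    """A genotype fulfils the restrictions if every mutated gene
    has all of its parent genes mutated as well."""
    mutated = {g for g, bit in zip(sorted(parents), genotype) if bit}
    return all(parents[g] <= mutated for g in mutated)

for genotype in product([0, 1], repeat=4):
    if accessible(genotype):
        print("".join(map(str, genotype)))
\end{verbatim}

Running this reproduces the accessible genotypes of panel (b): all eight combinations of genes 1--3 with gene 4 unaltered, plus ``0111'' and ``1111''.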
As a consequence of this single-peak assumption, CPM predictions of tumour progression are very poor under multi-peaked fitness landscapes when compared to the true paths of tumour progression \cite{diaz-uriarte:2019}. Remarkably, even in the latter scenario, CPMs could be used to estimate an upper bound to the true evolutionary unpredictability \cite{diaz-uriarte:2019}; the analysis of twenty-two cancer data sets shows many of them to have low unpredictability. CPMs do not force us to try to infer restrictions in the order of accumulation of alterations at any particular level or layer, in so far as the alterations examined can be regarded as heritable alterations with no back mutations. Thus, we could use the layers or levels of analysis (see also section \ref{sec:multilevel}) that are more relevant (e.g., metabolic pathways) or more likely to satisfy assumptions. Germane to this task are phenotypic bias (see section \ref{sec:RNABias}) and the effect on evolvability of many-to-many GP maps, relevant in the context of cancer \cite{nichol:2019,frank:2012}. These results suggest examining which layer of analysis is the most appropriate when using CPMs; it might not be gene alterations, but a layer closer to a ``heritable phenotype''. On the one hand, layers other than genes could allow us to maximise predictive ability (related to ideas on how to choose the relevant phenotypic dimensions \cite{altenberg:2005}). On the other hand, at other layers of analysis CPMs' assumptions might be more likely to be satisfied---in particular, the lack of reciprocal sign epistasis and local fitness maxima, as well as the absence of disjunctive (OR) relationships in dependencies between alterations (when a mutation in a gene can happen if a mutation in at least one of its parents has occurred; in Fig.~\ref{fig:cpm}, under an OR model, a mutation in gene 4 would need at least one of genes 2 or 3 to be mutated, but not necessarily both) \cite{diaz-uriarte:2018,diaz-uriarte:2019}. CPMs assume Markovian evolution. However, non-Markovian dynamics on neutral networks \cite{manrubia:2015} (see also section \ref{sec:dynamics}) raise issues about choosing the layer of analysis for CPMs. For example, it seems unlikely that we could detect the existence of non-Markovian evolution reliably from the cross-sectional data used by CPMs. Additionally, non-Markovian evolution might be strong enough to cancel out possible benefits of working at other layers, and it could even be having an effect at the usual gene level of analysis where we label genes as altered/not-altered (mutated/not-mutated), because there is a many-to-one mapping between mutations in individual DNA bases and ``altered'' gene status. The effects that environmental changes might have on evolutionary dynamics, given the dependence of epistatic relationships and fitness landscapes on the environment \cite{nichol:2019,lalic:2013,cervera:2016:JV,yubero:2017,payne:2019} (see also sections~\ref{sec:evolutionOFgpmaps} and \ref{sec:viruslandscape}), could be particularly relevant for the use of CPMs if, as posited by the ``adaptive oncogenesis'' hypothesis \cite{degregori:2018}, a key contribution to the relationship between age and cancer is the change in tissue fitness landscape with age (briefly, under the fitness landscapes of youth most mutants would have low fitness, unlike in the landscapes at older age). At a minimum, stratification of data sets by age would be warranted.
The sheer size of genotype and phenotype spaces is a potential matter of concern for CPMs, since the latter can only analyse a limited number of events. High-dimensional fitness landscapes might show increased mutational accessibility \cite{gavrilets:1997}, and thus both increased evolvability \cite{payne:2019} and decreased evolutionary predictability. From the point of view of predicting tumour evolution with CPMs, robustness to alterations in the features examined by the CPM would of course be a hurdle; but then, hopefully, these features would not have been regarded as ``drivers''. However, it should be mentioned here that ``passenger'' mutations in cancer, traditionally considered neutral, might actually reduce the fitness of cancer cells and prevent tumour progression \cite{mcfarland:2017}; this raises the question of how to incorporate this lack of robustness in CPMs and, more generally, the extent of robustness and fitness landscape navigability in the cancer genome. The existence of a large pool of mildly deleterious passengers can also have consequences for procedures, such as CPMs, that analyse only a small subspace of the GP map. Of note, CPMs are often used in cancer progression scenarios where aneuploidies and karyotypic changes are common \cite{heng:2017}. This becomes an example of evolution of the GP map \cite{cuypers:2012,cuypers:2014,cuypers:2017}, a question deeply related to the proper comparison between GP maps of variable sequence length (see section~\ref{sec:evolutionOFgpmaps}). Choosing the right layer of analysis might again alleviate this problem, at least from the point of view of using CPMs to predict tumour evolution. The possible applicability of concepts emerging from evolving GP maps to the cancer genome is intriguing, especially given the possible costs of chromosomal instability and aneuploidy in cancer \cite{mcfarland:2017}, with the caveat that cancer constitutes a short-term evolution experiment that starts from cells with a long evolutionary history and that dies with its host \cite{kokko:2017}. Finally, the extent to which neutrality and phenotypic bias (see sections~\ref{sec:RNABias} and \ref{sec:dynamics}) affect CPMs remains an open question, since CPMs are predicated on the idea that natural selection is what matters for the features studied. \section{Summary and short-term perspectives} \label{sec:perspectives} Exhaustive enumerations of genotype spaces are only feasible for short sequence lengths. These enumerations may be sufficient in specific empirical cases, such as the study of transcription factor binding sites (see Sections~\ref{sec:SMevol} and \ref{sec:EEM}) or the construction, in the near future, of the first complete RNA GP maps incorporating experimentally measured fitness (SELEX experiments of small synthetic aptamers exploring the whole sequence space of length 24, such as those of Ref. \cite{jimenez:2013}, are already available). However, the number of possible genotypes for most biologically relevant sequence lengths is out of reach and, in the vast majority of cases, will always be: the estimated number of particles in the universe is of order $10^{80}$, a quantity comparable to the number of RNA sequences of length $L=133$ (the shortest known viroid has length $L=246$) or to that of proteins with $62$ amino acids (the class of ``small proteins'' refers to those with fewer than 100 amino acids).
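These orders of magnitude are easy to reproduce; the short Python computation below, with the lengths taken from the text, is included only as a sanity check of the quoted figures.

\begin{verbatim}
from math import log10

# number of particles in the universe (order of magnitude): 10^80
print(133 * log10(4))    # RNA sequences of length 133: ~10^80.1
print(62 * log10(20))    # proteins of 62 amino acids:  ~10^80.7
\end{verbatim}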
On the other hand, complete GP maps using RNA folding, the HP model, and toyLIFE \cite{garcia-martin:2018}, or using transcription factor binding \cite{berg:2004,khatri:2015a}, have proven to be very valuable resources for unveiling and testing some general properties of GP relationships, which seem to be common to several models. Further efforts toward theoretical developments that allow extrapolations to arbitrarily large genotype sizes, as well as approaches targeting higher levels of abstraction to study GP relationships without exploring the whole genotype space \cite{garcia-martin:2016b}, appear to be two main avenues to complement computational studies. Though the specifics of folding algorithms do not seem to affect the statistical properties of GP maps, we cannot forget that the predictive abilities of those algorithms depend on the accuracy of the energy model and its parameters, which in the case of RNA or proteins are extrapolated from experimental measurements obtained under very specific conditions. Therefore, any improvement on this aspect will have a huge impact, not only on the accuracy of RNA, protein, and possibly other GP maps, but on every related research field concerned with functional prediction. Computational analyses might also benefit from approaches that do not demand an exhaustive enumeration, but are tailored, for example, to test theoretical predictions. One such approach might be the computation of the dual partition function of multiple RNA structures. Also, complete inverse folding methodologies can be used to develop computational frameworks for the study of genotype-phenotype-function relationships. Current algorithms can potentially build partial GP maps focused on phenotypes of interest, to which available experimental data can be fitted, thus providing an appropriate context to make predictions and guide further experiments. New tools able to produce reliable estimates of structural properties, like neutral set size, robustness or evolvability, should ideally be independent of the GP map, as well as experimentally compatible, i.e.~they should allow predictions from small samples of genotypes. In this context, a sampling method that produces a genotype sample that optimally represents the phenotype of interest would be an important advance. Some progress towards this goal has been made in the form of a computational tool \cite{jorg:2008} that produces estimates of the size and robustness of RNA secondary structure phenotypes---and is in principle transferable to other GP maps---and through the estimation of the versatility of genotypes (see Section~\ref{sec:ConstrainedPositions}), but ample space for improvement remains. Also, while these tools can predict neutral set size and robustness, no approaches exist yet for estimating phenotype evolvability or phenotype-phenotype correlations, which would yield the framework required to understand the evolution of evolvability or the deeply related concept of selection of the mutational neighbourhood. We are only beginning to understand how the structure of GP maps depends upon environmental conditions \cite{devos:2015,steinberg:2016,li:2018,gorter:2018}. We largely do not know how the structure of a GP map changes with the dimensionality of genotype space, a topic that, beyond simple evolvable cells \cite{cuypers:2017} or toyLIFE \cite{arias:2014}, could potentially be explored using artificial genetic codes \cite{zhang:2017,fredens:2019} or expanded nucleotide alphabets \cite{hoshika:2019}.
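To illustrate what such sampling-based estimators do, the Python sketch below estimates the robustness of a phenotype in a deliberately artificial toy GP map (binary genotypes mapped to their number of contiguous blocks of 1s). The map, the sample size and the estimator are illustrative stand-ins chosen for brevity, not the algorithm of Ref.~\cite{jorg:2008}.

\begin{verbatim}
import random

random.seed(0)
L = 12

def phenotype(g):
    """Toy GP map: number of contiguous blocks of 1s in the genotype."""
    return sum(1 for i, b in enumerate(g)
               if b == 1 and (i == 0 or g[i - 1] == 0))

def estimate_robustness(target, n_samples=20000):
    """Fraction of single-point mutations that leave the phenotype
    unchanged, averaged over a random sample of genotypes with the
    target phenotype (rejection sampling)."""
    neutral = total = 0
    while total < n_samples:
        g = [random.randint(0, 1) for _ in range(L)]
        if phenotype(g) != target:
            continue
        i = random.randrange(L)
        g2 = g.copy()
        g2[i] ^= 1
        neutral += phenotype(g2) == target
        total += 1
    return neutral / total

for target in (1, 3):
    print(target, estimate_robustness(target))
\end{verbatim}

Even in this toy setting the estimate requires only a sample of genotypes rather than an exhaustive enumeration, which is the property that experimentally compatible tools need to have.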
Finally, no matter the technological advance, the hyper-astronomical size of genotype space precludes the experimental construction of exhaustively-enumerated GP maps for large macromolecules, gene regulatory circuits, and metabolic pathways \cite{louis:2016}. This inconvenient fact necessitates the development of methods that can reliably infer the structure of a GP map from a relatively small sample of the map \cite{otwinowski:2014,duplessis:2016}. Besides analytical approaches based on generic properties of GP maps that allow inferences of their large-scale structure (see section~\ref{sec:UnivTopology}), advances in deep learning are already offering promising solutions to this key problem \cite{riesselman:2018}. \subsection{Towards an improved understanding of GP map architecture: Is it universal?} As discussed in section~\ref{sec:UnivTopology}, notable similarities exist amongst GP map properties, giving rise to the notion of ``universal'' \cite{ahnert:2017,greenbury:2014} properties of GP maps, such as genetic correlations and phenotypic bias. Phenotypic bias, genetic correlations and evolvability are discussed in most studies of GP maps, but other properties, such as the assortativity of neutral networks \cite{aguirre:2011}, have only been analysed for some models. These topological properties could either provide a way of distinguishing between sequence-to-structure and artificial life GP maps, or they could turn out to be ``universal'' across a variety of models. At present, the universality of the structure induced in genotype spaces by evolutionarily sensible GP maps is a conjecture that those analyses, among others, could help to prove or disprove. Behind this conjecture lies the main question of which fundamental mechanisms are responsible for the potentially universal features. As discussed in Section~\ref{sec:PossibleRoots}, spaces of high dimensionality that facilitate interconnections between genotypes and phenotypes appear to be a prerequisite. More specific explanations for the striking similarities detected among dissimilar GP maps have come from simple analytic models \cite{greenbury:2015,manrubia:2017} (see section~\ref{sec:ConstrainedPositions}). These models differ, but qualitatively they are all based on the fact that, depending on the phenotype, a part of the genotype is more constrained than the rest, for example to enable base pairing in RNA \cite{manrubia:2017}. Interestingly, such a model can also be constructed to predict GP map properties in Richard Dawkins' biomorphs \cite{martin:2020}. The fact that such widely different GP maps can be understood with similar models supports the hypothesis that sequence constraints are an important cause of the observed similarities between GP maps. A question for future research is the extent to which these sequence constraints generalise to other biological or artificial life GP maps. Are there any counter-examples? And do the kinds of assumptions about sequence constraints made in the analytic models always hold, or can we observe GP maps with similar properties that cannot be modelled in terms of sequence constraints? In the context of mathematical models of GP maps, it would also be desirable to further develop the existing models to explain more complex and biologically relevant situations, and to find out whether generic structural properties of genotype spaces are maintained under those circumstances.
More realistic models should include mutations other than point mutations, such as deletions, duplications or insertions (there are just a few examples where the genome size is variable, among them that described in Section~\ref{sec:VirtualCells}), and recombination (see Section~\ref{sec:Recombination}). Extension to many-to-many GP maps by allowing multiple and semi-optimal phenotypes for a genotype---as is the case for RNA sequences, for which there can exist multiple secondary structures with quite similar free energies---seems essential to fully understand adaptability \cite{deboer:2012,deboer:2014}. Models such as toyLIFE and virtual cells should be further studied if we want to explore issues relevant to synthetic biology, among others. A very relevant question for the synthetic biology community has been how to design gene regulatory circuits that are mutationally robust \cite{chen:2011}. Results with toyLIFE show that genotypic robustness is a function not only of the individual components, but also of the complete network, which could be designed to be robust even if the individual components are not \cite{catalan:2018}. Moreover, these extended models would have to come with redefinitions of structural properties like neutral set size, robustness and evolvability. \subsection{Evolution \emph{on} and \emph{of} genotype spaces} Since the eventual aim of GP map studies is to understand evolutionary processes, a key question is how each of the universal GP map properties---should they exist---affects evolutionary outcomes. Phenotypic bias, for example, implies that only very abundant phenotypes will be visited when adapting to a new evolutionary challenge. This implies that the evolutionary search is constrained to look in the space of very abundant solutions. This constraint might lead to a limitation in the number of possible phenotypes attainable through evolution: it has been put forward, and supported with simple developmental models, that the small fraction of phenotypes visible to evolution are highly clustered in morphospace and that the most frequent phenotypes are the most similar \cite{borenstein:2008}---recalling the relevance of phenotype-phenotype correlations. Since evolutionary search is a consequence of the stochastic nature of the evolutionary dynamics, and is not dependent on the particulars of the GP map, there is no reason why this phenomenon should not be observed in real GP maps. It has also been shown that transition times between phenotypes depend very strongly on how they are connected in genotype space, and there is a strong indication that genotypic robustness within a neutral network plays a role \cite{hu:2012}: transition times depend strongly on how accessible a given phenotype is from the most robust genotypes. Because evolution naturally tends to visit the most robust genotypes \cite{nimwegen:1999}, their connections to other phenotypes may be more relevant than those of less robust genotypes. The natural question to address is whether there is a mutational bias in evolution towards phenotypes that connect to robust genotypes. Though computational GP maps have been the primary tool to explore this question in depth, some experimental work on this topic has been carried out as well. Indeed, {\it Pseudomonas aeruginosa} preferentially chooses three particular mutational pathways to evolve an adaptive phenotype under certain conditions \cite{lind:2015}.
When these pathways are repressed (through gene knockouts), the bacteria are able to evolve the same phenotype, but using new mutations. In fact, those mutations were available in the original population, but the probability of fixing them is very small compared to the three preferred pathways. This work gives empirical support to the relevance of mutational neighbourhoods for evolution (see Section~\ref{sec:Quasispecies}), and highlights once more the need for further computational and experimental investigations of this topic. Our knowledge of how the GP map properties individually affect evolutionary outcomes is still incomplete. For example, phenotypic bias is known to affect evolutionary outcomes due to at least three mechanisms: the `survival of the flattest' \cite{wilke:2001Nat}, the `arrival of the frequent' \cite{schaper:2014}, and its effect on the free fitness of phenotypes in the monomorphic regime \cite{iwasa:1988,sella:2005,khatri:2009,khatri:2015a}. Despite this progress, it may still be difficult to estimate, for cases more complex than the scenario studied by Schaper and Louis, how strong the `arrival of the frequent' effect will be and whether phenotypic frequency or phenotypic fitness is likely to determine evolutionary outcomes. Ultimately, this knowledge will help us answer the bigger questions of whether and how we can use GP maps to predict short- and long-term trends in evolution \cite{lassig:2017,milocco:2019,nosil:2020}. The application of the tools of network science to the previous context, and to evolutionary dynamics of heterogeneous populations at large, opens a promising avenue that, as of today, nevertheless faces some limitations and difficulties. First, although the hyper-astronomical sizes of genotype networks seem an insurmountable obstacle, the theory of competing networks shows that, in genotype spaces where function is relatively sparse, only the much smaller local subnetworks are relevant to analyse the evolution of populations---while the rest of the huge network of networks is in practice negligible \cite{yubero:2017}. A different promising avenue is the generic construction of a phenotype network that can be computationally---and likely analytically---tackled \cite{cowperthwaite:2008,schaper:2014,manrubia:2015}. Second, most theoretical work has been developed using models that only consider point mutations. The introduction of different mutational mechanisms, as discussed in the previous sections, would drastically transform the topology and spectral properties of genotype networks. However, once the new network defined through those rules is known, the analysis proceeds following standard procedures. When the GP map is many-to-many, either due to environmental changes or to phenotypic promiscuity, more complex configurations such as multi-layer networks should be introduced to properly describe the evolution of the system \cite{catalan:2017,aguirre:2018}. Recombination cannot be easily cast in this network framework, which is unable to describe the process in detail \cite{azevedo:2006,devisser:2009,paixao:2014}. Different approaches can, however, be used in this case (see Section~\ref{sec:Recombination}) and hopefully combined to eventually yield a unified formal description of dynamics under a variety of microscopic processes generating diversity.
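As a cartoon of the interplay between phenotypic frequency and fitness discussed above, the following Python sketch simulates a race between two phenotypes: one frequent but conferring a small selective advantage, the other rare but fitter. Discovery is modelled as a per-generation rate proportional to phenotypic frequency, combined with Haldane's $2s$ fixation probability; all numbers are illustrative, and the model drastically simplifies population dynamics on neutral networks.

\begin{verbatim}
import random

random.seed(1)
mu = 1e-3                                  # discovery rate scale (illustrative)
phenotypes = {"frequent": (0.10, 0.01),    # (phenotypic frequency, advantage s)
              "rare":     (0.001, 0.05)}

def waiting_time(freq, s):
    """Generations until a mutant of this phenotype is found AND fixes;
    modelled as an exponential with rate mu * frequency * 2s."""
    return random.expovariate(mu * freq * 2 * s)

wins = {"frequent": 0, "rare": 0}
for _ in range(10000):
    times = {n: waiting_time(*pars) for n, pars in phenotypes.items()}
    wins[min(times, key=times.get)] += 1
print(wins)   # the frequent phenotype typically arrives (and fixes) first
\end{verbatim}

In this caricature the frequent phenotype wins the large majority of realisations despite its fivefold smaller selective advantage, which is the qualitative content of the `arrival of the frequent' effect.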
\section{Outlook: On the feasibility of a complete genotype-to-organism map} \label{sec:GOmap} Systems such as RNA folding, protein secondary structure, and transcription factor binding are attractive models for understanding the GP map because it is possible to compute the map from physical first principles. But these processes are only the first steps in the long chain of interactions whose end result is organismal function, structure, viability, and reproduction \cite{bershtein:2017}. At these higher levels of integration, it is the integration itself that in large measure determines the GP map. To address the question of whether evolution \emph{of} the GP map or evolution \emph{on} the GP map is the appropriate framework, it may be helpful to use a distinction \cite{altenberg:1995} between two different properties of the GP map: \emph{generative} properties---how the genotype is actually used to produce the phenotype---and \emph{variational} properties---the way that changes in the genotype map to changes in the phenotype. More recently, this distinction has been called ``formative'' and ``differential'' properties, respectively \cite{orgogozo:2015}. Unravelling the generative properties of the GP map is the main agenda of molecular and developmental biology. The variational properties ultimately derive from the generative properties, so the question is whether anything systematic about either can be predicted from evolutionary theory. Tremendous resources have been dedicated to molecular and cellular biology with the promise that, by identifying all the parts and interactions involved in a biological phenomenon, it could be understood, controlled, and even synthesised. This promise has come to fruition in many cases, as attested by the advent of successful treatments for many diseases. This was the justification for the Human Genome Project, with the hope that once all of the human DNA sequence was known, the genetic basis of diseases and organismal functions would be attainable. But a surprise from the Human Genome Project was the ``missing heritability'' that emerged from genome-wide association studies (GWAS). Analysis of DNA sequences could identify only a small fraction of the genes responsible for human phenotypes known from family studies to have high heritability. Currently there are contradictory findings about the GP map at the whole-organism level. On the one hand are studies which find that organisms exhibit a modular structure over large classes of phenotypic variables \cite{wagner:2011b}, to the point where modularity is often stated as an accepted fact \cite{espinosa-soto:2018}. On the other hand are studies which find that almost every gene affects many characters (universal pleiotropy) and almost every character is affected by many genes, summarised as the \emph{omnigenic model} of the GP map \cite{boyle:2017}. The omnigenic model proposes that while there may be ``core genes'' contributing to any given phenotype, the network of gene interactions has a ``small world'' topology, a property that leads to broad pleiotropy and polygeny in the GP map. It may prove helpful that there is another field that is also trying to understand how complex functional behaviours emerge out of the interaction of thousands or millions of simple parts---the artificial neural network community. Artificial neural networks (ANNs) have now been created whose behaviours rival or exceed certain human cognitive capabilities.
Because of the recent achievements of ANNs, the field is currently in an explosive state of development. The achievements of engineers have outpaced the understanding of theoreticians as to why deep learning networks perform so well, and the theoreticians are working to catch up. As of now, there are several observations about ANNs that may be instructive to those making computational models of the GP map. The multilayered ``deep neural networks'' (DNNs), which have proven to be the most successful ANNs, are defined by a collection of thousands or millions of algorithmically learned numbers. The numbers specify the weights of connections between nodes, and each node sums its inputs from other nodes and then passes a function of this sum as input to other nodes. What is most notable about DNN engineering is that there is very little interest in the specific values of the numbers, and no way to understand how the specific numbers generate the network behaviour. While there has been some success at \emph{interpreting} DNNs---where one identifies what feature of an input causes a particular neuron in the network to activate---there is currently little understanding of how all the weights connecting the neurons produce this behaviour. The main focus has been on the \emph{processes} that generate the numbers, and this is where theoreticians are attempting to generate understanding. The most successful process for training the weights is based on their variational properties: how changes in the weights change the error between the network's actual behaviour and its desired behaviour. The methods of back-propagation and stochastic gradient descent change the weights until there is little or no error on a set of training examples applied to the network \cite{bottou:2010}. The variational properties of greatest interest are how the network behaves on novel inputs, and how changes in the inputs map to changes in the network behaviour. A similar situation may hold in complex organisms. Without understanding or even knowing how the thousands of organismal components are generating phenotypes on the whole-organism level, we may nevertheless be able to understand the GP map's variational properties based on evolutionary processes. Here we briefly list a few principal processes that are understood to shape the variational properties of genotypes. \subsection{The evolution of re-evolvability under varying selection} In a number of different GP map models, evolution under recurring variation in natural selection moves the genome to places in genotype space where fewer and fewer mutations are needed to re-evolve previous adaptations when the old environment returns. This has been observed for a model of two neutral networks \cite{draghi:2008} (also called the evolution of ``genetic potential'' \cite{ancel:2005}), networks of logic gates (the variation is called ``modularly varying goals'' \cite{kashtan:2005}), and gene regulatory networks and the virtual cells discussed above (just called the ``evolution of evolvability'' \cite{cuypers:2017,crombach:2007,crombach:2008}). However, not all GP maps support this phenomenon \cite{kashtan:2005}. Exactly what properties a GP map must possess to allow the evolution of re-evolvability remains an open problem. \subsection{Constructional selection} Genes not only provide material for the generation of the phenotype, but also degrees of freedom for varying the phenotype.
A gene duplication or {\it de novo} gene origin thus differs from a point mutation in that it increases the degrees of freedom of the GP map, and thus adds new variational properties to the genome. Gene duplications and deletions are frequent events in eukaryotic reproduction. Any variational property of a gene that is associated with the gene being retained in the genome can thus become enriched in the GP map \cite{altenberg:1995}. The likelihood that a duplicate copy of a gene is retained by evolution has been called its ``gene duplicability'' \cite{yang:2003}. The identification of gene properties that are associated with gene duplicability is an active area of research. Some of the properties identified include: \begin{itemize} \item peripheral versus central position in protein-protein interaction networks \cite{chen:2014}; \item high levels of gene expression \cite{mattenberger:2017}; \item high rates of sequence evolution before duplication \cite{otoole:2017}; \item ordered versus intrinsically disordered proteins \cite{banerjee:2017}; \item signaling, transport, and metabolism functions increase gene duplicability, while involvement in genome stability and organelle function reduces it, for whole genome duplications in plants \cite{liz:2016}. \end{itemize} While the causes of differential gene duplicability have been the subject of a great deal of investigation, its \emph{consequences} for organismal evolvability have received limited attention. Quantitative models for how differences in gene duplicability can shape the variational properties of the entire genome \cite{altenberg:1995} have been applied to examples of evolutionary computation \cite{altenberg:1994:EEGP,altenberg:1994:EBR,altenberg:1994:EPIGP} under the rubric ``constructional selection''. One can conceive of the genome as a population of genes, and differences in gene duplicability as fitness differences, not on the organismal level, but on this level of genome-as-population. Constructional selection results in the enrichment of the genome in genes that have a higher likelihood of being retained when copies of them are created. These are gene copies that evolve to the point where deletion or inactivation becomes deleterious to the organism. This occurs for genes more likely to subfunctionalise, or escape adaptive conflict, or neofunctionalise. It provides a ubiquitous mechanism for the evolution of evolvability. \subsection{Entropic evolutionary forces} The GP map is mostly cast as a many-to-one map because there may be multiple genotypes that result in the same phenotype, due to low-level properties such as synonymous codons, but also due to multiple ways that ligand-receptor bonds may be achieved, and multiple ways that the same gene regulatory interactions may be encoded. This degeneracy of the GP map \cite{whitacre:2010} creates the possibility of evolution along neutral networks of mutationally-connected genotypes with the same fitness. The randomness of evolution along neutral networks brings forth statistical mechanical forces of entropy increase. This entropic behaviour has been described as ``biology's first law'' \cite{mcshea:2010}.
Entropic phenomena that result from evolution along neutral networks include: \paragraph{\bf Subfunctionalization} If different functions in a gene are modular enough so that they can be individually disabled without affecting each other, then the process of gene duplication and complementary loss of functions effectively spreads the functions among multiple genes \cite{hughes:1994,stoltzfus:1999,force:1999}. Since there are many more ways to spread the functions apart than to keep them in one gene, there is an entropic force in the direction of separating separable functions. \paragraph{\bf Constructive neutral evolution} Stoltzfus \cite{stoltzfus:1999} introduced the general concept of entropic processes that add greater genetic complexity to traits simply because there happen to be more complex ways to generate a trait than there are simple ways. If there are neutral mutational pathways between alternate means of generating traits, then the more numerous class will come to dominate. \paragraph{\bf Non-optimal phenotypes} As explored in Section~\ref{sec:SMevol}, in the context of the weak-mutation monomorphic regime, there is an exact analogy to statistical mechanics, embodied in a quantity called free fitness \cite{iwasa:1988,khatri:2015}, which is the sum of the fitness of phenotypes and the sequence entropy (log degeneracy) weighted by the analogue of temperature, the inverse of the population size. This means that for small populations, evolution gives rise to non-optimal phenotypes that balance fitness and entropy, or free fitness. \paragraph{\bf Developmental systems drift} Primary sequences may diverge between species even while the same developmental outcomes are maintained \cite{true:2001}. Within the free fitness framework this has been explored (Section~\ref{sec:SMevol}) under stabilising selection, and for small populations the effect of sequence entropy is predicted to make populations develop isolation more quickly \cite{khatri:2019}. The great variation in genome sizes over different taxa and even within closely related taxa suggests that the quantity of DNA maintained in the genome may function as a quantitative trait subject to species-specific natural selection. Just as in physical systems, where entropic forces can be counteracted by energy potentials, natural selection on genomic complexity as a quantitative trait may counteract the entropic tendencies in constructive neutral evolution. Such dynamics may be at work in the genomic streamlining discussed in Section~\ref{sec:empirical}. As is seen with the infinitesimal model of quantitative genetics, even though any individual streamlining event or an individual complexification event may have unobservable effects on fitness, the aggregate forces of entropic complexification and quantitative selection on genome size may statistically push the genome toward a balance, the character of which depends on the species-specific costs of genome maintenance. \subsection{Omnigenic integration} When adaptations are produced by large-scale interactions of organismal components, the GP map can be expected to be highly polygenic. Complex interactions of many components make the individual components also highly pleiotropic. Any given genetic change may beneficially affect certain traits while being detrimental to others.
When their net effect is beneficial, they are selected, but the deleterious effects they produce on certain traits create the opportunity for other genetic variation to compensate for these effects. The GP map then becomes a patchwork of compensatory effects. In the limit of small effects, this patchwork becomes Fisher's \emph{infinitesimal model} \cite{barton:2017}, in which pleiotropy and polygeny are continuous and ubiquitous and there is little structuring of the GP map. \subsection{Selection for mixability} Natural selection in sexual organisms with genetic recombination favours alleles that have high average fitness among all the different genotypes in which they appear in the population. An allele which might produce a highly adaptive phenotype when combined with just the right alleles at the same or other loci faces the breakup of such an advantageous combination due to segregation and recombination. Alleles which produce a reliable fitness advantage regardless of the genetic variation they are recombined with---a property called ``mixability''---have a selective advantage \cite{livnat:2008}. The aggregate consequence of selection for mixability is toward greater modularity in the production of phenotypes: alleles individually produce the adaptive advantage without reliance on particular states of alleles at other loci. It suggests a process that counteracts the ``omnigenic'' model of complete genomic integration. The consequences of selection for mixability on the GP map have only begun to be elucidated \cite{livnat:2010}. \subsection{Epistatic smoothing of the fitness landscape} Conrad noted that a mutation which smoothed the fitness landscape for other loci would enhance their chance of producing advantageous mutations, and would hitchhike along with such mutations, thus providing a constant force toward reducing reciprocal sign epistasis. This is the earliest mechanism proposed for the evolution of evolvability \cite{conrad:1972,conrad:1979}, and has yet to be fully investigated theoretically. \subsection{Summary} We have identified a patchwork of processes that in principle are able to shape the variational properties of the GP map for phenotypes at the level of whole organisms, where complex integration leaves us unable to derive the properties from physical first principles. This is an area in which evolutionary theory needs much greater development. At levels of complexity at which detailed reductionist modelling is currently impossible, we have surveyed efforts to date that attempt to analyse how evolutionary processes shape the GP map. The body of results described, while not a fully fleshed-out theory, is perhaps sufficient to demonstrate that this process-based approach can inform a research program for the GP map at the whole-organism level. \section*{Acknowledgements} All authors are indebted to the Centre Europ\'een de Calcul Atomique et Mol\'eculaire (CECAM) for supporting the organization of the workshop ``From genotypes to function. Challenges in the computation of realistic genotype-phenotype maps'', which took place in Zaragoza (March 13th to March 15th, 2019) and triggered the production of this work.
Additional sources of financial support for the authors are listed below: \\ SM: grant FIS2017-89773-P (MINECO/FEDER, EU); ``Severo Ochoa'' Centers of Excellence to CNB, SEV 2017-0712\\ JAC: grants FIS2015-64349-P (MINECO/FEDER, EU) and PGC2018-098186-B-I00 (MICINN/FEDER, EU) \\ JA: grant FIS2017-89773-P (MINECO/FEDER, EU) \\ LA: Foundational Questions Institute (FQXi) and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation, for FQXi Grant number FQXi-RFP-IPW-1913, Stanford Center for Computational, Evolutionary and Human Genomics and the Morrison Institute for Population and Resources Studies, Stanford University, the 2015 Information Processing in Cells and Tissues Conference, and the Mathematical Biosciences Institute at The Ohio State University, for its support through National Science Foundation Award \#DMS 0931642 \\ PC: Ram\'on Areces Postdoctoral Fellowship \\ RDU: grant BFU2015-67302-R (MINECO/FEDER, EU) \\ SFE: grants BFU2015-65037-P (MCIU-FEDER) and PROMETEOII/2014/012 (Generalitat Valenciana) \\ JK: DFG within CRC1310 ``Predictability in Evolution'' \\ NSM: Gates Cambridge Scholarship; Winton Programme for the Physics of Sustainability \\ JLP: Swiss National Science Foundation, grant PP00P3\_170604 \\ MJT: grants EP/L016494/1 (EPSRC/BBSRC Centre for Doctoral Training in Synthetic Biology) and BB/L01386X/1 (BBSRC/EPSRC Synthetic Biology Research Centre, BrisSynBio) \\ MW: the EPSRC and the Gatsby Charitable Foundation \bibliographystyle{elsarticle-num-names}
\section{SDP formulation of the guessing probability} As shown in \cite{PCS+15}, it is possible to re-express the guessing probability presented in the main text as an SDP \cite{BV04,footnote2}. Defining $\tilde{\rho}_e^{\mathrm{A}\mathrm{B}} := p(e)\rho^{\mathrm{A}\mathrm{B}}_e$ as the subnormalised state sent by Eve, and $\sigma_{a|x}^e := \tr_\mathrm{A}[(M_{a|x}\otimes \openone)\tilde{\rho}^{\mathrm{A}\mathrm{B}}_e]$ as the subnormalised assemblage, $P_\mathrm{guess}(x^*)$ is equivalent to \begin{align} \label{e:pguess sdp} P_\mathrm{guess}(x^*) = \max_{\{\sigma_{a|x}^e\}}&\quad\tr\sum_e\sigma_{a=e|x=x^*}^e \\ \text{s.t.}&\quad \tr\sum_{a,x}F_{a|x}\sum_e \sigma_{a|x}^e =\beta^\mathrm{obs}, \nonumber \\ &\quad \sum_a \sigma_{a|x}^e = \sum_a \sigma_{a|x^*}^e \quad \forall e,x, \nonumber \\ &\quad \tr \sum_{ae} \sigma_{a|x^*}^e = 1, \quad \sigma_{a|x}^e \geq 0 \quad \forall a,e,x. \nonumber \end{align} The first constraint enforces consistency of the average assemblage prepared by Eve with the observed steering inequality violation; the second enforces no-signalling, which arises from the fact that $\sum_a M_{a|x} = \openone$ for all $x$, satisfied by all valid measurements; the third enforces that $\sum_e p(e) = 1$; the last constraint simultaneously enforces that $p(e) \geq 0$ and that the states prepared for system $\mathrm{B}$, $\rho^e_{a|x}$, are positive semidefinite operators. In particular, it was shown in \cite{PCS+15} that given any set of assemblages $\{\sigma_{a|x}^e\}_e$ satisfying the above SDP, one can always find a quantum strategy for Eve $\{p(e), \rho^{\mathrm{A}\mathrm{B}}_e, M_{a|x}\}$ which realises them, allowing Eve to guess $A$'s outcomes with the same guessing probability.
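For concreteness, the SDP above can be implemented directly with an off-the-shelf solver. The Python sketch below, using CVXPY, treats a minimal qubit scenario with two measurements and two outcomes; the steering functional $F_{a|x}$, the observed value $\beta^\mathrm{obs}$ and the solver choice are placeholder assumptions for illustration, to be replaced by the inequality actually used in a given experiment.

\begin{verbatim}
import numpy as np
import cvxpy as cp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A, X, E = (0, 1), (0, 1), (0, 1)   # outcomes, inputs, Eve's guesses
xstar = 0

# Placeholder steering functional F_{a|x}; its maximal value over
# no-signalling assemblages is 1 here, so beta_obs must not exceed 1.
F = {(a, x): ((-1) ** a) * (sz if x == 0 else sx) / 2
     for a in A for x in X}
beta_obs = 0.8                     # illustrative observed violation

# subnormalised assemblage prepared by Eve, sigma_{a|x}^e
sig = {(a, x, e): cp.Variable((2, 2), hermitian=True)
       for a in A for x in X for e in E}
cons = [sig[k] >> 0 for k in sig]

# consistency with the observed value of the steering functional
viol = sum(cp.real(cp.trace(F[a, x] @ sig[a, x, e]))
           for a in A for x in X for e in E)
cons += [viol == beta_obs]

# no-signalling: Bob's reduced state is independent of x for every e
for e in E:
    red = sum(sig[a, xstar, e] for a in A)
    cons += [sum(sig[a, x, e] for a in A) == red for x in X]

# normalisation: sum_e p(e) = 1
cons += [sum(cp.real(cp.trace(sig[a, xstar, e]))
             for a in A for e in E) == 1]

p_guess = sum(cp.real(cp.trace(sig[e, xstar, e])) for e in E)
prob = cp.Problem(cp.Maximize(p_guess), cons)
prob.solve(solver=cp.SCS)
print("P_guess(x*) =", prob.value)
\end{verbatim}

\end{appendix} \end{document}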
\subsection{Topological partition function and the wavefunction} In~\cite{Ooguri:2004zv}, it is conjectured that the partition function of a 4d BPS black hole is related to the topological string partition function by \begin{equation} Z_{BH} = |Z_{top}|^2\,.\label{eq:ooguri} \end{equation} Furthermore, it is pointed out that the topological partition function can be interpreted as a wave function, an interpretation that goes back to \cite{witten1993quantum}. Thus, the conjecture above becomes \begin{equation} Z_{BH} = |Z_{top}|^2 = |\psi|^2\,. \end{equation} A similar proposal exists in TMT \cite{Dijkgraaf:2004te}, where the partition function $Z_H$ of a 6d theory (contained within TMT and constructed from a volume form) is associated with a Wigner function arising from the B-model of topological strings. Here we present a realization of these ideas, but in the context of 3d gravity. As shown in~\cite{Dijkgraaf:2004te} at the level of the equations of motion and in \cite{Chagoya_2018} at the level of the action, 3d gravity is contained in TMT as a particular splitting of the 7d manifold. In order to give a concrete example of the relation between $Z_H$ and the black hole entropy we consider an extremal BTZ black hole, compute its volume form in terms of the 2+1 dimensional standard and exotic actions for gravity, then we obtain $Z_H$, and finally we compare it to the norm of the wave function for the same black hole \cite{Vaz2008}. The organization of this work is as follows. First, we review and formalize the derivation of the standard and exotic actions for 2+1 gravity from TMT and construct the topological partition function. Then, we review the BTZ black hole solutions and their partition function obtained from canonical quantization. Finally, we show how these results are related. \section{Stable forms in 7D} In this section we study the relation between invariant stable forms and geometric structures in seven dimensions, starting with the model space $\mathbb{R}^7$. To understand the geometric structures defined by stable forms, we need to study the isotropy subgroup of such forms under the action of the general linear group $GL(7)$. We start by recalling the structure on $\mathbb{R}^7$; later we use this construction to understand the case of a manifold $X$. Let $V$ be a real 7d vector space with basis $\{e_i\}$ and consider the space of $3$-forms $\wedge^3 V^*$. A form $\omega$ in $\wedge^3 V^*$ can be written as \begin{equation} \omega=\sum_{i,j,k=1}^7 a_{ijk}e^{ijk}, \end{equation} where $e^{ijk}=e^i\wedge e^j\wedge e^k$ and $\{e^i\}$ is a basis for $V^*$. Consider the group $G=GL(7)$ of automorphisms of $V$. There is a natural action $G\curvearrowright \wedge^3V^*$ and it is known that there are two distinguished orbits given by this action, namely \begin{align} G&\cdot \omega_1,\\ G&\cdot \omega_2, \end{align} where $\omega_i$ is the form defined as \begin{align} \omega_1 & = e^{123} - e^{145} + e^{167} + e^{246} + e^{257} + e^{347} - e^{356}, \label{eq:w1} \\ \omega_2 & = e^{123} + e^{145} - e^{167} + e^{246} + e^{257} + e^{347} - e^{356}.\label{eq:w2} \end{align} To each form corresponds an isotropy group, namely the Lie group \begin{equation} G_{\omega_1}=G_2,\quad G_{\omega_2}=\Tilde{G_2}.
\end{equation} It is proved in \cite{bryant1987metrics} that $G_2$ is compact, connected, simple, simply connected and $14$-dimensional, and that it fixes the Euclidean metric $g_1=\sum_i (x^i)^2$ induced by the inner product $$\langle x, y\rangle_{\omega_1} = x^1y^1+x^2y^2+x^3y^3+x^4y^4+x^5y^5+x^6y^6+x^7y^7,$$ where $x= x^ie_i$ and $y= y^ie_i$. $G_2$ also preserves the orientation of the forms $\omega_1$ and $*\omega_1$ with respect to $g_1$, and $G_2$ is isomorphic to the group of automorphisms of the octonions. There are analogous results for the group $\Tilde{G_2}$: this group preserves $\omega_2$, $*\omega_2$ and the metric induced by $$\langle x,y\rangle_{\omega_2}=x^1y^1+x^2y^2+x^3y^3-x^4y^4-x^5y^5-x^6y^6-x^7y^7,$$ and it is the non-compact dual of $G_2$. It is also connected, of dimension 14 and simple. In this case the natural identification $$G\cdot \omega_i=G/G_{\omega_i}$$ is in fact a diffeomorphism. Since $\dim(G)=49$ and $\dim(G_2)=\dim(\Tilde{G_2})=14$, the dimension of these orbits, $\dim(G\cdot \omega_i)=49-14=35$, coincides with the dimension of the ambient space $\dim(\wedge^3V^*)=35$, and we conclude as in \cite{bryant1987metrics} that both orbits are open and the forms $\omega_1$ and $\omega_2$ are stable. In \cite{2007HVLe}, the authors show that the forms $\omega_1,\omega_2$ are essentially the unique stable forms, in the sense that any stable form $\omega\in \wedge^3V^*$ is either in the orbit of $\omega_1$ or $\omega_2$. The scenario we study in this paper is the case when $X$ is a complete 7d Riemannian manifold, $x\in X$ is a point and $V=T_x X$. A stable form induces a $G_{\omega_i}$-structure on $X$, as follows (see \cite{Clarke2012HolonomyGI}): Consider the fiber bundle $\wedge^3T^*X$ and the open subbundle $\mathcal{P}^i(X)$ with fiber $$\mathcal{P}^i_x=\{\omega\in \wedge^3 V^*| \exists f:V\rightarrow \mathbb{R}^7\text{ with }f^*(\omega_i)=\omega\},$$ where in the last definition $f$ is an oriented isomorphism. From the previous discussion $\mathcal{P}^i_x\cong G\cdot \omega_i$. Fix a form $\omega$ over $X$ such that $\omega|_x\in\mathcal{P}^i_x$, say $\omega|_x=g\cdot\omega_i$ for some $g$, and consider the frame bundle $F$ of $X$ with fiber $$F_x=\{f|f:V\rightarrow \mathbb{R}^7\text{ is an isometry}\}.$$ Let $Q$ be the principal subbundle of $F$ whose fiber consists of the isomorphisms preserving $\omega$. Hence the fiber is $Q_x\cong G_{\omega_i}$, and $\omega$ determines $Q$, which defines a $G_{\omega_i}$-structure on $X$, preserving the metric $g_{\omega}$ induced by the inner product $$\langle x,y\rangle_\omega=g\cdot\langle x,y\rangle_{\omega_i}.$$ There is a converse to this construction: given an oriented $G_{\omega_i}$-structure we can define a metric $g$, a $3$-form $\omega$ and $*\omega$ by requiring that the corresponding metric is preserved by the action of $G_{\omega_i}$. Let $X$ be a Riemannian 7d manifold with a $G_2$-structure $(\omega,g)$ and denote as $\nabla_g$ the Levi-Civita connection associated to $g$. Let $\nabla_g\omega$ be the torsion of this $G_2$-structure. We say that $(\omega,g)$ is torsion-free if $\nabla_g\omega=0$. Finally, define a $G_2$-manifold as a triplet $(X,\omega,g)$ such that $(\omega,g)$ is torsion-free. Consider a $G_2$-manifold $X$. The existence of a $G_2$ holonomy metric is equivalent to the existence of a $3$-form $\Phi$ satisfying, as in \cite{Dijkgraaf:2004te}, \begin{align} \begin{split} d\Phi&=0, \\ d_{*\Phi}\Phi &=0.
\end{split} \end{align} A stable $3$-form can be written in terms of a 7d vielbein as \begin{equation}\Phi=\sum_{i,j,k=1}^7 \Psi_{ijk}e^i e^j e^k,\end{equation} where $\Psi_{ijk}$ are the structure constants of the imaginary octonions. There are analogous constructions for stable forms on a $\Tilde{G_2}$-manifold, since the orbits of $\omega_1,\omega_2$ correspond to the holonomy groups $G_2$ and $\Tilde{G_2}$, respectively. In order to define a volume on a $G_{\omega_i}$-manifold $X$, consider a $3$-form $\Phi$ on $X$ as before, invariant under the corresponding holonomy group, and define the volume as \begin{equation} V_7(\Phi)=\int_X\Phi\wedge{}_{*\Phi}\Phi. \label{vform} \end{equation} As above, since in the 7d case there are only two open orbits of maximal dimension, it is natural to consider only forms in these orbits; this yields the notion of {\it genericity} used in \cite{Dijkgraaf:2004te}. \section{3D gravity from topological M-theory} In \cite{Dijkgraaf:2004te}, Dijkgraaf et al. introduced a notion of TMT in 7d which seems to unify several lower-dimensional topological models. In particular, they find a dimensional reduction that recovers the equations of motion of 2+1 gravity from the volume of the 7d manifold $X$ discussed in the previous section. A similar construction was given by Bryant et al.~\cite{bryant1987metrics}, where starting from a rank-4 spin bundle $\mathbf{S}$ over a 3d space of constant curvature (\textit{space form}), a 3-form $\Phi$ satisfying $d\Phi = d_{*\Phi}\Phi = 0$ is constructed by making use of the structure equations for a manifold with constant sectional curvature $\kappa \equiv 4\Lambda$, i.e., \begin{subequations}\label{eqs:struc} \begin{align} d e = - A\wedge e - e\wedge A\,, \\ d A = - A\wedge A - \Lambda e\wedge e\,, \end{align} \end{subequations} where $\{e^5, e^6, e^7\}$ is a basis of the tangent space at a point of the 3-manifold, and $A$ is a Levi-Civita connection 1-form. As~\cite{bryant1987metrics,Dijkgraaf:2004te} point out, a 3-form that generalizes $\omega_1$~\eqref{eq:w1} can satisfy the conditions $d\Phi = d_{*\Phi}\Phi = 0$ in some special cases. In order to write down this 3-form $\Phi$ it is convenient to introduce first a set of local coordinates on the 4d fibre. Let $y_i$ be those coordinates and define $r=y_i y^i$; notice that this is $SO(4)$-invariant. With the following 2-forms, \begin{align}\label{eq:sigmalo} \begin{split} \Sigma^5&=e^{12}-e^{34}, \\ \Sigma^6&=e^{13}-e^{42}, \\ \Sigma^7&=e^{14}-e^{23}, \end{split} \end{align} we can write the 3-form $\Phi$ that satisfies $d\Phi = d_{*\Phi}\Phi = 0$ as \begin{equation} \Phi = f^3(r) e^{567} + f(r) g^2(r) e^m\wedge \Sigma^m\,.\label{eq:phiminus} \end{equation} Since $f$ and $g$ depend only on $r$, $\Phi$ preserves the $SO(4)$ invariance of $\omega_1$. Recalling that $SO(4)$ is a subgroup of $G_2$, and by the discussion of the previous section, the fact that $\Phi$ is $SO(4)$-invariant is a good indicator that it can define a $G_2$ structure -- thus satisfying the required equations. The local coordinates $y_i$ are also used to define a basis of 1-forms in the fibre direction as \begin{equation} \alpha = d y - yA\,. \end{equation} The four components of $\alpha$ are identified as a local basis on the fibre, $\alpha^i = e^i$, $i=1,\dots,4$.
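As a quick aside, the duality properties of the 2-forms~\eqref{eq:sigmalo}, which become important below when the volume is split into two Chern-Simons actions, are easy to verify numerically. The following short sketch is an illustration added for the reader (it is not part of the derivation and assumes only Python with NumPy): it builds the flat Hodge star on $\wedge^2\mathbb{R}^4$ and confirms that $\Sigma^5,\Sigma^6,\Sigma^7$ are anti-self-dual, while their sign-flipped companions, used later, are self-dual.
\begin{verbatim}
# Consistency check (illustrative): Hodge duality of the fibre 2-forms.
# Indices 1..4 label the flat 4d fibre directions; orientation e^{1234}.
import itertools
import numpy as np

def two_form(terms):
    """Antisymmetric coefficient matrix of sum_c c * e^{ij} (1-based)."""
    w = np.zeros((4, 4))
    for i, j, c in terms:
        w[i - 1, j - 1] += c
        w[j - 1, i - 1] -= c
    return w

# Levi-Civita symbol on four indices (sign of the permutation).
eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])

def hodge(w):
    """(*w)_{kl} = (1/2) eps_{ijkl} w_{ij} for the Euclidean metric."""
    return 0.5 * np.einsum('ijkl,ij->kl', eps, w)

minus = [two_form([(1, 2, 1), (3, 4, -1)]),   # Sigma^5 = e^12 - e^34
         two_form([(1, 3, 1), (4, 2, -1)]),   # Sigma^6 = e^13 - e^42
         two_form([(1, 4, 1), (2, 3, -1)])]   # Sigma^7 = e^14 - e^23
plus  = [two_form([(1, 2, 1), (3, 4, 1)]),    # e^12 + e^34, etc.
         two_form([(1, 3, 1), (4, 2, 1)]),
         two_form([(1, 4, 1), (2, 3, 1)])]
for s in minus:
    assert np.allclose(hodge(s), -s)          # anti-self-dual
for s in plus:
    assert np.allclose(hodge(s), s)           # self-dual
\end{verbatim}
We now return to the fibre 1-forms $\alpha$.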
As a consequence of eqs.~\eqref{eqs:struc}, these 1-forms satisfy \begin{equation} d\alpha = -\alpha\wedge A + (\kappa/4) y\, e\wedge e.\label{eq:dalpha} \end{equation} Using Eqs.~(\ref{eqs:struc}), (\ref{eq:dalpha}) and \begin{equation}\label{phiast} {}_{*\Phi}\Phi=-\frac{1}{6} g^4\Sigma_m\wedge\Sigma^m + \frac{1}{2}f^2 g^2\epsilon^{mnp}e^m\wedge e^n\wedge \Sigma^p, \end{equation} in~\cite{0681.53021} it is shown that the equations $d\Phi = d_{*\Phi}\Phi = 0$ hold if \begin{align} \begin{split} f(r)&=\sqrt{3\Lambda}(1+r)^{1/3}\,, \\ g(r)&=2(1+r)^{-1/6}\,. \end{split} \end{align} Conversely, the authors of~\cite{Dijkgraaf:2004te} start with $d\Phi = d_{*\Phi}\Phi = 0$ and verify that the above assumptions for $f(r)$ and $g(r)$ lead to the structure equations~(\ref{eqs:struc}), i.e., in their interpretation, the equations of motion for 3d gravity arise from the equations for a 3-form with $G_2$-holonomy. If these equations of motion are recovered from such a 3-form $\Phi$, it is natural to look for a Lagrangian for $\Phi$ that encompasses the main points of the derivations above and reduces to the known Lagrangians for 3d gravity. This Lagrangian is given precisely in terms of the volume form discussed around Eq.~(\ref{vform}). In order to convert Eq.~(\ref{vform}) into an expression that we can recognise as the action for 2+1 gravity we perform the following steps. First, we rewrite the integrand $\Phi\wedge{}_{*\Phi}\Phi$ using the antisymmetry of the wedge product and of the Levi-Civita tensor, obtaining \begin{equation} V_7(\Phi)=\int_X \frac{40}{3}(3\Lambda)^{3/2}(1+r)^{1/3}e^{567}\wedge\Sigma_i\wedge\Sigma^i\,. \end{equation} Now, let $\Sigma$ be the curvature of a connection $\alpha$, i.e., \begin{equation} \Sigma_5=d\alpha_5 + 2\alpha_6 \alpha_7\,, \label{eq:sigmacurv} \end{equation} and cyclically for the others. Later on we will relate this $\alpha$ to the connection 1-form $A$. Notice that this is compatible with the equations~\eqref{eq:sigmalo} that express $\Sigma^i$ in a local orthonormal basis~\cite{hitchin2001stable}. Using again the properties of the wedge product, and noticing that as a consequence of the structure equations~\eqref{eqs:struc} we have $d(e^{567}) = 0$~\cite{0681.53021}, the volume $V_7$ can be written as \begin{align} V_7(\Phi)=\int_X & \frac{40}{3}(3\Lambda)^{3/2}(1+r)^{1/3}d\left[e^{567}\wedge(\alpha_i\wedge d\alpha_i \right. \nonumber \\ & \left.+\frac{2}{3} \epsilon^{ijk}\alpha_i\alpha_j\alpha_k) \right]\,. \end{align} The argument of the differential does not depend on $r$; therefore, by an appropriate choice of coordinates, its prefactor can be integrated out so that it becomes a global factor of a 6d integral. We can further reduce the dimension by using Stokes' theorem, obtaining\footnote{We have to be careful with the notation: all $p$-forms are integrated over $p$-dimensional manifolds. If the dimension of the integration domain and the order of the $p$-form obtained by counting wedge products do not match, this means that one of the differentials $dx^i$ has been integrated out, and we have to remember this when writing the form in component notation.} \begin{align} V_7(\Phi)\propto\int_{X^5} & e^{567}\wedge(\alpha_i\wedge d\alpha_i +\frac{2}{3} \epsilon^{ijk}\alpha_i\alpha_j\alpha_k) \,.
\end{align} Finally, since the argument of the integral only depends on quantities defined over the 3-manifold $\mathcal M$ with basis $\{e^5,e^6,e^7\}$, the volume can be expressed as \begin{equation} V_7(\Phi)\sim \int_{\mathcal M} e^{567}\wedge(\alpha_i\wedge d\alpha_i+\frac{2}{3} \epsilon^{ijk}\alpha_i\alpha_j\alpha_k) . \end{equation} Expanding the wedge product in components, relabeling the internal indices as $(a,b,c)$ and using $(i,j,k)$ for the spacetime indices, we get \begin{equation} V_7(\Phi)\sim \int_{\mathcal M} \epsilon^{ijk}(2\alpha^a_i\wedge \partial_j\alpha^a_k+\frac{2}{3} \epsilon_{abc}\alpha^a_i\alpha^b_j\alpha^c_k) . \end{equation} This is the Chern-Simons action. At this point it is convenient to notice that the 2-forms $\Sigma$ are anti-self-dual, i.e., $^*\Sigma^i = -\Sigma^i$. For this reason, we rename them as $^-\Sigma^i$, with associated connection $^-\alpha_i$, and we also rename the form $\Phi$ given in eq.~\eqref{eq:phiminus} as $^-\Phi$. Now we are ready to see the relevance of the discussion of the previous section. The form $^-\Phi$ is constructed out of the stable form $\omega_2$ presented in eq.~\eqref{eq:w2}. However, we have seen that the volume form can also be constructed in terms of $\omega_1$, eq.~\eqref{eq:w1}. Furthermore, these two possibilities, $\omega_1$ and $\omega_2$, are unique in the sense discussed in the previous section. With these considerations in mind, we construct a volume form for each of the 3-forms \begin{align} ^-\Phi &= f^3(r) e^{567} + f(r) g^2(r) e^m\wedge {}^-\Sigma^m\,,\label{eq:phiminus2}\\ {}^+\Phi &= f^3(r) e^{567} + f(r) g^2(r) e^m\wedge {}^+\Sigma^m\,,\label{eq:phiplus2} \end{align} where ${}^+\Sigma^m$ are the self-dual 2-forms \begin{align} \begin{split} ^+\Sigma^5&=e^{12}+e^{34}, \\ ^+\Sigma^6&=e^{13}+e^{42}, \\ ^+\Sigma^7&=e^{14}+e^{23}, \end{split} \end{align} and $r$ is defined in the same way as described before. When $f(r)=g(r)=1$, ${}^- \Phi$ and ${}^+ \Phi$ are equivalent to $\omega_2$ and $\omega_1$, respectively. The 4-forms associated to $^-\Phi$ and $^+\Phi$ are \begin{align}\label{phiastminusplus} {}_{*\Phi}{^\mp\Phi}=& \mp\frac{1}{6} g^4{}^\mp\Sigma_m\wedge{}^\mp\Sigma^m \nonumber \\ & \pm \frac{1}{2}f^2 g^2\epsilon^{mnp}e^m\wedge e^n\wedge {}^\mp\Sigma^p\,. \end{align} We can use either of ${}^\pm\Phi$ to construct the volume of the 7-manifold $X$, \begin{equation} V^{\pm}\equiv V_7({}^\pm\Phi) = \int_X {}^\pm\Phi\wedge {}_{*\Phi}{^\pm\Phi}\,. \end{equation} By the same steps as before, $V^{\pm}$ can be written as \begin{equation} V^{\pm}\sim \int_{\mathcal M} \epsilon^{ijk}(2{}^\pm\alpha^a_i\wedge \partial_j{}^\pm\alpha^a_k+\frac{2}{3} \epsilon_{abc}{}^\pm\alpha^a_i{}^\pm\alpha^b_j{}^\pm\alpha^c_k) , \label{eq:cspm} \end{equation} where ${}^+\alpha^i$ is the connection associated to ${}^+\Sigma^i$. Thus, we have found two Chern-Simons actions derivable from the volume of a 7-manifold that admits two special stable forms. Now we want to understand how these two actions are related to 2+1 gravity. From the results of~\cite{0681.53021,Dijkgraaf:2004te}, we know that the equations of motion arising from the volume of ${}^-\Phi$ are those of 2+1 gravity with a cosmological constant. Since $V({}^+\Phi)$ describes the same volume as $V({}^-\Phi)$, the 3d equations of motion derived from both actions have to coincide.
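The clean separation of the volume into $V^+$ and $V^-$ can be made plausible pointwise: self-dual and anti-self-dual 2-forms pair to zero under the wedge product, so no cross terms between ${}^+\Sigma^m$ and ${}^-\Sigma^n$ survive in the integrand. A minimal numerical sketch of this fact (again an added illustration, assuming only Python with NumPy) is:
\begin{verbatim}
# Illustrative check: (anti-)self-dual 2-forms on the fibre are
# wedge-orthogonal; like pairs give -+ 2 delta^{mn} e^{1234}.
import itertools
import numpy as np

eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])

def two_form(terms):
    w = np.zeros((4, 4))
    for i, j, c in terms:
        w[i - 1, j - 1] += c
        w[j - 1, i - 1] -= c
    return w

def wedge22(a, b):
    """Coefficient of e^{1234} in the 4-form a wedge b."""
    return 0.25 * np.einsum('ijkl,ij,kl->', eps, a, b)

plus  = [two_form([(1, 2, 1), (3, 4, 1)]),    # +Sigma^5, +Sigma^6, +Sigma^7
         two_form([(1, 3, 1), (4, 2, 1)]),
         two_form([(1, 4, 1), (2, 3, 1)])]
minus = [two_form([(1, 2, 1), (3, 4, -1)]),   # -Sigma^5, -Sigma^6, -Sigma^7
         two_form([(1, 3, 1), (4, 2, -1)]),
         two_form([(1, 4, 1), (2, 3, -1)])]
for m in range(3):
    for n in range(3):
        assert np.isclose(wedge22(plus[m], minus[n]), 0.0)  # cross terms vanish
        assert np.isclose(wedge22(minus[m], minus[n]), -2.0 * (m == n))
        assert np.isclose(wedge22(plus[m], plus[n]), 2.0 * (m == n))
\end{verbatim}
Both volume forms therefore reduce, by the same manipulations as before, to the Chern-Simons integrals~\eqref{eq:cspm}, and, as stated above, their 3d equations of motion coincide.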
This is remarkably similar to, and consistent with, the results of~\cite{Witten:1988hc}, where it is shown that there are two 3d actions, named \textit{standard} and \textit{exotic}, that lead to the same equations of motion that we are interested in. Furthermore, it is shown there that these actions can be written precisely in terms of the Chern-Simons actions~\eqref{eq:cspm} by setting \begin{equation} {}^\pm\alpha^a_i = A_i^a \pm \sqrt{\lambda}e^a_i\,, \end{equation} where $A_i$ and $e_i$ are the fields introduced around Eq.~\eqref{eqs:struc}. The combinations \begin{align} I_{st}&=\frac{^+I - ^- I}{4\sqrt{\lambda}}\,, \label{eq:ist} \\ I_{ex}&=\frac{^+I + ^- I}{2}, \label{recoverstex} \end{align} where ${}^\pm I$ are the integrals in Eq.~(\ref{eq:cspm}), give respectively the standard and exotic actions. Now we can reinterpret the standard and exotic actions in terms of the volume functional as \begin{align} I_{st}&=\frac{h^+ V^+ - h^- V^-}{4\sqrt{\lambda}}\,, \nonumber \\ I_{ex}&=\frac{h^+ V^+ + h^-V ^- }{2}, \end{align} where $h^{\pm}$ are the inverses of the proportionality factors in Eq.~\eqref{eq:cspm}. In this way, we can see the standard and exotic actions as two different combinations of pieces of the volume of the 7-manifold $X$. Applications of the ideas developed so far to the Immirzi ambiguity in 3d gravity have been presented in~\cite{Chagoya_2018}. In the next section we explore the entropy of the BTZ black hole from the point of view of TMT and we discuss the relation of our results to the conjecture $Z_{BH} = |Z_{top}|^2$. \section{BTZ black hole: partition function} Using the results described above we can provide evidence that the conjecture discussed around Eq.~\eqref{eq:ooguri} also applies to $G_2$-manifolds and 3d black holes, i.e., that in general, the partition function of a theory with action defined by a Hitchin functional is related to the partition function for a BPS black hole in the gravitational theory allowed by the $p$-forms used to construct the Hitchin functional. The possibility that the relation between BPS objects and form theories of gravity extends to $G_2$-manifolds was hinted at in~\cite{Dijkgraaf:2004te}; however, it was only studied for 4d and 5d black holes embedded in a 6d $SU(3)$-manifold. In this work we show explicitly that the partition function of the BTZ black hole is recovered from the partition function associated to the volume $V_7$. Given the different ways of writing down $V_7$, either in terms of $V^+$, $V^-$ or both, one could think that the result only applies to the extremal case, which turns out to be associated to the situation where we demand that the linear combinations of $V^+$ and $V^-$ -- for instance $I_{st}$ and $I_{ex}$ -- preserve a given multiple of $V_7$; but as we argue below, the partition function obtained from TMT correctly gives the BH partition function even away from the extremal case. In the case of TMT, the total space $X$ is 7d and, as shown in the previous sections, its volume can be constructed with either of the 3-forms ${}^+\Phi$ and ${}^-\Phi$. A certain combination of these volumes, Eq.~(\ref{eq:ist}), results in the standard action for 3d gravity. In this theory, a black hole solution is given by the BTZ space-time \cite{Banados1992}, whose metric can be written as \begin{equation} ds^2 = -N^2 dt^2 + N^{-2} dr^2 + r^2(N^\phi dt + d\phi)^2, \end{equation} where the lapse $N$ and shift $N^\phi$ are \begin{align} N &= \left(-M + \frac{r^2}{{\ell}^2} + \frac{J^2}{ 4 r^2} \right)^{1/2}, \\ N^\phi & = -\frac{J}{2 r^2}.
\end{align} The integration constants $M$ and $J$ are interpreted respectively as the mass and angular momentum of the black hole, and $\ell$ is related to the cosmological constant of the theory by $\ell^{-2} =\Lambda/3$. The lapse function vanishes at two distinct values of $r$, defining two coordinate singularities, $r_{\pm}$, \begin{equation} r_{\pm} = \frac12 \left( \sqrt{\ell (\ell M+J) } \pm \sqrt{\ell(\ell M-J)}\right)\,. \end{equation} When $J=0$ only $r_+$ is different from zero, and in the extremal case $J=M\ell$ the two horizons coincide. The entropy of the BTZ black hole can be computed by different methods, for example, by the Euclidean path integral or by Noether charges \cite[see e.g.][]{CarlipClass.Quant.Grav.12:2853-28801995}, and it is given by \begin{equation} S_{BTZ}^{st} = 4\pi r_+.\label{smas} \end{equation} These computations depend not only on the metric but also on the action, which is usually taken to be the standard action, hence the superscript $st$. Originally, this result was obtained from geometrical considerations on the standard action of 2+1 gravity, followed by a derivation of the entropy from the grand canonical partition function in the classical approximation~\cite{Banados:1993qp} $$ Z = \exp(I_{st})\,. $$ Since the standard action is recovered from TMT, the entropy of a BTZ black hole described by such an action is recovered as well. The same techniques that lead to Eq.~\eqref{smas} have been applied to the exotic action, yielding an entropy proportional to the inner BTZ horizon radius, $r_-$. The fact that the entropy is proportional to the inner horizon raised doubts about the validity of black hole thermodynamics. However, it has been shown that the laws of black hole thermodynamics still hold~\cite{Townsend2013}. Indeed, the result is even more general: an entropy of the form \begin{equation}S\sim \alpha r_+ + \gamma r_- \label{eq:mixen} \end{equation} is in agreement with black hole thermodynamics. Eq.~\eqref{eq:mixen} arises naturally in the context we are studying in this work. Hitchin's partition function is defined in terms of the volume functional, \begin{equation} Z_H(\Phi) = \int_{[\Phi]} d\Phi \,\textrm{exp}{(V_H(\Phi))}\,. \end{equation} Thus, when we write TMT as a theory of a 4d vector bundle over a 3d base space such that the 7d manifold $X$ has a $G_2$-structure, we can separate $V_7$ in terms of the volume functionals $V^{\pm}$, \begin{equation} \lambda V_7 = \beta_+ V^+ + \beta_- V^- \,,\label{eq:volsplit} \end{equation} for some coefficients $\lambda, \beta_{\pm}$. Notice that, so far, all the properties that hold for a theory based on $V_7$ hold for a theory based on a multiple $\lambda$ of $V_7$. In addition, $V^{\pm}$ are proportional to the Chern-Simons actions, Eq.~\eqref{eq:cspm}, with proportionality constants $1/h^\pm$. Putting everything together, we write Hitchin's partition function as \begin{equation} Z_H(\Phi) = \int_{[\Phi]} d\Phi \, \textrm{exp}\left[\sum_{\sigma=+,-} \beta_\sigma(h^{\sigma})^{-1}\, {}^\sigma I\right]. \end{equation} As before, the basis of the 7d manifold can be decomposed into a 3d base space and a 4d bundle. The coefficients $\beta_\pm$ can be chosen in such a way that the linear combination of $^\pm I$ in the argument of the exponential reproduces either the standard or the exotic action, or a combination of both. For the choice that leads to the standard action, by the discussion above we confirm that Hitchin's partition function is related to the BTZ entropy, \begin{equation} Z_H(\Phi) \propto \int de d\alpha\, \textrm{exp}(I_{st}) = Z_{BH}\,.
\label{entropy} \end{equation} On the other hand, for a different choice of parameters we can have \begin{equation} Z_H(\Phi) \propto \int de d\alpha\, \textrm{exp}(I_{ex} ) = Z_{BH} \,, \label{entropy2} \end{equation} i.e., the Hitchin partition function for the exotic action is also related to a black hole partition function, except that in this case $Z_{BH}$ corresponds to the exotic BTZ black hole. The extremal case, $r_+ = r_-$, admits an interpretation from the point of view of TMT. Suppose we fix $\lambda$, e.g. $\lambda=1$. This imposes a constraint on the linear combinations in Eq.~\eqref{eq:volsplit}, such that any choice of $\beta_\pm$ leads to a fixed $V_7$ and the same $Z_H(\Phi)$. Therefore, all combinations lead to the same black hole entropy, and this is only possible if $r_+ = r_-$, i.e., the extremal case corresponds to a constraint on the parameters $\beta_{\pm}$. \section{Discussion} 3d gravity can be embedded in a 7-manifold with $G_2$-holonomy. The volume form of this manifold is constructed in terms of a stable (\textit{generic}, in the sense of~\cite{Dijkgraaf:2004te}) form. Indeed, there are essentially only two such stable forms, and by using both of them we split the volume of the 7-manifold into contributions from the distinct orbits. Using the structure equations appropriate to our geometrical set-up, we find that these two contributions can be rephrased as Chern-Simons actions, one for a self-dual curvature and one for an anti-self-dual curvature. This observation allows us to recover the two classically equivalent known actions of 3d gravity, i.e., Witten's standard and exotic actions, thus completing the picture shown in \cite{0681.53021,Dijkgraaf:2004te}. In a context that is more general than the theory that we study here, it has been conjectured that topological and black hole partition functions are related. Our results give a concrete realisation of this conjecture: by writing the action of TMT in terms of the contributions from the two unique stable forms, we can tune the theory so that it reproduces the partition function of the standard action of 3d gravity, thus agreeing with the result for the BTZ black hole; or we can choose to reproduce the exotic action, obtaining the correct entropy for the exotic BTZ black hole. It is worth noticing that a combined standard/exotic entropy is in agreement with black hole thermodynamics~\cite{Townsend2013}, and our results provide a scenario where such combined models can be embedded. The topological partition function is also conjectured to be related to a wave function. The wave function for a static BTZ black hole in the region outside the horizon has been computed within a canonical quantization scheme~\cite{Vaz2008}. When evaluated at the horizon, their result takes the form (more details in the Appendix): $$ |\psi|^2 \sim e^{\tilde \mu r_+} \,, $$ where $\tilde \mu$ is a quantized number related to the energy levels of the system. This result indeed resembles the Euclidean partition function for the BTZ black hole. It would be interesting to explore the quantization of a non-static BTZ black hole, so that the relation between the wave function and the black hole partition function can be studied in the extremal case, i.e., the case that would correspond to the conjectures in~\cite{Ooguri:2004zv}. This is left for future work. \section*{Acknowledgments} This work is supported by CONACYT grants 257919, 258982. M. S. is supported by CIIC 28/2020. \bibliographystyle{unsrt}
\section{Siegel units}\label{sec siegel} We recall some basic definitions and results about Siegel units, for which we refer the reader to \cite[\S 1]{kato} and \cite{kubert-lang}. Let $B_2 = X^2-X+\frac16$ be the second Bernoulli polynomial. For $x \in \mathbf{R}$, we define $B(x) = B_2(\{x\}) = \{x\}^2-\{x\}+\frac16$, where $\{x\} = x- \lfloor x \rfloor$ denotes the fractional part of $x$. Let $\mathcal{H}$ be the upper half-plane. Let $N \geq 1$ be an integer and $\zeta_N=e^{2\pi i/N}$. For any $(a,b) \in (\mathbf{Z}/N\mathbf{Z})^2$, $(a,b) \neq (0,0)$, the Siegel unit $g_{a,b}$ on $\mathcal{H}$ is defined by \begin{equation}\label{def gab} g_{a,b}(\tau) = q^{B(a/N)/2} \prod_{n \geq 0} (1-q^n q^{\tilde{a}/N} \zeta_N^b) \prod_{n \geq 1} (1-q^n q^{-\tilde{a}/N} \zeta_N^{-b}) \qquad (q=e^{2\pi i \tau}) \end{equation} where $\tilde{a}$ is the representative of $a$ satisfying $0 \leq \tilde{a} <N$. Here $q^\alpha = e^{2\pi i \alpha \tau}$ for $\alpha \in \mathbf{Q}$. It is known that the function $g_{a,b}^{12N}$ is modular for the group \begin{equation*} \Gamma(N) = \{ \gamma \in \SL_2(\mathbf{Z}) : \gamma \equiv I_2 \pmod{N} \}. \end{equation*} In fact $g_{a,b}$ defines an element of $\mathcal{O}(Y(N))^\times \otimes \mathbf{Q}$, where $Y(N)$ denotes the affine modular curve of level $N$ over $\mathbf{Q}$. Recall that the group $\GL_2(\mathbf{Z}/N\mathbf{Z})$ acts on $Y(N)$ by $\mathbf{Q}$-automorphisms. For any $\gamma \in \GL_2(\mathbf{Z}/N\mathbf{Z})$, we have the identity in $\mathcal{O}(Y(N))^\times \otimes \mathbf{Q}$ \begin{equation}\label{gab gamma} g_{a,b} | \gamma = g_{(a,b)\gamma}. \end{equation} \begin{lem}\label{lem gab sigma} Let $(a,b) \in (\mathbf{Z}/N\mathbf{Z})^2$, $(a,b) \neq (0,0)$. We have \begin{equation} g_{a,b}(-1/\tau) = e^{-2\pi i (\{\frac{a}{N}\}-\frac12)(\{\frac{b}{N}\}-\frac12)} g_{b,-a}(\tau) \qquad (\tau \in \mathcal{H}). \end{equation} \end{lem} \begin{proof} By taking the matrix $\gamma = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ in (\ref{gab gamma}), we see that $g_{a,b}(-1/\tau)=w_{a,b}g_{b,-a}(\tau)$ for some root of unity $w_{a,b}$. The formula for $w_{a,b}$ follows from \cite[Chap. 2, \S 1, K1, K4]{kubert-lang}. \end{proof} \begin{lem}\label{lem int darg} For any $a,b \in \mathbf{Z}/N\mathbf{Z}$, we have \begin{equation}\label{int darg eq} \int_0^\infty \darg g_{a,b} = \begin{cases} 0 & \textrm{if } a=0 \textrm{ or } b=0\\ 2\pi (\{\frac{a}{N}\}-\frac12)(\{\frac{b}{N}\}-\frac12) & \textrm{if } a \neq 0 \textrm{ and } b \neq 0. \end{cases} \end{equation} \end{lem} \begin{proof} If $a=0$ or $b=0$ then $g_{a,b}$ has constant argument on the imaginary axis $\tau = it$, $t >0$, hence $\int_0^\infty \darg g_{a,b}=0$. If $a \neq 0$ and $b \neq 0$, it is easily seen that $\arg g_{a,b}(it) \xrightarrow{t \to \infty} 0$. Moreover, by Lemma \ref{lem gab sigma}, we have $\arg g_{a,b}(it) \xrightarrow{t \to 0} -2\pi (\{\frac{a}{N}\}-\frac12)(\{\frac{b}{N}\}-\frac12) \pmod{2\pi}$. This proves (\ref{int darg eq}) up to a multiple of $2\pi$. In order to establish the exact equality, let us introduce the Klein forms \cite[Chap. 2, \S 1, p. 27]{kubert-lang}: \begin{equation*} \mathfrak{k}_{\alpha,\beta}(\tau) = e^{-\frac12 \eta(\alpha \tau+\beta,\tau) (\alpha \tau+\beta)} \sigma(\alpha \tau+\beta, \tau) \qquad (\alpha,\beta \in \mathbf{R}; \tau \in \mathcal{H}) \end{equation*} where $\eta$ and $\sigma$ denote the Weierstrass functions. 
The link with Siegel units is given by \begin{equation*} g_{a,b}(\tau) = w \mathfrak{k}_{a/N,b/N}(\tau) \Delta(\tau)^{1/12} \qquad (1 \leq a,b \leq N-1) \end{equation*} where $w$ is a root of unity \cite[p. 29]{kubert-lang}. Since $\Delta$ is positive on the imaginary axis, it follows that \begin{equation*} \int_0^\infty \darg g_{a,b} = \int_0^\infty \darg \mathfrak{k}_{a/N,b/N}. \end{equation*} Using the $q$-product formula for the $\sigma$ function \cite[Chap. 18, \S 2]{lang:elliptic} and the Legendre relation $\eta_2 \omega_1 - \eta_1 \omega_2 = 2\pi i$, we find \begin{equation}\label{eq kab} \mathfrak{k}_{\alpha,\beta}(it)=\frac{1}{2\pi i} e^{-\pi \alpha^2 t} e^{\pi i\alpha \beta}(e^{\pi i \beta}e^{-\pi \alpha t}-e^{-\pi i \beta} e^{\pi \alpha t}) \prod_{n \geq 1} \frac{(1-e^{-2\pi (n+\alpha)t}e^{2\pi i \beta})(1-e^{-2\pi (n-\alpha)t}e^{-2\pi i \beta})}{(1-e^{-2\pi nt})^2}. \end{equation} Assume $0 < \alpha,\beta < 1$. Then by (\ref{eq kab}), we have $\arg \mathfrak{k}_{\alpha,\beta}(it) \xrightarrow{t \to \infty} \pi (\alpha \beta-\beta+\frac12)$. Moreover, the Klein forms are homogeneous of weight -1 \cite[p. 27, K1]{kubert-lang}, which implies \begin{equation*} \mathfrak{k}_{\alpha,\beta}(-1/\tau) = \frac{1}{\tau} \mathfrak{k}_{\beta,-\alpha}(\tau). \end{equation*} From this we get $\arg \mathfrak{k}_{\alpha,\beta}(it) \xrightarrow{t \to 0} \pi (-\alpha \beta+\alpha) \pmod{2\pi}$ and \begin{equation*} \int_0^\infty \darg \mathfrak{k}_{\alpha,\beta} \equiv 2\pi (\alpha-\frac12)(\beta-\frac12) \pmod{2\pi}. \end{equation*} Moreover, using the fact that $\int_0^\infty \darg \mathfrak{k}_{\alpha,\beta} = \int_i^{\infty} \darg \mathfrak{k}_{\alpha,\beta} - \int_i^\infty \darg \mathfrak{k}_{\beta,-\alpha}$ and taking the imaginary part of the logarithm of (\ref{eq kab}), we may express $\int_0^\infty \darg \mathfrak{k}_{\alpha,\beta}$ as an infinite sum, which shows that it is a continuous function of $(\alpha,\beta) \in (0,1)^2$. But for $\beta=\frac12$, the Klein form $\mathfrak{k}_{\alpha,\frac12}(it)$ has constant argument. This implies that $\int_0^\infty \darg \mathfrak{k}_{\alpha,\beta} = 2\pi (\alpha-\frac12)(\beta-\frac12)$ for any $0 < \alpha,\beta < 1$. \end{proof} \section{$L$-functions of modular forms}\label{sec Lstar} In this section we recall basic results on the functional equation satisfied by $L$-functions of modular forms. Let $f(\tau) = \sum_{n=0}^\infty a_n q^n$ be a modular form of weight $k \geq 1$ on the group $\Gamma_1(N)$. The $L$-function of $f$ is defined by $L(f,s)=\sum_{n=1}^\infty a_n n^{-s}$, $\Re(s)>k$. Define the completed $L$-function \begin{equation*} \Lambda(f,s):=N^{s/2} (2\pi)^{-s} \Gamma(s) L(f,s) = N^{s/2} \int_0^\infty (f(iy)-a_0) y^s \frac{\mathrm{d} y}{y}. \end{equation*} Recall that the Atkin-Lehner involution $W_N$ on $M_k(\Gamma_1(N))$ is defined by $(W_N f)(\tau)=i^k N^{-k/2} \tau^{-k} f(-1/(N\tau))$ (note that in the case $k=2$ this $W_N$ is the opposite of the usual involution acting on differential $1$-forms). The following theorem is classical (see \cite[Thm 4.3.5]{miyake}). \begin{thm}\label{thm hecke} Let $f=\sum_{n=0}^\infty a_n q^n \in M_k(\Gamma_1(N))$. The function $\Lambda(f,s)$ can be analytically continued to the whole $s$-plane, and satisfies the functional equation $\Lambda(f,s) = \Lambda(W_N f,k-s)$. Moreover, write $W_N f = \sum_{n=0}^\infty b_n q^n$. Then the function \begin{equation*} \Lambda(f,s)+\frac{a_0}{s}+\frac{b_0}{k-s} \end{equation*} is holomorphic on the whole $s$-plane. 
\end{thm} \begin{definition} The notations being as in Theorem \ref{thm hecke}, we define the regularized values of $\Lambda(f,s)$ at $s=0$ and $s=k$ by \begin{align} \Lambda^*(f,0) & := \lim_{s \to 0} \Lambda(f,s)+\frac{a_0}{s}\\ \Lambda^*(f,k) & := \lim_{s \to k} \Lambda(f,s)+\frac{b_0}{k-s}. \end{align} \end{definition} Note that the functional equation translates into the equalities of regularized values \begin{equation}\label{eq lambda star} \Lambda^*(f,0) = \Lambda^*(W_N f,k) \qquad \Lambda^*(f,k) = \Lambda^*(W_N f,0). \end{equation} We will need the following lemma. \begin{lem}\label{lem fgh} Let $f = \sum_{n=0}^\infty a_n q^n \in M_k(\Gamma_1(N))$ and $g = \sum_{n=0}^\infty b_n q^n \in M_{\ell}(\Gamma_1(N))$ with $k,\ell \geq 1$. Let $h=W_N(g)$. Write $f^* = f-a_0$ and $g^* = g-b_0$. Then for any $s \in \mathbf{C}$, we have \begin{equation}\label{eq fgh} N^{s/2} \int_0^\infty f^*(iy) g^*\bigl(\frac{i}{Ny}\bigr) y^s \frac{\mathrm{d} y}{y} = \Lambda(fh,s+\ell) - a_0 \Lambda(h,s+\ell) - b_0 \Lambda(f,s). \end{equation} \end{lem} \begin{proof} Note that the integral in (\ref{eq fgh}) is absolutely convergent because $f^*(\tau)$ and $g^*(\tau)$ have exponential decay when $\Im(\tau)$ tends to $+\infty$. Moreover, it is easy to check, using Theorem \ref{thm hecke}, that the right hand side of (\ref{eq fgh}) is holomorphic on the whole $s$-plane. Therefore it suffices to establish (\ref{eq fgh}) when $\Re(s)>k$. Since $W_N g=h$, we have \begin{align*} N^{s/2} \int_0^\infty f^*(iy) g^*\bigl(\frac{i}{Ny}\bigr) y^s \frac{\mathrm{d} y}{y} & = N^{s/2} \int_0^\infty f^*(iy) \bigl(g\bigl(\frac{i}{Ny}\bigr)-b_0\bigr) y^s \frac{\mathrm{d} y}{y}\\ & = N^{s/2} \int_0^\infty f^*(iy) (N^{\ell/2} y^\ell h(iy)-b_0) y^s \frac{\mathrm{d} y}{y}. \end{align*} Now, we remark that $f^* h = fh-a_0 h = (fh)^*-a_0 h^*$. Thus \begin{align*} N^{s/2} \int_0^\infty f^*(iy) g^*\bigl(\frac{i}{Ny}\bigr) y^s \frac{\mathrm{d} y}{y} & = N^{s/2} \int_0^\infty \bigl(N^{\ell/2} y^\ell ((fh)^*(iy) -a_0 h^*(iy)) - b_0 f^*(iy) \bigr) y^s \frac{\mathrm{d} y}{y}\\ & = \Lambda(fh,s+\ell) - a_0 \Lambda(h,s+\ell) - b_0 \Lambda(f,s). \end{align*} \end{proof} Specializing Lemma \ref{lem fgh} to the (regularized) value at $s=k$, we get the following formula. \begin{lem}\label{lem fgh 2} Let $f = \sum_{n=0}^\infty a_n q^n \in M_k(\Gamma_1(N))$ and $g = \sum_{n=0}^\infty b_n q^n \in M_{\ell}(\Gamma_1(N))$ with $k,\ell \geq 1$. Let $h=W_N(g)$. Write $f^* = f-a_0$ and $g^* = g-b_0$. Then we have \begin{equation}\label{eq fgh 2} N^{k/2} \int_0^\infty f^*(iy) g^*\bigl(\frac{i}{Ny}\bigr) y^k \frac{\mathrm{d} y}{y} = \Lambda^*(fh,k+\ell) - a_0 \Lambda(h,k+\ell) - b_0 \Lambda^*(f,k). \end{equation} \end{lem} \section{Eisenstein series of weight 1} In this section we define some Eisenstein series of weight 1. These are the same as those arising in \cite{zudilin}. \begin{definition} For any $a,b \in \mathbf{Z}/N\mathbf{Z}$, we let \begin{equation} e_{a,b} = \alpha_0(a,b) + \sum_{ \substack{m,n \geq 1\\ m \equiv a, \; n \equiv b (N)}} q^{mn} - \sum_{ \substack{m,n \geq 1\\ m \equiv -a, \; n \equiv -b (N)}} q^{mn} \end{equation} where \begin{equation*} \alpha_0(a,b) = \begin{cases} 0 & \textrm{if } a=b=0\\ \frac12 - \{\frac{b}{N}\} & \textrm{if } a=0 \textrm{ and } b \neq 0\\ \frac12 - \{\frac{a}{N}\} & \textrm{if } a \neq 0 \textrm{ and } b = 0\\ 0 & \textrm{if } a \neq 0 \textrm{ and } b \neq 0.
\end{cases} \end{equation*} \end{definition} \begin{lem} The function $e_{a,b}(\tau/N)$ is an Eisenstein series of weight $1$ on the group $\Gamma(N)$, and the function $e_{a,b}$ is an Eisenstein series of weight $1$ on $\Gamma_1(N^2)$. \end{lem} \begin{proof} In \cite[Chap. VII, \S 2.3]{schoeneberg}, for any $(a,b) \in (\mathbf{Z}/N\mathbf{Z})^2$ the following Eisenstein series are introduced \begin{equation*} G_{1,(a,b)}(\tau) = -\frac{2\pi i}{N} \Bigl(\gamma_0(a,b)+\sum_{\substack{m,n \geq 1\\ n \equiv a (N)}} \zeta_N^{bm} q^{mn/N} - \sum_{\substack{m,n \geq 1\\ n \equiv -a (N)}} \zeta_N^{-bm} q^{mn/N}\Bigr) \end{equation*} where \begin{equation*} \gamma_0(a,b) = \begin{cases} 0 & \textrm{if } a=b=0\\ \frac12 \frac{1+\zeta_N^b}{1-\zeta_N^b} & \textrm{if } a=0 \textrm{ and } b \neq 0\\ \frac12 - \{\frac{a}{N}\} & \textrm{if } a \neq 0. \end{cases} \end{equation*} The function $G_{1,(a,b)}$ is an Eisenstein series of weight $1$ on the group $\Gamma(N)$. We have \begin{align*} e_{a,b}\left(\frac{\tau}{N}\right) & = \alpha_0(a,b) + \sum_{\substack{m,n \geq 1 \\ m \equiv a, \; n \equiv b (N)}} q^{mn/N} - \sum_{\substack{m,n \geq 1 \\ m \equiv -a, \; n \equiv -b (N)}} q^{mn/N}\\ & = \alpha_0(a,b)+ \frac{1}{N} \sum_{c=0}^{N-1} \zeta_N^{ca} \Biggl( \sum_{\substack{m,n \geq 1 \\ n \equiv b (N)}} \zeta_N^{-cm} q^{mn/N} - \sum_{\substack{m,n \geq 1 \\ n \equiv -b (N)}} \zeta_N^{cm} q^{mn/N} \Biggr)\\ & = \alpha_0(a,b)-\frac{1}{N} \sum_{c=0}^{N-1} \zeta_N^{ca} \gamma_0(b,-c) -\frac{1}{2\pi i} \sum_{c=0}^{N-1} \zeta_N^{ca} G_{1,(b,-c)}. \end{align*} If $b \neq 0$ then \begin{equation*} \frac{1}{N} \sum_{c=0}^{N-1} \zeta_N^{ca} \gamma_0(b,-c) = \frac{1}{N} \sum_{c=0}^{N-1} \zeta_N^{ca} \bigl(\frac12 - \{\frac{b}{N}\}\bigr) = \alpha_0(a,b), \end{equation*} hence $e_{a,b}(\tau/N)$ is an Eisenstein series of weight $1$ on $\Gamma(N)$. If $a \neq 0$ then the same is true because $e_{a,b}=e_{b,a}$. Finally if $a=b=0$ then \begin{equation*} \alpha_0(a,b)-\frac{1}{N} \sum_{c=0}^{N-1} \zeta_N^{ca} \gamma_0(b,-c) =-\frac{1}{N} \sum_{c=0}^{N-1} \gamma_0(0,c) = 0 \end{equation*} because $\gamma_0(0,-c)=-\gamma_0(0,c)$. The second assertion follows from the fact that $\begin{pmatrix} N & 0 \\ 0 & 1 \end{pmatrix} \Gamma_1(N^2) \begin{pmatrix} N & 0 \\ 0 & 1 \end{pmatrix}^{-1} \subset \Gamma(N)$. \end{proof} \begin{definition} For any $a,b \in \mathbf{Z}/N\mathbf{Z}$, we let \begin{equation} f_{a,b} = \beta_0(a,b) + \sum_{m,n \geq 1} (\zeta_N^{am+bn}-\zeta_N^{-am-bn}) q^{mn} \end{equation} where \begin{equation*} \beta_0(a,b) = \begin{cases} 0 & \textrm{if } a=b=0\\ \frac12 \frac{1+\zeta_N^b}{1-\zeta_N^b} & \textrm{if } a=0 \textrm{ and } b \neq 0\\ \frac12 \frac{1+\zeta_N^a}{1-\zeta_N^a} & \textrm{if } a \neq 0 \textrm{ and } b = 0\\ \frac12 \Bigl( \frac{1+\zeta_N^a}{1-\zeta_N^a} + \frac{1+\zeta_N^b}{1-\zeta_N^b}\Bigr) & \textrm{if } a \neq 0 \textrm{ and } b \neq 0. \end{cases} \end{equation*} \end{definition} As the next lemma shows, the functions $f_{a,b}$ are also Eisenstein series; they relate to $e_{a,b}$ by the Atkin-Lehner involution of level $N^2$. \begin{lem}\label{eab sigma} We have the relation \begin{equation}\label{eab fab} e_{a,b}\left(-\frac{1}{N\tau}\right) = -\frac{\tau}{N} f_{a,b}\left(\frac{\tau}{N}\right) \qquad (\tau \in \mathcal{H}). \end{equation} The function $f_{a,b}(\tau/N)$ is an Eisenstein series of weight $1$ on $\Gamma(N)$, and the function $f_{a,b}$ is an Eisenstein series of weight $1$ on $\Gamma_1(N^2)$. Moreover, we have $W_{N^2} (e_{a,b}) = -\frac{i}{N} f_{a,b}$. 
\end{lem} \begin{proof} The relation (\ref{eab fab}) follows from \cite[Lemma 2]{zudilin} (the proof there works for arbitrary $a,b \in \mathbf{Z}/N\mathbf{Z}$). We deduce that $f_{a,b}(\tau/N)$ is a multiple of the function obtained from $e_{a,b}(\tau/N)$ by applying the slash operator $| \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ in weight 1. Hence $f_{a,b}(\tau/N)$ is an Eisenstein series of weight $1$ on $\Gamma(N)$. The last assertion follows from replacing $\tau$ by $N\tau$ in (\ref{eab fab}). \end{proof} We will need the following formula for the completed $L$-function of $f_{a,b}$. \begin{lem}\label{lem Lfab} For any $a,b \in \mathbf{Z}/N\mathbf{Z}$, we have \begin{equation}\label{eq Lfab} \Lambda(f_{a,b}+f_{-a,b},s) = N^s \Gamma(s) (2\pi)^{-s} \Bigl(\sum_{m \geq 1} \frac{\zeta_N^{am}+\zeta_N^{-am}}{m^s}\Bigr) \Bigl(\sum_{n \geq 1} \frac{\zeta_N^{bn}-\zeta_N^{-bn}}{n^s}\Bigr). \end{equation} \end{lem} \begin{proof} See the proof of \cite[Lemma 3]{zudilin}. \end{proof} In the special cases $s=1$ and $s=2$, this gives the following formulas. Note that formula (\ref{eq Lfab2}) is none other than \cite[Lemma 3]{zudilin}. \begin{lem} \label{lem Lfab12} We have \begin{align} \label{eq Lfab1} \Lambda^*(f_{a,b}+f_{-a,b},1) & = \begin{cases} 0 & \textrm{if } b=0\\ 2iN \gamma \cdot (\frac12-\{\frac{b}{N}\}) & \textrm{if } a=0 \textrm{ and } b \neq 0\\ -2iN \log |1-\zeta_N^a| \cdot (\frac12-\{\frac{b}{N}\}) & \textrm{if } a \neq 0 \textrm{ and } b \neq 0 \end{cases}\\ \label{eq Lfab2} \Lambda(f_{a,b}+f_{-a,b},2) & = iN^2 B\bigl(\frac{a}{N}\bigr) \Cl_2\bigl(\frac{2\pi b}{N}\bigr) \end{align} where $\gamma$ is Euler's constant and \begin{equation*} \Cl_2(x) = \sum_{m=1}^\infty \frac{\sin(mx)}{m^2} \qquad (x \in \mathbf{R}) \end{equation*} denotes the Clausen dilogarithmic function. \end{lem} \begin{proof} If $a=0$ then $\sum_{n=1}^\infty \zeta_N^{an} n^{-s}=\zeta(s)=\frac{1}{s-1}+\gamma+O_{s \to 1}(s-1)$. If $a \neq 0$ then $\sum_{n=1}^\infty \zeta_N^{an}/n = -\log(1-\zeta_N^a)$ where we use the principal value of the logarithm. Formula (\ref{eq Lfab1}) follows, noting that $-\log \frac{1-\zeta_N^b}{1-\zeta_N^{-b}} = 2\pi i (\frac12-\{\frac{b}{N}\})$. Formula (\ref{eq Lfab2}) is \cite[Lemma 3]{zudilin}. \end{proof} \section{The computation} \begin{lem}\label{lem gu} For any $(a,b) \in (\mathbf{Z}/N\mathbf{Z})^2$, $(a,b) \neq (0,0)$, we have \begin{align} \label{gu1} \log g_{a,b}(it) & = -\pi B(a/N) t + C_{a,b} - \sum_{m \geq 1} \sum_{\substack{n \geq 1 \\ n \equiv a (N)}} \frac{\zeta_N^{bm}}{m} e^{-\frac{2\pi mnt}{N}} - \sum_{m \geq 1} \sum_{\substack{n \geq 1 \\ n \equiv -a (N)}} \frac{\zeta_N^{-bm}}{m} e^{-\frac{2\pi mnt}{N}}\\ \label{gu2} & = -\frac{\pi B(b/N)}{t} + C_{b,-a} - i \theta_{a,b} - \sum_{m \geq 1} \sum_{\substack{n \geq 1 \\ n \equiv b (N)}} \frac{\zeta_N^{-am}}{m} e^{-\frac{2\pi mn}{Nt}} - \sum_{m \geq 1} \sum_{\substack{n \geq 1 \\ n \equiv -b (N)}} \frac{\zeta_N^{am}}{m} e^{-\frac{2\pi mn}{Nt}} \end{align} where $\theta_{a,b}=2\pi (\{\frac{a}{N}\}-\frac12)(\{\frac{b}{N}\}-\frac12)$ and \begin{equation} C_{a,b} = \begin{cases} \log(1-\zeta_N^b) & \textrm{if } a=0,\\ 0 & \textrm{if } a \neq 0.
\end{cases} \end{equation} \end{lem} \begin{proof} By the definition of Siegel units, we have \begin{equation*} \log g_{a,b} = \pi i B(a/N) \tau+ \sum_{n \geq 0} \log(1-q^n q^{\tilde{a}/N} \zeta_N^b) + \sum_{n \geq 1} \log (1-q^n q^{-\tilde{a}/N} \zeta_N^{-b}). \end{equation*} Using the identity $\log(1-x)=-\sum_{m=1}^{\infty} \frac{x^m}{m}$ and substituting $\tau=it$, we get (\ref{gu1}). Applying Lemma \ref{lem gab sigma} with $\tau=i/t$, we have $g_{a,b}(it)=e^{-i\theta_{a,b}} g_{b,-a}(i/t)$, whence (\ref{gu2}). \end{proof} We will need the following lemma from \cite{zudilin}. \begin{lem}\cite[Lemma 4]{zudilin}\label{lem int dlog} For any $a,b \in \mathbf{Z}/N\mathbf{Z}$, we have \begin{equation} \begin{split} I(a,b) := & \int_0^\infty \frac{1}{it} \mathrm{d} \sum_{m=1}^\infty \frac{\zeta_N^{am}-\zeta_N^{-am}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv b (N)}} - \sum_{\substack{n \geq 1 \\ n \equiv -b (N)}} \Biggr) \exp\bigl(-\frac{2\pi mn}{Nt}\bigr)\\ & \qquad = \begin{cases} 0 & \textrm{if } a=0 \textrm{ or } b=0\\ -i \Cl_2(\frac{2\pi a}{N}) \frac{1+\zeta_N^b}{1-\zeta_N^b} & \textrm{if } a \neq 0 \textrm{ and } b \neq 0. \end{cases} \end{split} \end{equation} \end{lem} \begin{proof}[Proof of Theorem \ref{main thm}] By Lemma \ref{lem gu}, we get \begin{equation}\label{proof eq 1} \log |g_u(it)| = -\frac{\pi B(b/N)}{t} + \Re(C_{b,-a}) - \frac12 \sum_{m \geq 1} \frac{\zeta_N^{am}+\zeta_N^{-am}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv b (N)}} + \sum_{\substack{n \geq 1 \\ n \equiv -b (N)}} \Biggr) e^{-\frac{2\pi mn}{Nt}} \end{equation} and \begin{align} \label{proof eq 2} \darg g_u(it) & = -\frac{1}{2i} \mathrm{d} \sum_{m \geq 1} \frac{\zeta_N^{bm}-\zeta_N^{-bm}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv a (N)}} - \sum_{\substack{n \geq 1 \\ n \equiv -a (N)}} \Biggr) e^{-\frac{2\pi mnt}{N}}\\ \label{proof eq 3} & = \frac{1}{2i} \mathrm{d} \sum_{m \geq 1} \frac{\zeta_N^{am}-\zeta_N^{-am}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv b (N)}} - \sum_{\substack{n \geq 1 \\ n \equiv -b (N)}} \Biggr) e^{-\frac{2\pi mn}{Nt}}. \end{align} Let $u=(a,b)$, $v=(c,d) \in (\mathbf{Z}/N\mathbf{Z})^2$, $u,v \neq (0,0)$.
We have \begin{equation}\label{proof eq 4} \begin{split} \eta(g_u,g_v) & = \Bigl(-\frac{\pi B(b/N)}{t}+\Re(C_{b,-a}) \Bigr) \cdot \frac{1}{2i} \mathrm{d} \sum_{m \geq 1} \frac{\zeta_N^{cm}-\zeta_N^{-cm}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv d (N)}} - \sum_{\substack{n \geq 1 \\ n \equiv -d (N)}} \Biggr) e^{-\frac{2\pi mn}{Nt}}\\ & \quad - \frac12 \sum_{m_1 \geq 1} \frac{\zeta_N^{am_1}+\zeta_N^{-am_1}}{m_1} \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv b (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -b (N)}} \Biggr) e^{-\frac{2\pi m_1 n_1}{Nt}}\\ & \quad \quad \times -\frac{1}{2i} \mathrm{d} \sum_{m_2 \geq 1} \frac{\zeta_N^{dm_2}-\zeta_N^{-dm_2}}{m_2} \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv c (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -c (N)}} \Biggr) e^{-\frac{2\pi m_2 n_2 t}{N}}\\ & \quad - \Bigl(-\frac{\pi B(d/N)}{t}+\Re(C_{d,-c}) \Bigr) \cdot \frac{1}{2i} \mathrm{d} \sum_{m \geq 1} \frac{\zeta_N^{am}-\zeta_N^{-am}}{m} \Biggl( \sum_{\substack{n \geq 1 \\ n \equiv b (N)}} - \sum_{\substack{n \geq 1 \\ n \equiv -b (N)}} \Biggr) e^{-\frac{2\pi mn}{Nt}}\\ & \quad + \frac12 \sum_{m_1 \geq 1} \frac{\zeta_N^{cm_1}+\zeta_N^{-cm_1}}{m_1} \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv d (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -d (N)}} \Biggr) e^{-\frac{2\pi m_1 n_1}{Nt}}\\ & \quad \quad \times -\frac{1}{2i} \mathrm{d} \sum_{m_2 \geq 1} \frac{\zeta_N^{bm_2}-\zeta_N^{-bm_2}}{m_2} \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv a (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -a (N)}} \Biggr) e^{-\frac{2\pi m_2 n_2 t}{N}}. \end{split} \end{equation} The terms involving double sums can be integrated using Lemmas \ref{lem int darg} and \ref{lem int dlog}. This gives \begin{equation}\label{proof eq 4bis} \begin{split} \int_0^\infty \eta(g_u,g_v) & = -\frac{\pi}{2} B\bigl(\frac{b}{N}\bigr) I(c,d) + \frac{\pi}{2} B\bigl(\frac{d}{N}\bigr) I(a,b)\\ & \quad +\Re(C_{b,-a}) \int_0^\infty \darg g_v - \Re(C_{d,-c}) \int_0^\infty \darg g_u + I \end{split} \end{equation} with \begin{equation}\label{proof eq 5} \begin{split} I & = \frac{\pi i}{2N} \sum_{m_1,m_2 \geq 1} \Biggl( (\zeta_N^{am_1}+\zeta_N^{-am_1}) (\zeta_N^{dm_2}-\zeta_N^{-dm_2}) \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv b (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -b (N)}} \Biggr) \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv c (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -c (N)}} \Biggr) \\ & \qquad - (\zeta_N^{cm_1}+\zeta_N^{-cm_1}) (\zeta_N^{bm_2}-\zeta_N^{-bm_2}) \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv d (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -d (N)}} \Biggr) \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv a (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -a (N)}} \Biggr)\Biggr) \cdot\\ & \qquad \cdot \frac{n_2}{m_1} \int_0^\infty \exp \left(-2\pi \left(\frac{m_1 n_1}{Nt}+\frac{m_2 n_2 t}{N}\right)\right) \mathrm{d} t. \end{split} \end{equation} Making the change of variables $t'=\frac{n_2}{m_1}t$, we have \begin{equation}\label{proof eq 6} \frac{n_2}{m_1} \int_0^\infty \exp \left(-2\pi \left(\frac{m_1 n_1}{Nt}+\frac{m_2 n_2 t}{N}\right)\right) \mathrm{d} t = \int_0^\infty \exp \left(-2\pi \left(\frac{n_1 n_2}{Nt'}+\frac{m_1 m_2 t'}{N}\right)\right) \mathrm{d} t'. 
\end{equation} Replacing in (\ref{proof eq 5}) and interchanging integral and summation, we get \begin{equation}\label{proof eq 7} \begin{split} I = \frac{\pi i}{2N} \int_0^\infty & \sum_{m_1,m_2 \geq 1} (\zeta_N^{am_1}+\zeta_N^{-am_1}) (\zeta_N^{dm_2}-\zeta_N^{-dm_2}) e^{-\frac{2\pi m_1 m_2 t'}{N}} \cdot \\ & \qquad \cdot \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv b (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -b (N)}} \Biggr) \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv c (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -c (N)}} \Biggr) e^{-\frac{2\pi n_1 n_2}{Nt'}}\\ & - \sum_{m_1, m_2 \geq 1} (\zeta_N^{cm_1}+\zeta_N^{-cm_1}) (\zeta_N^{bm_2}-\zeta_N^{-bm_2}) e^{-\frac{2\pi m_1 m_2 t'}{N}} \cdot \\ & \qquad \cdot \Biggl( \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv d (N)}} + \sum_{\substack{n_1 \geq 1 \\ n_1 \equiv -d (N)}} \Biggr) \Biggl( \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv a (N)}} - \sum_{\substack{n_2 \geq 1 \\ n_2 \equiv -a (N)}} \Biggr)\Biggr) e^{-\frac{2\pi n_1 n_2}{Nt'}} \mathrm{d} t'. \end{split} \end{equation} Making the change of variables $y=t'/N$, we obtain \begin{equation}\label{proof eq 8} \begin{split} I = \frac{\pi i}{2} \int_0^\infty & (f^*_{a,d}+f^*_{-a,d})(iy) \cdot (e^*_{b,c}+e^*_{-b,c}) \bigl(\frac{i}{N^2 y}\bigr)\\ & - (f^*_{c,b}+f^*_{-c,b})(iy) \cdot (e^*_{d,a}+e^*_{-d,a}) \bigl(\frac{i}{N^2 y}\bigr) \mathrm{d} y. \end{split} \end{equation} We compute this integral using Lemma \ref{lem fgh 2} with $k=\ell=1$, taking into account Lemma \ref{eab sigma}: for any $a,b,c,d \in \mathbf{Z}/N\mathbf{Z}$, we have \begin{equation}\label{proof eq 9} \int_0^\infty f^*_{a,b}(iy) e^*_{c,d}\bigl(\frac{i}{N^2 y}\bigr) \mathrm{d} y = -\frac{i}{N^2} \bigl(\Lambda^*(f_{a,b} f_{c,d},2) - \beta_0(a,b) \Lambda(f_{c,d},2) \bigr) -\frac{\alpha_0(c,d)}{N} \Lambda^*(f_{a,b},1). \end{equation} Replacing in (\ref{proof eq 8}), we get $I=I_1+I_2+I_3$ with \begin{align} \label{eq I1} I_1 & = \frac{\pi}{2N^2} \Lambda^*((f_{a,d}+f_{-a,d})(f_{b,c}+f_{-b,c})-(f_{c,b}+f_{-c,b})(f_{d,a}+f_{-d,a}),2)\\ \label{eq I2} I_2 & = - \frac{\pi}{2N^2} \Bigl((\beta_0(a,d)+\beta_0(-a,d)) \Lambda(f_{b,c}+f_{-b,c},2) - (\beta_0(c,b)+\beta_0(-c,b)) \Lambda(f_{d,a}+f_{-d,a},2)\Bigr)\\ \label{eq I3} I_3 & = -\frac{\pi i}{2N} \Bigl((\alpha_0(b,c)+\alpha_0(-b,c)) \Lambda^*(f_{a,d}+f_{-a,d},1) - (\alpha_0(d,a)+\alpha_0(-d,a)) \Lambda^*(f_{c,b}+f_{-c,b},1)\Bigr) \end{align} Using the fact that $f_{a,b}=f_{b,a}=-f_{-a,-b}$, $I_1$ simplifies to \begin{equation}\label{eq I1 2} I_1 = \frac{\pi}{N^2} \Lambda^*(f_{a,d} f_{-b,c} - f_{a,-d} f_{b,c},2). \end{equation} The terms involving $\Lambda(f_{a,b},2)$ can be evaluated with (\ref{eq Lfab2}); they simplify with the terms involving $I(a,b)$ in (\ref{proof eq 4bis}): \begin{equation}\label{eq I2 2} I_2 = \frac{\pi}{2} B\bigl(\frac{b}{N}\bigr) I(c,d) - \frac{\pi}{2} B\bigl(\frac{d}{N}\bigr) I(a,b). \end{equation} The terms involving $\Lambda^*(f_{a,b},1)$ can be evaluated with (\ref{eq Lfab1}). Note that $\alpha_0(b,c)+\alpha_0(-b,c)$ is nonzero only in the case $b=0$ and $c \neq 0$. Since we assumed $u \neq 0$, this implies $a \neq 0$ and the case of Lemma \ref{lem Lfab12} involving Euler's constant does not happen. Anyway $I_3$ simplifies with the terms involving $\int_0^\infty \darg g_u$ in (\ref{proof eq 4bis}): \begin{equation}\label{eq I3 2} I_3 = -\Re(C_{b,-a}) \int_0^\infty \darg g_v + \Re(C_{d,-c}) \int_0^\infty \darg g_u. 
\end{equation} Putting everything together, we get \begin{equation} \int_0^\infty \eta(g_u,g_v) = I_1 = \frac{\pi}{N^2} \Lambda^* (f_{a,d} f_{-b,c}-f_{a,-d} f_{b,c},2). \end{equation} Theorem \ref{main thm} now follows from (\ref{eq lambda star}), taking into account the fact that $W_{N^2}(f_{a,b}f_{c,d}) = W_{N^2}(f_{a,b}) W_{N^2}(f_{c,d}) = -N^2 e_{a,b} e_{c,d}$. \end{proof} \section{Applications}\label{sec applications} In this section we investigate the applications of Theorem \ref{main thm} to elliptic curves. Our strategy can be explained as follows. In \cite{brunault:mod_units}, we determined a list of elliptic curves defined over $\mathbf{Q}$ which can be parametrized by modular units. Let $E$ be such an elliptic curve, with modular parametrization $\varphi : X_1(N) \to E$. Let $x,y$ be functions on $E$ such that $u:=\varphi^*(x)$ and $v:=\varphi^*(y)$ are modular units. Assume that $\{x,y\} \in K_2(E) \otimes \mathbf{Q}$. Then the minimal polynomial $P$ of $(x,y)$ is tempered, and in favorable cases the Mahler measure of $P$ can be expressed in terms of a regulator integral $\int_\gamma \eta(x,y)$ where $\gamma$ is a (not necessarily closed) path on $E$. Using the techniques of \cite{brunault:mod_units}, we compute the images of the various cusps under $\varphi$ and deduce the divisors of $u$ and $v$. Since the divisors of Siegel units are easily computed using (\ref{def gab}) and (\ref{gab gamma}), we get an expression of $u$ and $v$ in terms of Siegel units, and may apply Theorem \ref{main thm}. We will need the following expression for the regulator integral in terms of Bloch's elliptic dilogarithm. Let $E/\mathbf{Q}$ be an elliptic curve, and let $D_E : E(\mathbf{C}) \to \mathbf{R}$ be the elliptic dilogarithm associated to a chosen orientation of $E(\mathbf{R})$. Extend $D_E$ by linearity to a function $\mathbf{Z}[E(\mathbf{C})] \to \mathbf{R}$. Let $\gamma_E^+$ be the generator of $H_1(E(\mathbf{C}),\mathbf{Z})^+$ corresponding to the chosen orientation. \begin{pro}\label{pro int eta DE} Let $x \in K_2(E) \otimes \mathbf{Q}$. Choose rational functions $f_i,g_i$ on $E$ such that $x = \sum_i \{f_i,g_i\}$, and define $\eta(x) = \sum_i \eta(f_i,g_i)$. Then for every $\gamma \in H_1(E(\mathbf{C}),\mathbf{Z})$, we have \begin{equation*} \int_\gamma \eta(x) = -(\gamma_E^+ \bullet \gamma) D_E(\beta) \end{equation*} where $\bullet$ denotes the intersection product on $H_1(E(\mathbf{C}),\mathbf{Z})$, and $\beta$ is the divisor given by \begin{equation*} \beta = \sum_i \sum_{p,q \in E(\mathbf{C})} \ord_p(f_i) \ord_q(g_i) (p-q). \end{equation*} \end{pro} \begin{proof} Since $x \in K_2(E) \otimes \mathbf{Q}$, the integral of $\eta(x)$ over a closed path $\gamma$ avoiding the zeros and poles of $f_i,g_i$ depends only on the class of $\gamma$ in $H_1(E(\mathbf{C}),\mathbf{Z})$. Let $\delta$ be an element of $H_1(E(\mathbf{C}),\mathbf{Z})$ such that $\gamma_E^+ \bullet \delta = 1$. Let $c$ denote the complex conjugation on $E(\mathbf{C})$. Since $c^* \eta(x) = -\eta(x)$, we have $\int_{\gamma_E^+} \eta(x)=0$ and it suffices to prove the formula for $\gamma=\delta$. Choose an isomorphism $E(\mathbf{C}) \cong \mathbf{C}/(\mathbf{Z}+\tau\mathbf{Z})$ which is compatible with complex conjugation.
We have \begin{equation*} \overline{\int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} z} = \int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} \overline{z} = \int_{E(\mathbf{C})} c^* ( -\eta(x) \wedge \mathrm{d} z) = \int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} z \end{equation*} so that $\int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} z \in \mathbf{R}$. By \cite[Prop. 6]{brunault:LEF}, we get \begin{equation*} \int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} z = D_E(\beta). \end{equation*} Since $(\gamma_E^+,\delta)$ is a symplectic basis of $H_1(E(\mathbf{C}),\mathbf{Z})$, we have \cite[A.2.5]{bost} \begin{equation*} \int_{E(\mathbf{C})} \eta(x) \wedge \mathrm{d} z = \int_{\gamma_E^+} \eta(x) \cdot \int_\delta \mathrm{d} z - \int_{\gamma_E^+} \mathrm{d} z \cdot \int_\delta \eta(x) = -\int_\delta \eta(x). \end{equation*} \end{proof} The following proposition is a slight generalization of a technique introduced by A. Mellit \cite{mellit} to prove identities involving elliptic dilogarithms. Let $E/\mathbf{Q}$ be an elliptic curve, which we view as a smooth cubic in $\mathbf{P}^2$. \begin{definition} For any lines $\ell$ and $m$ in $\mathbf{P}^2$, let $\beta_E(\ell,m)$ be the divisor of degree $9$ on $E(\mathbf{C})$ defined by $\beta_E(\ell,m) = \sum_{x \in \ell \cap E} \sum_{y \in m \cap E} (x-y)$. \end{definition} \begin{pro}\label{pro incident} Let $\ell_1,\ell_2,\ell_3$ be three incident lines in $\mathbf{P}^2$. Then \begin{equation}\label{eq incident} D_{E}(\beta_E(\ell_1,\ell_2))+D_{E}(\beta_E(\ell_2,\ell_3))+D_{E}(\beta_E(\ell_3,\ell_1))=0. \end{equation} \end{pro} \begin{proof} Let $f_1,f_2,f_3$ be equations of $\ell_1,\ell_2,\ell_3$ such that $f_1+f_2=f_3$. Using the Steinberg relation $\{\frac{f_1}{f_3},\frac{f_2}{f_3}\}=0$, we deduce $\{f_1,f_2\}+\{f_2,f_3\}+\{f_3,f_1\}=0$ in $K_2(\mathbf{C}(E)) \otimes \mathbf{Q}$. Applying the regulator map and taking the real part \cite[Prop. 6]{brunault:LEF}, we deduce \begin{equation*} D_E(\beta(f_1,f_2))+D_E(\beta(f_2,f_3))+D_E(\beta(f_3,f_1))=0 \end{equation*} where $\beta(f_i,f_{i+1})$ is defined as in Proposition \ref{pro int eta DE}. We have $\dv(f_i)=(\ell_i \cap E) - 3(0)$ so that \begin{equation*} \beta(f_i,f_{i+1})=\beta_E(\ell_i,\ell_{i+1})-3(\ell_i \cap E) - 3 \iota^* (\ell_{i+1} \cap E) + 9(0) \end{equation*} where $\iota$ denotes the map $p \mapsto -p$ on $E(\mathbf{C})$. Since $D_E$ is odd, the proposition follows. \end{proof} \begin{remark} If the incidence point of $\ell_1,\ell_2,\ell_3$ lies on $E$, then the relation (\ref{eq incident}) is trivial in the sense that it is a consequence of the fact that $D_E$ is odd. \end{remark} We will also need the following lemma to relate elliptic dilogarithms on isogenous curves. \begin{lem}\label{lem DE DE'} Let $\varphi : E \to E'$ be an isogeny between elliptic curves defined over $\mathbf{Q}$. Choose orientations of $E(\mathbf{R})$ and $E'(\mathbf{R})$ which are compatible under $\varphi$, and let $d_\varphi$ be the topological degree of the map $E(\mathbf{R})^0 \to E'(\mathbf{R})^0$, where $(\cdot)^0$ denotes the connected component of the origin. Then for any point $P' \in E'(\mathbf{C})$, we have \begin{equation}\label{eq DE DE'} D_{E'}(P') = d_\varphi \cdot \sum_{\varphi(P)=P'} D_E(P). \end{equation} \end{lem} \begin{proof} Choose isomorphisms $E(\mathbf{C}) \cong \mathbf{C}/(\mathbf{Z}+\tau\mathbf{Z})$ and $E'(\mathbf{C}) \cong \mathbf{C}/(\mathbf{Z}+\tau'\mathbf{Z})$ which are compatible with complex conjugation. 
Then $E(\mathbf{R})=\mathbf{R}/\mathbf{Z}$ and $E'(\mathbf{R}) = \mathbf{R}/\mathbf{Z}$ so that $\varphi$ is given by $[z] \mapsto [d_\varphi z]$. We have isomorphisms $E(\mathbf{C}) \cong \mathbf{C}^\times /q^\mathbf{Z}$ and $E'(\mathbf{C}) \cong \mathbf{C}^\times / (q')^\mathbf{Z}$ with $q=e^{2\pi i \tau}$ and $q'=e^{2\pi i \tau'}$. Let $\pi : \mathbf{C}^\times \to E(\mathbf{C})$ and $\pi' : \mathbf{C}^\times \to E'(\mathbf{C})$ be the canonical maps. Let $P'$ be a point of $E'(\mathbf{C})$. By definition $D_{E'}(P') = \sum_{\pi'(x')=P'} D(x')$ where $D$ is the Bloch-Wigner function, and similarly $D_E(P) = \sum_{\pi(x)=P} D(x)$. Now $\varphi$ is induced by the map $x \mapsto x^{d_\varphi}$, so that (\ref{eq DE DE'}) follows from the usual functional equation $D(x^r) = r \sum_{u^r=1} D(ux)$ for any $r \geq 1$ \cite[(21)]{oesterle}. \end{proof} Note that in the particular case $\varphi$ is the multiplication-by-$n$ map on $E$, Lemma \ref{lem DE DE'} gives the usual functional equation \begin{equation*} D_E(nP) = n \sum_{Q \in E[n]} D_E(P+Q). \end{equation*} \subsection{Conductors 14, 35 and 54} We prove the following cases of Boyd's conjectures \cite[Table 5, $k=-1,-2,-3$]{boyd:expmath}. Note that the case of conductor 14 was proved by A. Mellit \cite{mellit}. \begin{thm} Let $P_k$ be the polynomial $P_k(x,y)=y^2+kxy+y-x^3$, and let $E_k$ be the elliptic curve defined by the equation $P_k(x,y)=0$. We have the identities \begin{align} \label{mP-1} m(P_{-1}) & = 2 L'(E_{-1},0)\\ \label{mP-2} m(P_{-2}) & = L'(E_{-2},0)\\ \label{mP-3} m(P_{-3}) & =L'(E_{-3},0). \end{align} \end{thm} By the discussion in \cite[p. 62]{boyd:expmath}, the polynomial $P_k$ does not vanish on the torus for $k \in \mathbf{R}$, $k <-1$. For these values of $k$ we thus have \begin{equation*} m(P_k) = \frac{1}{2\pi} \int_{\gamma_k} \eta(x,y) \end{equation*} where $\gamma_k$ is the closed path on $E_k(\mathbf{C})$ defined by \begin{equation*} \gamma_k = \{(x,y) \in E_k(\mathbf{C}) : |x|=1, |y| \leq 1\}. \end{equation*} The point $A=(0,0)$ on $E_k$ has order $3$ and the divisors of $x$ and $y$ are given by \begin{equation*} \dv(x) = (A)+(-A)-2(0) \qquad \dv(y) = 3(A)-3(0). \end{equation*} The tame symbols of $\{x,y\}$ at $0$, $A$, $-A$ are respectively equal to $1,-1,-1$, so that $\{x,y\}$ defines an element of $K_2(E_k) \otimes \mathbf{Q}$. Moreover $\gamma_k$ is a generator of $H_1(E_k(\mathbf{C}),\mathbf{Z})^-$ which satisfies $\gamma_{E_k}^+ \bullet \gamma_k = -2$, so that Proposition \ref{pro int eta DE} gives \begin{equation}\label{mPk DEkA} m(P_k) = \frac{1}{\pi} D_{E_k}(\beta(x,y)) = \frac{9}{\pi} D_{E_k}(A) \qquad (k<-1). \end{equation} Note that by continuity (\ref{mPk DEkA}) also holds for $k=-1$. Now assume $k \in \{-1,-2,-3\}$. The elliptic curves $E_{-1}$, $E_{-2}$, $E_{-3}$ are respectively isomorphic to $14a4$, $35a3$ and $54a3$. By \cite{brunault:mod_units}, these curves are parametrized by modular units. Since the functions $x$ and $y$ are supported in the rational torsion subgroup, their pull-backs $u=\varphi^* x$ and $v=\varphi^* y$ are modular units, and we may express them in terms of Siegel units. For brevity, we put $g_b = g_{0,b}$ in what follows. We also let $f_k$ be the newform associated to $E_k$. In the case $k=-1$, $N=14$, we find explicitly \begin{equation*} u = \frac{g_{5} g_{6}}{g_{1} g_{2}} \qquad v = -\frac{g_{3} g_{5} g_{6}^2}{g_{1}^2 g_{2} g_{4}}.
\end{equation*} Moreover the Deninger path is the following sum of modular symbols \begin{equation*} \gamma_{-1} = \varphi_* \left\{\frac27,-\frac27\right\} = \varphi_* \left( -\xi \begin{pmatrix} 2 & 1 \\ 7 & 4 \end{pmatrix}-\xi \begin{pmatrix} 1 & 0 \\ 4 & 1 \end{pmatrix}+\xi \begin{pmatrix} 1 & 0 \\ -4 & 1 \end{pmatrix} + \xi \begin{pmatrix} -2 & 1 \\ 7 & -4 \end{pmatrix} \right). \end{equation*} Using Theorem \ref{main thm}, we obtain \begin{equation*} \int_{\gamma_{-1}} \eta(x,y) = \int_{2/7}^{-2/7} \eta(u,v) = \pi L'( 4f_{-1},0). \end{equation*} This proves (\ref{mP-1}). In the case $k=-2$, $N=35$, we find explicitly \begin{equation*} u = \frac{g_{2} g_{9} g_{12} g_{15} g_{16}}{g_{3} g_{4} g_{10} g_{11} g_{17}} \qquad v = - \frac{g_{2}^2 g_{5} g_{9}^2 g_{12}^2 g_{15} g_{16}^2}{g_{1} g_{3} g_{4} g_{6} g_{8} g_{10}^2 g_{11} g_{13} g_{17}}. \end{equation*} Moreover the Deninger path is the following sum of modular symbols \begin{equation*} \gamma_{-2} = \varphi_* \left\{\frac15,-\frac15\right\} = \varphi_* \left( \xi \begin{pmatrix} 1 & 0 \\ -5 & 1 \end{pmatrix}-\xi \begin{pmatrix} 1 & 0 \\ 5 & 1 \end{pmatrix} \right). \end{equation*} Using Theorem \ref{main thm}, we obtain \begin{equation*} \int_{\gamma_{-2}} \eta(x,y) = \int_{1/5}^{-1/5} \eta(u,v) = \pi L'( 2f_{-2},0). \end{equation*} This proves (\ref{mP-2}). In the case $k=-3$, $N=54$, we find explicitly \begin{equation*} u = \frac{g_2 g_4 g_5^2 g_{13}^2 g_{14} g_{16} g_{20} g_{21} g_{22} g_{23}^2 g_{24}}{g_1 g_7 g_8^2 g_{10}^2 g_{11} g_{12} g_{15} g_{17} g_{19} g_{25} g_{26}^2} \qquad v = - \frac{g_2^3 g_3 g_5^3 g_{13}^3 g_{16}^3 g_{20}^3 g_{21} g_{23}^3 g_{24}^2}{g_1^3 g_6 g_8^3 g_{10}^3 g_{12} g_{15}^2 g_{17}^3 g_{19}^3 g_{26}^3}. \end{equation*} Moreover the Deninger path is the following sum of modular symbols \begin{equation*} \gamma_{-3} = \varphi_* \left\{-\frac18,\frac18\right\} = \varphi_* \left( \xi \begin{pmatrix} 1 & 0 \\ 8 & 1 \end{pmatrix}-\xi \begin{pmatrix} 1 & 0 \\ -8 & 1 \end{pmatrix} \right). \end{equation*} Using Theorem \ref{main thm}, we obtain \begin{equation*} \int_{\gamma_{-3}} \eta(x,y) = \int_{-1/8}^{1/8} \eta(u,v) = \pi L'( 2f_{-3},0). \end{equation*} This proves (\ref{mP-3}). Using (\ref{mPk DEkA}), we also deduce Zagier's conjectures for these elliptic curves. \begin{thm} We have the identities \begin{equation} L(E_{-1},2) = \frac{9\pi}{7} D_{E_{-1}}(A) \qquad L(E_{-2},2) = \frac{36\pi}{35} D_{E_{-2}}(A) \qquad L(E_{-3},2) = \frac{2\pi}{3} D_{E_{-3}}(A). \end{equation} \end{thm} \subsection{Conductor 21} The modular curve $X_0(21)$ has genus $1$ and is isomorphic to the elliptic curve $E_0 = 21a1$ with minimal equation $y^2+xy = x^3-4x-1$. The Mordell-Weil group $E_0(\mathbf{Q})$ is isomorphic to $\mathbf{Z}/4\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$ and is generated by the points $P=(5,8)$ and $Q=(-2,1)$, with respective orders 4 and 2. The modular curve $X_0(21)$ has 4 cusps: $0$, $1/3$, $1/7$, $\infty$ and we may choose the isomorphism $\varphi_0 : X_0(21) \xrightarrow{\cong} E_0$ so that $\varphi_0(0)=0$, $\varphi_0(1/3)=(-1,-1)=P+Q$, $\varphi_0(1/7)=Q$ and $\varphi_0(\infty)=P$. Let $f_P$ and $f_Q$ be functions on $E_0$ with divisors \begin{equation*} (f_P) = 4(P)-4(0) \qquad (f_Q) = 2(Q)-2(0).
\end{equation*} These modular units can be expressed in terms of the Dedekind $\eta$ function \cite[\S 3.2]{ligozat}: \begin{equation*} f_P \sim_{\mathbf{Q}^\times} \frac{\eta(3\tau) \eta(21\tau)^5}{\eta(\tau)^5 \eta(7\tau)} \qquad f_Q \sim_{\mathbf{Q}^\times} \frac{\eta(3\tau) \eta(7\tau)^3}{\eta(\tau)^3 \eta(21\tau)}. \end{equation*} They can in turn be expressed in terms of Siegel units using the formula \begin{equation*} \frac{\eta(d\tau)}{\eta(\tau)} = C_d \prod_{k=1}^{(d-1)/2} g_{0,kN/d}(\tau) \qquad (C_d \in \mathbf{C}^\times). \end{equation*} Thus we can take \begin{equation*} f_P = \frac{ g_{0,7} (\prod_{b=1}^{10} g_{0,b})^5}{g_{0,3} g_{0,6} g_{0,9}} \qquad f_Q = \frac{ g_{0,7} (g_{0,3} g_{0,6} g_{0,9})^3}{ \prod_{b=1}^{10} g_{0,b}}. \end{equation*} The homology group $H_1(E_0(\mathbf{C}),\mathbf{Z})^-$ is generated by the modular symbol $\gamma = \{-\frac13,\frac13\} = \xi\begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix} - \xi\begin{pmatrix} 1 & 0 \\ -3 & 1 \end{pmatrix}$. Using Theorem \ref{main thm} and a computer algebra system, we find \begin{equation*} \int_{\gamma} \eta(f_P,f_Q) = \pi \Lambda^*(F,0) \end{equation*} where $F$ is the modular form of weight 2 and level 21 given by \begin{equation*} F = 68q + 220q^2 + 68q^3 + 508q^4 + 440q^5 + 220q^6 + 508q^7 + 1068q^8 + 68q^9 + \cdots \end{equation*} The space $M_2(\Gamma_0(21))$ has dimension 4 and is generated by $f_0$, $E_{2,3}$, $E_{2,7}$ and $E_{2,21}$ where $f_0$ is the newform associated to $E_0$ and $E_{2,d}(\tau) = E_2(\tau)-dE_2(d\tau)$. We find explicitly \begin{equation*} F = -4f_0 + 72 E_{2,3} + \frac{72}{7} E_{2,7} - \frac{72}{7} E_{2,21} \end{equation*} We have $L(E_{2,d},s) = (1-d^{1-s}) \zeta(s) \zeta(s-1)$ and a little computation gives \begin{equation*} L(F,s) = -4L(E_0,s) + \frac{72}{7 \cdot 21^s} (7 \cdot 21^s-21\cdot 7^s - 7 \cdot 3^s + 21) \zeta(s) \zeta(s-1). \end{equation*} Thus $L(F,0)=0$ and using $\zeta(0)=-1/2$ and $\zeta(-1)=-1/12$, we find \begin{equation*} \Lambda^*(F,0) = \Lambda(F,0) = L'(F,0) = -4L'(E_0,0)-6 \log 7. \end{equation*} The extraneous term $6 \log 7$ stems from the fact that the Milnor symbol $\{f_P,f_Q\}$ does not extend to $K_2(E_0) \otimes \mathbf{Q}$. Indeed, the tame symbols are given by \begin{equation*} \partial_0 \{f_P,f_Q\} = 1 \qquad \partial_P \{f_P,f_Q\} = f_Q(P)^{-4} = \zeta_7^{-4} 7^{-4} \qquad \partial_Q \{f_P,f_Q\} = \zeta_7^{-4} 7^4. \end{equation*} Since $f_P$ and $f_Q$ are supported in torsion points, there is a standard trick (due to Bloch) to alter the symbol $\{f_P,f_Q\}$ to make an element of $K_2(E_0) \otimes \mathbf{Q}$. We will see that the corresponding regulator integral is proportional to $L'(E_0,0)$ alone. We put $x:=\{f_P,f_Q\}+\{7,f_P/f_Q^2\}$, which belongs to $K_2(E_0) \otimes \mathbf{Q}$, and we define \begin{equation*} \eta(x):=\eta(f_P,f_Q)+\eta(7,f_P/f_Q^2) = \eta(f_P,f_Q)+\log 7 \cdot \darg(f_P/f_Q^2). \end{equation*} We can compute the integral of $\darg(f_P/f_Q^2)$ using Lemma \ref{lem int darg}, which results in \begin{equation*} \int_\gamma \eta(x) = -4\pi L'(E_0,0). \end{equation*} On the other hand, we have $\int_\gamma \omega_{f_0} \sim 1.91099i$ which shows that $\gamma_{E_0}^+ \bullet \gamma >0$. Since $E_0(\mathbf{R})$ has two connected components, this implies $\gamma_{E_0}^+ \bullet \gamma=1$ and Proposition \ref{pro int eta DE} gives \begin{equation*} \int_\gamma \eta(x) = -D_{E_0}(\beta). \end{equation*} We have $\beta = 8(P+Q)-8(P)-8(Q)+8(0)$. 
Since $D_{E_0}$ is odd, this gives \begin{equation*} \int_\gamma \eta(x) = -8 \bigl(D_{E_0}(P+Q)-D_{E_0}(P)\bigr). \end{equation*} Taking into account the functional equation $L'(E_0,0) = \frac{21}{4\pi^2} L(E_0,2)$, we have thus shown Zagier's conjecture for $E_0$. \begin{thm}\label{zagier 21} We have the identity $L(E_0,2) = \frac{8\pi}{21} \bigl(D_{E_0}(P+Q)-D_{E_0}(P)\bigr)$. \end{thm} We will now deduce Boyd's conjecture \cite[Table 1, $k=3$]{boyd:expmath} for the elliptic curve $E_1$ of conductor $21$ given by the equation $P(x,y)=x+\frac{1}{x}+y+\frac{1}{y}+3=0$. \begin{thm}\label{boyd 21} We have the identity $m(x+\frac{1}{x}+y+\frac{1}{y}+3)=2L'(E_1,0)$. \end{thm} The change of variables \begin{equation*} X=x(x+y+3)+1 \qquad Y=x(x+1)(x+y+3)+1 \end{equation*} puts $E_1$ in the Weierstrass form $Y^2+XY=X^3+X$. This is the elliptic curve labelled $21a4$ in Cremona's tables \cite{cremona:tables}. The Mordell-Weil group $E_1(\mathbf{Q})$ is isomorphic to $\mathbf{Z}/4\mathbf{Z}$ and is generated by $P_1=(1,1)$. The polynomial $P$ satisfies Deninger's conditions \cite[3.2]{deninger:mahler}, so we have \begin{equation*} m(P)= \frac{1}{2\pi} \int_{\gamma_P} \eta(x,y) \end{equation*} where $\gamma_P$ is the path defined by $\gamma_P = \{(x,y) \in E_1(\mathbf{C}) : |x|=1, |y| \leq 1\}$. The path $\gamma_P$ joins the point $\bar{A} = (\bar{\zeta_3},-1)$ to $A=(\zeta_3,-1)$. Note that these points have last coordinate $-1$, so the discussion in \cite[p. 272]{deninger:mahler} applies and $\gamma_P$ defines an element of $H_1(E_1(\mathbf{C}),\mathbf{Q})$. After some computation, we find that $\gamma_P = \frac12 \gamma_1$ where $\gamma_1$ is a generator of $H_1(E_1(\mathbf{C}),\mathbf{Z})^-$ such that $\gamma_{E_1}^+ \bullet \gamma_1 = 2$ (note that $E_1(\mathbf{R})$ is connected). Using Proposition \ref{pro int eta DE}, it follows that \begin{equation*} \int_{\gamma_P} \eta(x,y) = \frac12 \int_{\gamma_1} \eta(x,y) = - D_{E_1}(\beta) \end{equation*} where $\beta = \dv(x) * \dv(y)^-$ is the convolution of the divisors of $x$ and $y$. We have \begin{equation*} \dv(x) = (P_1)+(2P_1)-(-P_1)-(0) \qquad \dv(y) = (P_1)-(2P_1)-(-P_1)+(0) \end{equation*} so that $\beta = 4(P_1)-4(-P_1)$. This gives \begin{equation*} \int_{\gamma_P} \eta(x,y) = -8D_{E_1}(P_1). \end{equation*} We are now going to relate elliptic dilogarithms on $E_1$ and $E_0$ using Proposition \ref{pro incident} and Lemma \ref{lem DE DE'}. The curve $E_1$ is the $X_1(21)$-optimal elliptic curve in the isogeny class of $E_0$. We have a $2$-isogeny $\lambda : E_1 \to E_0$ whose kernel is generated by $2P_1=(0,0)$. Using Vélu's formulas \cite{velu}, we find that an equation of $\lambda$ is \begin{equation*} \lambda(X,Y) = \Bigl(\frac{X^2+1}{X},-\frac{1}{X}+\frac{X^2-1}{X^2} Y\Bigr). \end{equation*} The preimages of $P+Q$ under $\lambda$ are the points $A=(\zeta_3,-1-\zeta_3)$ and $\bar{A}=(\bar{\zeta_3},-1-\bar{\zeta_3})$ (these are the endpoints of $\gamma_P$, now written in the Weierstrass coordinates $(X,Y)$), while the preimages of $P$ are given by $B=(\frac{5+\sqrt{21}}{2},4+\sqrt{21})$ and $B'=(\frac{5-\sqrt{21}}{2},4-\sqrt{21})$. Note that $2A=-P_1$ and $2B=P_1$ so that $A$ and $B$ have order $8$ and we have the relations $\bar{A}=A+2P_1=5A$ and $B'=5B$. Moreover $C=A+B$ is the $2$-torsion point given by $C=(\frac{-1+3i\sqrt{7}}{8},\frac{1-3i\sqrt{7}}{16})$.
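The point relations above are easy to double-check. The following short script (plain Python with complex floating-point arithmetic; the implementation and all helper names are ours and serve only as a sanity check, not as part of the argument) implements the addition law on $E_1 : Y^2+XY=X^3+X$ and verifies that $2A=-P_1$, that $A$ has order $8$, and that the displayed formula for $\lambda$ sends $A$ to the point $P+Q=(-1,-1)$ of $E_0$:
\begin{verbatim}
import cmath

# E1 : Y^2 + XY = X^3 + X, coefficients (a1,a2,a3,a4,a6) = (1,0,0,1,0).
# Points are pairs (X,Y); None stands for the point at infinity.
A1, A2, A3, A4, A6 = 1, 0, 0, 1, 0
TOL = 1e-9

def neg(P):
    x, y = P
    return (x, -y - A1*x - A3)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if abs(x1 - x2) < TOL and abs(y1 + y2 + A1*x2 + A3) < TOL:
        return None                       # Q = -P
    if abs(x1 - x2) < TOL:                # doubling
        lam = (3*x1**2 + 2*A2*x1 + A4 - A1*y1) / (2*y1 + A1*x1 + A3)
    else:                                 # chord
        lam = (y2 - y1) / (x2 - x1)
    x3 = lam**2 + A1*lam - A2 - x1 - x2
    y3 = -(lam + A1)*x3 - (y1 - lam*x1) - A3
    return (x3, y3)

def mul(n, P):
    R = None
    for _ in range(n):
        R = add(R, P)
    return R

z3 = cmath.exp(2j*cmath.pi/3)
P1 = (1, 1)                               # generator of E1(Q), of order 4
A  = (z3, -1 - z3)                        # the point A in coordinates (X,Y)

assert all(abs(s - t) < TOL for s, t in zip(mul(2, A), neg(P1)))  # 2A = -P1
assert mul(8, A) is None and mul(4, A) is not None                # A has order 8

def lam_isog(P):                          # the 2-isogeny lambda : E1 -> E0
    X, Y = P
    return ((X**2 + 1)/X, -1/X + (X**2 - 1)/X**2*Y)

x0, y0 = lam_isog(A)                      # should be P + Q = (-1,-1) on E0
assert abs(y0**2 + x0*y0 - (x0**3 - 4*x0 - 1)) < TOL
assert abs(x0 + 1) < TOL and abs(y0 + 1) < TOL
\end{verbatim}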
Using Theorem \ref{zagier 21} and Lemma \ref{lem DE DE'}, we have \begin{equation*} L'(E_0,0) = \frac{4}{\pi} \bigl(D_{E_1}(A)+D_{E_1}(\bar{A})-D_{E_1}(B)-D_{E_1}(B')\bigr) \end{equation*} so that Theorem \ref{boyd 21} reduces to showing \begin{equation*} D_{E_1}(P_1)=-2 \bigl(2D_{E_1}(A)-D_{E_1}(B)-D_{E_1}(B')\bigr). \end{equation*} We look for lines $\ell$ in $\mathbf{P}^2$ such that $\ell \cap E_1$ is contained in the subgroup generated by $A$ and $B$. Using a computer search, we find that the tangents to $E_1$ at $A$ and $-A$ and the line $\ell : Y+\frac12 X=0$ passing through the 2-torsion points of $E_1$ are incident. By Proposition \ref{pro incident}, we deduce the relation \begin{equation*} \begin{split} & 4 D_{E_1}(2A)+4D_{E_1}(3A)+D_{E_1}(4A)+2D_{E_1}(-2A)+4D_{E_1}(-A)\\ & \quad + 2 D_{E_1}(2A+C)+4D_{E_1}(3A+C)+2D_{E_1}(-2A+C)+4D_{E_1}(-A+C)=0. \end{split} \end{equation*} Since $D_{E_1}$ is odd and $D_{E_1}(3A)=-D_{E_1}(\bar{A})=-D_{E_1}(A)$, this simplifies to \begin{equation*} 2 D_{E_1}(2A)-8D_{E_1}(A)+4D_{E_1}(B)+4D_{E_1}(B')=0 \end{equation*} which is the desired equality. \subsection{Conductor 48} We prove the following case of Boyd's conjecture \cite[Table 1, $k=12$]{boyd:expmath}. \begin{thm}\label{boyd 48} We have the identity $m(x+\frac{1}{x}+y+\frac{1}{y}+12)=2L'(E,0)$, where $E$ is the elliptic curve defined by $x+\frac{1}{x}+y+\frac{1}{y}+12=0$. \end{thm} The curve $x+\frac{1}{x}+y+\frac{1}{y}+12=0$ is isomorphic to the elliptic curve $E=48a5$. We have a commutative diagram \begin{equation}\label{cd 48} \begin{tikzcd} X_1(48) \arrow{r}{\pi} \arrow{d}{\varphi_1} & X_0(48) \arrow{d}{\varphi_0} \\ E_1 \arrow{r}{\lambda_0} & E_0 \arrow{r}{\lambda} & E. \end{tikzcd} \end{equation} Here $E_1=48a4$ is the $X_1(48)$-optimal elliptic curve and $E_0=48a1$ is the strong Weil curve in the isogeny class of $E$. They are given by the equations \begin{equation} E_1 : y^2=x^3+x^2+x \qquad E_0 : y^2=x^3+x^2-4x-4. \end{equation} The isogeny $\lambda_0$ has degree $2$ and its kernel is generated by $P_1=(0,0)$. Using Vélu's formulas, we find an explicit equation for $\lambda_0$: \begin{equation} \lambda_0(x,y) = \left(x+\frac{1}{x},(1-\frac{1}{x^2})y\right). \end{equation} The modular parametrization $\varphi_0$ has degree $2$ and we have \begin{gather*} \varphi_0(0)=\varphi_0(1/2)=0 \qquad \varphi_0(1/3)=\varphi_0(1/6)=(-1,0) \\ \varphi_0(1/8)=\varphi_0(1/16)=(-2,0) \qquad \varphi_0(1/24)=\varphi_0(1/48)=(2,0)\\ \varphi_0(1/4)=(0,2i) \qquad \varphi_0(-1/4)=(0,-2i)\\ \varphi_0(1/12)=(-4,-6i) \qquad \varphi_0(-1/12)=(-4,6i). \end{gather*} Moreover the ramification indices of $\varphi_0$ at the cusps $\frac14,-\frac14,\frac{1}{12},-\frac{1}{12}$ are equal to $2$. Let $S_0$ be the set of points $P$ of $E_0(\mathbf{C})$ such that $\varphi_0^{-1}(P)$ is contained in the set of cusps of $X_0(48)$, and similarly let $S_1$ be the set of points $P$ of $E_1(\mathbf{C})$ such that $\varphi_1^{-1}(P)$ is contained in the set of cusps of $X_1(48)$. By the previous computation, we have \begin{equation} S_0 = E_0[2] \cup \{(0,\pm 2i),(-4,\pm 6i)\}. \end{equation} The curve $E_0$ doesn't admit a parametrization by modular units, but the curve $E_1$ does. Indeed, consider the point $A=(i,i) \in E_1(\mathbf{C})$. It has order $8$ and satisfies $\bar{A}=3A$ and $4A=P_1$. Moreover $\lambda_0(A)=(0,2i)$. Because of the commutative diagram (\ref{cd 48}), we know that $S_1$ contains $\lambda_0^{-1}(S_0)$; in particular $S_1$ contains the subgroup generated by $A$.
Therefore the functions $f$ and $g$ on $E_1$ with divisors \begin{equation} (f)=2(P_1)-2(0) \qquad (g)=2(A)+2(\bar{A})-4(0) \end{equation} are modular units. We may take $f=x$ and $g=x^2-2y+2x+1$. It is plain that $f$ and $g$ parametrize $E_1$. Moreover the tame symbols of $\{f,g\}$ at $0,P_1,A,\bar{A}$ are equal to $1,1,-1,-1$ so that $\{f,g\}$ belongs to $K_2(E_1) \otimes \mathbf{Q}$. The expression of $f$ and $g$ in terms of Siegel units is \begin{equation} \varphi_1^* f = \frac{g_2 g_{20} g_{22}}{g_4 g_{10} g_{14}} \qquad \varphi_1^* g = \frac{g_1^2 g_2 g_{10} g_{11}^2 g_{12}^4 g_{13}^2 g_{14} g_{22} g_{23}^2}{g_4^3 g_5^2 g_6^2 g_7^2 g_{17}^2 g_{18}^2 g_{19}^2 g_{20}}. \end{equation} A generator $\gamma_1$ of $H_1(E_1(\mathbf{C}),\mathbf{Z})^-$ is given by \begin{equation*} \gamma_1 = (\varphi_1)_* \left\{-\frac17,\frac17\right\} = (\varphi_1)_* \left( \xi \begin{pmatrix} 1 & 0 \\ 7 & 1 \end{pmatrix}-\xi \begin{pmatrix} 1 & 0 \\ -7 & 1 \end{pmatrix} \right). \end{equation*} Using Theorem \ref{main thm}, we find \begin{equation}\label{int gamma1} \int_{\gamma_1} \eta(f,g) = \int_{-1/7}^{1/7} \eta(\varphi_1^* f,\varphi_1^*g) = \pi L'(F_1,0) \end{equation} where $F_1$ is the modular form of weight 2 and level 48 given by \begin{equation*} F_1= 4q^2 + 8q^3 - 4q^6 - 8q^{10} - 32q^{11} - 16q^{15} + 4q^{18} + 32q^{19} + \ldots \end{equation*} This time $F_1$ is not a multiple of the newform $f_{E_1}$ associated to $E_1$. We look for another modular symbol. Another generator $\gamma_2$ of $H_1(E_1(\mathbf{C}),\mathbf{Z})^-$ is given by \begin{equation*} \gamma_2 = (\varphi_1)_* \left\{-\frac{2}{11},\frac{2}{11}\right\} = (\varphi_1)_* \left( \xi \begin{pmatrix} 2 & 1 \\ 11 & 6 \end{pmatrix} + \xi \begin{pmatrix} 1 & 0 \\ 6 & 1 \end{pmatrix} - \xi \begin{pmatrix} -2 & 1 \\ 11 & -6 \end{pmatrix} -\xi \begin{pmatrix} 1 & 0 \\ -6 & 1 \end{pmatrix} \right). \end{equation*} Using Theorem \ref{main thm}, we find \begin{equation}\label{int gamma2} \int_{\gamma_2} \eta(f,g) = \int_{-2/11}^{2/11} \eta(\varphi_1^* f,\varphi_1^*g) = \pi L'(F_2,0) \end{equation} where $F_2$ is the modular form of weight 2 and level 48 given by \begin{equation*} F_2 = -4q + 8q^2 + 12q^3 + 8q^5 - 8q^6 - 4q^9 - 16q^{10} - 48q^{11} + 8q^{13} - 24q^{15} - 8q^{17} + 8q^{18} + 48q^{19}+ \ldots \end{equation*} A computation reveals that $2F_1-F_2=4f_{E_1}$. Combining (\ref{int gamma1}) and (\ref{int gamma2}), we get \begin{equation}\label{int gamma12 L} \int_{2\gamma_1-\gamma_2} \eta(f,g) = 4\pi L'(E_1,0). \end{equation} Since $\gamma_{E_1}^+ \bullet \gamma_1 = \gamma_{E_1}^+ \bullet \gamma_2 = 2$, Proposition \ref{pro int eta DE} gives \begin{equation}\label{int gamma12 D} \int_{2\gamma_1-\gamma_2} \eta(f,g) = -2 D_{E_1}(\beta(f,g)) = -32 D_{E_1}(A). \end{equation} Combining (\ref{int gamma12 L}) and (\ref{int gamma12 D}), we have thus shown Zagier's conjecture for $E_1$. \begin{thm}\label{zagier 48} We have the identities $L'(E_1,0)=-\frac{8}{\pi} D_{E_1}(A)$ and $L(E_1,2)=-\frac{2\pi}{3} D_{E_1}(A)$. \end{thm} Let us now turn to the elliptic curve $E$. Let $P_k$ be the polynomial $P_k(x,y)=x+1/x+y+1/y+k$. For $k \not\in \{ 0, \pm 4\}$, let $C_k$ be the elliptic curve defined by $P_k(x,y)=0$. The change of variables \begin{equation*} X = 4x(x+y+k) \qquad Y=8x^2(x+y+k) \end{equation*} puts $C_k$ in Weierstrass form $Y^2+2kXY+8kY=X^3+4X^2$. The point $Q=(0,0)$ on $C_k$ has order $4$. We show that the Mahler measure of $P_k$ can be expressed in terms of the elliptic dilogarithm.
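Before turning to the proof, we note that identities of this kind lend themselves to numerical verification: by Jensen's formula applied in the variable $y$, on $|x|=1$ the two roots of $y^2+(x+1/x+k)y+1=0$ have product $1$, so $m(P_k)$ is the average over the circle of $\log$ of the larger root modulus. A minimal sketch (Python with NumPy; the script is ours, and the comparison values $2L'(E,0)$ must be computed separately, e.g.\ with Pari/GP):
\begin{verbatim}
import numpy as np

def mahler(k, n=200000):
    """Numerical m(x + 1/x + y + 1/y + k) via Jensen's formula in y."""
    theta = (np.arange(n) + 0.5)*2*np.pi/n        # midpoint rule on |x| = 1
    b = 2*np.cos(theta) + k                       # y + 1/y + b = 0
    disc = np.sqrt(b.astype(complex)**2 - 4)
    r1, r2 = (-b + disc)/2, (-b - disc)/2         # roots, with r1*r2 = 1
    return np.mean(np.log(np.maximum(np.abs(r1), np.abs(r2))))

print(mahler(12.0))  # conjecturally 2 L'(E,0) with E = 48a5 (this subsection)
print(mahler(3.0))   # conjecturally 2 L'(E,0) with E = 21a4 (conductor 21 above)
\end{verbatim}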
\begin{pro}\label{pro DCk} Let $k$ be a real number such that $|k| > 4$. We have \begin{equation*} m(P_k) = \begin{cases} -\frac{4}{\pi} D_{C_k}(Q) & \textrm{if } k>0,\\ \frac{4}{\pi} D_{C_k}(Q) & \textrm{if } k<0. \end{cases} \end{equation*} \end{pro} \begin{proof} Since $|k|>4$, the polynomial $P_k$ does not vanish on the torus, so that \begin{equation*} m(P_k) = \frac{1}{2\pi} \int_{\gamma_k} \eta(x,y) \end{equation*} where $\gamma_k$ is the closed path on $C_k(\mathbf{C})$ defined by \begin{equation*} \gamma_k = \{(x,y) \in C_k(\mathbf{C}) : |x|=1, |y| \leq 1\}. \end{equation*} It turns out that $\gamma_k$ is a generator of $H_1(C_k(\mathbf{C}),\mathbf{Z})^-$ which satisfies $\gamma_{C_k}^+ \bullet \gamma_k = \operatorname{sgn}(k)$. The divisors of $x$ and $y$ are given by \begin{equation*} \dv(x) = (Q)+(2Q)-(-Q)-(0) \qquad \dv(y) = (Q)-(2Q)-(-Q)+(0). \end{equation*} Since $P_k$ is tempered, we have $\{x,y\} \in K_2(C_k) \otimes \mathbf{Q}$, and Proposition \ref{pro int eta DE} gives \begin{equation*} \int_{\gamma_k} \eta(x,y) = -\operatorname{sgn}(k) D_{C_k}(\beta(x,y)) = -8 \operatorname{sgn}(k) D_{C_k}(Q). \end{equation*} \end{proof} \begin{remark} The fact that $m(P_k)$ can be expressed as an Eisenstein-Kronecker series was also proved by F. Rodriguez-Villegas \cite{rodriguez:modular}. \end{remark} We are now going to relate elliptic dilogarithms on $E=C_{12}$ and $E_1$. Let $\lambda' : E_1 \to E$ be the isogeny $\lambda \circ \lambda_0$ from (\ref{cd 48}). It is cyclic of degree 8 and its kernel is generated by the point $B=(-2-\sqrt{3},3i+2i\sqrt{3})$. A preimage of $Q$ under $\lambda'$ is given by \begin{equation*} C = \left(\frac12(\alpha^3+\alpha^2+\alpha-1),\frac12(\alpha^3+\alpha^2-\alpha-3)\right) \end{equation*} with $\alpha = \sqrt[4]{-3}$. The point $C$ has order 4 and we have $A=B+2C$. By Lemma \ref{lem DE DE'}, we have \begin{equation}\label{eq DEQ} D_E(Q) = 2 \sum_{k \in \mathbf{Z}/8\mathbf{Z}} D_{E_1}(C+kB). \end{equation} Combining Theorem \ref{zagier 48}, Proposition \ref{pro DCk} and (\ref{eq DEQ}), Theorem \ref{boyd 48} reduces to showing \begin{equation}\label{rel DE 48} \sum_{k \in \mathbf{Z}/8\mathbf{Z}} D_{E_1}(C+kB) = 2D_{E_1}(A). \end{equation} Let $T$ be the subgroup generated by $B$ and $C$. It is isomorphic to $\mathbf{Z}/8\mathbf{Z} \times \mathbf{Z}/4\mathbf{Z}$. There are 187 lines $\ell$ of $\mathbf{P}^2$ such that $\ell \cap E_1$ is contained in $T$. A computer search reveals that among them, there are 691 unordered triples of lines meeting at a point outside $E_1$. These incident lines yield a subgroup $\mathcal{R}$ of $\mathbf{Z}[T]$ of rank 18 such that $D_{E_1}(\mathcal{R})=0$. Let $\mathcal{R}_{\mathrm{triv}}$ be the subgroup of $\mathbf{Z}[T]$ generated by the following elements \begin{equation}\label{eq fonc} [P]-[\bar{P}], \qquad [P]+[-P], \qquad [2P]-2 \sum_{Q \in E_1[2]} [P+Q] \qquad (P \in T). \end{equation} The group $\mathcal{R}_{\mathrm{triv}}$ has rank 26 and by Lemma \ref{lem DE DE'}, we have $D_{E_1}(\mathcal{R}_{\mathrm{triv}})=0$. Moreover $\mathcal{R}+\mathcal{R}_{\mathrm{triv}}$ has rank 27 and a generator of $(\mathcal{R}+\mathcal{R}_{\mathrm{triv}})/\mathcal{R}_{\mathrm{triv}}$ is given (for example) by the divisor \begin{equation*} \beta = \beta_{E_1}(\ell_1,\ell_2)+\beta_{E_1}(\ell_2,\ell_3)+\beta_{E_1}(\ell_3,\ell_1) \end{equation*} where $\ell_1$, $\ell_2$, $\ell_3$ are the lines defined by \begin{align*} \ell_1 \cap E_1 & = (B)+(-B)+(0)\\ \ell_2 \cap E_1 & = (B+2C)+(B-C)+(-2B-C)\\ \ell_3 \cap E_1 & = (4B+C)+(-3B+2C)+(-B+C).
\end{align*} Computing explicitly, this gives \begin{equation*} \beta = 2 \left(\sum_{k \in \mathbf{Z}/8\mathbf{Z}} (2C+kB)+(3C+kB)\right) - 2(-A)-2(-\bar{A}) + (4B) - (2C) - (4B+2C). \end{equation*} Using the functional equations (\ref{eq fonc}) of $D_{E_1}$, we obtain (\ref{rel DE 48}). \bibliographystyle{smfplain}
\section{Introduction} Mathematical models based on parabolic type equations with obstacles arise in various branches of science and technology: e.g., in mathematical biology (\cite{AABBK2011}), phase transition problems (\cite{R1971}, \cite{V1996}), the electrochemical industry (\cite{El1980}), stochastic control theory (\cite{BL1982}), and mathematical economics (\cite{vM1974}, \cite{vM1975}, \cite{PS2007}). Obstacle problems for parabolic equations are well studied from the mathematical point of view. Existence of a generalised solution for the case of a smooth obstacle has been studied in many publications. For time-independent obstacles, first results were obtained in \cite{LS1967} and \cite{B1972}. The case where obstacles are given by functions ``regular'' with respect to time was studied in \cite{B1972a}. The case of obstacles non--increasing in time has been considered in the books \cite{L1969} and \cite{Na1984}. Existence results for linear parabolic problems with general obstacles that are merely measurable in time can be found in \cite{MiPu1977}. For irregular obstacles, the comprehensive existence theory was developed in \cite{BDM2011} and \cite{S2015}. Qualitative properties of solutions and free boundaries for the smooth obstacle case were studied in \cite{Fr1975} and \cite{BlaDoMo2006} in one dimension, and in \cite{C1977}, \cite{ASU2000}-\nocite{ASU2002}\cite{ASU2003}, \cite{CPS2004}, \cite{Bla2006}, \cite{LiMo2015} in higher dimensions. A systematic overview of the regularity results for smooth obstacles can be found in the book \cite{A2018}. When the obstacle is non-smooth, the regularity properties of solutions and free boundaries were examined in \cite{PS2007}. The regularity of solutions and free boundaries in the so-called parabolic thin obstacle problem (known also as the parabolic Signorini problem) was studied in \cite{ArU1988} and \cite{ArU1996} (see also the recent publications \cite{DGPT2017}, \cite{BSVGZ2017}, and \cite{Sh2020}). There exist various numerical methods for solving this class of nonlinear problems. At this point, we refer to the monographs \cite{G2008}, \cite{T2006} and the literature cited therein. Investigation of a priori error estimates for problems with obstacles began with the paper by R. Falk \cite{F1974} devoted to the elliptic case. Estimates of this type for evolutionary variational inequalities were later studied in many papers (e.g., see \cite{Fe1987} and \cite{V1990}). In this paper, we discuss a different problem. Our analysis is focused not on properties of the exact minimizer, but on guaranteed bounds of the difference between the exact solution (minimizer) of the parabolic variational problem and any function (approximation) from the energy class satisfying the prescribed boundary conditions and the restrictions stipulated by the obstacle. They can be called {\em estimates of deviations} from the exact solution (or a posteriori estimates of the functional type). The estimates bound a certain measure (norm) of the error by a functional (error majorant) that depends on the problem data and the approximation type, but does not explicitly depend on the exact solution. Hence the functional is fully computable and can be used to evaluate the accuracy of an approximation. Within the framework of this conception, the estimates should be derived on the functional level by the same tools as commonly used in the theory of partial differential equations.
They do not use specific features of approximations (e.g., Galerkin orthogonality), which is typical for a posteriori methods applied in mesh adaptive computations based upon finite element technologies. Unlike a priori convergence rate estimates, which establish general asymptotic properties of an approximation method, these a posteriori estimates are applied to a particular solution and allow us to directly verify its accuracy. For various elliptic and parabolic problems, estimates of this type have been derived in \cite{Re2000,Re2002,Re2007} and many subsequent publications. The reader can find a detailed exposition of the corresponding theory in the monographs \cite{Re2008} and \cite{RS2020}. In this paper, we derive estimates of this type for the parabolic obstacle problem. They depend only on the approximate solution (which is known) and on the data of the problem. We emphasise that they also do not require knowledge of the exact coincidence set associated with the exact solution. The obtained error majorant is non-negative and vanishes if and only if the approximation coincides with the exact minimizer. It provides a guaranteed bound of the error expressed in terms of a natural measure of the distance between the exact and approximate solutions on the finite time interval $[0,T]$. The outline of the paper is as follows. The first part of Section 2 contains basic notation and the mathematical formulation of the problem. The second part presents the main result (Theorem~1) and discusses it. In Section~3 we discuss some applications of the error majorant. First, we show that it yields simple bounds for modeling errors generated by simplification of problem data. The corresponding estimate is directly computable and does not require information about the exact solution of the original (complicated) problem. The second part of the section is devoted to error estimates for time-incremental approximations, which are often used in numerical analysis of evolution problems. The third part of Section 3 concerns estimates of deviations from the exact solution to the parabolic thin obstacle problem. Finally, in Section~4, we consider several examples that demonstrate how the estimates work in practice. \section{Estimates of deviations from the exact solution to the parabolic obstacle problem} \subsection{Problem setting} We consider the classical parabolic obstacle problem, whose elliptic part is the Laplace operator. For simplicity, we restrict our consideration to the case of time-independent obstacles. Let $\Omega$ be an open, connected, and bounded domain in $\mathbb{R}^n$ with Lipschitz continuous boundary $\partial\Omega$, $Q_T=\Omega \times ]0,T[$. We consider an obstacle function $\phi$ satisfying $$ \phi \in H^2(\Omega) \qquad \text{and}\qquad \phi \leqslant 0 \quad \text{a.e. on}\ \, \partial\Omega. $$ The class of admissible functions is defined as follows: $$ \mathbb{K}=\mathbb{K}(\phi):=\{ w\in L^2((0,T),H^1_0(\Omega)):\ w_t\in L^2((0,T),H^{-1}(\Omega)), \ w \geqslant \phi(x) \ \; \text{a.e. on}\ \, Q_T\}. $$ By a standard interpolation argument, the above assumptions imply $w\in C^0((0,T),L^2(\Omega))$. Note that $\mathbb{K}$ is non-empty due to the compatibility condition $\phi \leqslant 0$ on the lateral boundary $\partial\Omega \times (0,T)$ (which has to be understood in the sense of traces). Henceforth, we assume that $f\in L^2(Q_T)$ and $$ u_0\in H^1_0(\Omega) \quad \text{with}\quad u_0\geqslant \phi\quad \text{a.e. on}\ \Omega.
$$ We consider the following variational {\it Problem ${\mathcal P}$.}\; Find a function $u\in \mathbb{K}$ such that for almost all $t$ and $\forall w\in \mathbb{K}$ we have \begin{eqnarray} &&\int\limits_{Q_T}u_t (w-u)dxdt+\int\limits_{Q_T}\nabla u \cdot \nabla (w-u)dxdt \geqslant \int\limits_{Q_T}f(w-u)dxdt, \label{eq:eq1}\\ && u(x,0)=u_0(x), \quad \forall x\in \Omega. \label{eq:bc2} \end{eqnarray} Here and later on, $w_t$ (resp. $\frac{\partial w}{\partial t}$) denotes the partial derivative with respect to time and $\nabla w$ denotes the spatial gradient vector. It is known (see, e.g., \cite{LS1967}, \cite{B1972}, \cite{B1972a}, and \cite{DLi1976}) that under the above assumptions the problem (\ref{eq:eq1})--(\ref{eq:bc2}) is uniquely solvable. By $Q^+_{T}(u):=\{(x,t)\in Q_T \mid u(x,t)>\phi\}$ we denote the subset of $Q_T$ where the obstacle is not active. In this set, \begin{equation} \label{obs1} f+{\rm div} p-u_t= 0,\qquad p=\nabla u. \end{equation} In the remaining (coincidence) set $Q^\phi_{T}(u):=\{(x,t)\in Q_T \mid u(x,t)=\phi\}$ it holds \begin{equation*} f+{\rm div} p-u_t\leq 0. \end{equation*} \begin{remark} It is also well known (see, e.g., \cite{ASU2000}, \cite{CPS2004}, or \cite{A2018}) that the best possible regularity of a solution, $u$, to a parabolic obstacle problem is $u\in W^{2,1,\infty}_{loc}(Q_T)$, even when the source term, the boundary data, the obstacle function, and the domain boundary are $C^{\infty}$. \end{remark} \subsection{Estimates of the distance to the exact solution} Let $v\in \mathbb{K}$ be a function viewed as an approximation of the exact solution $u$, so that $e:=v-u$ is the error and $Q^+_{T}(v):=\{(x,t)\in Q_T \mid v(x,t)>\phi\}$ and $Q^\phi_{T}(v):=\{(x,t)\in Q_T \mid v(x,t)=\phi\}$ denote the sets associated with $v$. Our goal is to deduce a computable majorant of $e$, which uses only known information (i.e., the function $v$, the sets $Q^+_{T}(v)$ and $Q^\phi_{T}(v)$, $u_0$, $\Omega$, and other data of the Problem ${\mathcal P}$). The error is measured in terms of the combined error norm \begin{equation} |[e]|^2_{\alpha,Q_T}:=\|e(\cdot,T)\|^2_{\Omega}+\left(2-\frac{1}{\alpha}\right)\|\nabla e\|^2_{Q_T},\quad \alpha \geqslant \dfrac{1}{2}. \label{eq:3.1} \end{equation} For this purpose we combine the methods earlier developed for stationary problems with obstacles (see \cite{Re2000}, \cite{Re2007}, \cite{Re2008}, \cite{AR2018}, \cite{AR2020}) and for parabolic equations (see \cite{Re2002}, \cite{MR2016}, \cite{LMR2019}). In the above cited publications, the reader can also find numerical examples confirming the efficiency of estimates of this type for problems with obstacles and for finite element and IgA approximations of evolutionary problems. \begin{theorem} \label{Th1} For any $v\in {\mathbb K}$ and any vector valued function $\tau$ such that \begin{equation*} \tau \in H_{\operatorname{div} }(Q_T):=\left\{\tau (x,t)\in L^2(Q_T, \mathbb{R}^n) \mid \operatorname{div}\tau \in L^2(\Omega) \ \text{for a.e.}\ t\in (0,T) \right\} \end{equation*} it holds \begin{equation} |[e]|^2_{\alpha,Q_T} \leqslant \|e(\cdot,0)\|^2_{\Omega} +\alpha \left( \|\tau -\nabla v\|_{Q_T}+C_F\|\mathcal{F}_f(v,\tau)\|_{Q_T}\right)^2, \label{eq:mainest} \end{equation} where $$ \mathcal{F}_f(v, \tau ):=\left\{\begin{aligned} &\mathcal{R}_f(v, \tau ), \ &&\text{if}\ \, (x,t) \in Q^+_{T}(v),\\ &\{\mathcal{R}_f(v, \tau)\}_{\oplus}, \ &&\text{if}\ \, (x,t) \in Q^\phi_T(v), \end{aligned} \right. $$ and $ \mathcal{R}_f(v,\tau ):=f+\operatorname{div}\tau - v_t$.
The right hand side of (\ref{eq:mainest}) vanishes if and only if $v=u$ and $\tau=\nabla u$. \end{theorem} \begin{proof} From (\ref{eq:eq1}) it follows that \begin{align*} \int\limits_{Q_T} (u_t-v_t) (w-u)&dxdt+\int\limits_{Q_T}\nabla (u-v) \cdot \nabla (w-u)dxdt \\ &\geqslant\int\limits_{Q_T} f(w-u)dxdt -\int\limits_{Q_T} v_t (w-u)dxdt -\int\limits_{Q_T} \nabla v \cdot \nabla (w-u)dxdt, \end{align*} for any $w\in \mathbb{K}$. In particular, for $w=v$ we have \begin{equation} \label{eq:3.3} \begin{aligned} \frac{1}{2} \int\limits_{Q_T} \frac{\partial (u-v)^2}{\partial t}dxdt&+\int\limits_{Q_T}|\nabla (u-v)|^2dxdt \leqslant \int\limits_{Q_T}f(u-v)dxdt \\&-\int\limits_{Q_T} v_t (u-v)dxdt -\int\limits_{Q_T}\nabla v \cdot \nabla (u-v)dxdt. \end{aligned} \end{equation} Since $\tau \in H_{\operatorname{div} }(Q_T)$ and $u-v\in H^1_0(\Omega)$, the identity \begin{equation} \label{eq:3i} \int\limits_{\Omega} \tau \cdot \nabla (u-v)dx=-\int\limits_{\Omega} (u-v) \operatorname{div} \tau dx \end{equation} holds for almost all $t\in (0,T)$. Notice that $$ \int\limits_{Q_T} \frac{\partial (u-v)^2}{\partial t}dxdt=\int\limits_{\Omega} (u-v)^2 dx \bigg|^T_0. $$ Hence using the definitions of $\mathcal{R}_f(v,\tau )$ and $\mathcal{F}_f(v,\tau )$, we write (\ref{eq:3.3}) in the form \begin{equation} \label{eq:3.4} \begin{aligned} \frac{1}{2} \|(u-v)(\cdot,T)\|^2_{\Omega} &-\frac{1}{2} \|(u-v)(\cdot,0)\|^2_{\Omega} +\|\nabla (u-v)\|^2_{Q_T}\\ &\leqslant \int\limits_{Q_T} \left( \mathcal{R}_f(v,\tau )\right)(u-v)dxdt + \int\limits_{Q_T}\left( \tau - \nabla v\right)\nabla (u-v)dxdt\\ &\leqslant \int\limits_{Q_T} \mathcal{F}_f(v,\tau )(u-v)dxdt + \int\limits_{Q_T}\left( \tau - \nabla v \right)\nabla (u-v)dxdt. \end{aligned} \end{equation} We set $e:=u-v$. Estimating the first term on the right-hand side of (\ref{eq:3.4}) by the Friedrichs type inequality and the second term there by the H{\"o}lder inequality, we arrive at \begin{equation} \label{eq:3.5} \begin{aligned} \frac{1}{2}\|e(\cdot, T)\|^2_{\Omega}+\|\nabla e\|^2_{Q_T} &\leqslant \frac{1}{2}\|e(\cdot, 0)\|^2_{\Omega} +\|\tau - \nabla v\|_{Q_T} \|\nabla e\|_{Q_T}\\ &+C_F \|\mathcal{F}_f(v,\tau ) \|_{Q_T} \|\nabla {e}\|_{Q_T}. \end{aligned} \end{equation} Since \begin{equation} \label{eq:3.5a} \begin{aligned} \bigg( \|\tau - \nabla v\|_{Q_T}&+C_F \|\mathcal{F}_f(v,\tau ) \|_{Q_T} \bigg) \|\nabla {e}\|_{Q_T}\\ & \leqslant \frac{\alpha}{2} \bigg( \|\tau - \nabla v\|_{Q_T}+C_F \|\mathcal{F}_f(v,\tau ) \|_{Q_T} \bigg)^2+\frac{1}{2\alpha} \| \nabla {e}\|^2_{Q_T}, \end{aligned} \end{equation} the inequality (\ref{eq:3.5}) yields the estimate \begin{equation} \label{eq:3.6} \begin{aligned} \frac{1}{2}\|e(\cdot, T)\|^2_{\Omega}+ \left(1-\frac{1}{2\alpha}\right) \| \nabla {e}\|^2_{Q_T} &\leqslant \frac{1}{2}\|e(\cdot, 0)\|^2_{\Omega} \\ &+\frac{\alpha}{2} \bigg(\|\nabla v-\tau \|_{Q_T} +C_F\|\mathcal{F}_f(v,\tau )\|_{Q_T}\bigg)^2. \end{aligned} \end{equation} Now (\ref{eq:mainest}) follows from (\ref{eq:3.6}) after multiplying both sides by $2$. Assume that the right hand side of (\ref{eq:mainest}) vanishes. Then $\tau=\nabla v$, $v(x,0)=u_0(x)$ and \begin{align} \mathcal{R}_f(v,\tau )&=0\quad{\rm in}\;Q^+_T(v), \notag\\ \label{eq:Rf2} \mathcal{R}_f(v,\tau )&\leq 0\quad{\rm in}\;Q^\phi_T(v).
\end{align} We use these relations to estimate the integral \begin{multline} \label{eq:unique} \int\limits_{Q_T}\left( v_t (w-v)+\nabla v \cdot \nabla (w-v)-f(w-v) \right)dxdt\\=\int\limits_{Q^+_T(v)}\left( v_t (w-v)+\tau \cdot \nabla (w-v)-f(w-v) \right)dxdt\\ + \int\limits_{Q^\phi_T(v)}\left( v_t (w-v)+\tau \cdot \nabla (w-v)-f(w-v) \right)dxdt\\ =\int\limits_{Q^\phi_T(v)} ({\rm div}\tau +f-v_t)(v-w) \, dxdt. \end{multline} In view of (\ref{eq:Rf2}), both factors of the integrand in the right-hand side of (\ref{eq:unique}) are nonpositive for any $w\in {\mathbb K}$ (recall that $v=\phi$ and $w\geqslant\phi$ on $Q^\phi_T(v)$), so the integral is nonnegative. Hence we see that $v$ satisfies (\ref{eq:eq1}). Since the solution is unique, we conclude that $v$ coincides with $u$ and $\tau$ coincides with the exact flux $p=\nabla u$. \end{proof} It is worth adding some comments to Theorem \ref{Th1}. \begin{remark} The left hand side of (\ref{eq:mainest}) is a natural measure of the distance between $v$ and $u$, whose particular form depends on the parameter $\alpha$. The right hand side is directly computable. Since $\alpha$ appears as a multiplier in the right hand side, we should not select it too large. For $\alpha=1$, we obtain a useful estimate \begin{equation} \label{eq:spec1} \|e(\cdot,T)\|^2_{\Omega}+\|\nabla e\|^2_{Q_T}\leq \|e(\cdot,0)\|^2_{\Omega} + \left(\|\tau -\nabla v\|_{Q_T}+C_F\|\mathcal{F}_f(v,\tau)\|_{Q_T}\right)^2. \end{equation} Another simple estimate corresponds to the limit case $\alpha=\frac12$: \begin{equation} \label{eq:spec2} \|e(\cdot,T)\|^2_{\Omega}\,\leq \|e(\cdot,0)\|^2_{\Omega} + \frac{1}{2}\left(\|\tau -\nabla v\|_{Q_T}+C_F\|\mathcal{F}_f(v,\tau)\|_{Q_T}\right)^2. \end{equation} We stress that the estimates (\ref{eq:mainest}), (\ref{eq:spec1}), and (\ref{eq:spec2}) are valid for {\em any} function $v\in {\mathbb K}$ regardless of the method by which it is constructed. This is the principal difference between functional type a posteriori estimates and other a posteriori estimates, which usually impose special conditions on approximations (e.g., Galerkin orthogonality). In the next section, we use this universality feature to deduce simple bounds of errors caused by simplifications of the source term and initial condition. \end{remark} \begin{remark} A particular form of (\ref{eq:mainest}) can be viewed as a generalisation of the well known hypercircle estimate to the case of the parabolic obstacle problem. Define the set \begin{eqnarray*} Q_{f,\phi}:=\{(v,\tau)\in \mathbb{K}\times L^2(Q_T,{\mathbb R}^n):\ \mathcal{R}_f(v, \tau )=0\;\text{in}\; Q^+_{T}(v), \;\mathcal{R}_f(v, \tau )\leq 0\;\text{in}\; Q^\phi_{T}(v)\}. \end{eqnarray*} We have \begin{equation*} |[e]|^2_{\alpha,Q_T} \leqslant \|e(\cdot,0)\|^2_{\Omega} +\alpha \|\tau -\nabla v\|^2_{Q_T} \end{equation*} for any pair of functions $(v,\tau)\in Q_{f,\phi}$. Notice that unlike the estimates known for linear problems (which contain only the restriction ${\rm div} \tau+f=0$ for the dual variable), this estimate imposes conditions on both functions $v$ and $\tau$. This effect is generated by the nonlinearity associated with the existence of a coincidence set and a free boundary. \end{remark} \section{Special cases} \subsection{Errors generated by simplification of the model} Simplification (coarsening, defeaturing) of a mathematical model may be very useful if we can eliminate insignificant details without essential loss of accuracy. Simplification methods for elliptic type problems are well studied (the reader can find a complete theory in \cite{RS2020}).
Here we briefly discuss these questions in the context of Problem ${\mathcal P}$. Consider Problem $\widetilde{\mathcal P}$, which uses a function $\widetilde{f}(x,t)$ instead of $f(x,t)$ and $\widetilde{u}_0(x)$ instead of $u_0(x)$. Let $\widetilde{u}(x,t)$ and $\widetilde{p}=\nabla \widetilde{u}$ be the corresponding exact solution and exact flux. Substituting these functions into (\ref{eq:mainest}) (with $v=\widetilde{u}$ and $\tau=\widetilde{p}$), we obtain a simple estimate \begin{equation*} |[u-\widetilde{u}]|^2_{\alpha, Q_T} \leqslant \|u_0-\widetilde{u}_0\|^2_{\Omega}+\alpha C^2_F\|\mathcal{F}_f(\widetilde{u}, \widetilde{p})\|^2_{Q_T}. \end{equation*} If $(x,t) \in Q^+_T(\widetilde{u})$ then $$ \mathcal{R}_f(\widetilde{u}, \widetilde{p})=f+ \operatorname{div}\,\widetilde{p}-\partial_t \widetilde{u}=f-\widetilde{f}. $$ If $(x,t)\in Q^\phi_T(\widetilde{u})$, then $$ \mathcal{R}_{\widetilde f}(\widetilde{u}, \widetilde{p})=\widetilde{f}+\operatorname{div}\,\widetilde{p}-\partial_t\widetilde{u} \leqslant 0 $$ and we find that $$ \max \{0, f+\operatorname{div}\,\widetilde{p}-\partial_t \widetilde{u}\} =\max \{0,f-\widetilde{f}+\mathcal{R}_{\widetilde f}(\widetilde{u}, \widetilde{p})\} \leqslant \max \{0, f-\widetilde{f}\}=\{f-\widetilde{f}\}_{\oplus}. $$ Hence we arrive at the estimate \begin{equation} \label{eq:simple} |[u-\widetilde{u}]|^2_{\alpha, Q_T} \leqslant \|u_0-\widetilde{u}_0\|^2_{\Omega}+\alpha C^2_F \|g(x,t)\|^2_{Q_T}, \end{equation} where $$ g(x,t)=\left\{ \begin{aligned} &f-\widetilde{f}, && \text{if}\ (x,t)\in Q^+_T(\widetilde{u}),\\ &\{f-\widetilde{f}\}_{\oplus}, && \text{if}\ (x,t)\in Q^\phi_T(\widetilde{u}). \end{aligned} \right. $$ Since $\|g(x,t)\|_{Q_T} \leqslant \|f-\widetilde{f}\|_{Q_T}$, we have a simplified estimate \begin{equation} \label{eq:sim} |[u-\widetilde{u}]|^2_{\alpha,Q_T} \leqslant \|u_0-\widetilde{u}_0\|^2_{\Omega}+\alpha C^2_F \|f-\widetilde{f}\|^2_{Q_T}. \end{equation} In general, the estimate (\ref{eq:sim}) is coarser than (\ref{eq:simple}), but it does not require knowledge of the coincidence set $Q^\phi_T(\widetilde{u})$. \subsection{Errors of time--incremental approximations} Now we consider a special class of approximations, which are typically used in time-incremental methods for various evolutionary problems. Let the interval $(0,T)$ be split into a collection of subintervals \begin{equation*} I_k:=(t_k,t_{k+1}), \quad t_{k+1}-t_k=\Delta_k>0, \quad t_0=0, \quad t_N=T, \end{equation*} and let the approximation $v(x,t)$ have the form \begin{equation} \label{eq:increment} v(x,t)=v_k(x)+\frac{v_{k+1}(x)-v_k(x)}{\Delta_k}(t-t_k) \end{equation} for $(x,t)\in Q_k:=\Omega \times I_k$. Here $v_k(x)\in H^1_0(\Omega)$, $v_k(x) \geqslant \phi$ are the approximations computed by a time-incremental numerical method. Notice that the function $v$ so defined belongs to ${\mathbb K}$. Indeed, \begin{eqnarray*} v(x,t)-\phi(x)=v_k(x)\frac{t_{k+1}-t}{\Delta_k}+ v_{k+1}(x)\frac{t-t_k}{\Delta_k}-\phi(x)\geq 0. \end{eqnarray*} This property also holds in more complicated cases where $\phi$ depends on time (e.g., if $\phi$ is a linear function of $t$). However, for simplicity, in this paper we consider only the case $\phi=\phi(x)$. We define the sets $$ \Omega^\phi_k(v_k):=\{x\in \Omega : v_k(x)=\phi\}\qquad\text{and}\quad \Omega^+_k(v_k):=\{x\in \Omega : v_k(x)>\phi\}. $$ Notice that for the function $v$ defined by (\ref{eq:increment}), the set $Q^\phi_k(v):=Q^\phi_T(v)\cap Q_k$ is given by \begin{equation*} Q^\phi_k(v)=\Omega^{\phi}_{k+1/2}\times I_k,
\end{equation*} where $\Omega^{\phi}_{k+1/2}:=\Omega_k^\phi \cap \Omega^\phi_{k+1}$. For $(x,t)\in Q_k$ we have \begin{equation*} v_t =\frac{1}{\Delta_k}(v_{k+1}-v_k), \qquad \nabla v=\nabla v_k+\frac{\nabla(v_{k+1}-v_k)}{\Delta_k}(t-t_k). \end{equation*} Below we deduce two different a posteriori estimates for semi-discrete approximations. The simplest estimate is valid for the case where $\Delta_k$ is so small that we can neglect changes of the source term $f(x,t)$ and replace it by the averaged function \begin{equation} \label{eq:4.2} \widetilde{f}(x,t)=\mean{f}_{I_k}(x):=\frac{1}{\Delta_k}\int\limits_{I_k}f(x,t)dt. \end{equation} In this situation, it is natural to select the simplest approximation for the flux as well: \begin{equation} \label{eq:4.3} \tau (x,t)=\tau_k(x) \quad \text{for}\quad \;(x,t)\in Q_k. \end{equation} A more advanced version uses an affine approximation of $f(x,t)$: \begin{equation} \label{eq:4.4} \widetilde{f}(x,t)=f_k(x)+\frac{f_{k+1}(x)-f_k(x)}{\Delta_k}(t-t_k)+\zeta_k(x) \quad \text{for}\quad (x,t)\in Q_k, \end{equation} where $\zeta_k(x)=\mean{f}_{I_k}(x)-\frac12(f_k(x)+f_{k+1}(x))$. It is selected such that $$ \int\limits_{I_k}\widetilde f dt= \int\limits_{I_k}f(x,t) dt=\Delta_k\mean{f}_{I_k}(x). $$ A similar time--incremental form can be used to approximate the flux $\tau(x,t)$ in $Q_k$. Let $\tau_k(x)$, $k=0,1,2,...$ be approximations related to $t_k$. We set \begin{equation} \label{eq:4.5} \tau (x,t)=\tau_k(x)+\frac{\tau_{k+1}(x)-\tau_k(x)}{\Delta_k} (t-t_k). \end{equation} Let $\widetilde{u}$ be the exact solution of the problem $\widetilde{\mathcal P}$ (where $f$ is replaced by $\widetilde{f}$). We have \begin{equation*} |[u-v]|_{\alpha,Q_T} \leqslant |[u-\widetilde{u}]|_{\alpha,Q_T}+|[\widetilde{u}-v]|_{\alpha,Q_T}. \end{equation*} Here the first term on the right-hand side is estimated by (\ref{eq:sim}), in which only the term with $f-\widetilde{f}$ remains (because in our case $\widetilde u_0=u_0$). Hence we need to estimate the last term only. For this purpose, we use (\ref{eq:3.4}) and obtain \begin{equation} \label{eq:4.6} \begin{aligned} 2\|\nabla (v-\widetilde{u})\|^2_{Q_T}&+\|(v-\widetilde{u})(\cdot,T)\|^2_{\Omega} \leqslant \|v_0-u_0\|^2_{\Omega}\\ &+2\sum\limits_{k=0}^{N-1} \int\limits_{Q_k}\left[(\tau -\nabla v)\nabla (\widetilde{u}-v)+\mathcal{F}_{\widetilde{f}}(v,\tau )(\widetilde{u}-v)\right]dxdt. \end{aligned} \end{equation} Notice that $$ \int\limits_{Q_k}\mathcal{F}_{\widetilde{f}}(v,\tau )(\widetilde{u}-v)dxdt=\int\limits_{I_k}\int\limits_{\Omega}\mathcal{F}^k_{\widetilde{f}}(v,\tau )(\widetilde{u}-v)dxdt, $$ where $$ \mathcal{F}^k_{\widetilde{f}}(v,\tau ):=\left\{ \begin{aligned} &\mathcal{R}_{\widetilde{f}}(v,\tau ) &&\text{if}\quad x\in \Omega \setminus \Omega^\phi_{k+1/2},\\ \{&\mathcal{R}_{\widetilde{f}}(v,\tau )\}_{\oplus} &&\text{if}\quad x\in \Omega^\phi_{k+1/2}. \end{aligned} \right. $$ Consider first the simplest estimate that follows from (\ref{eq:4.6}) with $\widetilde{f}$ and $\tau$ selected in accordance with (\ref{eq:4.2}) and (\ref{eq:4.3}), respectively. In this case, $$ \mathcal{F}^k_{\widetilde{f}}(v,\tau )=\left\{ \begin{aligned} &\mathcal{R}^k_{\widetilde{f}}(v_k,v_{k+1}, \tau_k) &&\text{if}\quad x\in \Omega \setminus \Omega^\phi_{k+1/2},\\ \{&\mathcal{R}^k_{\widetilde{f}}(v_k,v_{k+1}, \tau_k)\}_{\oplus} &&\text{if}\quad x\in \Omega^\phi_{k+1/2}, \end{aligned} \right.
$$ where \begin{equation*} \mathcal{R}^k_{\widetilde{f}}(v_k,v_{k+1}, \tau_k):=\widetilde{f}(x)+\operatorname{div}\,\tau_k(x)-\frac{v_{k+1}-v_k}{\Delta_k} \end{equation*} depends on $x$ only. Therefore, \begin{equation} \label{eq:4.7} \begin{aligned} \int\limits_{Q_k}\mathcal{F}^k_{\widetilde{f}}(v,\tau_k )(\widetilde{u}-v)dxdt & \leqslant C_F\|\mathcal{F}^k_{\widetilde{f}}(v,\tau_k )\|_{\Omega} \int\limits_{I_k}\|\nabla (\widetilde{u}-v)\|_{\Omega}dt\\ &\leqslant C_F \Delta^{1/2}_k \|\mathcal{F}^k_{\widetilde{f}}(v,\tau_k )\|_{\Omega} \|\nabla (\widetilde{u}-v)\|_{Q_k}. \end{aligned} \end{equation} Next, \begin{equation*} \int\limits_{Q_k}(\tau -\nabla v)\cdot \nabla(\widetilde{u}-v)dxdt \leqslant \|\tau -\nabla v\|_{Q_k}\|\nabla(\widetilde{u}-v)\|_{Q_k}, \end{equation*} where \begin{equation*} \nabla v-\tau=\frac{\nabla (v_{k+1}-v_k)}{\Delta_k}(t-t_k)+\nabla v_k -\tau_k. \end{equation*} Let $$ D_1^k(v_k,v_{k+1}):=\frac{1}{12}\|\nabla (v_{k+1}-v_k)\|^2_{\Omega} $$ and $$ D_2^k(v_k,v_{k+1},\tau_k):=\|\frac{1}{2}\nabla (v_{k+1}+v_k)-\tau_k\|^2_{\Omega}. $$ Then, \begin{equation} \label{eq:4.8} \int\limits_{Q_k}|\tau -\nabla v|^2dxdt=\int\limits_{\Omega}\int\limits_{I_k} |\tau -\nabla v|^2dtdx=\Delta_k\Bigl(D_1^k(v_k,v_{k+1})+D_2^k(v_k,v_{k+1}, \tau_k)\Bigr). \end{equation} By (\ref{eq:4.6}), (\ref{eq:4.7}), and (\ref{eq:4.8}), we obtain \begin{align*} 2\|\nabla (\widetilde{u}-v)\|^2_{Q_T} &+ \|(\widetilde{u}-v)(\cdot, T)\|^2_{\Omega} \leqslant \|u_0-v_0\|^2_{\Omega}\\&+ 2\sum\limits_{k=0}^{N-1}\Delta^{1/2}_k \left[ (D_1^k+D_2^k)^{1/2}+C_F\|\mathcal{F}^k_{\widetilde{f}}(v, \tau )\|_{\Omega}\right]\|\nabla(\widetilde{u}-v)\|_{Q_k}\\ & \leqslant \|u_0-v_0\|^2_{\Omega}\\ &+2\|\nabla (\widetilde{u}-v)\|_{Q_T}\Bigl(\sum\limits_{k=0}^{N-1}\Delta_k \left[(D_1^k+D_2^k)^{1/2}+{C_F}\|\mathcal{F}^k_{\widetilde{f}}(v,\tau )\|_{\Omega}\right]^2\Bigr)^{1/2}. \end{align*} After using Young's inequality, we arrive at the estimate \begin{equation} \label{eq:4.9} |[\widetilde{u}-v]|^2_{\alpha,Q_T} \leqslant \|u_0-v_0\|^2_{\Omega}+\alpha \sum\limits_{k=0}^{N-1} \Delta_k \left[(D_1^k+D_2^k)^{1/2}+C_F \|\mathcal{F}^k_{\widetilde{f}}(v, \tau )\|_{\Omega}\right]^2. \end{equation} Now (\ref{eq:sim}) and (\ref{eq:4.9}) imply the desired error majorant \begin{equation} \label{eq:4.10} \begin{aligned} |[u-v]|^2_{\alpha,Q_T} &\leqslant \|u_0-v_0\|^2_{\Omega}+\alpha C^2_F\|f-\widetilde{f}\|^2_{Q_T}\\&+\alpha \sum\limits_{k=0}^{N-1} \Delta_k \left[(D_1^k+D_2^k)^{1/2}+C_F \|\mathcal{F}^k_{\widetilde{f}}(v, \tau )\|_{\Omega}\right]^2. \end{aligned} \end{equation} The first two terms in the right hand side of (\ref{eq:4.10}) reflect errors generated by simplification of the initial data and the source term. The last term reflects the errors caused by semi-discrete approximations. If the approximation is sufficiently regular and has no jumps, then the term $D_1^k(v_k,v_{k+1})$ is of the order $\Delta^2_k$, i.e. it is a minor term. The main term is $D_2^k(v_k, v_{k+1},\tau_k)$. It penalises inaccuracy in the relation $p=\nabla u$, which must hold for the exact solution and its flux. This term is small if the mean gradient $\nabla \left(\frac{v_{k+1}+v_k}{2}\right)$ is close to the flux approximation $\tau_k$ in $Q_k$. The functions $\tau_k$ can be viewed as images of the quantity $\frac{1}{\Delta_k}\int\limits_{I_k}p(x,t)dt$ associated with the true flux $p=\nabla u$. They can be extracted from the numerical solution $v$.
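To illustrate how the majorant (\ref{eq:4.10}) might be evaluated in practice, we sketch the assembly of its incremental sum from nodal data in one space dimension with a uniform time step (Python with NumPy; the code and all names in it are ours, and $\operatorname{div}\tau_k$ is reconstructed by finite differences, so this is an illustration under simplifying assumptions rather than a reference implementation):
\begin{verbatim}
import numpy as np

def grad(w, h):
    # central differences in the interior, one-sided at the ends
    return np.gradient(w, h)

def incremental_majorant(v, tau, f_mean, phi, h, dt, C_F):
    """Sum over k of dt*[(D1_k + D2_k)^(1/2) + C_F*||F_k||]^2, cf. (4.9)-(4.10).

    v, tau, f_mean : lists of nodal arrays v_k, tau_k, <f>_{I_k};
    phi            : nodal array of the obstacle;  h, dt : mesh sizes.
    """
    total = 0.0
    for k in range(len(v) - 1):
        D1 = np.sum(grad(v[k+1] - v[k], h)**2)*h/12.0
        D2 = np.sum((0.5*grad(v[k] + v[k+1], h) - tau[k])**2)*h
        # residual R_k = <f> + div tau_k - (v_{k+1} - v_k)/dt
        R = f_mean[k] + grad(tau[k], h) - (v[k+1] - v[k])/dt
        # positive part on the discrete coincidence set of v_k and v_{k+1}
        coinc = (v[k] <= phi + 1e-12) & (v[k+1] <= phi + 1e-12)
        F = np.where(coinc, np.maximum(R, 0.0), R)
        total += dt*(np.sqrt(D1 + D2) + C_F*np.sqrt(np.sum(F**2)*h))**2
    return total
\end{verbatim}
Minimising the returned quantity with respect to the nodal values of $\tau_k$ (a quadratic problem once the coincidence sets are fixed) is one possible way to tighten the bound, in line with the remark below.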
For example, let $\Omega$ be a polygonal domain discretized by a simplicial mesh $\mathcal{F}_h$ and let $v^h_k$, $k=0,1,\dots,N$, denote the respective numerical solutions computed using the finite element method for each step of the time-incremental sequence. Well known gradient averaging methods generate ``averaged'' fluxes \begin{equation*} \boldsymbol{\sigma}^h_k(x)=G_h\nabla v^h_k \in H_{\operatorname{div}}(\Omega), \end{equation*} where $G_h$ is an averaging operator. Such an operator can be based on a simple patch-averaging procedure or use more complicated procedures of global averaging (see, for example, \cite{CaBa2002} and \cite{BaCa2004}). Then, we can set \begin{equation*} \tau_k=\frac{\boldsymbol{\sigma}^h_k+\boldsymbol{\sigma}^h_{k+1}}{2}. \end{equation*} Moreover, we are not limited to such a choice of $\tau_k$, $k=1,2,\dots, N$, which may be regarded as an initial guess only: the majorant (\ref{eq:4.10}) allows us to modify these functions in order to minimise its right hand side. Now, we consider a more advanced error bound, which follows from (\ref{eq:4.4}), (\ref{eq:4.5}), and (\ref{eq:4.6}). In this case $$ \mathcal{R}^k_{\widetilde{f}}(v,\tau )= \mathcal{R}^k_{f_k}(v_k,v_{k+1},\tau_k)+\frac{\mathcal{R}^k_{f_{k+1}}-\mathcal{R}^k_{f_k}}{\Delta_k}(t-t_k), $$ where \begin{align*} \mathcal{R}^k_{f_k}(v_k,v_{k+1}, \tau_k)&=f_k+\operatorname{div}\,\tau_k-\frac{v_{k+1}-v_k}{\Delta_k},\\ \mathcal{R}^k_{f_{k+1}}(v_k,v_{k+1}, \tau_{k+1})&=f_{k+1}+\operatorname{div}\,\tau_{k+1}-\frac{v_{k+1}-v_k}{\Delta_k}. \end{align*} We also define the sets \begin{align*} \omega^k_1&:=\left\{x\in \Omega^{\phi}_{k+1/2} : \mathcal{R}^k_{f_k}(v_k, v_{k+1},\tau_k) \leqslant 0 \right\},\\ \omega^k_2&:=\left\{x\in \Omega^{\phi}_{k+1/2} : \mathcal{R}^k_{f_{k+1}}(v_k, v_{k+1},\tau_{k+1}) \leqslant 0 \right\},\\ \omega^k&:=\omega^k_1 \cap \omega^k_2. \end{align*} The functions $\mathcal{R}^k_{f_k}(v_k, v_{k+1},\tau_k)$ and $\mathcal{R}^k_{f_{k+1}}(v_k, v_{k+1},\tau_{k+1})$ have a clear meaning: they represent residuals of the differential equation (in incremental form, with the time derivative replaced by a finite difference) associated with the endpoints $t_k$ and $t_{k+1}$ of the interval $I_k$. It is not difficult to see that \begin{equation} \label{eq:4.11} \begin{aligned} \int\limits_{I_k}\int\limits_{\Omega}\mathcal{F}^k_{\widetilde{f}}(v,\tau )(\widetilde{u}-v)dxdt &\leqslant \int\limits_{I_k}\int\limits_{\Omega \setminus \omega^k} \mathcal{F}^k_{\widetilde{f}}(v,\tau )(\widetilde{u}-v)dxdt \\ &\leqslant C_F \|\mathcal{F}^k_{\widetilde{f}}(v,\tau )\|_{I_k \times (\Omega \setminus \omega^k)}\|\nabla (\widetilde{u}-v)\|_{Q_k}.
\end{aligned} \end{equation} Here \begin{equation}\label{eq:4.12} \begin{aligned} \|\mathcal{F}^k_{\widetilde{f}}(v,\tau )\|_{I_k \times (\Omega \setminus \omega^k)}^2&=\int\limits_{\Omega \setminus \omega^k}\int\limits_{I_k}|\mathcal{F}^k_{\widetilde{f}}(v,\tau )|^2dtdx \\ &= \int\limits_{\Omega \setminus \omega^k}\int\limits_{t_k}^{t_{k+1}}\left( \mathcal{R}^k_{f_k}+\frac{\mathcal{R}^k_{f_{k+1}}-\mathcal{R}^k_{f_k}}{\Delta_k}(t-t_k)\right)^2dtdx\\ &=\frac{\Delta_k}{4}\int\limits_{\Omega \setminus \omega^k} \left[\left(\mathcal{R}^k_{f_{k+1}}+\mathcal{R}^k_{f_k}\right)^2 +\frac{1}{3}\left(\mathcal{R}^k_{f_{k+1}}-\mathcal{R}^k_{f_k}\right)^2 \right]dx\\ &=\frac{\Delta_k}{4}\left[ \|\mathcal{R}^k_{f_{k+1}}+\mathcal{R}^k_{f_k}\|_{\Omega \setminus \omega^k}^2+\frac{1}{3}\|\mathcal{R}^k_{f_{k+1}}-\mathcal{R}^k_{f_k}\|^2_{\Omega \setminus \omega^k} \right]. \end{aligned} \end{equation} Consider another term. We have \begin{equation*} \int\limits_{Q_k}|\tau -\nabla v|^2dxdt=\frac{\Delta_k}{4} \left[ \|D^{k+1}+D^k\|^2_{\Omega}+\frac{1}{3}\|D^{k+1}-D^k\|^2_{\Omega} \right], \end{equation*} where $D^k:=\tau _k-\nabla v_k$. By (\ref{eq:4.6}), (\ref{eq:4.11}), and (\ref{eq:4.12}), we obtain the following estimate: \begin{equation}\label{eq:4.13} \begin{aligned} |[u-v]|^2_{\alpha, Q_T} &\leqslant \|u_0-v_0\|^2_{\Omega}+\alpha C^2_F \|\widetilde{f}-f\|^2_{Q_T}\\ &+\alpha \sum\limits_{k=0}^{N-1} \frac{\Delta_k}{4} \bigg( \Big[ \|D^{k+1}+D^k\|^2_{\Omega}+\frac{1}{3}\|D^{k+1}-D^k\|^2_{\Omega} \Big]^{1/2} \\ &+C_F\Big[\|\mathcal{R}^k_{f_{k+1}}+\mathcal{R}^k_{f_k}\|_{\Omega \setminus \omega^k}^2 +\frac{1}{3} \|\mathcal{R}^k_{f_{k+1}}-\mathcal{R}^k_{f_k}\|_{\Omega \setminus \omega^k}^2 \Big]^{1/2}\bigg)^2. \end{aligned} \end{equation} The sum in the right hand side of (\ref{eq:4.13}) consists of quantities that depend on the functions $v_k(x)$ and $\tau_k(x)$ that form semi-discrete approximations of the solution $u(x,t)$ and its flux $\nabla u(x,t)$. If the functions $D^k$, $\mathcal{R}^k_{f_k}$, $\mathcal{R}^k_{f_{k+1}}$ (i.e., the residuals of the relations generating the incremental statement of the problem) are small, then (\ref{eq:4.13}) confirms the accuracy of the computed solution. \begin{remark} A posteriori estimates for incremental approximations of an evolutionary problem with obstacles have been studied in \cite{NSV2000} within the framework of the residual method, which operates with Galerkin approximations and uses special interpolation operators related to the type of approximations selected. We use another conception, where the estimates are derived on the functional level and, therefore, are independent of the numerical method by which the approximation $v$ has been constructed. The estimates do not contain local constants and use only the global constant $C_F$ associated with the domain $\Omega$. At the same time, they contain a vector valued function $\tau$, whose choice affects the majorant, so that a proper selection of this function is important for obtaining a sharp error estimate. Depending on the method used, it could be a difficult task (e.g., if we use standard low order finite elements, which usually produce rather coarse approximations of fluxes) or a relatively simple one (e.g., if our method generates mixed approximations and the corresponding fluxes can be used without post--processing). These issues require further investigation. In the simple examples presented below, we show that realistic error bounds follow from the majorant even for very simple reconstructions of the flux $\tau$.
\end{remark} \subsection{The parabolic Signorini problem} A specific version of the problem $\mathcal{P}$ arises if an obstacle function $\phi$ is given on a part of the lateral surface of $Q_T$ instead of inside $Q_T$. Throughout this subsection we assume that $f\in L^{\infty}(Q_T)$, $u_0\in W^2_{\infty}(\Omega)$, $\mathcal{M}$ is an open subset of $\partial\Omega$ (in the relative topology), $\mathcal{S}=\partial\Omega \setminus \mathcal{M}$, and the obstacle function $\phi \in H^2(\mathcal{M})$ satisfies the compatibility conditions $$ \phi \leqslant u_0\ \text{on}\ \mathcal{M}, \quad \phi \leqslant 0\ \text{on}\ \partial\mathcal{M}. $$ Such a function $\phi$ is called a \textit{thin obstacle}. In this case, the problem (\ref{eq:eq1})--(\ref{eq:bc2}) is considered for almost all $t\in (0,T)$ and all functions $w$ from the set \begin{align*} \mathbb{K}_{\mathbb{S}}(\phi):=\{w\in L^2((0,T), H^1(\Omega)): w_t\in &L^2((0,T), H^{-1}(\Omega)), \\ &w\geqslant \phi\quad \text{a.e. on}\ \, \mathcal{M}_T, \quad w=0\quad \text{a.e. on}\ \, \mathcal{S}_T\}, \end{align*} where $\mathcal{M}_T:=\mathcal{M}\times ]0,T[$, and $\mathcal{S}_T:=\mathcal{S}\times ]0,T[$. This problem is known as the \textit{parabolic thin obstacle problem} or the \textit{parabolic Signorini problem}. Under the above assumptions on the problem data, the existence of a unique solution has been established in \cite{LS1967}. The exact solution $u$ satisfies the equations (\ref{obs1}) in $Q_T$ and the so-called \textit{Signorini boundary conditions} $$ u \geqslant \phi,\quad \frac{\partial u}{\partial \mathbf{n}} \geqslant 0, \quad (u-\phi)\frac{\partial u}{\partial \mathbf{n}}=0 \quad \text{on}\ \, \mathcal{M}_T, $$ where $\mathbf{n}$ denotes the unit outward normal to $\partial\Omega$. Moreover, according to \cite{ArU1988} and \cite{ArU1996}, the exact solution possesses a H{\"o}lder continuous spatial gradient: $\nabla u \in \mathcal{H}^{\alpha, \alpha/2}_{\text{loc}}$ with a H{\"o}lder exponent $\alpha >0$ depending only on the dimension. Furthermore, consider the set \begin{align*} H_{\operatorname{div}}^{\mathbb{S}}(Q_T):=\{\tau (x,t)\in &L^2(Q_T, \mathbb{R}^n) \mid \operatorname{div}\tau \in L^2(\Omega), \\ &\tau\cdot \mathbf{n} \in L^2(\mathcal{M}), \ \text{and}\ \tau \cdot \mathbf{n} \geqslant 0 \ \text{on}\ \mathcal{M} \ \text{for a.e.}\ t\in (0,T) \}. \end{align*} Since for $v\in \mathbb{K}_{\mathbb{S}}$ we have $u-v =0$ a.e. on $\mathcal{S}_T$ only, the identity (\ref{eq:3i}) takes the form $$ \int\limits_{\Omega} \tau \cdot \nabla (u-v)dx = -\int\limits_{\Omega}(u-v) \operatorname{div} \tau dx +\int\limits_{\mathcal{M}}(u-v)\,\tau\cdot \mathbf{n}d\mu $$ for all $\tau \in H^{\mathbb{S}}_{\operatorname{div}}(Q_T)$ and for almost all $t\in (0,T)$. Repeating all the arguments used in the derivation of (\ref{eq:3.4}), we conclude that \begin{equation} \label{eq:33.0} \begin{aligned} \frac{1}{2}\|(u-v)(\cdot, T)\|^2_{\Omega}&-\frac{1}{2}\|(u-v)(\cdot,0)\|^2_{\Omega}+\|\nabla (u-v)\|^2_{Q_T}\\ &\leqslant \int\limits_{Q_T} \left(f+\operatorname{div}\tau-v_t\right)(u-v)dxdt\\ &+\int\limits_{Q_T}(\tau-\nabla v)\nabla(u-v)dxdt +\int\limits_{\mathcal{M}_T}(v-u)\tau\cdot \mathbf{n}d\mu dt. \end{aligned} \end{equation} As in Section~2, we set $e:=u-v$ and estimate the first and second integrals on the right-hand side of (\ref{eq:33.0}) by the Friedrichs type inequality and the H{\"o}lder inequality, respectively.
Notice that for all $\tau \in H^{\mathbb{S}}_{\operatorname{div}}(Q_T)$ and all $v\in \mathbb{K}_{\mathbb{S}}$ we can estimate the third integral on the right-hand side of (\ref{eq:33.0}) as follows: $$ \begin{aligned} \int\limits_{\mathcal{M}_T}(v-u)\tau\cdot \mathbf{n}d\mu dt&= \int\limits_{\mathcal{M}_T}(v-\phi)\tau\cdot \mathbf{n}d\mu dt- \int\limits_{\mathcal{M}_T}(u-\phi)\tau\cdot \mathbf{n}d\mu dt\\ &\leqslant \int\limits_{\mathcal{M}_T}(v-\phi)\,\tau\cdot \mathbf{n} \, d\mu dt. \end{aligned} $$ As a result, we arrive at the estimate \begin{equation} \begin{aligned} \frac{1}{2}\|e(\cdot,T)\|^2_{\Omega}+\|\nabla e\|^2_{Q_T}&\leqslant \frac{1}{2}\|e(\cdot,0)\|^2_{\Omega}+\|\tau-\nabla v\|_{Q_T}\|\nabla e\|_{Q_T}\\ &+C_F\|f+\operatorname{div}\tau-v_t\|_{Q_T}\|\nabla e\|_{Q_T}+\int\limits_{\mathcal{M}_T} (v-\phi)\, \tau \cdot \mathbf{n}\, d\mu dt. \end{aligned} \end{equation} Taking into account (\ref{eq:3.5a}), we conclude that the estimate \begin{equation} \label{eq: 33.1} \begin{aligned} \frac{1}{2}\|e(\cdot,T)\|^2_{\Omega}&+\left(1-\frac{1}{2\alpha}\right)\|\nabla e\|^2_{Q_T}\\ &\leqslant \frac{1}{2}\|e(\cdot,0)\|^2_{\Omega}+ \frac{\alpha}{2}\left( \|\nabla v-\tau\|_{Q_T}+C_F\|\mathcal{R}_f(v,\tau)\|_{Q_T}\right)^2\\ & +\int\limits_{\mathcal{M}_T} (v-\phi) \, \tau \cdot \mathbf{n}\, d\mu dt \end{aligned} \end{equation} holds true for any $\alpha \geqslant \frac{1}{2}$, all $v\in \mathbb{K}_{\mathbb{S}}$, and all $\tau\in H^{\mathbb{S}}_{\operatorname{div}}(Q_T)$. In view of the Signorini boundary conditions, the right-hand side of (\ref{eq: 33.1}) vanishes if $v=u$ and $\tau=\nabla u$. \section{Examples} In this section, we consider several examples that demonstrate how the estimate (\ref{eq:mainest}) works in practice. Namely, we consider a slightly modified model problem taken from \cite{NSV2000} for which the exact solution is known. This makes it possible to examine the efficiency of the error estimates for different approximate solutions $v$ and different vector-valued functions $\tau$. Let $\Omega=(-1,1)$, $T=0.5$, and $\phi \equiv 0$. On the lateral surface $\partial''Q_T=\partial\Omega \times (0,T)$ we impose the boundary conditions \begin{equation*} u(-1,t)=u(1,t)=\frac{9-12t+4t^2}{(2t+1)^2}, \qquad \quad \forall t\in (0,0.5), \end{equation*} and set \begin{equation*} u(x,0)=\left\{ \begin{aligned} &16 x^2-8|x|+1, && \frac{1}{4}<|x|<1,\\ &0, && |x| \leqslant \frac{1}{4}. \end{aligned} \right. \end{equation*} If \begin{eqnarray*} f(x,t)&=\left\{ \begin{array}{cc} -\frac{16}{(2t+1)^2}\left(\frac{4}{2t+1}x^2-|x|+2\right), & (x,t) \in N:=\{4|x|>2t+1\},\\ 0, & (x,t) \in \Lambda:=\{4|x| \leqslant 2t+1\}, \end{array} \right. \end{eqnarray*} then the exact solution is given by \begin{eqnarray*} \begin{aligned} u(x,t)&=\left\{ \begin{aligned} &\frac{16}{(2t+1)^2}x^2-\frac{8}{2t+1}|x|+1, && (x,t) \in N,\\ &0, && (x,t) \in \Lambda. \end{aligned} \right. \end{aligned} \end{eqnarray*} The function $u$ is depicted in Fig.~\ref{fig:1d_solution} (left) and in Fig.~\ref{fig:v_e_sol} (left) at $t=0$, $0.25$, and $0.5$. The coincidence set $Q_T^{\phi}(u)$ in Fig.~\ref{fig:1d_solution} is highlighted in light green.
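The consistency of these data can be verified symbolically. The following short check (Python with SymPy; the script is ours) confirms, on the branch $x>0$, that $u$ solves the heat equation with the above $f$ in $N$, that $u$ glues with the zero obstacle in a $C^1$ way across the free boundary $4x=2t+1$, and that the prescribed boundary values are attained:
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t', positive=True)
s = 2*t + 1
u = 16*x**2/s**2 - 8*x/s + 1          # exact solution for 4x > s (x > 0 branch)
f = -16/s**2*(4*x**2/s - x + 2)       # source term in N = {4|x| > 2t+1}

# heat residual u_t - u_xx - f vanishes identically in N
assert sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - f) == 0

# u and u_x vanish on the free boundary 4x = 2t+1 (C^1 gluing with u = 0)
assert sp.simplify(u.subs(x, s/4)) == 0
assert sp.simplify(sp.diff(u, x).subs(x, s/4)) == 0

# boundary values at x = 1 match the prescribed Dirichlet data
assert sp.simplify(u.subs(x, 1) - (9 - 12*t + 4*t**2)/s**2) == 0
\end{verbatim}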
\begin{figure}[htbp] \centering \includegraphics[width=0.44\textwidth]{u_plot_1-eps-converted-to.pdf}\qquad \includegraphics[width=0.44\textwidth]{v_plot_1-eps-converted-to.pdf} \caption{The exact solution $u (x,t)$ (left) and the approximate solution $v_{\varepsilon}(x,t)$ for $\varepsilon=0.5$ (right).} \label{fig:1d_solution} \end{figure} Consider a set of approximations $$ v_{\varepsilon}(x,t) =\left\{ \begin{aligned} &u(x,t)+100\varepsilon t (1-|x|)\bigg(x-\sgn{x}\frac{(2-\varepsilon)t+1}{4}\bigg)^2, && (x,t) \in N_{\varepsilon},\\ &0, &&(x,t) \in \Lambda_{\varepsilon}, \end{aligned} \right. $$ where the sets $N_{\varepsilon}$ and $\Lambda_{\varepsilon}$ are defined as follows: \begin{align*} N_{\varepsilon}&:=\{(x,t)\in Q_T: 4|x| > (2-\varepsilon)t+1\},\\ \Lambda_{\varepsilon}&:= \{(x,t)\in Q_T: 4|x| \leqslant (2-\varepsilon)t+1\}. \end{align*} The approximations depend on the parameter $\varepsilon\in [0, 1/2]$. The function $v_{\varepsilon}$ for $\varepsilon=0.5$ is depicted in Fig.~\ref{fig:1d_solution} (right). The coincidence set $Q_T^{\phi}(v)$ is marked in light green, while the values of $v_{\varepsilon}$ corresponding to $(x,t)\in \Lambda \setminus \Lambda_{\varepsilon}$ are highlighted in dark green. \begin{figure}[!h] \centering \includegraphics[width=0.38\textwidth]{u_smooth-eps-converted-to.pdf}\quad \includegraphics[width=0.49\textwidth]{v_smooth-eps-converted-to.pdf} \caption{The exact 1D solution $u (\cdot,t)$ at times $t=0$, $0.25$, and $0.5$ (left) and fragments of the functions\\ $v_{\varepsilon} (\cdot,0.4)$ for $\varepsilon_1=0.5$, $\varepsilon_2=0.35$, $\varepsilon_3=0.15$, and $\varepsilon_4=0$ (right). } \label{fig:v_e_sol} \end{figure} For any $\varepsilon$, the function $v_{\varepsilon}$ belongs to the set ${\mathbb{K}}$ and $\Lambda_{\varepsilon} \subset \Lambda$. If $\varepsilon\rightarrow 0$, then $\Lambda_\varepsilon$ tends to $\Lambda=\{(x,t) \mid u=\phi\}$ and $v_\varepsilon$ tends to $u$ (see Fig.~\ref{fig:v_e_sol} (right)). \vspace{0.2cm} First, we set $$ \tau (x,t)= \left\{ \begin{aligned} &\frac{32x}{(2t+1)^2}-\sgn{x}\cdot\frac{8}{2t+1}, && (x,t) \in N,\\ &0, && (x,t) \in \Lambda. \end{aligned} \right. $$ The function $\tau$ defined this way and its derivative $\tau_x$ belong to $L^2(-1,1)$ for every $t$ (see Fig.~\ref{fig:tau_bild}); consequently, $\tau \in H_{\operatorname{div}}(Q_T)$. \begin{figure}[htbp] \centering \includegraphics[width=0.93\textwidth]{fig3.png} \caption{The functions $\tau (\cdot,t)=\nabla u(\cdot, t)$ (left) and $\tau _x(\cdot,t)$ (right) at times $t=0, 0.25$, and $0.5$.} \label{fig:tau_bild} \end{figure} In our case, $\Omega=(-1,1)$ and $C_F=2/\pi$. We verify the validity of the estimate (\ref{eq:mainest}) for $v=v_{\varepsilon}$ with different $\varepsilon$. Table~\ref{tb:table1} presents the results related to the components of (\ref{eq:mainest}) for $\varepsilon= 0.05j$, $j=10, 7, 5, 3, 1,$ and $0$. It shows how the different terms of the error measure and of the error majorant decrease as $\varepsilon \to 0$. 
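The norms entering the tables below can be approximated by elementary quadrature. The following sketch (our illustration with crude finite differences, assuming NumPy; this is not the code used to produce the tables) approximates three components of the estimate for $v_{\varepsilon}$ and $\tau=\nabla u$; for $\varepsilon=0.5$, the printed values should be close, up to discretization error, to the first row of Table~\ref{tb:table1}:
\begin{verbatim}
# Rough numerical evaluation of the error components (sketch; assumes NumPy).
import numpy as np

eps = 0.5
x = np.linspace(-1.0, 1.0, 2001)
t = np.linspace(0.0, 0.5, 501)
X, T = np.meshgrid(x, t, indexing='ij')
dx, dt = x[1] - x[0], t[1] - t[0]

s = 2*T + 1
N = 4*np.abs(X) > s
u = np.where(N, 16*X**2/s**2 - 8*np.abs(X)/s + 1, 0.0)
Ne = 4*np.abs(X) > (2 - eps)*T + 1
bump = 100*eps*T*(1 - np.abs(X))*(X - np.sign(X)*((2 - eps)*T + 1)/4)**2
v = np.where(Ne, u + bump, 0.0)

e = u - v
tau = np.where(N, 32*X/s**2 - np.sign(X)*8/s, 0.0)   # tau = grad u
grad_e = np.gradient(e, dx, axis=0)
grad_v = np.gradient(v, dx, axis=0)

print((e[:, -1]**2).sum()*dx)                    # ~ ||e(.,T)||^2_Omega
print((grad_e**2).sum()*dx*dt)                   # ~ ||grad e||^2_{Q_T}
print(np.sqrt(((tau - grad_v)**2).sum()*dx*dt))  # ~ ||tau - grad v||_{Q_T}
\end{verbatim}
The remaining term $\|\mathcal{F}_f(v_{\varepsilon},\tau)\|_{Q_T}$ can be approximated in the same manner from $f+\operatorname{div}\tau-v_t$, using finite differences in $t$ as well.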
\begin{table}[!h] \centering \begin{tabular}{c|ccccc} $\varepsilon $ & $\|e(\cdot,0.5)\|^2_{\Omega}$ & $\|\nabla e\|^2_{Q_T}$ & $\|e(\cdot,0)\|^2_{\Omega}$ & $\|\tau-\nabla v_{\varepsilon} \|_{Q_T}$ & $\|\mathcal{F}_f(v_{\varepsilon},\tau )\|_{Q_T}$ \\ \midrule 0.50 & $21.21\cdot 10^{-2}$ & $2.42$ & 0 & 1.56 & 0.87\\ 0.35 & $8.20\cdot 10^{-2}$ & $1.07$ & 0 & 1.03 & 0.59\\ 0.25 & $3.55\cdot 10^{-2}$ & $0.51$ & 0 & 0.71 & 0.41\\ 0.15 & $1.08\cdot 10^{-2}$ & $0.17$ & 0 & 0.41 & 0.24 \\ 0.05 & $1.02\cdot 10^{-3}$ & $1.75\cdot 10^{-2}$ & 0 & 0.13 & $7.87\cdot 10^{-2}$\\ 0.00 & 0 & 0 & 0 & 0 & 0\\ \end{tabular} \caption{Components of the estimate (\ref{eq:mainest}) for $v=v_{\varepsilon}$ and $\tau =\nabla u$ for different $\varepsilon$.} \label{tb:table1} \end{table} Table~\ref{tb:table2} presents the results in integral form. It compares the exact errors (l.h.s. of (\ref{eq:mainest})) and the error majorants (r.h.s. of (\ref{eq:mainest})) computed for $\alpha=1$ and $\alpha=2$, together with the corresponding efficiency indices $$ 1 \leqslant I_{\rm{eff}}=\sqrt{\frac{\text{r.h.s. of (\ref{eq:mainest})}}{\text{l.h.s. of (\ref{eq:mainest})}}}. $$ We see that the results are very good, as one may expect, because in these tests the function $\tau$ coincides with the exact gradient $\nabla u$. \vspace{0.5cm} \begin{table}[!h] \centering \begin{tabular}{c|ccc||ccc} {} &\multicolumn{3}{c||}{$\alpha=1$} &\multicolumn{3}{c}{$\alpha=2$} \\ \midrule $\varepsilon $ & l.h.s. of (\ref{eq:mainest}) & r.h.s. of (\ref{eq:mainest}) &$I_{\rm{eff}}$ & l.h.s. of (\ref{eq:mainest}) & r.h.s. of (\ref{eq:mainest}) &$I_{\rm{eff}}$ \\ \midrule 0.50 & 2.63 & 4.47 & 1.304 & 3.84 & 8.94 & 1.526 \\ 0.35 & 1.15 & 1.98 & 1.312 & 1.69 & 3.95 & 1.529 \\ 0.25 & 0.55 & 0.94 & 1.307 & 0.80 & 1.89 & 1.537 \\ 0.15 & 0.18 & 0.32 & 1.333 & 0.27 & 0.63 & 1.528 \\ 0.05 & $1.85\cdot 10^{-2}$ & $3.24\cdot 10^{-2}$ & 1.323 & $2.73 \cdot 10^{-2}$ & $6.49 \cdot 10^{-2}$ & 1.542 \\ 0.00 & 0 & 0 & & 0 & 0 & \\ \end{tabular} \caption{Estimate (\ref{eq:mainest}) for $v=v_{\varepsilon}$ and $\tau =\nabla u$ with $\alpha=1$ and $\alpha=2$ for different $\varepsilon$.} \label{tb:table2} \end{table} \vspace{-0.5cm} \begin{figure}[!h] \centering \includegraphics[width=0.95\textwidth]{fig4.png} \caption{The real error (l.h.s. of (\ref{eq:mainest})) vs the error majorant (r.h.s. of (\ref{eq:mainest})) computed for $\alpha=1$ (left)\\ and $\alpha=2$ (right).} \label{fig:error-vs-majorant} \end{figure} Graphically, these results are depicted in Fig.~\ref{fig:error-vs-majorant}. Next, we investigate whether comparable results can be obtained with relatively simple approximations of $\tau$ that contain only constants and terms proportional to $t$ and $x$. Certainly, such simple functions may be useful only on a small time interval (e.g., in time-incremental methods that operate with small steps). Therefore, we set the time parameter $\delta \in (0,0.5]$ and define the space--time domain $Q_{\delta}:=\Omega\times (0,\delta)$. 
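Returning briefly to Table~\ref{tb:table2}: the efficiency indices can be cross-checked by elementary arithmetic (our addition; the values are copied from the table, $\alpha=1$):
\begin{verbatim}
# I_eff = sqrt(r.h.s./l.h.s.) for alpha = 1, rows of Table 2.
rows = [(0.50, 2.63, 4.47), (0.35, 1.15, 1.98), (0.25, 0.55, 0.94),
        (0.15, 0.18, 0.32), (0.05, 1.85e-2, 3.24e-2)]
for eps, lhs, rhs in rows:
    print(eps, round((rhs/lhs)**0.5, 3))
# -> 1.304, 1.312, 1.307, 1.333, 1.323, as reported in the table
\end{verbatim}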
Consider a sequence of functions $\tau=\tau_{\delta}(x,t)$ defined by the relation \begin{equation} \label{eq:tau_d} \tau_{\delta}(x,t)= \left\{ \begin{aligned} &0, && \text{in}\ Q_{\delta}\cap\left\{|x| \leqslant \frac{1+2t}{4}\right\},\\ &\frac{4\eta}{3-2\delta}\left(x- \sgn{x}\cdot\frac{1+2t}{4}\right), && \text{in}\ Q_{\delta}\cap\left\{\frac{1+2t}{4}<|x|\leqslant \frac{\delta +3t}{4\delta}\right\},\\ &\frac{4\xi}{3}\left(x-\frac{\sgn{x}}{4}\right)+\sgn{x}(\eta-\xi)\frac{t}{\delta}, && \text{in}\ Q_{\delta}\cap\left\{\frac{\delta+3t}{4\delta}<|x|\leqslant 1\right\}. \end{aligned} \right. \end{equation} The values of the coefficients $\xi=\xi(\delta)$ and $\eta=\eta (\delta)$ in formula (\ref{eq:tau_d}) are determined by minimising the corresponding right-hand sides of (3.3). One can easily check that for all $\delta$ the functions $\tau_{\delta}$ are continuous (see, for example, Fig.~\ref{fig:T_delta}). Therefore, $\tau_{\delta}$ again belongs to the set $H_{\operatorname{div}}(Q_{\delta})$. \begin{figure}[!h] \centering \includegraphics[scale=1.4]{fig5-eps-converted-to.pdf} \caption{The graph of $\tau_{\delta}$ for $\delta=0.5$, $\xi=16.07$, and $\eta=5.62$.} \label{fig:T_delta} \end{figure} \vspace{0.2cm} Table~\ref{tb:table3} reports the values of the real errors and of the error majorants in (\ref{eq:mainest}) for the approximation $v_{\varepsilon}$ with $\varepsilon=0.2$. \begin{table}[!h] \centering \begin{tabular}{c|ccccc} $\delta$ & $\xi (\delta)$ & $\eta (\delta)$ & l.h.s. of (\ref{eq:mainest}) & r.h.s. of (\ref{eq:mainest}) & $I_{\rm{eff}}$ \\ \midrule 0.5 & 16.07 & 5.62 & 0.33 & 19.89 & 7.722\\ 0.3 & 18.68 & 9.61 & 0.13 & 8.27 & 7.829\\ 0.2 & 20.3 & 12.78 & 0.06 & 3.64 & 7.857\\ 0.1 & 22.18 & 17.33 & 0.01 & 0.72 & 8.487\\ \end{tabular} \caption{Estimate (\ref{eq:mainest}) for $v=v_{\varepsilon}$, $\varepsilon=0.2$, $\tau =\tau_{\delta}$, $\alpha=1$, and different $\delta$.} \label{tb:table3} \end{table} Table~\ref{tb:table4} collects the values of the left- and right-hand sides of (\ref{eq:mainest}) for $\tau=\tau_{\delta}$ and for the time-incremental approximations $v=w_{\delta}(x,t):=u(x,0)+\frac{u(x,\delta)-u(x,0)}{\delta}t$ for $(x,t)\in Q_{\delta}$. Here the estimates are coarser, which is not surprising because rather coarse approximations are estimated by means of a simple function $\tau_\delta$. Nevertheless, even in this case, the majorant gives an idea of the actual size of the error. \begin{table}[!h] \centering \begin{tabular}{c|ccccc} $\delta$ & $\xi (\delta)$ & $\eta (\delta)$ & l.h.s. of (\ref{eq:mainest}) & r.h.s. 
of (\ref{eq:mainest}) & $I_{\rm{eff}}$ \\ \midrule 0.5 & 18.08 & 7.22 & 4.54 & 27.41 & 2.456 \\ 0.3 & 21.54 & 10.23 & 0.89 & 22.43 & 5.024 \\ 0.2 & 22.38 & 12.92 & 0.20 & 9.86 & 6.963 \\ \end{tabular} \caption{Estimate (\ref{eq:mainest}) for $v=w_{\delta}$, $\tau =\tau_{\delta}$, $\alpha=1$, and different $\delta$.} \label{tb:table4} \end{table} \vskip-0.2cm \begin{figure}[!h] \centering \includegraphics[scale=1.4]{fig6-eps-converted-to.pdf} \caption{The graph of $\hat{\tau}_{\delta}$ for $\delta=0.5$, $\xi=24$, and $\eta=5.62$.} \label{fig:T_polylinear} \end{figure} In the final series of tests, we used \begin{equation*} \hat{\tau}_{\delta}(x,t)= \left\{ \begin{aligned} &32\left(x- \sgn{x}\frac{1+2t}{4}\right)-\frac{128(1+\delta)t}{(1+2\delta )^2}\left(x-\sgn{x}\frac{1+2\delta}{4}\right), && (x,t)\in N_{\delta},\\ &0, && (x,t)\in \Lambda_{\delta}, \end{aligned} \right. \end{equation*} where $\delta \in (0,0.5]$ and the sets are defined as follows: \begin{align*} N_{\delta}&:=Q_{\delta}\cap\left\{\frac{(1+2\delta)(1+2\delta-2t)}{4(1+4\delta^2-4\delta (t-1)-4t)}<|x|\leqslant 1\right\},\\ \Lambda_{\delta}&:=Q_{\delta} \cap \left\{|x| \leqslant \frac{(1+2\delta)(1+2\delta-2t)}{4(1+4\delta^2-4\delta (t-1)-4t)}\right\}. \end{align*} For all $\delta$, the functions $\hat{\tau}_{\delta}$ are continuous (see, e.g., Fig.~\ref{fig:T_polylinear}), and $\left(\hat{\tau}_{\delta}\right)_x \in L^2(-1,1)$. Thus, the condition $\hat{\tau}_{\delta} \in H_{\operatorname{div}}(Q_{\delta})$ is fulfilled as well. Table~\ref{tb:table5} shows the values of the exact errors and of the respective majorants from (\ref{eq:mainest}) computed for two types of approximate solutions: $v=v_{\varepsilon}$ and $v=w_{\delta}$. Comparing these results with those in Tables~\ref{tb:table3} and \ref{tb:table4}, we see that using $\hat{\tau}_{\delta}$ instead of $\tau=\tau_{\delta}$ improves the estimates, especially for small values of $\delta$. \vspace{-0.2cm} \begin{table}[!h] \centering \begin{tabular}{c|ccc||ccc} {} &\multicolumn{3}{c||}{$v=v_{\varepsilon}, \quad \varepsilon=0.2$} &\multicolumn{3}{c}{$v=w_{\delta}$} \\ \midrule $\delta $ & l.h.s. of (\ref{eq:mainest}) & r.h.s. of (\ref{eq:mainest}) &$I_{\rm{eff}}$ & l.h.s. of (\ref{eq:mainest}) & r.h.s. of (\ref{eq:mainest}) & $I_{\rm{eff}}$ \\ \midrule 0.5 & & & & 4.54 & 24.68 & 2.331 \\ 0.3 & 0.13 & 8.43 & 7.901 & 0.89 & 9.13 & 3.021 \\ 0.2 & 0.06 & 1.58 & 5.179 & 0.20 & 3.93 & 1.983 \\ 0.1 & 0.01 & 0.34 & 5.860 & & & \\ \end{tabular} \caption{Estimate (\ref{eq:mainest}) for $\tau =\hat{\tau}_{\delta}$ with $\alpha=1$ and different $\delta$.} \label{tb:table5} \end{table} The first author was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID AP 252/3-1. \bibliographystyle{alpha}
\section{Introduction} Following \cite{Staffans2012_phys}, a natural class of $C_{0}$-semigroup generators $-A$ arising in the context of scattering passive systems in system theory can be described as a block operator matrix of the following form: Let $E_{0},E,H,U$ be Hilbert spaces with $E_{0}\subseteq E$ dense and continuously embedded, and let $L\in L(E_{0},H),\, K\in L(E_{0},U).$ Moreover, denote by $L^{\diamond}\in L(H,E_{0}')$ and $K^{\diamond}\in L(U,E_{0}')$ the dual operators of $L$ and $K,$ respectively, where we identify $H$ and $U$ with their dual spaces. Then $A$ is a restriction of $\left(\begin{array}{cc} K^{\diamond}K & -L^{\diamond}\\ L & 0 \end{array}\right)$ with domain \begin{equation} \mathcal{D}(A)\coloneqq\left\{ (u,w)\in E_{0}\times H\,|\, K^{\diamond}Ku-L^{\diamond}w\in E\right\} ,\label{eq:domain_syst} \end{equation} where we consider $E\cong E'$ as a subspace of $E_{0}'$. It is proved in \cite[Theorem 1.4]{Staffans2012_phys} that for such operator matrices $A$, the operator $-A$ generates a contractive $C_{0}$-semigroup on $E\oplus H$; moreover, a so-called scattering passive system containing $-A$ as the generator of the corresponding system node is considered there (see \cite{Staffans2005_book} for the notion of system nodes and scattering passive systems). This class of semigroup generators was used in particular to study boundary control systems, see e.g. \cite{Staffans2013_Maxwell,Tucsnak_Weiss2003,Tucsnak2003_thinair2}. In these cases, $L$ is a suitable realization of a differential operator and $K$ is a trace operator associated with $L$. More precisely, $G_{0}\subseteq L\subseteq G,$ where $G_{0}$ and $G$ are both densely defined closed linear operators such that $K|_{\mathcal{D}(G_{0})}=0$ (as a typical example, take $G_{0}$ and $G$ as the realizations of the gradient on $L_{2}(\Omega)$ for some open set $\Omega\subseteq\mathbb{R}^{n}$ with $\mathcal{D}(G_{0})=H_{0}^{1}(\Omega)$ and $\mathcal{D}(G)=H^{1}(\Omega)$). It turns out that in this situation, the operator $A$ is a restriction of the operator matrix $\left(\begin{array}{cc} 0 & D\\ G & 0 \end{array}\right),$ where $D\coloneqq-(G_{0})^{\ast}$ (see \prettyref{lem:D_extends} below). Such restrictions were considered by the author in \cite{Trostorff2013_bd_maxmon}, where it was shown that such (also nonlinear) restrictions are maximal monotone (and hence $-A$ generates a possibly nonlinear contraction semigroup) if and only if an associated boundary relation on the so-called boundary data space of $G$ is maximal monotone.\\ In this note, we characterize the class of boundary relations for which the corresponding operator $A$ satisfies \prettyref{eq:domain_syst} for some Hilbert spaces $E_{0},U$ and operators $L\in L(E_{0},H),\, K\in L(E_{0},U)$. We hope that this result yields a better understanding of the semigroup generators used in boundary control systems and provides a possible way to generalize known system-theoretical results to a class of nonlinear problems.\\ The article is structured as follows. In Section 2 we recall the basic notions for maximal monotone relations, state the characterization result of \cite{Trostorff2013_bd_maxmon}, and introduce the class of block operator matrices considered in \cite{Staffans2012_phys}. 
Section 3 is devoted to the main result (\prettyref{thm:main}) and its proof.\\ Throughout, every Hilbert space is assumed to be complex, its inner product $\langle\cdot|\cdot\rangle$ is linear in the second and conjugate-linear in the first argument, and the induced norm is denoted by $|\cdot|$. \section{Preliminaries} \subsection{Maximal monotone relations} In this section we introduce the basic notions for maximal monotone relations. Throughout, let $H$ be a Hilbert space. \begin{defn*} Let $C\subseteq H\oplus H.$ We call $C$ \emph{linear}, if $C$ is a linear subspace of $H\oplus H$. Moreover, we define for $M,N\subseteq H$ the \emph{pre-set of $M$ under $C$ }by\emph{ \[ [M]C\coloneqq\{x\in H\,|\,\exists y\in M:(x,y)\in C\} \] }and the \emph{post-set of $N$ under $C$ }by \[ C[N]\coloneqq\{y\in H\,|\,\exists x\in N:(x,y)\in C\}. \] The \emph{inverse relation $C^{-1}$ }of $C$ is defined by \[ C^{-1}\coloneqq\{(v,u)\in H\oplus H\,|\,(u,v)\in C\}. \] A relation $C$ is called \emph{monotone}, if for each $(x,y),(u,v)\in C$: \[ \Re\langle x-u|y-v\rangle\geq0. \] A monotone relation $C$ is called \emph{maximal monotone}, if for each monotone relation $B\subseteq H\oplus H$ with $C\subseteq B$ we have $C=B.$ Moreover, we define the \emph{adjoint relation $C^{\ast}\subseteq H\oplus H$ of $C$ }by \[ C^{\ast}\coloneqq\left\{ (v,-u)\in H\oplus H\,|\,(u,v)\in C\right\} ^{\bot}, \] where the orthogonal complement is taken in $H\oplus H$. A relation $C$ is called \emph{selfadjoint}, if $C=C^{\ast}$.\end{defn*} \begin{rem} $\,$ \begin{enumerate}[(a)] \item A pair $(x,y)\in H\oplus H$ belongs to $C^{\ast}$ if and only if for each $(u,v)\in C$ we have \[ \langle v|x\rangle_{H}=\langle u|y\rangle_{H}. \] Thus, the definition of $C^{\ast}$ coincides with the usual definition of the adjoint operator for a densely defined linear operator $C:\mathcal{D}(C)\subseteq H\to H.$ \item Note that a selfadjoint relation is linear and closed, since it is an orthogonal complement. \end{enumerate} \end{rem} We recall the famous characterization result for maximal monotone relations due to G. Minty. \begin{thm}[\cite{Minty}] \label{thm:minty} Let $C\subseteq H\oplus H$ be monotone. Then the following statements are equivalent: \begin{enumerate}[(i)] \item $C$ is maximal monotone, \item $\exists\lambda>0:\,(1+\lambda C)[H]=H,$ where $(1+\lambda C)\coloneqq\{(u,u+\lambda v)\,|\,(u,v)\in C\}$, \item $\forall\lambda>0:\,(1+\lambda C)[H]=H.$ \end{enumerate}\end{thm} \begin{rem} $\,$\label{rem:max_mon} \begin{enumerate}[(a)] \item We note that for a monotone relation $C\subseteq H\oplus H$ the relations $(1+\lambda C)^{-1}$ for $\lambda>0$ are Lipschitz-continuous mappings with best Lipschitz constant less than or equal to $1$. By the latter theorem, maximal monotone relations are precisely those monotone relations for which $(1+\lambda C)^{-1}$ for $\lambda>0$ is defined on the whole Hilbert space $H$. \item If $C\subseteq H\oplus H$ is closed and linear, then $C$ is maximal monotone if and only if $C$ and $C^{\ast}$ are monotone. Indeed, if $C$ is maximal monotone, then $(1+\lambda C)^{-1}\in L(H)$ for each $\lambda>0$ with $\sup_{\lambda>0}\|(1+\lambda C)^{-1}\|\leq1.$ Hence, $(1+\lambda C^{\ast})^{-1}=\left(\left(1+\lambda C\right)^{-1}\right)^{\ast}\in L(H)$ for each $\lambda>0$ with $\sup_{\lambda>0}\|(1+\lambda C^{\ast})^{-1}\|\leq1$. 
The latter gives for each $(x,y)\in C^{\ast}$ and $\lambda>0$ \begin{align*} |x+\lambda y|_{H}^{2} & =|x|_{H}^{2}+2\Re\lambda\langle x|y\rangle_{H}+\lambda^{2}|y|_{H}^{2}\\ & =|(1+\lambda C^{\ast})^{-1}(x+\lambda y)|_{H}^{2}+2\Re\lambda\langle x|y\rangle_{H}+\lambda^{2}|y|_{H}^{2}\\ & \leq|x+\lambda y|_{H}^{2}+2\Re\lambda\langle x|y\rangle_{H}+\lambda^{2}|y|_{H}^{2} \end{align*} and hence, \[ -\frac{\lambda}{2}|y|^{2}\leq\Re\langle x|y\rangle_{H}. \] Letting $\lambda$ tend to $0$, we obtain the monotonicity of $C^{\ast}$. If, on the other hand, $C$ and $C^{\ast}$ are monotone, we have that $[\{0\}](1+\lambda C^{\ast})=\{0\}$ for each $\lambda>0$ and thus, $\overline{[H](1+\lambda C)^{-1}}=\overline{(1+\lambda C)[H]}=\left([\{0\}](1+\lambda C^{\ast})\right)^{\bot}=H$. Since, moreover, $(1+\lambda C)^{-1}$ is closed and Lipschitz-continuous due to the monotonicity of $C$, we obtain that $[H](1+\lambda C)^{-1}$ is closed, from which we derive the maximal monotonicity by \prettyref{thm:minty}. \end{enumerate} \end{rem} \subsection{Boundary data spaces and a class of maximal monotone block operator matrices} In this section we will recall the main result of \cite{Trostorff2013_bd_maxmon}. For doing so, we need the following definitions. Throughout, let $E,H$ be Hilbert spaces and $G_{0}:\mathcal{D}(G_{0})\subseteq E\to H$ and $D_{0}:\mathcal{D}(D_{0})\subseteq H\to E$ be two densely defined closed linear operators satisfying \[ G_{0}\subseteq-\left(D_{0}\right)^{\ast}. \] We set $G\coloneqq\left(-D_{0}\right)^{\ast}\supseteq G_{0}$ and $D\coloneqq-\left(G_{0}\right)^{\ast}\supseteq D_{0}$, which are both densely defined closed linear operators. \begin{example} \label{exa:G_D}As a guiding example we consider the following operators. Let $\Omega\subseteq\mathbb{R}^{n}$ be open and define $G_{0}$ as the closure of the operator \begin{align*} C_{c}^{\infty}(\Omega)\subseteq L_{2}(\Omega) & \to L_{2}(\Omega)^{n}\\ \phi & \mapsto\left(\partial_{i}\phi\right)_{i\in\{1,\ldots,n\}}, \end{align*} where $C_{c}^{\infty}(\Omega)$ denotes the set of infinitely differentiable functions compactly supported in $\Omega$. Moreover, let $D_{0}$ be the closure of \begin{align*} C_{c}^{\infty}(\Omega)\subseteq L_{2}(\Omega)^{n} & \to L_{2}(\Omega)\\ (\phi_{i})_{i\in\{1,\ldots,n\}} & \mapsto\sum_{i=1}^{n}\partial_{i}\phi_{i}. \end{align*} Then, by integration by parts, we obtain $G_{0}\subseteq-\left(D_{0}\right)^{\ast}$. Moreover, we have that $G:\mathcal{D}(G)\subseteq L_{2}(\Omega)\to L_{2}(\Omega)^{n},\, u\mapsto\operatorname{grad} u$ with $\mathcal{D}(G)=H^{1}(\Omega)$ as well as $D:\mathcal{D}(D)\subseteq L_{2}(\Omega)^{n}\to L_{2}(\Omega),\, v\mapsto\operatorname{div} v$ with $\mathcal{D}(D)=\{v\in L_{2}(\Omega)^{n}\,|\,\operatorname{div} v\in L_{2}(\Omega)\},$ where $\operatorname{grad} u$ and $\operatorname{div} v$ are meant in the sense of distributions. We remark that in case of a smooth boundary $\partial\Omega$ of $\Omega,$ elements $u\in\mathcal{D}(G_{0})=H_{0}^{1}(\Omega)$ satisfy $u=0$ on $\partial\Omega$ and elements $v\in\mathcal{D}(D_{0})$ satisfy $v\cdot n=0$ on $\partial\Omega$, where $n$ denotes the unit outward normal vector field. Thus, $G_{0}$ and $D_{0}$ are the gradient and divergence with vanishing boundary conditions, while $G$ and $D$ are the gradient and divergence without any boundary condition. 
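The duality between $G_{0}$ and $D_{0}$ in this example can also be seen in a discrete toy version (our addition; a one-dimensional finite-difference sketch assuming NumPy, not part of the original example): the difference matrix representing the gradient with vanishing boundary values is exactly the negative transpose of the matrix representing the divergence.
\begin{verbatim}
# Discrete gradient (zero boundary) and divergence on (0,1) (sketch; NumPy).
import numpy as np

m = 50
h = 1.0/m
# G0: interior nodal values u_1..u_{m-1} (u_0 = u_m = 0) -> cell gradients
G0 = np.zeros((m, m - 1))
for j in range(m):
    if j <= m - 2:
        G0[j, j] = 1/h        # coefficient of u_{j+1}
    if j >= 1:
        G0[j, j - 1] = -1/h   # coefficient of u_j

# D: cell values v_1..v_m -> divergence at the interior nodes
D = np.zeros((m - 1, m))
for j in range(m - 1):
    D[j, j + 1] = 1/h
    D[j, j] = -1/h

print(np.allclose(D, -G0.T))  # True: the duality G0 = -(D0)^* in matrix form
\end{verbatim}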
In the same way one might treat the case of $G_{0}=\operatorname{curl}_{0}$, the rotation of vector fields with vanishing tangential component, and $G=\operatorname{curl}.$ Note that then $D_{0}=-\operatorname{curl}_{0}$ and $D=-\operatorname{curl}.$ \end{example} As the previous example illustrates, we want to interpret $G_{0}$ and $D_{0}$ as abstract differential operators with vanishing boundary conditions, while $G$ and $D$ are the respective differential operators without any boundary condition. This motivates the following definition. \begin{defn*} We define the spaces% \footnote{For a closed linear operator $C$ we denote by $\mathcal{D}_{C}$ its domain, equipped with the graph norm of $C$.% } \[ \mathcal{BD}(G)\coloneqq\left(\mathcal{D}(G_{0})\right)^{\bot_{\mathcal{D}_{G}}},\quad\mathcal{BD}(D)\coloneqq\left(\mathcal{D}(D_{0})\right)^{\bot_{\mathcal{D}_{D}}}, \] where the orthogonal complements are taken in $\mathcal{D}_{G}$ and $\mathcal{D}_{D}$, respectively. We call $\mathcal{BD}(G)$ and $\mathcal{BD}(D)$ \emph{abstract boundary data spaces associated with $G$ }and\emph{ $D$, }respectively. Consequently, we can decompose $\mathcal{D}_{G}=\mathcal{D}_{G_{0}}\oplus\mathcal{BD}(G)$ and $\mathcal{D}_{D}=\mathcal{D}_{D_{0}}\oplus\mathcal{BD}(D).$ We denote by $\pi_{\mathcal{BD}(G)}:\mathcal{D}_{G}\to\mathcal{BD}(G)$ and by $\pi_{\mathcal{BD}(D)}:\mathcal{D}_{D}\to\mathcal{BD}(D)$ the corresponding orthogonal projections. In consequence, $\pi_{\mathcal{BD}(G)}^{\ast}:\mathcal{BD}(G)\to\mathcal{D}_{G}$ and $\pi_{\mathcal{BD}(D)}^{\ast}:\mathcal{BD}(D)\to\mathcal{D}_{D}$ are the canonical embeddings and we set $P_{\mathcal{BD}(G)}\coloneqq\pi_{\mathcal{BD}(G)}^{\ast}\pi_{\mathcal{BD}(G)}:\mathcal{D}_{G}\to\mathcal{D}_{G}$ as well as $P_{\mathcal{BD}(D)}\coloneqq\pi_{\mathcal{BD}(D)}^{\ast}\pi_{\mathcal{BD}(D)}:\mathcal{D}_{D}\to\mathcal{D}_{D}$. An easy computation gives \[ \mathcal{BD}(G)=[\{0\}](1-DG),\quad\mathcal{BD}(D)=[\{0\}](1-GD) \] and thus, $G[\mathcal{BD}(G)]\subseteq\mathcal{BD}(D)$ as well as $D[\mathcal{BD}(D)]\subseteq\mathcal{BD}(G).$ We set \begin{align*} \stackrel{\bullet}{G}:\mathcal{BD}(G) & \to\mathcal{BD}(D),x\mapsto Gx\\ \stackrel{\bullet}{D}:\mathcal{BD}(D) & \to\mathcal{BD}(G),x\mapsto Dx \end{align*} and observe that both are unitary operators satisfying $\left(\stackrel{\bullet}{G}\right)^{\ast}=\stackrel{\bullet}{D}$ (see \cite[Section 5.2]{Picard2012_comprehensive_control} for details). \end{defn*} Having these notions at hand, we are ready to state the main result of \cite{Trostorff2013_bd_maxmon}. \begin{thm}[{\cite[Theorem 3.1]{Trostorff2013_bd_maxmon}}] \label{thm:char_max_mon} Let $G_{0},D_{0},G$ and $D$ be as above and let \[ A\subseteq\left(\begin{array}{cc} 0 & D\\ G & 0 \end{array}\right):\mathcal{D}(A)\subseteq E\oplus H\to E\oplus H \] be a (possibly nonlinear) restriction of $\left(\begin{array}{cc} 0 & D\\ G & 0 \end{array}\right):\mathcal{D}(G)\times\mathcal{D}(D)\subseteq E\oplus H\to E\oplus H,(u,w)\mapsto(Dw,Gu).$ Then $A$ is maximal monotone, if and only if there exists a maximal monotone relation $h\subseteq\mathcal{BD}(G)\oplus\mathcal{BD}(G)$ such that \[ \mathcal{D}(A)=\left\{ (u,w)\in\mathcal{D}(G)\times\mathcal{D}(D)\,\left|\,\left(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w\right)\in h\right.\right\} . 
\] We call $h$ the \emph{boundary relation associated with $A$.} \end{thm} \subsection{A class of block operator matrices in system theory} In \cite{Staffans2012_phys} the following class of block operator matrices is considered: Let $E,E_{0},H,U$ be Hilbert spaces such that $E_{0}\subseteq E$ with dense and continuous embedding. Moreover, let $L\in L(E_{0},H)$ and $K\in L(E_{0},U)$ be such that \[ \left(\begin{array}{c} L\\ K \end{array}\right):E_{0}\subseteq E\to H\oplus U \] is closed. In particular, this assumption yields that the norm on $E_{0}$ is equivalent to the graph norm of $\left(\begin{array}{c} L\\ K \end{array}\right).$ We define $L^{\diamond}\in L(H,E_{0}')$ and $K^{\diamond}\in L(U,E_{0}')$ by $\left(L^{\diamond}x\right)(w)\coloneqq\langle x|Lw\rangle_{H}$ and $\left(K^{\diamond}u\right)(w)\coloneqq\langle u|Kw\rangle_{U}$ for $x\in H,w\in E_{0},u\in U$ and consider the following operator \begin{equation} A\subseteq\left(\begin{array}{cc} K^{\diamond}K & -L^{\diamond}\\ L & 0 \end{array}\right):\mathcal{D}(A)\subseteq E\oplus H\to E\oplus H\label{eq:A_Staffans} \end{equation} with $\mathcal{D}(A)\coloneqq\left\{ (u,w)\in E_{0}\times H\,|\, K^{\diamond}Ku-L^{\diamond}w\in E\right\} ,$ where we consider $E\cong E'\subseteq E_{0}'$ as a subspace of $E_{0}'.$ We recall the following result from \cite{Staffans2012_phys}, which we present in a slightly different formulation% \footnote{We note that in \cite{Staffans2012_phys} an additional operator $G\in L(E_{0},E_{0}')$ is incorporated in $A$, which we will omit for simplicity.% }. \begin{thm}[{\cite[Theorem 1.4]{Staffans2012_phys}}] \label{thm:Staffans} The operator $A$ defined above is maximal monotone. \end{thm} \begin{rem} We remark that in \cite[Theorem 1.4]{Staffans2012_phys} the operator $-A$ is considered and it is proved that $-A$ is the generator of a contraction semigroup. \end{rem} We note that operators of the form (\ref{eq:A_Staffans}) were applied to discuss boundary control problems. For instance, in \cite{Tucsnak_Weiss2003,Tucsnak2003_thinair2} the setting was used to study the wave equation with boundary control on a smooth domain $\Omega\subseteq\mathbb{R}^{n}$. In this case the operator $L$ was a suitable realization of the gradient on $L_{2}(\Omega)$ and $K$ was the Dirichlet trace operator. More recently, Maxwell's equations on a smooth domain $\Omega\subseteq\mathbb{R}^{3}$ with boundary control were studied within this setting (see \cite{Staffans2013_Maxwell}). In this case $L$ was a suitable realization of $\operatorname{curl}$, while $K$ was the trace operator mapping elements in $\mathcal{D}(L)$ to their tangential component on the boundary. \\ In both cases, there exist two closed operators $G_{0}:\mathcal{D}(G_{0})\subseteq E\to H,\: D_{0}:\mathcal{D}(D_{0})\subseteq H\to E$ with $G_{0}\subseteq-(D_{0})^{\ast}\eqqcolon G$ such that $G_{0}\subseteq L\subseteq G$ and $K|_{\mathcal{D}(G_{0})}=0$ (cp. \prettyref{exa:G_D}). It is the purpose of this paper to show how the operators $A$ in \prettyref{eq:A_Staffans} and in \prettyref{thm:char_max_mon} are related in this case. \section{Main result} Let $E,H$ be Hilbert spaces and $G_{0}:\mathcal{D}(G_{0})\subseteq E\to H$ and $D_{0}:\mathcal{D}(D_{0})\subseteq H\to E$ be densely defined closed linear operators with $G_{0}\subseteq-(D_{0})^{\ast}\eqqcolon G$ and $D_{0}\subseteq-(G_{0})^{\ast}\eqqcolon D$. 
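Before formulating the standing hypothesis, we include a small finite-dimensional illustration (our addition; it assumes $H=\mathbb{R}^{n}$, NumPy, and relations given as graphs of matrices, which is of course much simpler than the abstract setting) of the monotonicity notions from Section 2: if the symmetric part of a matrix $C$ is positive semidefinite, then $1+\lambda C$ is invertible with nonexpansive inverse for every $\lambda>0$, in accordance with \prettyref{thm:minty} and \prettyref{rem:max_mon}.
\begin{verbatim}
# Finite-dimensional sanity check of Minty's theorem (sketch; assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
S = M @ M.T                       # symmetric positive semidefinite part
A = rng.standard_normal((n, n))
A = (A - A.T)/2                   # skew-symmetric part
C = S + A                         # monotone: <x, Cx> = <x, Sx> >= 0

for lam in (0.1, 1.0, 10.0):
    R = np.linalg.inv(np.eye(n) + lam*C)    # resolvent (1 + lam*C)^{-1}
    print(lam, np.linalg.norm(R, 2) <= 1 + 1e-12)   # nonexpansive: True

# The graph of C is selfadjoint precisely when C is symmetric (A = 0);
# the monotone *and* selfadjoint case is the one singled out in the
# main result below.
\end{verbatim}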
\begin{hyp}\label{hyp: standing} We say that two Hilbert spaces $E_{0},U$ and two operators $L\in L(E_{0},H)$ and $K\in L(E_{0},U)$ satisfy the hypothesis, if \begin{enumerate}[(a)] \item $E_{0}\subseteq E$ is dense and continuously embedded, \item $\left(\begin{array}{c} L\\ K \end{array}\right):E_{0}\subseteq E\to H\oplus U$ is closed, \item $G_{0}\subseteq L\subseteq G$ and $K|_{\mathcal{D}(G_{0})}=0$. \end{enumerate} \end{hyp} \begin{lem} \label{lem:D_extends} Assume that $E_{0},U$ and $L,K$ satisfy the hypothesis. Let $(u,w)\in E_{0}\times H$ be such that $K^{\diamond}Ku-L^{\diamond}w\in E.$ Then $w\in\mathcal{D}(D)$ and $Dw=K^{\diamond}Ku-L^{\diamond}w.$\end{lem} \begin{proof} For $v\in\mathcal{D}(G_{0})$ we compute \begin{align*} \langle w|G_{0}v\rangle_{H} & =\langle w|Lv\rangle_{H}\\ & =\left(L^{\diamond}w\right)(v)\\ & =(-K^{\diamond}Ku+L^{\diamond}w)(v)+(K^{\diamond}Ku)(v)\\ & =\langle-K^{\diamond}Ku+L^{\diamond}w|v\rangle_{E}+\langle Ku|Kv\rangle_{U}\\ & =\langle-K^{\diamond}Ku+L^{\diamond}w|v\rangle_{E}, \end{align*} where we have used $G_{0}\subseteq L$ and $Kv=0$. The latter gives $w\in\mathcal{D}(G_{0}^{\ast})=\mathcal{D}(D)$ and $Dw=-G_{0}^{\ast}w=K^{\diamond}Ku-L^{\diamond}w.$ \end{proof} The latter lemma shows that if the hypothesis holds and $A$ is given as in \prettyref{eq:A_Staffans}, then $A$ is a restriction of $\left(\begin{array}{cc} 0 & D\\ G & 0 \end{array}\right)$, which, by \prettyref{thm:Staffans}, is maximal monotone. On the other hand, such restrictions are completely characterized by their associated boundary relation (see \prettyref{thm:char_max_mon}). The question which now arises is: can we characterize those boundary relations which allow us to represent $A$ as in \prettyref{eq:A_Staffans}? The answer is given by the following theorem. \begin{thm} \label{thm:main} Let $A\subseteq\left(\begin{array}{cc} 0 & D\\ G & 0 \end{array}\right)$. Then the following statements are equivalent. \begin{enumerate}[(i)] \item There exist Hilbert spaces $E_{0},U$ and operators $L\in L(E_{0},H),\, K\in L(E_{0},U)$ satisfying the hypothesis, such that \[ \mathcal{D}(A)=\{(u,w)\in E_{0}\times H\,|\, K^{\diamond}Ku-L^{\diamond}w\in E\}. \] \item There exists a maximal monotone and selfadjoint relation $h\subseteq\mathcal{BD}(G)\oplus\mathcal{BD}(G)$, such that \[ \mathcal{D}(A)=\{(u,w)\in\mathcal{D}(G)\times\mathcal{D}(D)\,|\,(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h\}. \] \end{enumerate} \end{thm} We begin to prove the implication (i)$\Rightarrow$(ii). \begin{lem} \label{lem:h}Assume (i) in \prettyref{thm:main} and set \[ h\coloneqq\left\{ (x,y)\in\mathcal{BD}(G)\oplus\mathcal{BD}(G)\,|\,\pi_{\mathcal{BD}(G)}^{\ast}x\in E_{0},K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y\in E\right\} . \] Then $(u,w)\in\mathcal{D}(A)$ if and only if $(u,w)\in\mathcal{D}(G)\times\mathcal{D}(D)$ with $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h$. 
\end{lem} \begin{proof} Let $(u,w)\in\mathcal{D}(A).$ Then we know by \prettyref{lem:D_extends}, that $(u,w)\in\mathcal{D}(G)\times\mathcal{D}(D).$ We decompose $u=u_{0}+P_{\mathcal{BD}(G)}u,$ where $u\in\mathcal{D}(G_{0})\subseteq E_{0}.$ Since $u,u_{0}\in E_{0}$ we get that $\pi_{\mathcal{BD}(G)}^{\ast}\pi_{\mathcal{BD}(G)}u=P_{\mathcal{BD}(G)}u\in E_{0}.$ In the same way we decompose $w=w_{0}+P_{\mathcal{BD}(D)}w,$ where $w_{0}\in\mathcal{D}(D_{0}).$ Since \[ \left(L^{\diamond}w_{0}\right)(z)=\langle w_{0}|Lz\rangle_{H}=\langle w_{0}|Gz\rangle_{H}=\langle-D_{c}w_{0}|z\rangle_{E} \] for each $z\in E_{0},$ we obtain $L^{\diamond}w=-D_{0}w_{0}\in E$ and thus, \begin{align} K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}\pi_{\mathcal{BD}(G)}u-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w & =K^{\diamond}KP_{\mathcal{BD}(G)}u-L^{\diamond}P_{\mathcal{BD}(D)}w\nonumber \\ & =K^{\diamond}K\left(u-u_{0}\right)-L^{\diamond}(w-w_{0})\nonumber \\ & =K^{\diamond}Ku-L^{\diamond}w-D_{c}w_{0}\in E,\label{eq:decomp} \end{align} where we have used $\stackrel{\bullet}{G}\stackrel{\bullet}{D}=1,$ $Ku_{0}=0$ and $(u,w)\in\mathcal{D}(A).$ Thus, we have $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h.$ \\ Assume now, that $(u,w)\in\mathcal{D}(G)\times\mathcal{D}(D)$ with $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h.$ Since $u_{0}\coloneqq u-P_{\mathcal{BD}(G)}u\in\mathcal{D}(G_{0})\subseteq E_{0}$ and by assumption $P_{\mathcal{BD}(G)}u=\pi_{\mathcal{BD}(G)}^{\ast}\pi_{\mathcal{BD}(G)}u\in E_{0}$, we infer that $u\in E_{0}.$ Moreover, decomposing $w=w_{0}+P_{\mathcal{BD}(D)}w$ with $w\in\mathcal{D}(D_{0})$ and using $D_{0}w_{0}=-L^{\diamond}w_{0}$ we derive that \[ K^{\diamond}Ku-L^{\diamond}w=K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}\pi_{\mathcal{BD}(G)}u-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w+D_{c}w_{0}\in E \] by \prettyref{eq:decomp} and $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h$. Hence, $(u,w)\in\mathcal{D}(A)$. \end{proof} Although we already know that $h$ in the previous Lemma is maximal monotone by \prettyref{thm:Staffans} and \prettyref{thm:char_max_mon}, we will present a proof for this fact, which does not require these Theorems. \begin{prop} \label{prop:h_max_mon}Assume (i) in \prettyref{thm:main} holds and let $h\subseteq\mathcal{BD}(G)\oplus\mathcal{BD}(G)$ be as in \prettyref{lem:h}. Then $h$ is linear and maximal monotone.\end{prop} \begin{proof} The linearity of $h$ is clear due to the linearity of all operators involved. 
Let now $(x,y)\in h.$ Then we compute \begin{align*} \Re\langle x|y\rangle_{\mathcal{BD}(G)} & =\Re\langle\pi_{\mathcal{BD}(G)}^{\ast}x|\pi_{\mathcal{BD}(G)}^{\ast}y\rangle_{E}+\Re\langle G\pi_{\mathcal{BD}(G)}^{\ast}x|G\pi_{\mathcal{BD}(G)}^{\ast}y\rangle_{H}\\ & =\Re\langle\pi_{\mathcal{BD}(G)}^{\ast}x|\pi_{\mathcal{BD}(G)}^{\ast}y\rangle_{E}+\Re\langle L\pi_{\mathcal{BD}(G)}^{\ast}x|G\pi_{\mathcal{BD}(G)}^{\ast}y\rangle_{H}\\ & =\Re\langle\pi_{\mathcal{BD}(G)}^{\ast}x|\pi_{\mathcal{BD}(G)}^{\ast}y\rangle_{E}+\Re\left(L^{\diamond}G\pi_{\mathcal{BD}(G)}^{\ast}y\right)(\pi_{\mathcal{BD}(G)}^{\ast}x)\\ & =\Re\left(L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y+\pi_{\mathcal{BD}(G)}^{\ast}y\right)(\pi_{\mathcal{BD}(G)}^{\ast}x)\\ & =\Re\left(L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y+\pi_{\mathcal{BD}(G)}^{\ast}y-K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x\right)(\pi_{\mathcal{BD}(G)}^{\ast}x)+\\ & \quad+\langle K\pi_{\mathcal{BD}(G)}^{\ast}x|K\pi_{\mathcal{BD}(G)}^{\ast}x\rangle_{U}. \end{align*} Since $\pi_{\mathcal{BD}(G)}^{\ast}x\in E_{0}$ and $K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y\in E$, we get from \prettyref{lem:D_extends} that $K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y=D\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y=\pi_{\mathcal{BD}(G)}^{\ast}y,$ since $\stackrel{\bullet}{D}\stackrel{\bullet}{G}=1.$ Thus, we obtain \begin{align*} \Re\langle x|y\rangle_{\mathcal{BD}(G)} & =\Re\left(L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y+\pi_{\mathcal{BD}(G)}^{\ast}y-K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x\right)(\pi_{\mathcal{BD}(G)}^{\ast}x)+\\ & \quad+\langle K\pi_{\mathcal{BD}(G)}^{\ast}x|K\pi_{\mathcal{BD}(G)}^{\ast}x\rangle_{U}\\ & =\langle K\pi_{\mathcal{BD}(G)}^{\ast}x|K\pi_{\mathcal{BD}(G)}^{\ast}x\rangle_{U}\geq0. \end{align*} This proves the monotonicity of $h$. To show that $h$ is maximal monotone, it suffices to prove $(1+h)[\mathcal{BD}(G)]=\mathcal{BD}(G)$ by \prettyref{thm:minty}. Let $f\in\mathcal{BD}(G)$ and consider the linear functional \[ E_{0}\ni x\mapsto\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|Lx\rangle_{H}. 
\] This functional is continuous and thus there is $w\in E_{0}$ with% \footnote{Recall that the norm on $E_{0}$ is equivalent to the graph norm of $\left(\begin{array}{c} L\\ K \end{array}\right)$.% } \begin{equation} \forall x\in E_{0}:\langle w|x\rangle_{E}+\langle Lw|Lx\rangle_{H}+\langle Kw|Kx\rangle_{U}=\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|Lx\rangle_{H}.\label{eq:riesz} \end{equation} In particular, for $x\in\mathcal{D}(G_{0})\subseteq E_{0}$ we obtain that \begin{align*} \langle Gw|G_{0}x\rangle_{H} & =\langle Lw|Lx\rangle_{H}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|Lx\rangle_{H}-\langle w|x\rangle_{E}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|G_{0}x\rangle_{H}-\langle w|x\rangle_{E}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}-\langle DG\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}-\langle w|x\rangle_{E}\\ & =-\langle w|x\rangle_{E}, \end{align*} where we have used $Kx=0$ and $DG\pi_{\mathcal{BD}(G)}^{\ast}f=\pi_{\mathcal{BD}(G)}^{\ast}f.$ The latter gives $Gw\in\mathcal{D}(D)$ and $DGw=w$ or, in other words, $P_{\mathcal{BD}(G)}w=w.$ Set $u\coloneqq\pi_{\mathcal{BD}(G)}w$ and $v\coloneqq f-u$. It is left to show that $(u,v)\in h.$ First, note that $\pi_{\mathcal{BD}(G)}^{\ast}u=P_{\mathcal{BD}(G)}w=w\in E_{0}.$ Moreover, we compute for $x\in E_{0}$ using \prettyref{eq:riesz} \begin{align*} \left(K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}u-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}v\right)(x) & =\langle K\pi_{\mathcal{BD}(G)}^{\ast}u|Kx\rangle_{U}-\langle\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}v|Lx\rangle_{H}\\ & =\langle Kw|Kx\rangle_{U}-\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|Lx\rangle_{H}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}u|Lx\rangle_{H}\\ & =\langle Kw|Kx\rangle_{U}-\langle G\pi_{\mathcal{BD}(G)}^{\ast}f|Lx\rangle_{H}+\langle Lw|Lx\rangle_{H}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}f|x\rangle_{E}-\langle w|x\rangle_{E}, \end{align*} which gives $K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}u-L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}v=\pi_{\mathcal{BD}(G)}^{\ast}f-w\in E.$ This completes the proof. \end{proof} The only thing left to show is that $h$ is selfadjoint. \begin{prop} Assume that (i) in \prettyref{thm:main} holds and let $h\subseteq\mathcal{BD}(G)\oplus\mathcal{BD}(G)$ be as in \prettyref{lem:h}. Then $h$ is selfadjoint.\end{prop} \begin{proof} We note that $h^{\ast}$ is monotone, since $h$ is maximal monotone by \prettyref{prop:h_max_mon} and \prettyref{rem:max_mon}. Thus, due to the maximality of $h,$ it suffices to prove $h\subseteq h^{\ast}.$ For doing so, let $(u,v),(x,y)\in h.$ Then \begin{align*} \langle y|u\rangle_{\mathcal{BD}(G)} & =\langle\pi_{\mathcal{BD}(G)}^{\ast}y|\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{E}+\langle G\pi_{\mathcal{BD}(G)}^{\ast}y|G\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{H}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}y|\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{E}+\langle\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y|L\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{H}\\ & =\langle\pi_{\mathcal{BD}(G)}^{\ast}y|\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{E}+\left(L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y-K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x\right)(\pi_{\mathcal{BD}(G)}^{\ast}u)+\\ & \quad+\langle K\pi_{\mathcal{BD}(G)}^{\ast}x|K\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{U}. 
\end{align*} Using that $\pi_{\mathcal{BD}(G)}^{\ast}x\in E_{0}$ and $L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y-K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x\in E$, we have $L^{\diamond}\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y-K^{\diamond}K\pi_{\mathcal{BD}(G)}^{\ast}x=-D\pi_{\mathcal{BD}(D)}^{\ast}\stackrel{\bullet}{G}y=-\pi_{\mathcal{BD}(G)}^{\ast}y$ according to \prettyref{lem:D_extends}. Thus, \[ \langle y|u\rangle_{\mathcal{BD}(G)}=\langle K\pi_{\mathcal{BD}(G)}^{\ast}x|K\pi_{\mathcal{BD}(G)}^{\ast}u\rangle_{U}. \] Repeating this argumentation and interchanging $y$ and $x$ as well as $u$ and $v$, we get that \[ \langle v|x\rangle_{\mathcal{BD}(G)}=\langle K\pi_{\mathcal{BD}(G)}^{\ast}u|K\pi_{\mathcal{BD}(G)}^{\ast}x\rangle_{U} \] and hence, $\langle y|u\rangle_{\mathcal{BD}(G)}=\langle x|v\rangle_{\mathcal{BD}(G)}$, which implies $h\subseteq h^{\ast}.$ \end{proof} This completes the proof of (i)$\Rightarrow$(ii) in \prettyref{thm:main}. To show the converse implication, we need the following well-known result for selfadjoint relations, which, for the sake of completeness, will be proved. \begin{thm}[{\cite[Theorem 5.3]{Arens1961}}] Let $Y$ be a Hilbert space and $C\subseteq Y\oplus Y$ a selfadjoint relation. Let $U\coloneqq\overline{[Y]C}$. Then there exists a selfadjoint operator $S:[Y]C\subseteq U\to U$ such that \[ C=S\oplus\left(\{0\}\times U^{\bot}\right). \] \end{thm} \begin{proof} Due to the selfadjointness of $C,$ we have that \begin{equation} U=\overline{[Y]C}=\overline{[Y]C^{\ast}}=\left(C[\{0\}]\right)^{\bot}.\label{eq:U} \end{equation} We define the relation $S\coloneqq\left\{ (u,v)\in U\oplus U\,|\,(u,v)\in C\right\} $ and prove that $S$ is a mapping. First we note that $S$ is linear, as $C$ and $U$ are linear. Thus, it suffices to show that $(0,v)\in S$ for some $v\in U$ implies $v=0.$ Indeed, if $(0,v)\in S,$ we have $(0,v)\in C$ and hence, $v\in C[\{0\}]=U^{\bot}$. Thus, $v\in U\cap U^{\bot}=\{0\}$ and hence, $S$ is a mapping. Next, we show that $C=S\oplus\left(\{0\}\times U^{\bot}\right).$ First note that $S\subseteq C$ as well as $\{0\}\times U^{\bot}\subseteq C$ by definition and hence, $S\oplus(\{0\}\times U^{\bot})\subseteq C$ due to the linearity of $C$. Let now $(u,v)\in C$ and decompose $v=v_{0}+v_{1}$ for $v_{0}\in U,v_{1}\in U^{\bot}=C[\{0\}].$ Hence, $(0,v_{1})\in C$ and $\left(u,v_{0}\right)=(u,v)-(0,v_{1})\in C.$ Moreover, $u\in U$ by \prettyref{eq:U} and thus, we derive $(u,v_{0})\in S$ and consequently, $(u,v)=(u,v_{0})+(0,v_{1})\in S\oplus\left(\{0\}\times U^{\bot}\right).$ Finally, we show that $S$ is selfadjoint. Using that $C=S\oplus\left(\{0\}\times U^{\bot}\right)$, we obtain that \begin{align*} (x,y)\in S^{\ast} & \Leftrightarrow x,y\in U\wedge\forall(u,v_{0})\in S:\langle v_{0}|x\rangle_{U}=\langle u|y\rangle_{U}\\ & \Leftrightarrow x,y\in U\wedge\forall(u,v_{0})\in S,v_{1}\in U^{\bot}:\langle v_{0}+v_{1}|x\rangle_{Y}=\langle v_{0}|x\rangle_{Y}=\langle u|y\rangle_{Y}\\ & \Leftrightarrow x,y\in U\wedge\forall(u,v)\in C:\langle v|x\rangle_{Y}=\langle u|y\rangle_{Y}\\ & \Leftrightarrow x,y\in U\wedge(x,y)\in C^{\ast}=C\\ & \Leftrightarrow(x,y)\in S, \end{align*} which gives $S=S^{\ast},$ i.e., 
$S$ is selfadjoint.\end{proof} \begin{rem} It is obvious that in case of a monotone selfadjoint relation $C$ in the latter theorem, the operator $S$ is monotone, too.\end{rem} \begin{lem} \label{lem:L_and_K}Assume (ii) in \prettyref{thm:main} and let $h=S\oplus\left(\{0\}\times U^{\bot}\right)$ with $U\coloneqq\overline{[\mathcal{BD}(G)]h}$ and $S:\left[\mathcal{BD}(G)\right]h\subseteq U\to U$ selfadjoint. We define the vector space \[ E_{0}\coloneqq\left\{ u\in\mathcal{D}(G)\,|\,\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(\sqrt{S})\right\} \subseteq E \] and the operators $L:E_{0}\to H,w\mapsto Gw$ and $K:E_{0}\to U,u\mapsto\sqrt{S}\pi_{\mathcal{BD}(G)}u.$ Then the operator $\left(\begin{array}{c} L\\ K \end{array}\right):E_{0}\subseteq E\to H\oplus U$ is closed and $E_{0},U,L$ and $K$ satisfy the hypothesis if we equip $E_{0}$ with the graph norm of $\left(\begin{array}{c} L\\ K \end{array}\right)$.\end{lem} \begin{proof} Let $(w_{n})_{n\in\mathbb{N}}$ be a sequence in $E_{0}$ such that $w_{n}\to w$ in $E$, $Gw_{n}=Lw_{n}\to v$ in $H$ and $\sqrt{S}\pi_{\mathcal{BD}(G)}w_{n}=Kw_{n}\to z$ in $U$ for some $w\in E,v\in H,z\in U.$ Due to the closedness of $G$ we infer that $w\in\mathcal{D}(G)$ and $v=Gw.$ Thus, $(w_{n})_{n\in\mathbb{N}}$ converges to $w$ in $\mathcal{D}_{G}$ and hence, $\pi_{\mathcal{BD}(G)}w_{n}\to\pi_{\mathcal{BD}(G)}w$. By the closedness of $\sqrt{S},$ we get $\pi_{\mathcal{BD}(G)}w\in\mathcal{D}(\sqrt{S})$ and $z=\sqrt{S}\pi_{\mathcal{BD}(G)}w$. Thus, $w\in E_{0}$ and $\left(\begin{array}{c} L\\ K \end{array}\right)w=\left(\begin{array}{c} v\\ z \end{array}\right)$ and hence, $\left(\begin{array}{c} L\\ K \end{array}\right)$ is closed. Thus, $E_{0}$ equipped with the graph norm of $\left(\begin{array}{c} L\\ K \end{array}\right)$ is a Hilbert space. Moreover, $E_{0},U,L$ and $K$ satisfy the hypothesis, since clearly $\mathcal{D}(G_{0})\subseteq E_{0}\subseteq\mathcal{D}(G)$, which gives that $E_{0}\subseteq E$ is dense and $G_{0}\subseteq L\subseteq G$. Moreover, by definition we have $K|_{\mathcal{D}(G_{0})}=0$. \end{proof} The only thing left to show is that $\mathcal{D}(A)$ is given as in \prettyref{thm:main} (i). \begin{lem} Assume that (ii) in \prettyref{thm:main} holds and let $E_{0},U$ and $K,L$ be as in \prettyref{lem:L_and_K}. Then \[ \mathcal{D}(A)=\left\{ (u,w)\in E_{0}\times H\,|\, K^{\diamond}Ku-L^{\diamond}w\in E\right\} . \] \end{lem} \begin{proof} Let $(u,w)\in\mathcal{D}(A)$, i.e. 
$(u,w)\in\mathcal{D}(G)\times\mathcal{D}(D)$ with $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in h.$ Then, by the definition of $U$ and $S$, we have that $\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(S)\subseteq\mathcal{D}(\sqrt{S})$ and $\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w-S\pi_{\mathcal{BD}(G)}u\in U^{\bot}.$ Let $x\in E_{0}$ and set $w_{0}\coloneqq w-P_{\mathcal{BD}(D)}w\in\mathcal{D}(D_{0})$ as well as $x_{0}\coloneqq x-P_{\mathcal{BD}(G)}x\in\mathcal{D}(G_{0}).$ Then we compute \begin{align*} \left(K^{\diamond}Ku-L^{\diamond}w\right)(x) & =\langle Ku|Kx\rangle_{U}-\langle w|Lx\rangle_{H}\\ & =\langle\sqrt{S}\pi_{\mathcal{BD}(G)}u|\sqrt{S}\pi_{\mathcal{BD}(G)}x\rangle_{U}-\langle w|Gx\rangle_{H}\\ & =\langle S\pi_{\mathcal{BD}(G)}u-\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w|\pi_{\mathcal{BD}(G)}x\rangle_{\mathcal{BD}(G)}+\\ & \quad+\langle\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w|\pi_{\mathcal{BD}(G)}x\rangle_{\mathcal{BD}(G)}-\langle w|Gx\rangle_{H}\\ & =\langle P_{\mathcal{BD}(D)}w|GP_{\mathcal{BD}(G)}x\rangle_{H}+\langle DP_{\mathcal{BD}(D)}w|P_{\mathcal{BD}(G)}x\rangle_{E}-\langle w|Gx\rangle_{H}\\ & =\langle P_{\mathcal{BD}(D)}w|GP_{\mathcal{BD}(G)}x\rangle_{H}+\langle DP_{\mathcal{BD}(D)}w|P_{\mathcal{BD}(G)}x\rangle_{E}-\\ & \quad-\langle w|GP_{\mathcal{BD}(G)}x\rangle_{H}-\langle w|G_{0}x_{0}\rangle_{H}\\ & =\langle-w_{0}|GP_{\mathcal{BD}(G)}x\rangle_{H}+\langle DP_{\mathcal{BD}(D)}w|P_{\mathcal{BD}(G)}x\rangle_{E}+\langle Dw|x_{0}\rangle_{E}\\ & =\langle D_{0}w_{0}|P_{\mathcal{BD}(G)}x\rangle_{E}+\langle DP_{\mathcal{BD}(D)}w|P_{\mathcal{BD}(G)}x\rangle_{E}+\langle Dw|x_{0}\rangle_{E}\\ & =\langle Dw|x\rangle_{E}, \end{align*} where we have used $\pi_{\mathcal{BD}(G)}x\in\mathcal{D}(\sqrt{S})\subseteq U$ in the fourth equality. Thus, $K^{\diamond}Ku-L^{\diamond}w=Dw\in E.$ Moreover, $u\in E_{0}$ since $\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(S)\subseteq\mathcal{D}(\sqrt{S})$. This proves one inclusion. Let now $(u,w)\in E_{0}\times H$ with $K^{\diamond}Ku-L^{\diamond}w\in E.$ Then, by \prettyref{lem:D_extends}, $w\in\mathcal{D}(D)$ with $K^{\diamond}Ku-L^{\diamond}w=Dw$. We need to prove that $\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(S).$ We already have $\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(\sqrt{S})$ by the definition of $E_{0}$. Let now $v\in\mathcal{D}(\sqrt{S}).$ Then we have \begin{align*} \langle\sqrt{S}\pi_{\mathcal{BD}(G)}u|\sqrt{S}v\rangle_{U} & =\langle Ku|K\pi_{\mathcal{BD}(G)}^{\ast}v\rangle_{U}\\ & =\left(K^{\diamond}Ku-L^{\diamond}w\right)(\pi_{\mathcal{BD}(G)}^{\ast}v)+\langle w|L\pi_{\mathcal{BD}(G)}^{\ast}v\rangle_{H}\\ & =\langle Dw|\pi_{\mathcal{BD}(G)}^{\ast}v\rangle_{E}+\langle w|G\pi_{\mathcal{BD}(G)}^{\ast}v\rangle_{H}\\ & =\langle w|G\pi_{\mathcal{BD}(G)}^{\ast}v\rangle_{\mathcal{D}_{D}}\\ & =\langle\pi_{\mathcal{BD}(D)}w|\stackrel{\bullet}{G}v\rangle_{\mathcal{BD}(D)}\\ & =\langle\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w|v\rangle_{\mathcal{BD}(G)}, \end{align*} which gives $\pi_{\mathcal{BD}(G)}u\in\mathcal{D}(S)$ with $S\pi_{\mathcal{BD}(G)}u=P_{U}\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w,$ where $P_{U}$ denotes the orthogonal projection onto $U$. Hence, $(\pi_{\mathcal{BD}(G)}u,\stackrel{\bullet}{D}\pi_{\mathcal{BD}(D)}w)\in S\oplus\left(\{0\}\times U^{\bot}\right)=h,$ and thus, $(u,w)\in\mathcal{D}(A).$ \end{proof} \section*{Acknowledgement} The author would like to thank George Weiss, who asked the question on the relation between the operators considered in \cite{Staffans2012_phys} and \cite{Trostorff2013_bd_maxmon} during a workshop in Leiden.
\section{Active Tensor Sampling Method} \label{sec:active_strategy} The complete flowchart of our MRI reconstruction framework is illustrated in Fig.~\ref{fig:whole_framework}. In order to design an active sampling method for MRI reconstruction, two questions need to be answered: (1) How can we pick informative samples? (2) How can we guarantee that the new samples obey the patterns of MRI scans? We provide the details in this section. \subsection{Query-by-Committee-based Active Sampling} \label{sec:element_sample} We are inspired by a classical active learning approach called Query-by-Committee~\cite{seung1992query}. The key idea is to employ a committee of different models, each of which predicts the response values at some candidate samples. With such a committee, we can measure the quality of a candidate sample and pick the optimal one. The two key components of the Query-by-Committee approach are: \begin{itemize}[leftmargin=*] \item{\bf A committee of models.} The different tensor mode unfolding matrices obtained from solving Eq.~\reff{eq:raw_Obj} naturally form the model committee required by our active sampling method. This model committee enables us to define an element-wise utility measure, denoted as $u(\xi)$, where $\xi$ is an element in the tensor. \item {\bf A measure of sample quality.} After constructing a committee of models, we can define a measure of sample quality $u$. The sample with the largest $u$ is selected and added to the observation set: $\Omega \leftarrow \Omega \cup \mathop{\mathrm{argmax}}_{\xi}~u\left( {\xi} \right)$. We consider the predictive variance and the leverage score, as well as their combinations, as measures of sample quality, as detailed in the next subsection. \end{itemize} \subsection{Measure of Sample Quality} {\bf Predictive variance.} The first choice is the predictive variance across different tensor modes. Based on the solved ${\mat{X}_i}$, folding it back and enforcing its consistency with the observed data yields a mode-$i$ low-rank approximation to $\ten{M}$: \begin{equation} \label{eq:mode_appro} \begin{aligned} & {\tilde{\ten{M}}_i} {\rm{ := }} {{{\text{Fold}_i}(\mat{X}}_i}), \quad {\tilde{\ten{M}}_i}(\Omega) \leftarrow \ten{T}(\Omega). \end{aligned} \end{equation} We define the difference tensor of each approximation ${\Delta \ten{M}_i}$ and the predictive variance tensor $\ten{V}$ as: \begin{equation} \label{eq:var_measure} {\Delta \ten{M}_i} := {\tilde{\ten{M}}_i} - \mathbb{E}\left[ {\tilde{\ten{M}}} \right],\; \; \ten{V} := \sum\limits_{i = 1}^n {{w_i}} {\left( {\Delta \ten{M}_i} \circ {\Delta \ten{M}_i} \right)} \end{equation} where $\circ$ denotes a Hadamard (element-wise) product, and ${w_i}$ is the weight associated with the mode-$i$ approximation ${{\tilde{\ten{M}}}_i}$. The value of ${w_i}$ depends on the solver of Eq.~\reff{eq:raw_Obj}. In our implementation, we adopt the ADMM solver and have $\{w_i\}_{i=1}^n = \frac{1}{n}$. Here, $\mathbb{E}\left[ {\tilde{\ten{M}}} \right] := \sum\limits_{i = 1}^n {{w_i}{{\tilde{\ten{M}}}_i}}$ is the weighted average of all $n$ mode approximations. Among the $n$ mode approximation tensors $\left\{{\tilde{\ten{M}}_i}\right\}_{i=1}^n$, some of the estimated entries are the same as or similar to the true data (small variance), while some other estimated entries are very different (large variance). Therefore, we can treat $\ten{V}$ as a measure of disagreement. The following lemma describes the relation between the approximation errors of different modes. 
\begin{lemma} Let ${\mu}$ be the sum of all elements of the predictive variance tensor, $\mu:= \|{\rm vec}(\ten{V}) \|_1 = \sum\limits_{i = 1}^n {{w_i}} {\left\| {\Delta \ten{M}_i} \right\|_{\rm F}^2}$. Denote by ${\mu_i}$ the mode-$i$ approximation error, ${\mu_i}=\left\|\ten{M} - {\tilde{\ten{M}_i}} \right\|_{\rm F}^2$, and by ${\mu _{\rm tot}}$ the reconstruction error of the whole data set, $\mu _{\rm tot}=\left\|\ten{M}-\mathbb{E}\left[ {\tilde{\ten{M}}} \right] \right\|_{\rm F}^2$. Then we have \begin{equation} \label{eq:relation_err_var} {\mu _{\rm tot}} = \sum\limits_{i = 1}^n {w_i}{\mu_i } - \mu. \end{equation} \end{lemma} \begin{comment} \begin{proof} Given two tensors $\ten{A}$ and $\ten{B}$ of the same size, we have $\|\ten{A} - \ten{B}\|_{\rm F}^2 = \|\ten{A}\|_{\rm F}^2 + \|\ten{B}\|_{\rm F}^2 - 2 \| {\rm {vec}}(\ten{A} \circ \ten{B})\|_1$. Eq.~\eqref{eq:relation_err_var} is obtained by applying such an expansion to $\mu$, $\mu_i$ and $\mu _{\rm tot}$. \end{proof} \end{comment} The approximation results of the different modes gradually converge to one another as more samples are observed. When the tensor is fully sampled, both $\mu$ and $\mu_{\rm tot}$ approach zero. Heuristically, $\mu_{\rm tot}$ decreases quickly when a sample with the highest predictive variance is selected. {\bf Leverage score.} Our second quality measure $u$ is the leverage score. It comes from the incoherence property of a matrix~\cite{candes2009exact,eftekhari2018mc2}: a low-rank matrix can be recovered via nuclear norm minimization if it satisfies the incoherence property. Given the singular value decomposition (SVD) of a rank-$r$ matrix $\mat{A}= \mat{U}\mat{\Sigma} {\mat{V}^H} \in {\mathbb{C}^{{n_1}\times{n_2}}}$, the left and right leverage scores of the matrix are defined as: \begin{equation} \begin{array}{l} {\mat{l}(i)} := \frac{n_1}{r}{\left\| {{\mat{U}^H}{\mat{e}_i}} \right\|^2_2},\quad i = 1,2,\ldots,{n_1}, \\ {\mat{r}(j)} := \frac{n_2}{r}{\left\| {{\mat{V}^H}{\mat{e}_j}} \right\|^2_2},\quad j = 1,2,\ldots,{n_2}. \end{array} \end{equation} A leverage score measures the coherence of a row/column with a coordinate direction. Generally, the rows/columns with larger left/right scores are more informative. For the mode-$k$ unfolded matrix of a tensor, we can perform the SVD and calculate its left and right leverage scores $\mat{l}_k$ and $\mat{r}_k$, respectively; the element-level leverage score of a sample $\left(i,j \right)$ in mode $k$ is then defined as: \begin{equation} \label{eq:mode_lev_measure} {{\mathbf{\ell}_k} \left(i,j\right)}: = {\mat{l}_k(i)} \times {\mat{r}_k(j)},\quad k=1,2,\ldots,n. \end{equation} In a tensor structure, we can average the element-level leverage scores over all modes and treat the result as the utility measure: \begin{equation} \label{eq:lev_measure} {\ten{\ell}} := \sum\limits_{i = 1}^n {w_i} {\text{Fold}_i}\left( {\mat{\ell}_i} \right). \end{equation} \begin{comment} \begin{figure}[!t] \centering \includegraphics[scale=0.35]{Figure/QBC_fig.png} \caption{Flowchart of the Query-by-Committee-based active sampling method.} \label{fig:qbc_fig} \end{figure} \end{comment} Based on the above two measures, we propose four greedy methods for our active tensor sampling as follows: \begin{itemize}[leftmargin=*] \item \textbf{Method 1 (Var):} We take the utility measure $u$ to be the variance among different modes [Eq.~\reff{eq:var_measure}]. 
\item \textbf{Method 2 (Lev):} We take the utility measure $u$ to be the averaged leverage score [Eq.~\reff{eq:lev_measure}]. \item \textbf{Method 3 / 4 (Var + Lev / Var $\times$ Lev):} We take the utility measure $u$ to be the sum / Hadamard product of the normalized variance [Eq.~\reff{eq:var_measure}] and the leverage score [Eq.~\reff{eq:lev_measure}]. \end{itemize} \subsection{Sampling under Pattern Constraint} \label{sec:pattern_sample} Now we show how to pick samples subject to a certain pattern constraint. Suppose the unobserved $k$-space data is partitioned into $K$ patterns $\{\Pi_i\}_{i=1}^K$. For example, in Cartesian sampling, the pattern ${\Pi_i}$ is a rectilinear pattern, \textit{i}.\textit{e}.~a full column or row in a $k$-space matrix (${k_x}$-${k_y}$). We are then required to sample all elements of a certain pattern $\Pi_i$ rather than just one element. Since our utility measures are calculated element-wise in the Query-by-Committee, they can be easily applied to any sampling pattern by summing the utility measure over all elements included in the pattern. Consequently, we have a pattern-wise measure: \begin{equation}\label{eq:pattern_calculate} u\left( \Pi_i \right) = \sum\limits_{\xi \in \Pi_i} {u\left( \xi \right)}, \quad i = 1, 2, \ldots, K. \end{equation} As a result, the observation set can be updated as $\Omega \leftarrow \Omega \cup \mathop{\mathrm{argmax}}_{\Pi_i}~u\left( {\Pi_i} \right) $. Without any pattern constraint, Eq.~\reff{eq:pattern_calculate} degenerates to an element-wise measure. In practical implementations, the active learning algorithm can easily be extended to a batch version, and we stop the algorithm when it exhausts the sampling budget. \begin{comment} The complete active tensor reconstruction algorithm is summarized in Algorithm~\ref{alg:act_framework}. \begin{algorithm}[t] \caption{The comprehensive active tensor completion framework.} \label{alg:act_framework} \KwIn{Initial setting for Alg.~\ref{alg:alt_solver}, batch size $P$, active sampling method, and sampling pattern constraint} \KwOut{Reconstructed tensor $\ten{M}$} \textbf{Step 1:} Run Alg.~\ref{alg:alt_solver} as initialization\\ \textbf{Step 2:} Calculate the element-wise utility measure based on the adopted sampling method\\ { \textbf{Step 3:} Select $P$ patterns based on the pattern-wise utility measure [Eq.~\reff{eq:pattern_calculate}]\\ \textbf{Step 4:} Rerun the tensor completion algorithm (Alg.~\ref{alg:alt_solver}). \textbf{Step 5:} Check convergence. If not converged, go back to Step 2. } \end{algorithm} \end{comment} \begin{comment} \subsection{Method Analysis} \begin{lemma} Denote by ${\mu}$ the sum of all elements of the predictive variance tensor [Eq.~\eqref{eq:var_measure}], $\mu:= \|{\rm vec}(\ten{V}) \|_1 = \sum\limits_{i = 1}^n {{w_i}} {\left\| {\Delta \ten{M}_i} \right\|_{\rm F}^2}$; then we have: \begin{equation} \label{eq:relation_err_var} {\mu _{\rm tot}} = \sum\limits_{i = 1}^n {w_i}{\mu_i } - \mu. \end{equation} Here ${\mu _{\rm tot}}$ is the reconstruction error of the whole data set, $\mu _{\rm tot}=\left\|\ten{M}-\mathbb{E}\left[ {\tilde{\ten{M}}} \right] \right\|_{\rm F}^2$, and ${\mu_i}$ is the mode-$i$ approximation error, ${\mu_i}=\left\|\ten{M} - {\tilde{\ten{M}_i}} \right\|_{\rm F}^2$. \end{lemma} \begin{proof} Given two tensors $\ten{A}$ and $\ten{B}$ of the same size, we have $\|\ten{A} - \ten{B}\|_{\rm F}^2 = \|\ten{A}\|_{\rm F}^2 + \|\ten{B}\|_{\rm F}^2 - 2 \| {\rm {vec}}(\ten{A} \circ \ten{B})\|_1$. 
\section{Sub-Problems in the Block Coordinate Descent Solver} \label{sec:appendix_BCD} When we employ a block coordinate descent solver, \textit{i}.\textit{e}.~when solving Eq.~\reff{eq:BCD_Obj}, the decomposed sub-problems are as follows: \begin{itemize}[leftmargin=*] \item Sub-${\mat{L}_i}$ problem: we fix $\ten{S}$ and $\ten{M}$, and then update $\{ \mat{L}_i\}$ by solving: \begin{equation} \label{eq:sub-L} \begin{array}{ll} \mathop {\min } \limits_{\{\mat{L}_i\}} & \sum\limits_{i = 1}^n \left( {\alpha_i}{\left\| {\mat{L}_i} \right\|_*} + \frac{{\lambda_i}}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 \right) \\ & \equiv \sum\limits_{i = 1}^n \left( \frac{1}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 + \frac{{\alpha_i}}{{\lambda_i}}{\left\| {\mat{L}_i} \right\|_*} \right). \end{array} \end{equation} \item Sub-$\ten{S}$ problem: we fix $\{ \mat{L}_i\}_{i=1}^n$ and $\ten{M}$, and then update $\ten{S}$ by solving: \begin{equation} \label{eq:sub-S} \mathop {\min }\limits_{\ten{S}} \quad \sum\limits_{i = 1}^n \frac{{\lambda_i}}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 + {\lambda}{\left\| {\mathbb{T}{\mathbb{F}^{-1}}\ten{S}} \right\|_1}. \end{equation} \item Sub-$\ten{M}$ problem: we fix $\{ \mat{L}_i\}_{i=1}^n$ and $\ten{S}$, and then update $\ten{M}$ by solving: \begin{equation} \label{eq:sub-M} \begin{aligned} \mathop {\min }\limits_\ten{M} \quad & \sum\limits_{i = 1}^n {\frac{{\lambda_i}}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2}\\ \mathop{\mathrm{s.t.}} \quad & {\ten{M}_\Omega } = {\ten{T}_\Omega }. \end{aligned} \end{equation} \end{itemize} A sketch of the standard closed-form solution to the sub-${\mat{L}_i}$ problem is given below.
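Each sub-${\mat{L}_i}$ problem is the proximal operator of the nuclear norm, which is solved in closed form by singular value thresholding (SVT)~\cite{cai2010singular}. The variable names and the usage comment in the following sketch are illustrative assumptions, not our actual implementation.
\begin{verbatim}
# Singular value thresholding (SVT): closed-form solution of
#   argmin_L 0.5*||L - X||_F^2 + tau*||L||_* ,
# matching the sub-L_i problem with X = M_(i) - S_(i) and
# tau = alpha_i / lambda_i. Names are illustrative placeholders.
import numpy as np

def svt(X, tau):
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

# One BCD-style update for mode i (unfold_i = mode-i matricization):
# L_i = svt(unfold_i(M) - unfold_i(S), alpha_i / lambda_i)
\end{verbatim}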
\section{Sub-Problems in the Alternating Direction Method of Multipliers (ADMM) Solver} \label{sec:appendix_admm} Similarly, the sub-problems of ADMM, \textit{i}.\textit{e}.~of solving Eq.~\reff{eq:admm_Obj}, are as follows. \begin{itemize}[leftmargin=*] \item Sub-${\mat{L}_i}$ problem: \begin{equation} \label{eq:sub-L_admm} \begin{aligned} \mathop{\min}\limits_{\mat{L}_i} \quad& {\alpha_i}{{\left\| {{\mat{L}_i}} \right\|}_* } + \langle \ten{M} - \text{Fold}_i({\mat{L}_i}) - \ten{S}, {\ten{Y}_i} \rangle \\ & + \frac{\rho}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2. \end{aligned} \end{equation} \item Sub-$\ten{S}$ problem: \begin{equation} \label{eq:sub-S_admm} \begin{aligned} \mathop{\min}\limits_{\ten{S}} \quad& {\lambda}{\left\| \mathbb{T}{\mathbb{F}^{-1}}\ten{S} \right\|_1} + \sum\limits_{i = 1}^n \Big( \langle \ten{M} - \text{Fold}_i({\mat{L}_i}) - \ten{S}, {\ten{Y}_i} \rangle \\ & + \frac{\rho}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 \Big). \end{aligned} \end{equation} \item Sub-$\ten{M}$ problem: \begin{equation} \label{eq:sub-M_admm} \begin{aligned} \mathop{\min}\limits_{\ten{M}} \quad & \sum\limits_{i = 1}^n \Big( \frac{\rho}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 + \langle \ten{M} - \text{Fold}_i({\mat{L}_i}) - \ten{S}, {\ten{Y}_i} \rangle \Big)\\ \mathop{\mathrm{s.t.}} \quad & \ten{M}_{\Omega} = \ten{T}_{\Omega}. \end{aligned} \end{equation} \item Sub-$\ten{Y}_i$ update: the dual variables are updated by gradient ascent on the augmented Lagrangian, \begin{equation} \ten{Y}_i \leftarrow \ten{Y}_i + \rho \left( \ten{M} - \text{Fold}_i({\mat{L}_i}) - \ten{S} \right). \end{equation} \end{itemize}

\section{Conclusion} \label{sec:conclusion} In this paper, we have presented a tensor-format active sampling model for reconstructing high-dimensional low-rank MR images. The proposed $k$-space active sampling approach is based on the Query-by-Committee method. It can easily handle various pattern constraints in practical MRI scans. Numerical results have shown that the proposed active sampling methods outperform the existing matrix-coherence-based adaptive sampling method. In the future, we will develop a theoretical analysis of the sampling model and validate it on more realistic MRI data.

\section{Introduction} \label{sec: intro} Magnetic resonance imaging (MRI) is a major medical imaging modality, widely used in clinical diagnosis and neuroscience. Because its imaging speed is limited, it is often highly desirable to accelerate the imaging process. Many imaging models have been proposed to accelerate MR imaging, including sparsity-constrained~\cite{lustig2007sparse}, low-rank-constrained~\cite{liang2007spatiotemporal,lingala2011accelerated,zhao2012image}, and data-driven learning-based approaches~\cite{ravishankar2019image}. Most of these methods recover MRI data with matrix computational techniques. They either focus on 2-D MRI problems~\cite{huang2011efficient} or reshape the high-dimensional MRI data into a matrix and then solve the problem using matrix-based techniques~\cite{otazo2015low}. As a multi-dimensional generalization of matrix computation, tensor computation has recently been employed in MRI due to its capability of handling high-dimensional data~\cite{yu2014multidimensional,liu2019dynamic,trzasko2013unified}. In many applications, MRI data sets naturally have a high physical dimension. In these cases, tensors often better capture the hidden high-dimensional data pattern, achieving better reconstruction performance~\cite{he2016accelerated,kanatsoulis2019tensor}. The quality and efficiency of an MRI reconstruction also highly depend on the sampling method. Practical samples are measured in the spatial frequency domain of an MR image, often known as $k$-space. Some adaptive sampling techniques have been proposed for matrix-format MR imaging based on compressive sensing or low-rank models~\cite{8632928,levine2017fly}. Experimental design methodologies include Bayesian models~\cite{seeger2010optimization}, learning-based frameworks~\cite{gozcu2018learning,zhao2018optimal,sherry2020learning}, self-supervised frameworks~\cite{jin2019self}, and so forth.
Adaptive sampling may also be considered for streaming data~\cite{mardani2015subspace}. However, active sampling has not been explored for fast MR imaging with low-rank tensor models. Beyond the application of MR imaging, there is also only limited work on adaptive sampling for tensor-structured data. Existing works mainly rely on the matrix coherence property~\cite{krishnamurthy2013low,liu2015adaptive,deng2020network}. They need to reshape the tensor into a matrix, or can only be applied to 3-D tensors. Additionally, practical MRI sampling is usually subject to pattern constraints, such as Cartesian line sampling. Few papers have considered the pattern constraints imposed by practical MRI sampling. Therefore, designing an active sampling method for tensor-structured data under certain pattern constraints is an important and open problem. \textbf{Paper contributions.} This paper presents an active sampling method for accelerating high-dimensional MR imaging with low-rank tensors. Our specific contributions include: \begin{itemize}[leftmargin=*] \item Novel active sampling methods for low-rank tensor-structured MRI data. A Query-by-Committee method is used to search for the most informative samples adaptively. Exploiting the special tensor structure, the approximations of the unfolding matrices naturally form a committee. The sample quality is measured by the predictive variance, the averaged leverage scores, or their combinations. \item An extension of the sampling method to handle pattern constraints in MR imaging. Our proposed sampling method can be applied broadly beyond MRI reconstruction. \item Numerical validation on an MRI example with Cartesian sampling. Numerical results show that the proposed methods outperform existing tensor sampling methods. \end{itemize} \begin{comment} \begin{figure}[t] \centering \includegraphics[width=3.3in]{Figure/masks.eps} \caption{Some popular $k$-space MRI sampling patterns: (a) Non-Cartesian sampling in a radial trajectory; (b) Fully sampled central region and randomly non-Cartesian sampled non-central region; (c) Fully sampled central region and randomly Cartesian sampled non-central region; (d) Randomly Cartesian sampling.} \label{fig:sample_pattern} \end{figure} \end{comment} \section*{Acknowledgments} \bibliographystyle{Bib/IEEEtran} \section{Numerical results} \label{sec:numerical_results} In this section, we validate our active sampling methods on a low-rank tensor-format MRI data set. Our codes are implemented in MATLAB and run on a computer with a 2.3~GHz CPU and 16~GB memory. \begin{comment} \begin{table*}[t] \centering \caption{The number of added samples to achieve the expected metrics change.} \label{Tb:needed_sample} \begin{tabular}{cccccccc} \toprule Metric & Change & Var & Lev & Var+Lev & Var$\times$Lev & Rand & Reduction\\ \midrule $k$-test & 0.1085$\to$0.076 & \textbf{$<$0.31\%} & $<$0.61\% & $<$0.31\% & $<$0.31\% & 6.1\% & 20x \\ SER (dB)& 10.29$\to$12.13 & \textbf{$<$0.31\%} & $<$0.61\% & $<$0.31\% & $<$0.31\% & 6.1\% & 20x \\ PSNR (dB)& 32.74$\to$34.03 & \textbf{$<$0.31\%} & $<$0.31\% & $<$0.31\% & $<$0.31\% & 6.1\% & 20x \\ \bottomrule \end{tabular} \end{table*} \end{comment} \begin{figure*}[t] \centering \includegraphics[width=6.6in]{Figure/active.eps} \caption{Reconstruction results of different adaptive sampling methods. The proposed methods all outperform the existing Ada\_Coh and random methods.
Method 4 (Var $\times$ Lev) is the most suitable one in this example.} \label{fig:combine_act} \end{figure*} \textbf{Data set.} We use a 3D-spatial MRI data set\footnote{Available at http://www.cse.yorku.ca/~mridataset/} to demonstrate the proposed active tensor completion model. This data set contains cardiac MR images acquired from 33 subjects. The sequence of each subject consists of 20 time frames and 8--15 slices along the third spatial axis~\cite{andreopoulos2008efficient}. We select one frame of one subject, of size $256 \times 256 \times 10$, for validation here. \textbf{Sampling patterns.} In each spatial slice, we initialize the sampling as a fully Cartesian-sampled central region plus randomly Cartesian-sampled non-central regions. In this case, each newly added sample is a fiber in the $k$-space, obtained by fixing all but one index of the tensor. \textbf{Evaluation metrics.} Our methods sample and predict $k$-space data, and the final reconstruction needs to be visualized in the image space. Therefore, we choose evaluation metrics from both spaces. In the $k$-space, the accuracy is measured by the relative mean square error evaluated over the fully sampled $k$-space data, denoted as $k$-test. In the image space, we use the signal-to-error ratio (SER) and peak signal-to-noise ratio (PSNR) as our metrics: \begin{equation} {\rm {SER}} := - 10{\log _{10}}\frac{{{{\left\| {{\ten{I}_{\rm {res}}} - {\ten{I}_{\rm {full}}}} \right\|}_F}}}{{{{\left\| {{\ten{I}_{\rm {full}}}} \right\|}_F}}} \end{equation} \begin{equation} {\rm {PSNR}} := 20{\log _{10}}\frac{{\max \left( {{\ten{I}_{\rm {res}}}} \right)}}{{\sqrt { {\rm {MSE}}(\ten{I})} }}. \end{equation} Here $\ten{I}_{\rm {res}}$ and $\ten{I}_{\rm {full}}$ denote the reconstructed and true image-space data, and ${{\rm {MSE}}(\ten{I})}$ is their mean square error. (A direct transcription of these metrics is given at the end of this section.) {\bf Baselines for comparison.} We perform the experiments using the proposed active learning methods, a matrix-coherence-based adaptive tensor sampling method (denoted Ada\_Coh)~\cite{krishnamurthy2013low}, and a random sampling method. \textbf{Results summary.} In this example, the $k$-space data is scanned continuously as a fiber in the Cartesian coordinate system. The initial data is generated with a sampling ratio of 28.37\%, including a 10.55\% fully sampled center. In active sampling, each sampling batch includes 10 fibers, and we select a total of 40 batches sequentially. Fig.~\ref{fig:combine_act} plots the $k$-test, SER and PSNR as the sampling ratio increases. All proposed active sampling methods significantly improve the evaluation metrics and outperform Ada\_Coh. Note that in~\cite{krishnamurthy2013low}, the sampling is driven by the coherence of one mode matricization of a tensor. Our Method 2 (Lev) can be seen as its generalization, taking into account the connections among different modes. Fig.~\ref{fig:Spatial} shows one reconstructed frame of the MRI data, and the evaluations on the whole data set are shown in Table~\ref{tab:evaluation}. \begin{table}[t] \centering \caption{Evaluations on the whole data set.} \label{tab:evaluation} \begin{tabular}{cccc} \toprule & Initial & Ada\_Coh & \textbf{Proposed (Method 4)} \\ \midrule SER (dB) & 10.15 & 13.77 & \textbf{15.17} \\ PSNR (dB) & 30.19 & 34.55 & \textbf{36.45} \\ \bottomrule \end{tabular} \end{table} The proposed sampling method is shown to have the least reconstruction error.
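For reference, a direct NumPy transcription of the two image-space metrics, following the definitions above exactly as written, might look as follows; \texttt{I\_res} and \texttt{I\_full} stand for the reconstructed and fully sampled image-space tensors.
\begin{verbatim}
# Direct transcription of the SER and PSNR definitions given above;
# I_res / I_full are the reconstructed and fully sampled image tensors.
import numpy as np

def ser_db(I_res, I_full):
    # Frobenius norms over the whole (possibly complex) tensors.
    return -10.0 * np.log10(np.linalg.norm((I_res - I_full).ravel()) /
                            np.linalg.norm(I_full.ravel()))

def psnr_db(I_res, I_full):
    mse = np.mean(np.abs(I_res - I_full)**2)
    return 20.0 * np.log10(np.abs(I_res).max() / np.sqrt(mse))
\end{verbatim}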
\begin{figure} \centering \includegraphics[width=1\columnwidth]{Figure/rec_heart.png} \caption{One reconstructed frame of the data.} \label{fig:Spatial} \end{figure} \section{Background} \label{sec:preliminaries} \subsection{Notation} Throughout this paper, a scalar is represented by a lowercase letter, \textit{e}.\textit{g}., $x$; a vector or matrix is represented by a boldface lowercase or capital letter, respectively, \textit{e}.\textit{g}., $\mat{x}$ and $\mat{X}$. A tensor, which describes a multidimensional data array, is represented by an Euler script calligraphic letter. For instance, an $n$-dimensional tensor is denoted as $\ten{X} \in \mathbb{R}^{{I_1} \times {I_2} \times \ldots \times {I_n}}$, where ${I_i}$ is the mode size of the $i$-th mode (or dimension). An element indexed by $({i_1}, {i_2}, \ldots, {i_n})$ in tensor $\ten{X}$ is denoted as $x_{{i_1}{i_2}\ldots{i_n}}$. The tensor Frobenius norm is defined as $\left\|\ten{X} \right\|_F := \sqrt{\sum\limits_{{i_1},{i_2},\ldots,{i_n}}({x_{{i_1}{i_2}\ldots{i_n}}})^2}$. A tensor $\ten{X} \in \mathbb{R}^{{I_1} \times {I_2} \times \ldots \times {I_n}}$ can be unfolded into a matrix along the $k$-th mode/dimension, denoted as ${\text{Unfold}_k}(\ten{X}):={\mat{X}_{(k)}} \in \mathbb{R}^{{I_k}\times {{I_1}\ldots{I_{k-1}}{I_{k+1}}\ldots{I_{n}} }}$. Conversely, folding the mode-$k$ matricization back into the original tensor is denoted as ${\text{Fold}_k}({\mat{X}_{(k)}}):=\ten{X}$. \subsection{Tensor-Format MRI Reconstruction} In tensor completion, one aims to predict a whole tensor given only a subset of its elements, similar to matrix completion.
In matrix cases, one often uses the nuclear norm ${\left\| \cdot \right\|_*}$ (the sum of singular values) as a surrogate for the matrix rank, and seeks a matrix with minimal nuclear norm. Exactly computing the tensor rank is NP-hard~\cite{hillar2013most}. A popular heuristic surrogate for the tensor (Tucker) rank is the following generalization of the matrix nuclear norm~\cite{liu2012tensor}: \begin{equation}\label{eq:ten_nuclear} {\left\| \ten{X} \right\|_*} = \sum\limits_{i = 1}^n {{\alpha _i}{{\left\| {{\mat{X}_{\left( i \right)}}} \right\|}_* }}, \end{equation} where $\{{\alpha _i}\}^n_{i=1}$ are weights satisfying ${\alpha _i}>0$ and $\sum\nolimits_{i = 1}^n {{\alpha _i}} = 1$. The tensor completion problem can then be formulated as minimizing Eq.~\reff{eq:ten_nuclear} given the existing observations. \begin{comment} Consequently, a low-rank tensor completion problem can be formulated as: \begin{equation} \label{eq:min_lr_ten} \begin{aligned} \mathop {\min }\limits_{\ten{X}} \quad & {\sum\limits_{i = 1}^n {{\alpha _i}{{\left\| {{\mat{X}_{\left( i \right)}}} \right\|}_* }}} \\ \mathop{\mathrm{s.t.}} \quad & {\ten{X}_\Omega } = {\ten{T}_\Omega }, \end{aligned} \end{equation} where $\ten{T}$ is the original tensor, and $\Omega$ is its observation set. \end{comment} The resulting reconstruction model, together with the reformulation on which our active sampling strategies build, is presented in Section~\ref{sec:ten_completion}. \section{Related works} \label{sec:related_works} \par \textbf{Tensor Methods for MRI Reconstruction:} Tensor methods have been employed in MR imaging since the ambient dimension of MRI data sets is often high. Yang~\textit{et al}.~\cite{yang2017dynamic} showed that low-rank tensor completion with a sparsity regularizer could accelerate MRI reconstruction.
Christodoulou~\textit{et al}.~\cite{christodoulou2018magnetic} applied low-rank tensor modeling to motion-resolved quantitative imaging. The sparse transform of compressive-sensing MRI is formulated as a Tucker tensor decomposition in~\cite{yu2014multidimensional}. Liu~\textit{et al}.~\cite{liu2018accelerated} proposed a phase-constrained low-rank model for high b-value diffusion-weighted MRI. The theory of regular sub-Nyquist sampling for tensor completion was studied in \cite{kanatsoulis2019tensor}. He~\textit{et al}.~\cite{he2016accelerated} first estimated an explicit low-rank tensor subspace and then reconstructed high-dimensional data by fitting the subspace. A locally low-rank tensor method for dynamic cardiac MRI reconstruction was proposed by Liu~\textit{et al}.~\cite{liu2019dynamic}, where patch-based local structure is exploited. Although tensor models have been exploited to enable MRI reconstruction from highly undersampled data, this paper is the first to leverage a tensor model for active sampling in MR imaging. \textbf{Experimental Design for MRI:} Experimental design, or active sampling, is an important approach to accelerating MRI reconstruction. Seeger~\textit{et al}.~\cite{seeger2010optimization} proposed a Bayesian experimental design framework that maximizes the information gain. A learning-based method was proposed in \cite{gozcu2018learning} to select samples for compressive MRI. Ravishankar~\textit{et al}.~\cite{ravishankar2011adaptive} partitioned the sampling space and proposed an adaptive sampling pattern by moving the samples among blocks. Zhao~\textit{et al}.~\cite{zhao2018optimal} proposed an optimal experimental design to maximize the signal-to-noise-ratio efficiency with estimation-theoretic bounds. A self-supervised framework was proposed by Jin~\textit{et al}.~\cite{jin2019self} to accelerate compressive MRI by considering both the data acquisition and the reconstruction process within a single deep network. Zhang~\textit{et al}.~\cite{zhang2019reducing} proposed an active acquisition model based on uncertainty reduction, introducing an evaluator network that evaluates the quality gain in reconstruction from each $k$-space measurement. Once MRI reconstruction is formulated as a matrix/tensor completion problem, the experimental design becomes an active matrix/tensor completion problem. Some researchers have developed Bayesian frameworks for active matrix completion~\cite{chakraborty2013active,sutherland2013active}. Claude~\textit{et al}.~\cite{claude2017efficient} offered an adaptive sampling method under a smoothness assumption on the matrix components. An entropy-driven approach for active matrix completion with uncertainty quantification was also proposed in~\cite{mak2017active}. However, active sampling for tensor completion problems is much less studied. An adaptive tubal-sampling method was proposed for 3-D low-tubal-rank tensor completion~\cite{liu2015adaptive}, but it cannot be generalized to higher-order tensors. Krishnamurthy and Singh~\cite{krishnamurthy2013low} proposed a probabilistic sampling method to adaptively estimate and update the tensor singular subspace based on an incoherence condition. Neither of these works considered the pattern constraints in MRI sampling. To the best of our knowledge, this work is the first to propose a deterministic active sampling method for tensor MR imaging while incorporating the sampling constraints inherent in the MR data acquisition process.
\section{Low-rank Tensor-Format MRI Reconstruction} \label{sec:ten_completion} As shown in Eq.~\reff{eq:ten_nuclear}, the $n$ mode matricizations $\{{\mat{X}_{(i)}}\}^n_{i=1}$ represent the same set of data and are thus coupled to each other, which makes the problem hard to solve. Therefore, we replace them with $n$ additional matrices $\{{\mat{X}_i}\}^n_{i=1}$ and introduce an additional tensor $\ten{M}$. The optimization problem can be reformulated as: \begin{equation}\label{eq:raw_Obj} \begin{aligned} \mathop {\min }\limits_{\left\{\mat{X}_i\right\}^n_{i=1},\ten{M}} \quad & \sum\limits_{i = 1}^n {{\alpha_i}{{\left\| {{\mat{X}_i}} \right\|}_ * }}\\ \mathop{\mathrm{s.t.}} \quad & {\mat{X}_ i} = {\mat{M}_{(i)}},\quad i = 1,2,\ldots,n\\ &{\ten{M}_\Omega } = {\ten{T}_\Omega }, \end{aligned} \end{equation} where $\ten{M}$ is a tensor in the $k$-space; $\mat{M}_{(i)}$ is the $i$-th mode matricization of $\ten{M}$; $\ten{T}$ is the fully sampled $k$-space tensor; and $\Omega$ is its observation set. Essentially, we can model the low-rankness of either the $k$-space or the image-space data. We choose the former since it enables the design of our active sampling methods in Section~\ref{sec:active_strategy}. If the MRI data is known to have some additional structure, we can further modify the imaging model by adding a regularization term, such as the low-rank-plus-sparsity model~\cite{roohi2017multi}. Eq.~\reff{eq:raw_Obj} can be efficiently solved via alternating solvers, such as block coordinate descent and the alternating direction method of multipliers (ADMM)~\cite{liu2012tensor}. Our proposed active sampling strategies will exploit these additional mode approximations $\left\{\mat{X}_i\right\}^n_{i=1}$ to acquire new $k$-space samples. The complete flowchart of our MRI reconstruction framework is illustrated in Fig.~\ref{fig:whole_framework}. \begin{figure}[!t] \centering \includegraphics[scale=0.4]{Figure/framework.png} \caption{Flowchart of the MRI reconstruction framework.} \label{fig:whole_framework} \end{figure} \begin{comment} \subsection{Block Coordinate Descent Solver to Eq.~\reff{eq:raw_Obj}} The first equality constraint in Eq.~\reff{eq:raw_Obj} still makes the variables coupled to each other. To address this issue, we further relax the problem and convert it to the following one: \begin{equation} \label{eq:BCD_Obj} \begin{aligned} \min\limits_{\left\{\mat{L}_i\right\}^n_{i=1},\ten{S},\ten{M}} \quad & \sum\limits_{i = 1}^n ({{\alpha_i}{{\left\| {\mat{L}_i} \right\|}_ * }} + \frac{{\lambda_i}}{{2}}\left\| {{\mat{L}_ i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2) \\ & + {\lambda}{\left\| \mathbb{T}{\mathbb{F}^{-1}}\ten{S} \right\|_1}\\ \mathop{\mathrm{s.t.}} \quad &{\ten{M}_\Omega } = {\ten{T}_\Omega } \end{aligned} \end{equation} where $\lambda_i$ is a fixed positive constant associated with each relaxed constraint. The resulting problem~\eqref{eq:BCD_Obj} is convex and decoupled, and it can be easily solved via a block coordinate descent (BCD) algorithm. The block coordinate descent method divides all variables into several groups and solves a sub-problem for each group while keeping the others fixed. Specifically, we divide all variables into $n+2$ groups: $\left\{ {{\mat{L}_i}} \right\}_{i = 1}^n$, $\ten{S}$, and $\ten{M}$.
In every iteration, the subproblems shown in appendix~\ref{sec:appendix_BCD} are solved with the following closed-form updates: \begin{itemize}[leftmargin=*] \item Update ${\mat{L}_i}$: \begin{equation}\label{eq:BCD_L} {\mat{L}_i} = {\text{SVT}_{\frac{{{\alpha_i}}}{{{\lambda _i}}}}}\left( {{\mat{M}_{\left( i \right)}} - {\mat{S}_{(i)}}} \right). \end{equation} Here $\text{SVT}(\cdot)$ denotes a singular value thresholding operator~\cite{cai2010singular}. Let $\mat{X} = \mat{U}\mat{{\Sigma }}\mat{{V^H}}$ be a singular value decomposition, then ${\text{SVT}_\tau }\left( \mat{X} \right) = \mat{U}\mat{{\Sigma _\tau }}\mat{{V^H}}$. Here $\mat{\Sigma}_\tau$ is a diagonal matrix and its $i$-th element is $ \max (\sigma_i - \tau, 0)$ with $\sigma_i$ being the $i$-th largest singular value of $\mat{X}$. \item Update $\ten{S}$: \begin{equation}\label{eq:BCD_S} \ten{S} = {\mathbb{T}^{ - 1}}\left[\mathbb{F}\left( {{\Lambda _{{\lambda}}}\left( {\mathbb{T}{\mathbb{F}^{ - 1}}\left[ {\ten{M} - \frac{{\sum\nolimits_i {{\lambda_i}{\text{Fold}_i}\left( {{\mat{L}_i}} \right)} }}{{\sum\nolimits_i {{\lambda_i}} }}} \right]} \right)} \right)\right ]. \end{equation} Here ${\Lambda _\lambda }\left( x \right)$ denotes a soft-thresholding operator ${\Lambda _\lambda }\left( x \right) = \frac{x}{{\left| x \right|}}\max \left( {\left| x \right| - \lambda ,0} \right)$ where $\left|x\right|$ is the amplitude of a complex number $x$. The operator $\Lambda(\cdot)$ can be extended to a matrix or tensor case by applying it on each element. \item Update $\ten{M}$: each element is updated as follows: \begin{equation} \label{eq:BCD_M} \begin{array}{l} {m_{{i_1} \ldots {i_n}}} = \left\{{ \begin{array}{cc} {{{\hat{m}}_{{i_1}, \ldots ,{i_n}}}}, & \text{if } {\left( {{i_1}, \ldots ,{i_n}} \right) \notin \Omega } \\ {{t_{{i_1}, \ldots ,{i_n}}}}, & {\text{otherwise} } \end{array} } \right. \\ \text{with tensor } {\hat{\ten{M}}} ={ {\frac{{\sum\nolimits_i {{\lambda_i}{\text{Fold}_i}\left( {{\mat{L}_i}} \right)} }}{{\sum\nolimits_i {{\lambda_i}} }} + \ten{S}. }} \end{array} \end{equation} Here $t_{i_1 i_2 \cdots i_n}$ denotes one element of $\ten{T}$. \end{itemize} According to~\cite{tseng2001convergence}, the above implementation is guaranteed to converge to a coordinate-wise minimum point and stationary point since our function holds the separability and regularity properties. Different from~\cite{liu2012tensor}, our method cannot ensure global convergence since the additional $\ell_1$-norm term is not strictly convex \hzc{when the number of variables p $>$ the number of observations n}, thus the optimal solution may not be unique~\cite{tibshirani2013lasso}. However, the additional sparsity term does help for reconstruction, which will be shown in our experiments. \end{comment} \begin{comment} \subsection{Alternating Direction Method of Multipliers for Eq.~\reff{eq:raw_Obj}} The alternating direction method of multipliers (ADMM) does not relax the equality constraint in Eq.~\reff{eq:raw_Obj}. Instead, it introduces some additional dual variables to handle the constraints. In our tensor completion formulation, it is essentially a consensus ADMM approach~\cite{boyd2011distributed} to decompose the original problem. 
The original problem in~\reff{eq:raw_Obj} is transformed into a problem of minimizing the augmented Lagrangian function: \begin{equation}\label{eq:admm_Obj} \begin{aligned} \mathop {\min } \quad& L_\rho(\ten{M},{\mat{L}_1},\ldots,{\mat{L}_n},\ten{S},{\ten{Y}_1},\ldots,{\ten{Y}_n})\\ \mathop{\mathrm{s.t.}} \quad & {\ten{M}_\Omega } = {\ten{T}_\Omega }, \end{aligned} \end{equation} with \begin{equation} \begin{aligned} & L_\rho(\ten{M},{\mat{L}_1},\ldots,{\mat{L}_n},\ten{S},{\ten{Y}_1},\ldots,{\ten{Y}_n})\\ = & \sum\limits_{i = 1}^n \Big[{{\alpha_i}{{\left\| {{\mat{L}_i}} \right\|}_ * }} + \frac{\rho}{2}\left\| {{\mat{L}_i} + {\mat{S}_{(i)}} - {\mat{M}_{(i)}}} \right\|_F^2 + \\ & \left \langle \ten{M} - \text{Fold}_i({\mat{L}_i}) - \ten{S}, {\ten{Y}_i} \right \rangle \Big] + {\lambda}{\left\| \mathbb{T}{\mathbb{F}^{-1}}\ten{S} \right\|_1}. \end{aligned} \end{equation} Similarly, Eq.~\reff{eq:admm_Obj} can also be efficiently solved in an alternating scheme. In every iteration, the sub-problems are shown in appendix~\ref{sec:appendix_admm} and the updating rules are listed as follows: \begin{equation}\label{eq:admm_L} {\mat{L}_i} = {\text{SVT}_{\frac{{{\alpha_i}}}{{\rho}}}}\left( {{\mat{M}_{\left( i \right)}} - {\mat{S}_{(i)}}} + \frac{1}{\rho} \mat{Y}_{i(i)} \right), \quad i=1,\ldots,n. \end{equation} \begin{equation}\label{eq:admm_S} \ten{S} = {\mathbb{T}^{ - 1}}\left[\mathbb{F}\left( {\Lambda _{\frac{\lambda}{\rho}}}\left( \mathbb{T}{\mathbb{F}^{ - 1}}\left[ \ten{M} - \sum\nolimits_i \left( \frac{{\text{Fold}_i}\left( {\mat{L}_i} \right)}{n} - \frac{\ten{Y}_i}{n \rho} \right) \right] \right) \right)\right]. \end{equation} \begin{equation} \label{eq:admm_M} \begin{array}{l} {m_{{i_1} \ldots {i_n}}} = \left\{{ \begin{array}{cc} {{{\hat{m}}_{{i_1}, \ldots ,{i_n}}}}, & \text{if } {\left( {{i_1}, \ldots ,{i_n}} \right) \notin \Omega } \\ {{t_{{i_1}, \ldots ,{i_n}}}}, & {\text{otherwise} } \end{array} } \right. \\ \text{with tensor } {\hat{\ten{M}}} = { \sum_{i} \left({{\text{Fold}_i}\left( {{\mat{L}_i}} \right)} + \ten{S} - \frac{1}{\rho}{\ten{Y}_i}\right) / n}. \end{array} \end{equation} \begin{equation}\label{eq:admm_Y} \ten{Y}_i= \ten{Y}_i-\rho\left({\text{Fold}_i}\left( {{\mat{L}_i}} \right)+\ten{S}-\ten{M}\right). \end{equation} \begin{algorithm}[t] \caption{Alternating solvers to Eq.~\reff{eq:raw_Obj}.} \label{alg:alt_solver} \KwIn{Initial $k$-space data $\ten{M}$ with ${\ten{M}_\Omega } = {\ten{T}_\Omega }$, maximum iteration $J$} \KwOut{Reconstructed tensor $\ten{M}$, mode approximations ${\{\tilde{\ten{M}}_i}\}^n_{i=1}$} \For{$j =1,2,\dots,J$} { \If{BCD}{ \For{$i =1,2, \ldots ,n$} {Update ${\mat{L}_i}$ via Eq.~\reff{eq:BCD_L}}\\ Update $\ten{S}$ via Eq.~\reff{eq:BCD_S}\\ Update ${\ten{M}}$ via Eq.~\reff{eq:BCD_M}\\ } \If{ADMM}{ \For{$i =1,2, \ldots ,n$} {Update ${\mat{L}_i}$ via Eq.~\reff{eq:admm_L}}\\ Update $\ten{S}$ via Eq.~\reff{eq:admm_S}\\ Update ${\ten{M}}$ via Eq.~\reff{eq:admm_M}\\ \For{$i =1,2, \ldots ,n$} { Update ${\ten{Y}_i}$ via Eq.~\reff{eq:admm_Y}\\ } } } Calculate ${\{\tilde{\ten{M}}_i}\}^n_{i=1}$ via Eq.~\reff{eq:mode_appro} \end{algorithm} The complete algorithm flow for solving the L+S tensor completion problem, including both block coordinate descent and ADMM, is summarized in Algorithm~\ref{alg:alt_solver}. The ADMM solver also converges to a coordinate-wise minimum since it follows the same decomposition scheme with an additional dual variable. The main advantage of ADMM is that it handles the equality constraints directly and yields a more accurate formulation.
The convergence rate of ADMM is still under investigation, but experiments show that ADMM usually outperforms block coordinate descent in practice, as will be shown later. It is worth emphasizing that the constructed modes are not equivalent to each other even though the equality constraints of Eq.~\reff{eq:raw_Obj} are not relaxed. The tensor nuclear norm in the objective function leads to different objectives in the decomposed $\{\text{Sub-}{\mat{L}_i}\}_{i=1}^n$ problems (see appendix~\ref{sec:appendix_admm}); therefore, different solutions are obtained for different modes. In the next section, we will show that this difference is the key to designing our active sampling method. \end{comment}
\section{Introduction} Observations of ultra-high-energy (UHE; \mbox{$> 10^{18}$}~eV) cosmic rays (CRs), and attempts to detect their expected counterpart neutrinos, are hampered by their extremely low flux. The detection of a significant number of UHE particles requires the use of extremely large detectors, or the remote monitoring of a large volume of a naturally-occurring detection medium. One approach, suggested by \citet{dagkesamanskii1989}, is to make use of the lunar regolith as the detection medium by observing the Moon with ground-based radio telescopes, searching for the Askaryan radio pulse produced when the interaction of a UHE particle initiates a particle cascade\gcitep{askaryan1962}. The high time resolution required to detect this coherent nanosecond-scale pulse puts these efforts in a quite different regime to conventional radio astronomy. Since the first application of this lunar radio technique with the Parkes radio telescope\gcitep{hankins1996}, many similar experiments have been conducted, none of which has positively detected a UHE particle. Consequently, these experiments have placed limits on the fluxes of UHECRs and neutrinos. To determine these limits, each experiment has developed an independent calculation of its sensitivity to radio pulses and, in most cases, an independent model for calculating the resulting aperture for the detection of UHE particles. This situation calls for further work in two areas, both of which are addressed here: the recalculation of the radio sensitivity of past experiments in a common framework, incorporating all known experimental effects, and the calculation of the resulting apertures for both UHECRs and neutrinos using a common analytic model. An additional benefit of this work is to provide a comprehensive description of the relevant experimental considerations, with past experiments as case studies, to support future work in this field. To that end, I also present here a similar analysis of the radio sensitivity and particle aperture for several possible future lunar radio experiments. The most sensitive telescope available for the application of this technique for the foreseeable future will be the Square Kilometre Array (SKA), prospects for which have been discussed elsewhere\gcitep{bray2014b}, but phase~1 of this instrument is not scheduled for completion until 2023; in this work, I instead evaluate three proposed experiments that could be carried out in the near future (\mbox{$< 5$}~yr) with existing radio telescopes. Most other experiments that could be conducted with existing radio telescopes will resemble one of these. This work is organised as follows. In \secref{sec:radio} I address the calculation of the sensitivity of radio telescopes to coherent pulses, obtaining a similar result to Eq.~2 of \citet{gorham2004a}, but incorporating a wider range of experimental effects. This provides the theoretical basis for the re-evaluation in \secref{sec:exps} of past lunar radio experiments, in which I calculate a common set of parameters to represent their sensitivity to a lunar-origin radio pulse. Alongside these, I calculate the same parameters for proposed near-future experiments. In \secref{sec:nossr} I discuss the calculation of the sensitivity of lunar radio experiments to UHE particles. For each of the experiments evaluated in \secref{sec:exps}, I calculate the sensitivity to neutrinos based on the analytic model of \citet{gayley2009}, and the sensitivity to UHECRs based on the analytic model of \citet{jeong2012}.
Finally, in \secref{sec:discussion}, I briefly discuss the implications for future work in this field. \section{Sensitivity to coherent radio pulses} \label{sec:radio} The sensitivity of a radio telescope is characterised by the system equivalent flux density (SEFD), conventionally measured in janskys (1~Jy = $10^{-26}$ W~m$^{-2}$~Hz$^{-1}$), which is given by \begin{equation} \inangle{F} = 2 \, \frac{ k \, \ensuremath{T_{\rm sys}} }{ \ensuremath{A_{\rm eff}} } \label{eqn:sefd} \end{equation} where $k$ is Boltzmann's constant, $\ensuremath{T_{\rm sys}}$ the system temperature and $\ensuremath{A_{\rm eff}}$ the effective aperture (i.e.\ the total collecting area of the telescope multiplied by the aperture efficiency). In the context of a lunar radio experiment, the system temperature is typically dominated by thermal radiation from the Moon --- or, at lower frequencies, by Galactic background emission --- with a smaller contribution from internal noise in the radio receiver. However, the strength of a coherent pulse, such as the Askaryan pulse from a particle cascade, is expressed in terms of a spectral electric field strength, in e.g.\ V/m/Hz. To describe the sensitivity of a radio telescope to a coherent pulse, we must relate this quantity to the parameters in \eqnref{eqn:sefd}. The factor of two in \eqnref{eqn:sefd} occurs because the flux contains contributions from two polarisations, whether these are considered as orthogonal linear polarisations or as opposite circular polarisations (left and right circular polarisations; LCP and RCP). The bolometric flux density in a single polarisation is given by the time-averaged Poynting vector \begin{equation} \inangle{S} = \frac{ E_{\rm rms}^2 }{ Z_0 } \label{eqn:poynting} \end{equation} where $E_{\rm rms}$ is the root mean square (RMS) electric field strength in that polarisation, and $Z_0$ is the impedance of free space. If the received radiation has a flat spectrum over a bandwidth $\Delta\nu$, the total spectral flux density is found by averaging the combined bolometric flux density in both polarisations over the band, giving us \begin{align} \inangle{F} &= 2 \, \frac{ \inangle{S} }{ \Delta\nu } \\ &= 2 \, \frac{ E_{\rm rms}^2 }{ Z_0 \, \Delta\nu } & \mbox{from \eqnref{eqn:poynting}} \label{eqn:sefd_poynting} \end{align} which is the SEFD again. Combining \eqnrefii{eqn:sefd}{eqn:sefd_poynting} shows that \begin{equation} E_{\rm rms} = \left( \frac{ k \, \ensuremath{T_{\rm sys}} \, Z_0 \, \Delta\nu }{ \ensuremath{A_{\rm eff}} } \right)^{1/2} . \label{eqn:bigErms} \end{equation} It is also useful to define \begin{align} \ensuremath{\mathcal{E}_{\rm rms}} &= \frac{ E_{\rm rms} }{ \Delta\nu } \label{eqn:Erms_basic} \\ &= \left( \frac{ k \, \ensuremath{T_{\rm sys}} \, Z_0 }{ \ensuremath{A_{\rm eff}} \, \Delta\nu } \right)^{1/2} & \mbox{from \eqnref{eqn:bigErms},} \label{eqn:Erms} \end{align} the equivalent RMS spectral electric field for this bandwidth, although for incoherent noise it should be borne in mind that, unlike the flux density, the spectral electric field varies with the bandwidth. This is in contrast to the behaviour of coherent pulses, for which the spectral electric field is bandwidth-independent, while the flux density scales with the bandwidth. A brief numerical illustration of \eqnref{eqn:Erms} follows.
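The parameter values in this sketch are arbitrary placeholders rather than those of any particular experiment.
\begin{verbatim}
# Evaluate Eq. (Erms): the RMS spectral electric field equivalent to the
# system noise. Parameter values are illustrative placeholders only.
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
Z_0 = 376.730        # impedance of free space, ohm

def spectral_E_rms(T_sys, A_eff, bandwidth):
    """E_rms in V/m/Hz for T_sys [K], A_eff [m^2], bandwidth [Hz]."""
    return math.sqrt(k_B * T_sys * Z_0 / (A_eff * bandwidth))

# e.g. T_sys = 120 K, A_eff = 4000 m^2, a 300 MHz band:
print(spectral_E_rms(120.0, 4000.0, 300e6))   # ~7.2e-16 V/m/Hz
\end{verbatim}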
The sensitivity of an experiment to detect a coherent radio pulse can be expressed as $\ensuremath{\mathcal{E}_{\rm min}}$, a threshold spectral electric field strength above which a pulse would be detected. This is typically measured with respect to $\ensuremath{\mathcal{E}_{\rm rms}}$, in terms of a significance threshold $n_\sigma$. Note that the addition of thermal noise will increase or decrease the amplitude of a pulse, so that $\ensuremath{\mathcal{E}_{\rm min}}$ is actually the level at which the detection probability is 50\% rather than an absolute threshold, but this distinction becomes less important when $n_\sigma$ is large. $\ensuremath{\mathcal{E}_{\rm min}}$ further depends on the position of the pulse origin within the telescope beam, as \begin{equation} \ensuremath{\mathcal{E}_{\rm min}}(\theta) = f_C \, \frac{n_\sigma}{\alpha} \sqrt{ \frac{\eta}{\mathcal{B}(\theta)} } \, \ensuremath{\mathcal{E}_{\rm rms}} \label{eqn:Emin} \end{equation} where $\mathcal{B}(\theta)$ is the beam power at an angle $\theta$ from its axis, normalised to \mbox{$\mathcal{B}(0) = 1$} and assumed here to be radially symmetric (e.g.\ an Airy disk). This same equation is used to calculate $\ensuremath{\mathcal{E}_{\rm max}}$ as described in \secref{sec:exps}. The factor $\eta$ is the ratio between the total pulse power and the power in the chosen polarisation channel, typically found as \begin{equation} \eta = \begin{cases} 2 & \mbox{for circular polarisation} \\ 1 / \cos^2\phi & \mbox{for linear polarisation} \end{cases} \label{eqn:eta} \end{equation} with $\phi$ the angle between the receiver and a linearly polarised pulse such as that expected from the Askaryan effect. The term $\alpha$ is the proportion of the original pulse amplitude recovered after inefficiencies in pulse reconstruction, as described in \secref{sec:alpha}. The remaining factor, $f_C$, accounts for the improvement in sensitivity from combining $C$ independent channels with a threshold of $n_\sigma$ in each, as described in \secref{sec:combchan}. The behaviour of coherent pulses as described above is quite different to that of conventional radio astronomy signals. As a consequence of \eqnref{eqn:Erms}, sensitivity to coherent pulses scales as $\sqrt{\ensuremath{A_{\rm eff}} \Delta\nu}$ in electric field and hence as \mbox{$\ensuremath{A_{\rm eff}} \Delta\nu$} in power, whereas sensitivity to incoherent signals scales as \mbox{$\ensuremath{A_{\rm eff}} \sqrt{\Delta\nu}$} in power. Fundamentally, this is because the signal of a coherent pulse combines coherently both across the collecting area of the telescope and across its frequency range, while most radio astronomy signals combine coherently across the collecting area and incoherently across frequency. Because of this difference it is not entirely appropriate to represent a detection threshold in terms of an equivalent flux density, as the flux density of a coherent pulse depends on its bandwidth, which defeats the purpose of using a spectral (rather than bolometric) measure such as flux density in the first place. However, this quantity is occasionally reported in the literature, so I calculate it in several cases for comparative purposes, ensuring, to the best of my ability, that both values are calculated for the same bandwidth, so that the comparison is valid.
For a polarised pulse at the detection threshold, with spectral electric field $\ensuremath{\mathcal{E}_{\rm min}}$ and total electric field \mbox{$E_{\rm min} = \ensuremath{\mathcal{E}_{\rm min}} \Delta\nu$}, the equivalent flux can be found similarly to \eqnref{eqn:sefd_poynting} --- omitting the factor of 2, as the pulse appears in only a single polarisation --- as \begin{equation} \ensuremath{F_{\rm min}} = \frac{ \ensuremath{\mathcal{E}_{\rm min}}^2 \, \Delta\nu }{ Z_0 } \label{eqn:Fmin} . \end{equation} \subsection{Amplitude recovery efficiency} \label{sec:alpha} The spectral electric field $\mathcal{E}$ of a pulse is, in general, a complex quantity. For a coherent pulse, its phase is constant across all frequencies. If this phase is zero, then the time-domain function $E(t)$ has its power concentrated at a single point in time with peak amplitude \mbox{$|\mathcal{E}| \Delta\nu$}, as implicitly assumed in the above discussion. However, an Askaryan pulse has a phase close to the worst-case value of $\pi/2$\gcitep{miocinovic2006}, for which it takes on a bipolar profile with the power split between the poles, causing the peak amplitude to be reduced by a factor \mbox{$\sim \sqrt{2}$}. If this pulse is recorded directly without correcting the phase, this gives \mbox{$\alpha \sim 0.71$}. If the signal undergoes frequency downconversion, the phase is randomised, giving $\alpha$ somewhere between this value and unity\gcitep{bray2012}. A pulse originating from the Moon is also smeared out in time by dispersion as it passes through the Earth's ionosphere, further reducing its peak amplitude. The frequency-dependent delay is \begin{equation} \Delta t = 1.34 \times 10^9 \left( \frac{\rm STEC}{\rm TECU} \right) \left( \frac{\nu}{\rm Hz} \right)^{-2} {\rm s} \label{eqn:sens_dispersion} \end{equation} where STEC is the electron column density or slant total electron content measured in total electron content units (${\rm 1~TECU} = 10^{16}$ electrons~m$^{-2}$). Typical values are in the range 5--100~TECU, depending on the time of day, season, solar magnetic activity cycle, and slant angle through the ionosphere. When a signal is converted to digital samples with a finite sampling rate, the peak amplitude is further reduced, because the sampling times do not necessarily correspond to the peak in the original analog signal\gcitep{james2010}. This effect can be mitigated by oversampling the analog signal, or by interpolating the digital data\gcitep{bray2014a}. For a coherent ${\rm sinc}$-function pulse with no oversampling or interpolation, the worst case corresponds to sampling times equally spaced either side of the peak, giving a value for $\alpha$ of \mbox{${\rm sinc}(0.5) = 0.64$}. The interaction between these effects is complex, and not susceptible to a simple analytic treatment. I have instead developed a simulation to find a representative value of $\alpha$ for a given experiment, described in \appref{app:sim}. A brief numerical illustration of the individual effects above is given below.
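The STEC value and frequencies in this sketch are representative only, not tied to any specific experiment.
\begin{verbatim}
# Ionospheric dispersion delay from Eq. (sens_dispersion), plus the
# worst-case finite-sampling amplitude factor sinc(0.5) quoted above.
import math

def dispersion_delay(stec_tecu, freq_hz):
    """Ionospheric group delay in seconds."""
    return 1.34e9 * stec_tecu / freq_hz**2

# Differential delay across a 100-200 MHz band for STEC = 20 TECU:
print(dispersion_delay(20.0, 100e6)
      - dispersion_delay(20.0, 200e6))        # ~2.0e-6 s

# Worst-case sampling loss for a critically sampled sinc pulse:
print(math.sin(0.5 * math.pi) / (0.5 * math.pi))   # ~0.64
\end{verbatim}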
\subsection{Combining channels} \label{sec:combchan} Some coherent pulse detection experiments combine the signals from multiple channels, which may be different polarisations, frequency bands, antennas, or any combination of these. In this context, I take $\Delta\nu$ to be the bandwidth of a single channel, and \eqnref{eqn:Emin} with \mbox{$f_C = 1$} gives the threshold for a single channel on its own. The sensitivity of the combined signal depends critically on whether there is phase coherence between the channels, and whether they are combined coherently (i.e.\ direct summation of voltages) or incoherently (summing the squared voltages, or power). The scaling of the sensitivity for $C$ independent identical channels is as described below. \begin{description} \parskip=0pt \item[Coherent channels, coherent combination] \hfill \par\nobreak In this case, the pulses in each channel combine coherently, and the combination acts as a single channel with bandwidth \mbox{$C \, \Delta\nu$}. The threshold in voltage thus scales as \mbox{$f_C = C^{-1/2}$}. \item[Coherent channels, incoherent combination] \hfill \par\nobreak Squaring the voltages in this case converts them to the power domain, in which the sensitivity scales as $C^{1/2}$. The sensitivity in the voltage domain scales as the square root of this, or $C^{1/4}$, and hence \mbox{$f_C = C^{-1/4}$}. \item[Incoherent channels, coherent combination] \hfill \par\nobreak Since there is no phase coherence between the pulses in different channels, they sum incoherently, in the same way as the noise. The signal-to-noise ratio therefore does not scale with the number of channels, so \mbox{$f_C = 1$}. \item[Incoherent channels, incoherent combination] \hfill \par\nobreak Squaring the voltages converts them to the power domain, in which the sensitivity scales as $C^{1/2}$, regardless of the original phases. The sensitivity in the voltage domain therefore scales as $C^{1/4}$, and hence \mbox{$f_C = C^{-1/4}$}. \end{description} Conventional radio astronomy operates in the first regime for the combination of multiple antennas, as the signal is coherent across the collecting area; and in the last regime for the combination of multiple frequency channels, as most astronomical radio signals are not coherent across a range of frequencies. Care must be taken in defining the significance threshold $n_\sigma$ when the signal is in the power domain. For a voltage-domain signal $s$, which has a Gaussian distribution, the significance is defined simply in terms of the peak and RMS signal values as \mbox{$n_\sigma = s_{\rm peak}/s_{\rm rms}$}. If this signal is squared to produce the power-domain signal $S$, it has a $\chi^2$~distribution{} with one degree of freedom, and the significance is instead found as \mbox{$n_\sigma = ( S_{\rm peak} / \moverline{S} )^{1/2}$} in terms of the mean value $\moverline{S}$, since \mbox{$S_{\rm peak} = s_{\rm peak}^2$} and \mbox{$\moverline{S} = s_{\rm rms}^2$}. The ratio \mbox{$S_{\rm peak} / \moverline{S}$} is the same as the ratio between the equivalent flux density of the pulse (from \eqnref{eqn:Fmin}) and the mean background flux in a single polarisation (i.e.\ half the SEFD). When $C$ identical independent power-domain channels are summed, the resulting signal has a $\chi^2$~distribution{} with $C$ degrees of freedom, but the scaling factor $f_C$ corrects for this, with $n_\sigma$ remaining the significance in a single channel. Some experiments operate with multiple channels, but do not combine them either coherently or incoherently as described above. Instead, they combine them in coincidence, requiring a pulse to be detected in all channels simultaneously.
This increases the effective detection threshold: taking \mbox{$f_C = 1$} gives the threshold $\ensuremath{\mathcal{E}_{\rm min}}$ at which the detection probability is 50\%, due to Gaussian thermal noise increasing or decreasing the pulse amplitude, but the probability of simultaneous detection in $C$ channels is only $2^{-C}$. To scale $\ensuremath{\mathcal{E}_{\rm min}}$ so that the detection probability remains 50\%, for $C$ identical independent channels, we require $f_C$ such that \begin{equation} \prod_{i=1}^{C} \left( \int_{n_\sigma (1 - f_C)}^{\infty} \! \frac{ds_i}{\sqrt{2\pi}} e^{-s_i^2/2} \right) = 0.5 \end{equation} where the integral is over the Gaussian-distributed voltage-domain signal $s_i$ in each channel. Solving for $f_C$ gives us \begin{equation} f_C = 1 - \frac{ \sqrt{2} }{ n_\sigma } \, {\rm erf}^{-1} {\left( 1 - 2^{(C - 1)/C} \right)} \label{eqn:coinc} \end{equation} where ${\rm erf}^{-1}$ is the inverse of the standard error function. The value of $f_C$ approaches unity for large $n_\sigma$, for which the effects of thermal noise become insignificant, and for small $C$. The short script below evaluates \eqnref{eqn:coinc} for representative values.
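This sketch uses the \texttt{erfinv} routine from SciPy; the chosen $n_\sigma$ and $C$ are illustrative.
\begin{verbatim}
# Evaluate the coincidence scaling factor f_C of Eq. (coinc).
# The values of n_sigma and C below are illustrative only.
import math
from scipy.special import erfinv

def f_coinc(n_sigma, C):
    return 1.0 - (math.sqrt(2.0) / n_sigma) \
               * erfinv(1.0 - 2.0**((C - 1) / C))

for C in (1, 2, 4):
    print(C, f_coinc(5.0, C))
# f_C = 1 for C = 1 and rises slowly with C, i.e. a coincidence
# requirement slightly increases the effective threshold.
\end{verbatim}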
\section{Past and near-future lunar radio experiments} \label{sec:exps} Lunar radio experiments have been carried out with a diverse range of telescopes, with a variety of different receivers and trigger schemes to balance their sensitivity with their ability to exclude radio-frequency interference (RFI). Here I attempt to represent them with a unified set of parameters, so that their sensitivity to UHE particles can be calculated with the analytic models used in \secref{sec:nossr}. Although this representation is inevitably only an approximation to the inputs of numerical simulations (e.g.\ \cite{james2009b}), it lends itself more easily to use in future models. This work is similar in concept to previous work by \citet{jaeger2010}, but contains a more detailed analysis of previous experiments, including all the effects described in \secref{sec:radio}. I determine the following parameters. \begin{description} \parskip=0pt \item[Observing frequency: $\nu$] \hfill \par\nobreak I take this to be the central frequency of the triggering band. Generally speaking, a lower frequency results in a larger effective aperture for UHE particles, while a higher frequency reduces the threshold detectable particle energy. As the analytic models used in this work all assume a small fractional bandwidth, I also report the width $\Delta\nu$ of the triggering band as an indication of the accuracy of this assumption. However, this does not include the secondary 1.4~GHz band of the Kalyazin experiment (see \secref{sec:kalyazin}). \item[Minimum spectral electric field: $\ensuremath{\mathcal{E}_{\rm min}}$] \hfill \par\nobreak This is the spectral electric field strength of a coherent pulse for which the detection probability is 50\%, as described in \secref{sec:radio}; its interpretation as an absolute threshold will slightly underestimate the sensitivity for weaker pulses and overestimate it for stronger ones. An Askaryan pulse from a lunar UHE particle interaction is expected to have linear polarisation oriented radially to the Moon, and to originate from the lunar limb\gcitep{james2009b}. For telescope beams pointed at the limb of the Moon I use the minimum value \mbox{$\ensuremath{\mathcal{E}_{\rm min}} = \ensuremath{\mathcal{E}_{\rm min}}(0)$} at the centre of the beam; otherwise, I take $\ensuremath{\mathcal{E}_{\rm min}}(\theta_{\rm L})$ at the closest point on the limb. I represent the pulse reconstruction efficiency with the mean value $\moverline[0.5]{\alpha}$ for a flat-spectrum pulse, calculated with the simulation described in \appref{app:sim}. \item[Limb coverage: $\zeta$] \hfill \par\nobreak A single telescope beam typically covers only part of the Moon, which reduces the probability of detecting a UHE particle. As the probability of detection is dominated by radio pulses originating from the outermost fraction of the lunar radius, at least at higher frequencies\gcitep{james2009f}, I take the effective coverage to be the fraction of the circumference of the lunar limb within the beam, multiplied by the number of beams $n_{\rm beams}$ when there are multiple similar beams pointed at different parts of the limb. For this purpose, I consider a point on the limb to be within the beam if the effective threshold $\ensuremath{\mathcal{E}_{\rm min}}(\theta)$ in that direction is no more than $\sqrt{2}$ times the minimum threshold $\ensuremath{\mathcal{E}_{\rm min}}$ as defined above. For a beam pointed at the limb, this corresponds to the commonly-used full width at half maximum (FWHM) beam size. The analytic models used in this work assume full sensitivity within this beam and zero outside of it, which will slightly overestimate the sensitivity to weaker pulses near the detection threshold, which cannot be detected throughout the beam, and underestimate the sensitivity to stronger pulses, which can be detected even when they are slightly outside of it. Where available, I have used the dates of observations to determine the median apparent size of the Moon when calculating the limb coverage, although this has only a minor effect on the result: the apparent size of the Moon varies across the range 29--34\ensuremath{^{\prime}}, but most experiments provide a fairly even sampling of this range, so their median values are within 1\ensuremath{^{\prime}}\ of one another. \item[Effective observing time: $\ensuremath{t_{\rm obs}}$] \hfill \par\nobreak This is the effective time spent observing the Moon after allowing for inefficiency in the trigger algorithm, instrumental downtime while data is being stored, and the false positive rates of anti-RFI cuts. \end{description} Some experiments have used an anticoincidence filter in which they exclude any event which is detected in multiple receivers pointed at different parts of the sky, as these are typically caused by local RFI detected through the antenna sidelobes. These filters are critical for excluding pulsed RFI which might otherwise be misidentified as a lunar-origin pulse, but they also have the potential to misidentify a sufficiently intense lunar-origin pulse as RFI, which may substantially decrease the sensitivity of an experiment to UHE particles\gcitep{bray2015a}. To reflect this, for these experiments I calculate another quantity. \begin{description} \parskip=0pt \item[Maximum spectral electric field: $\ensuremath{\mathcal{E}_{\rm max}}$] \hfill \par\nobreak This is the spectral electric field strength of a coherent pulse which, if detected in one beam, would have a 50\% chance of also being detected through a sidelobe of another beam and hence being misidentified as RFI. It is otherwise defined similarly to $\ensuremath{\mathcal{E}_{\rm min}}$, and calculated with \eqnref{eqn:Emin} with $n_\sigma$ as the significance level for exclusion and $\mathcal{B}(\theta)$ as the sidelobe power of one beam at the centre of another.
A lunar-origin pulse is considered to be detected and identified as such only if its spectral electric field strength is between $\ensuremath{\mathcal{E}_{\rm min}}$ and $\ensuremath{\mathcal{E}_{\rm max}}$.
\end{description}

I derive these values for past experiments in \secrefs{sec:parkes}{sec:lunaska_parkes}, calculating them separately for each pointing if the experiment used multiple pointing strategies. I also consider possible near-future experiments in \secrefs{sec:lofar}{sec:auscope}. The results are presented in \tabref{tab:exps}, and are used in the rest of this work.

\begin{table*}
 \centering
 \begin{threeparttable}
  \caption[Observation parameters for lunar radio experiments]{Observation parameters for past and near-future lunar radio experiments.}
  \input{tab_exps}
  \label{tab:exps}
 \end{threeparttable}
\end{table*}

\subsection{Parkes}
\label{sec:parkes}

The first lunar radio experiment was conducted with the 64~m Parkes radio telescope in January 1995\gcitep{hankins1996,hankins2001}. They observed for 10 hours with a receiver that Nyquist-sampled the frequency range 1175--1675~MHz in dual circular polarisations. The storage of this data was triggered when the power in two subbands (each of width 100~MHz in a single polarisation, centred on 1325~MHz and 1525~MHz) simultaneously exceeded a threshold, at a relative delay corresponding to that expected from ionospheric dispersion. This last criterion was effective in discriminating against terrestrial RFI. However, they calculated the relative dispersive delay across a band $\Delta\nu$ as
\begin{equation}
 \Delta t = 0.012 \left( \frac{\Delta\nu}{\rm Hz} \right) \left( \frac{\rm STEC}{ {\rm electrons}~{\rm cm}^{-2}} \right) \left( \frac{\nu}{\rm Hz} \right)^{-3} {\rm s}
\end{equation}
whereas, to be equivalent (for small $\Delta\nu$) to \eqnref{eqn:sens_dispersion}, the leading constant should be 0.00268\gcitep{mcfadden2009}. Consequently, the 10~ns dedispersive delay they introduced between the two subbands exceeded the required value by a factor of \mbox{$\sim 4$}. Since the delay error is comparable to the 10~ns length of a band-limited pulse in a 100~MHz subband, a lunar-origin Askaryan pulse would have no significant temporal overlap between the two subbands, and would not meet the trigger criteria. Even if such a pulse were recorded, it would be excluded by later tests on the stored full-band data, which required that a pulse display an increased amplitude when `correctly' dedispersed. This experiment was therefore not appreciably sensitive to UHE particles.

The telescope beam for this experiment was directed at the centre of the Moon, reflecting the contemporary expectation that this was the most likely point at which to detect the Askaryan pulse from an interacting UHE neutrino\gcitep{dagkesamanskii1989}. Because of this, the beam had only minimal sensitivity at the lunar limb, where detectable Askaryan pulses are now known to be most likely to originate, which limits its sensitivity to UHE particles\gcitep{james2007}, even if the dedispersion problem described above is ignored. This experiment did, however, serve an important role in triggering further work in this field.

\subsection[GLUE]{GLUE}
\label{sec:glue}

The Goldstone Lunar Ultra-high-energy Neutrino Experiment (GLUE) made use of the 34~m DSS13 and 70~m DSS14 antennas at the Goldstone Deep Space Communications Complex in a series of observations over 2000--2003, with a total of 124 hours of effective observing time\gcitep{gorham2001,gorham2004a,williams2004}.
They observed around 2.2~GHz on both antennas, forming two non-overlapping 75~MHz RCP channels on DSS13, and a 40~MHz LCP channel and a 150~MHz RCP channel (later two 75~MHz RCP channels) on DSS14. Each channel was triggered by a peak in the signal power as measured by a square-law detector. A global trigger, causing an event to be stored, required a coincidence between all four (or five) channels within a 300~$\mu$s time window. Subsequent cuts eliminated RFI by tightening the coincidence timing criteria, aided considerably by the 22~km baseline between the two antennas, as well as by excluding extended pulses, pulses clustered in time, and pulses detected by an off-axis 1.8~GHz receiver on DSS14. The beam pointings used ranged from the centre to the limb of the Moon, reflecting the realisation that Askaryan pulses were most likely to be observed from the limb.

\citet{williams2004} excluded thermal noise by applying significance cuts at \mbox{$n_\sigma = 4$} (DSS13 RCP), \mbox{$n_\sigma = 6$} (DSS14 RCP) and \mbox{$n_\sigma = 3$} (DSS14 LCP), with these thresholds chosen by scaling based on bandwidth (but not on collecting area) to equalise their sensitivity, and considered these, rather than the trigger thresholds, to define the sensitivity of the experiment. The trigger thresholds are not straightforward to determine, as they depend on the characteristics of the signal output of the square-law detectors, but I assume that the \mbox{$\sim 10$}~ns integration time of the square-law detectors effectively removes any dependence on the phase of the original signal while not further smearing out any peaks, and take the output to be the square of the signal envelope. This analog output was searched for peaks by SR400 discriminators which act on a continuous signal\gcitep{srs2007}, and so are not subject to the amplitude loss from a finite sampling rate described in \secref{sec:alpha}. Given these assumptions, the 30~kHz single-channel trigger rates for DSS13 RCP and DSS14 RCP imply thresholds equivalent to \mbox{$n_\sigma = 4.2$} and $4.4$ respectively in the original unsquared voltages, and the 45~kHz trigger rate for DSS14 LCP implies \mbox{$n_\sigma = 4.0$} (from Ref.\gcitep{bray2012}, Eq.~46). I therefore find that the trigger thresholds are higher than the cut thresholds, and thus limit the sensitivity, for the DSS13 RCP and DSS14 LCP channels. Note that my assumptions, and the insignificance of dispersion at this experiment's high observing frequency, imply \mbox{$\alpha = 1$}. If my assumptions are invalid then the true trigger thresholds will be lower than found here, but the amplitude reconstruction efficiency $\alpha$ will be decreased, leading to a net increase in the effective threshold and a decrease in the sensitivity of this experiment.

Due to the range of different channels used in the coincidence trigger requirement, the scaling relation in \secref{sec:combchan} is not directly applicable: instead, the threshold is determined by the least sensitive channel or channels. Most of the observing time for this experiment was spent with both antennas pointed on the limb of the Moon, in which configuration the least sensitive channels are those of DSS13 RCP: given the reported values of 105~K for the system temperature and 75\% for the aperture efficiency, I find them by \eqnref{eqn:Erms} to have \mbox{$\ensuremath{\mathcal{E}_{\rm rms}} = 0.0033$} $\mu$V/m/MHz.
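As an illustrative cross-check of this value, \eqnref{eqn:Erms} can be evaluated numerically. The minimal sketch below assumes the radiometer-like form \mbox{$\ensuremath{\mathcal{E}_{\rm rms}} = \sqrt{ Z_0 k_{\rm B} \ensuremath{T_{\rm sys}} / ( \ensuremath{A_{\rm eff}} \, \Delta\nu ) }$}, with $Z_0$ the impedance of free space; this form is an assumed reconstruction of \eqnref{eqn:Erms} rather than the equation itself, adopted here because it reproduces the single-channel noise levels quoted in this section.
\begin{verbatim}
import math

Z0 = 376.73        # impedance of free space, ohm
KB = 1.380649e-23  # Boltzmann constant, J/K

def e_rms(t_sys, a_eff, delta_nu):
    """Assumed form of eqn:Erms: RMS noise spectral field, V/m/Hz."""
    return math.sqrt(Z0 * KB * t_sys / (a_eff * delta_nu))

# GLUE DSS13 RCP: 34 m dish at 75% aperture efficiency,
# 105 K system temperature, 75 MHz channel bandwidth
a_eff = 0.75 * math.pi * (34.0 / 2.0)**2
value = e_rms(105.0, a_eff, 75e6) * 1e12   # V/m/Hz -> uV/m/MHz
print(round(value, 4))                     # -> 0.0033
\end{verbatim}
The same form also reproduces, for example, the single-channel noise level of 0.0060~$\mu$V/m/MHz quoted for RESUN in \secref{sec:resun}.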
Under the assumption that any event which exceeds the trigger threshold on both DSS13 RCP channels will almost certainly also trigger the more sensitive channels, \eqnref{eqn:coinc} can then be applied to find that the coincidence requirement between the two DSS13 RCP channels gives \mbox{$f_C = 1.13$}. From \eqnref{eqn:Emin}, taking the above values and \mbox{$\eta = 2$} for circular polarisation, I find $\ensuremath{\mathcal{E}_{\rm min}} = 0.022$ $\mu$V/m/MHz at the centre of the beam. Note that this is higher (less sensitive) than the value $0.00914$ $\mu$V/m/MHz found by \citet{williams2004}, which was based on the cut threshold (rather than the trigger threshold) and the more sensitive 150~MHz DSS14 RCP channel. \Figref{fig:gluebeam} shows the relationship between the cut and trigger thresholds, calculating $\ensuremath{\mathcal{E}_{\rm min}}(\theta)$ for all channels through the same procedure as above and assuming an Airy disk beam shape. Although the DSS14 LCP channel is more sensitive than DSS13 RCP, its beam is narrower, so it limits the effective beam width to 11\ensuremath{^{\prime}}, giving a limb coverage of 11\%. \begin{figure} \centering \includegraphics[width=\linewidth]{gluebeam} \caption[Detection threshold for GLUE experiment]{Threshold electric field strength $\ensuremath{\mathcal{E}_{\rm min}}(\theta)$ over angle $\theta$ from the beam axis for different channels of the GLUE experiment, for a limb pointing. Solid lines show the trigger thresholds I calculate for each channel, with the dashed line showing the threshold for a coincidence on both DSS13 RCP channels, while dotted lines show thresholds based on the cuts of \citet{williams2004}. The cut threshold calculated by \citeauthor{williams2004}\ for DSS14 RCP at the centre of the beam (starred) corresponds closely to my curve. The sensitivity is determined by the highest threshold, which is a trigger threshold (rather than a cut threshold) across the entire beam. I take $\ensuremath{\mathcal{E}_{\rm min}}$ at the centre of the beam to be given by the two-channel coincidence requirement for DSS13 RCP, as described in the text, and the beam width to be that at which the trigger threshold for the DSS14 LCP channel reaches $\sqrt{2}$ times this value, as shown.} \label{fig:gluebeam} \end{figure} The GLUE experiment spent a shorter period of time (see \tabref{tab:exps}) pointing either directly at the lunar centre, or in a half-limb position offset 0.125\ensuremath{^{\circ}}\ from this. In these cases, the DSS14 antenna was deliberately defocused, which reduced its aperture efficiency but improved its sensitivity on the limb of the Moon. The degree of defocusing was chosen to match the DSS13 beam size, so under these circumstances I treat DSS14 as a 34~m antenna, and find the sensitivity to be limited by the 40~MHz DSS14 LCP channel. As there is only one such channel, \mbox{$f_C = 1$}. Given the reported system temperatures of 170~K (half-limb) and 185~K (centre), I find $\ensuremath{\mathcal{E}_{\rm rms}}$ in this channel to be 0.0057 and 0.0059 $\mu$V/m/MHz respectively. The sensitivity in these cases, however, is dramatically affected by the large angle between the beam centre and the lunar limb. Assuming an Airy disk beam shape and an apparent lunar size of 31\ensuremath{^{\prime}}, the beam power at the closest point on the lunar limb is 40.7\% for a half-limb pointing, and only 0.5\% for a centre pointing. 
Including these factors as $\mathcal{B}(\theta_L)$ in \eqnref{eqn:Emin}, I obtain values for $\ensuremath{\mathcal{E}_{\rm min}}$ of 0.050 and 0.474 $\mu$V/m/MHz respectively, greatly increasing the threshold relative to that for a limb pointing. The advantage of these configurations is that the limb coverage is increased: 20\% for a half-limb pointing, and 100\% for a centre pointing since the beam is equally sensitive to the entire limb. The off-axis 1.8~GHz receiver on DSS14 used to identify RFI was operated throughout the experiment and, for most of the data, a cut was applied to exclude events in which this receiver detected a significant increase in noise power. Since a lunar-origin pulse could be detected through a sidelobe of its beam, this cut places an upper limit on the intensity of a pulse that could be identified by this experiment. The cut was applied to the power averaged over 1~$\mu$s, which is \mbox{$80\times$} the Nyquist sampling interval for the 40~MHz bandwidth of the receiver; hence, a band-limited pulse would need an amplitude of $\sqrt{80}\sigma$ to increase the averaged power by a factor of two, which was the threshold for the cut. I assume a system temperature for the receiver of only 30~K, as it was offset from the main beam by 0.5\ensuremath{^{\circ}}\ and hence not directed at the Moon. Due to this offset, it was only minimally sensitive to a lunar-origin pulse: the beam power $\mathcal{B}(\theta)$ of a 1.8~GHz Airy disk at 0.5\ensuremath{^{\circ}}\ is only 0.16\% for DSS14, or 1.43\% when defocused. Combining these parameters with \eqnref{eqn:Emin}, the threshold $\ensuremath{\mathcal{E}_{\rm max}}$ for exclusion of a pulse by this effect is 0.370 $\mu$V/m/MHz, or 0.253 $\mu$V/m/MHz when DSS14 was defocused. Since this latter value is below the detection threshold $\ensuremath{\mathcal{E}_{\rm min}}$ for the centre-pointing configuration, I conclude that this configuration was not sensitive to UHE particles, as any pulse from the limb of the Moon which was detected in the primary DSS14 beam would also be detected in the off-axis receiver and thus be excluded as RFI. There are substantial uncertainties associated with this analysis of the effects of the anti-RFI cut with the off-axis receiver. The exclusion threshold is highly sensitive to the assumed system temperature and beam shape, and realistically it will vary with the power of the off-axis beam at different points on the limb, rather than taking a single value (for the centre of the on-axis beam) as assumed here. There is a less serious approximation involved in conflating the 2.2~GHz primary observing frequency with the 1.8~GHz frequency of the off-axis receiver, effectively assuming that an Askaryan pulse will have a flat spectrum across this frequency range. Finally, this anti-RFI cut was not applied to all of the data, so some fraction of the observing time will be free of this effect. However, this is the best representation of this effect that can be achieved with the chosen set of parameters, and I expect it to be at least approximately correct. Note that the complete exclusion of the centre-pointing configuration makes little difference to the total sensitivity of the GLUE experiment, as only a small fraction of the observing time was spent in this configuration, and previous work which neglected the anti-RFI cut\gcitep{james2009b} has already shown that this configuration had only minimal sensitivity to UHE neutrinos. 
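The GLUE threshold chain above can be summarised in a minimal numerical sketch. It assumes that \eqnref{eqn:Emin} has the scaling form \mbox{$\ensuremath{\mathcal{E}_{\rm min}} = n_\sigma f_C \sqrt{\eta} \, \ensuremath{\mathcal{E}_{\rm rms}} / ( \alpha \sqrt{ \mathcal{B}(\theta) } )$}; this is a reconstruction rather than the equation itself, but it is consistent with the limb-, half-limb- and centre-pointing thresholds quoted in this section.
\begin{verbatim}
import math
from statistics import NormalDist

def erfinv(x):
    """Inverse error function via the standard normal quantile."""
    return NormalDist().inv_cdf((1.0 + x) / 2.0) / math.sqrt(2.0)

def f_c(n_sigma, c):
    """Coincidence scaling factor of eqn:coinc for C channels."""
    return 1.0 - math.sqrt(2.0) / n_sigma \
               * erfinv(1.0 - 2.0**((c - 1.0) / c))

# Two-channel coincidence for DSS13 RCP at its trigger significance
n_sigma = 4.2
fc = f_c(n_sigma, 2)
print(round(fc, 2))     # -> 1.13

# Assumed scaling form of eqn:Emin (limb pointing: B = 1, alpha = 1)
e_rms, eta, alpha, beam = 0.0033, 2.0, 1.0, 1.0
e_min = n_sigma * fc * math.sqrt(eta) * e_rms \
        / (alpha * math.sqrt(beam))
print(round(e_min, 3))  # -> 0.022 uV/m/MHz
\end{verbatim}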
\subsection{Kalyazin} \label{sec:kalyazin} \citet{beresnyak2005} conducted a series of lunar radio observations with the 64~m Kalyazin radio telescope, with an effective duration of 31 hours, using 120~MHz of bandwidth (RCP only) at 2.25~GHz. Pulses in this band triggered the storage of buffered data both for this channel and for a 50~MHz band with dual circular polarisations at 1.4~GHz. RFI was excluded by requiring a corresponding pulse to be visible in both polarisations at 1.4~GHz at a delay corresponding to the expected ionospheric dispersion, along with further cuts on the pulse shape and the clustering of their times of arrival. Of 15,000 events exceeding the 2.25~GHz trigger threshold of 13.5~kJy, none met these criteria. Interpreting this trigger threshold as an equivalent total flux density in both polarisations, it is equivalent by \eqnref{eqn:Fmin} to a threshold of 0.0206 $\mu$V/m/MHz in a radially-aligned linear polarisation. (If it is instead interpreted as the flux density in the RCP channel alone, the electric field threshold will be increased by a factor of $\sqrt{2}$.) This value for $\ensuremath{\mathcal{E}_{\rm min}}$ neglects several of the scaling factors in \eqnref{eqn:Emin}, which I will now apply. For a single channel in a beam directed at the limb, $f_C = \mathcal{B}(\theta) = 1$, so only $\alpha$ needs to be calculated to compensate for inefficiency in reconstruction of the peak pulse amplitude. Dispersion is negligible at 2.25~GHz over the relatively narrow band of this experiment. The trigger system is described as having a time resolution of 2~ns, which I take to be the sampling interval, giving a sampling rate of 500 Msample/s, compared with a Nyquist rate of 240 Msample/s. This oversampling substantially mitigates the signal loss from a finite sampling rate. (Note that this sampling rate is lower than the maximum 2.5~Gsample/s rate of the TDS~3034 digital oscillograph used in this experiment\gcitep{tektronix2000}; possibly it was set to less than the maximum value, or the trigger algorithm only processed every fifth sample. In any case, the improvement in sensitivity from further oversampling is minimal.) Due to the frequency downconversion, the final phase of the pulse is essentially random, as described in \secref{sec:alpha}. I simulate these effects as described in \appref{app:sim}, assuming the downconverted signal to be at baseband (0--120~MHz), and find a mean signal loss of 13\% (i.e.\ \mbox{$\alpha = 0.87$}), almost entirely from this last effect. Applying this correction, I find an effective threshold of $\ensuremath{\mathcal{E}_{\rm min}} = 0.0235$ $\mu$V/m/MHz, equivalent to $\ensuremath{F_{\rm min}} = 17.6$~kJy. For a pulse to be detected by this experiment it must also have sufficient amplitude to be visible in the 1.4~GHz band, to distinguish it from RFI. Assuming a system temperature of 120~K and an aperture efficiency of 60\%, both polarisations at this frequency have a noise level of $\ensuremath{\mathcal{E}_{\rm rms}} = 0.0025$ $\mu$V/m/MHz. Given \mbox{$\eta = 2$} for circular polarisation and \mbox{$\alpha = 0.90$} for this band calculated as above, a pulse with an amplitude matching the threshold $\ensuremath{\mathcal{E}_{\rm min}}$ at 2.25~GHz would be visible at 1.4~GHz with a significance of \mbox{$n_\sigma = 5.9$} in each polarisation. This exceeds the \mbox{$\sim 4\sigma$} maximum level expected from thermal noise for the 15,000 stored events, making it sufficient to confirm the detection of a pulse. 
The coincidence requirement is thus not the limiting factor on the sensitivity of this experiment, which is instead determined entirely by the trigger threshold at 2.25~GHz. Note, however, that I have assumed a flat pulse spectrum between 1.4~GHz and 2.25~GHz: a pulse could still fail the coincidence requirement if its spectrum peaked toward the latter frequency. I have also neglected the scaling factor $f_C$ for the coincidence requirement between the 2.25~GHz band and both 1.4~GHz channels, and my assumptions for the system temperature and aperture efficiency may be inaccurate, but these effects are unlikely to reduce the significance of a pulse so much that its detection cannot be confirmed.

This experiment observed a point offset from the lunar centre by 14\ensuremath{^{\prime}}, effectively on the limb. The resulting limb coverage for the 2.25~GHz beam, with an FWHM of 7\ensuremath{^{\prime}}, is 7\%. The 1.4~GHz beam is larger than this, and is thus able to confirm a detection anywhere within the 2.25~GHz beam, so it does not further constrain the limb coverage.

\citet{dagkesamanskii2011} report further observations with a new recording system and a lower trigger threshold, but do not provide enough detail to evaluate the sensitivity of these observations, so they are not included here.

\subsection[LUNASKA ATCA]{LUNASKA ATCA}
\label{sec:lunatca}

The Lunar Ultra-high-energy Neutrino Astrophysics with the Square Kilometre Array (LUNASKA) project conducted lunar radio observations with three of the 22~m antennas of the Australia Telescope Compact Array (ATCA), requiring a three-way coincidence for a successful detection, in February and May 2008\gcitep{james2009,james2010}. The pointing of the telescope in the two observation runs was at the centre and the limb of the Moon respectively, with a total effective duration of 26 hours. The radio frequency range was 1.2--1.8~GHz, with an analog dedispersion filter to compensate for ionospheric dispersion over this wide band, and sampling at 2.048 Gsample/s which aliased the signal from the 1.024--2.048~GHz range to 0--1.024~GHz.

They report a median threshold over their observations of 0.0153 $\mu$V/m/MHz, not significantly different between the two observing runs, possibly because the reduced thermal emission from the Moon in the limb pointing of May 2008 was counteracted by the introduction of an anti-RFI filter that removed part of the band. Their figure already includes most of the effects considered here: it is averaged over a range of linear polarisation alignments, scaled for a 50\% detection probability given the requirement of a three-way coincidence, and increased to compensate for the signal loss from the finite sampling rate, and from the mismatch between the fixed dedispersion characteristic of their filter and the varying ionospheric STEC. These last two effects are treated with greater sophistication than in this work, because they simulate them for pulses with a range of spectra, rather than only for a flat spectrum. They implicitly assume the pulse to have a base phase of zero, whereas the inherent phase of an Askaryan pulse is close to the worst-case value of $\pi/2$\gcitep{bray2014a}; this phase is preserved when the signal is downconverted by aliasing, rather than by mixing with a local oscillator signal. However, the original phase will most likely be near-completely randomised by the remnant dispersion, which is included in their calculation.
I therefore adopt their threshold of 0.0153 $\mu$V/m/MHz without modification as $\ensuremath{\mathcal{E}_{\rm min}}(0)$, the threshold at the centre of the beam. For the limb pointing, I take this value directly as $\ensuremath{\mathcal{E}_{\rm min}}$, and use the apparent lunar size of 30\ensuremath{^{\prime}}\ and an FWHM beam size of 32\ensuremath{^{\prime}}\ when averaged over the band from the empirical model of \citet{wieringa1992}, which should provide a more precise result than an Airy disk in this case, to find the limb coverage to be 36\%. For the centre pointing, the same model gives a beam power at the limb of \mbox{$\mathcal{B}(\theta_L) = 55.1$}\% and hence a threshold of $\ensuremath{\mathcal{E}_{\rm min}}(\theta_L) = 0.0207$ $\mu$V/m/MHz, with equal sensitivity around the entire limb. \subsection{NuMoon} \label{sec:wsrt} The NuMoon project\gcitep{buitink2010} conducted a series of lunar radio observations from June 2007 to November 2008 with the Westerbork Synthesis Radio Telescope (WSRT), using the PuMa-II backend\gcitep{karuppusamy2008} to combine the signals from eleven of its fourteen 25~m antennas to form two tied-array beams pointing at opposite sides of the Moon, in four overlapping 20~MHz bands covering the effective frequency range 113--168~MHz. They recorded baseband data continuously during their observations, and retroactively applied dedispersion and a series of cuts to remove RFI based on pulse width, regular timing, and coincidence between the two beams. The effective observing time was 46.7 hours, spread out over 14 observing runs. They represented their sensitivity in terms of a parameter $S$ which is a measure of the power in a single beam summed across all four bands, both polarisations, and five samples (125~ns) in time, such that \mbox{$S = 8$} corresponds to the mean power or SEFD. The summation over time compensates for uncertainty in the STEC during the observations, which leads to some remnant dispersion or excess dedispersion extending a pulse. The events remaining after cuts show a large excess over the distribution expected from thermal noise, the most significant event having \mbox{$S = 76$} compared to an expected maximum of \mbox{$S \sim 30$}, with hundreds of other events falling between these two values. Due to the large number of these events, they are unlikely to originate from UHE particles interacting in the Moon, but they are not positively identified as RFI, and so they limit the sensitivity of this experiment: the detection threshold must be raised to exclude them. Due to the low observing frequency of this experiment, dispersion is a large effect, and even small errors in the STEC used for dedispersion can lead to pulses being extended in time beyond a five-sample window, preventing the parameter $S$ from recording their entire power. \citet{buitink2010} simulated this effect and found that a pulse with an original power equivalent to \mbox{$S > 90$} would have a \mbox{$ > 50$}\% probability of being detected with power in excess of the most significant event actually recorded in the experiment. This value of $S$ defines the significance threshold, equivalent in the voltage domain to $n_\sigma = \sqrt{90/8} = 3.4$. The detection efficiency declines again for stronger pulses, as they may have sufficient power dispersed over a sufficient interval to be excluded by the cut on pulse width, but the threshold width for this cut was chosen to minimise this effect, and I neglect it here. 
Since the tied-array beams were formed coherently, I treat all antennas, for a single polarisation and 20~MHz band, as a single channel. For eleven antennas each with a diameter of 25~m, and with an aperture efficiency of 33\% for the Low Frequency Front End (LFFE) receivers used in this experiment\gcitep{woestenburg2004}, the total effective area is 1782~m$^2$. \citet{buitink2010} give a range for the system temperature of 400--700~K, with the range being due to the varying contribution from Galactic background noise; I take the central value of 550~K. Given these parameters, I calculate from \eqnref{eqn:Erms} the value of $\ensuremath{\mathcal{E}_{\rm rms}}$ for a single 20~MHz band in a single polarisation as 0.020 $\mu$V/m/MHz.

All \mbox{$C = 8$} channels (two polarisations and four frequency bands) for a single beam were separately downconverted to baseband signals, introducing arbitrary phase factors which were not calibrated, so there is no phase coherence between them. This is irrelevant, however, because they were combined in the power domain, which puts this experiment in the fourth regime described in \secref{sec:combchan}, so that the sensitivity scales as \mbox{$f_C = C^{-1/4}$} regardless of phase coherence. I modify this slightly because the bands were overlapping and thus not completely independent, and instead take $f_C$ based on the ratio between a single 20~MHz band and the 55~MHz total bandwidth, with an additional factor of 2 for the combination of polarisations, as \mbox{$(2 \times 55/20)^{-1/4} = 0.65$}. This is slightly optimistic, as the combination of the bands applies a suboptimal uneven weighting between overlapping and non-overlapping frequency ranges, but this discrepancy should be minor.

The threshold in $S$ already incorporates the effects of dispersion, and the averaging of power over five consecutive samples will minimise the loss of pulse amplitude through finite sampling and randomisation of the pulse phase, so I do not calculate $\alpha$ with the simulation of \secref{sec:alpha}. The amplitude of a pulse will, however, be decreased when it is averaged in time, and I take \mbox{$\alpha = 1/\sqrt{5}$} to reflect this. The summing of power between polarisations ensures that \mbox{$\eta = 2$} regardless of the alignment between the linear polarisations of the receivers and of the pulse, the latter of which is in this case strongly frequency-dependent due to Faraday rotation. Given these parameters, and with $n_\sigma$ as calculated earlier, I calculate from \eqnref{eqn:Emin} the threshold electric field for this experiment to be 0.136 $\mu$V/m/MHz, equivalent by \eqnref{eqn:Fmin} to a flux density over the 55~MHz bandwidth of 272~kJy. The originally-reported value was 240~kJy, but this was for a detection efficiency of 87.5\% (rather than 50\%) and assumed perfect aperture efficiency, which will respectively increase and decrease the threshold.

The limb coverage is dependent on the shape of the tied-array beams, which is the Fourier transform of the instantaneous \textit{u-v} coverage of the telescope. The WSRT is a linear array, which results in an elongated beam oriented perpendicular to the array axis. The tied-array beam is further tapered by the primary beam of a single antenna, but this is extremely wide (FWHM of 5\ensuremath{^{\circ}}) and so does not significantly affect the tied-array beam power around the Moon.
The scale of the beam pattern is determined by the angle between the Moon and the east-west array axis, which determines the projected array length; I take this angle to be 65\ensuremath{^{\circ}}, which is its median value during the scheduled time listed for this experiment in the WSRT schedule archive\footnote{\url{http://www.astron.nl/wsrt-schedule}}. The eleven WSRT antennas used in this experiment consisted of nine of the ten fixed antennas with regular 144~m spacing (RT0--RT4 and RT6--RT9), and two of the four moveable antennas (RTA and RTB), which are respectively 36~m and 90~m distant from the last fixed antenna when the array is in the ``Maxi-Short'' configuration used in this experiment. I calculate the beam shape based on the \textit{u-v} coverage of these antennas, neglecting the minor effect of any phase errors between antennas in forming the tied-array beams, with the results shown in \figref{fig:wsrtbeam}: each beam has an FWHM size of 4.2\ensuremath{^{\prime}}\ in the direction parallel to the array, and is highly elongated in the transverse direction. \begin{figure} \centering \includegraphics[width=\linewidth]{wsrtbeam} \caption[Beam shape for NuMoon experiment]{WSRT beams as used in the NuMoon experiment, averaged across the four bands, for the Moon at the median angle of 65\ensuremath{^{\circ}}\ from the WSRT array axis. Solid lines show the two tied-array beams, pointed at opposite sides of the Moon; the strong sidelobes at 50--60\ensuremath{^{\prime}}\ are due to the regular spacing of the majority of the WSRT antennas, with the sidelobe width due to the large fractional bandwidth. The upper dashed line shows the primary beam of a single WSRT antenna, assumed to be an Airy disk. The lower dashed line shows the mean sidelobe level corresponding to 1/11 of the primary beam power, expected for random incoherent combination of the signals from eleven antennas. Starred points show the power of each beam at the centre of the other (the cross-beam power), which is 27.5\%. The overlapping positions of the FWHM beams with respect to the Moon are shown above the plot; in the transverse direction (vertical in this figure) they will extend out to the 5\ensuremath{^{\circ}}\ scale of the primary beam.} \label{fig:wsrtbeam} \end{figure} From the original pointing data for this experiment\gcitep{smits2013}, I find that the separation between the beams was scaled during each observation to match the changing resolution of the array. The 2.8\ensuremath{^{\prime}}\ separation between the centres of the beams shown in \figref{fig:wsrtbeam} is for the resolution when the Moon is at 65\ensuremath{^{\circ}}\ to the array axis, as assumed for the calculation of the beam pattern. Since this is less than the FWHM beam size, the FWHM beams overlap as shown; and since the scaling of the beam separation matches that of the beam pattern, the proportional overlap will be constant throughout the observations. Counting the overlap region only once, the fraction of the limb covered by the two beams is 14\%. Given the low observing frequency of this experiment, at which the Askaryan pulse from a particle cascade is very broadly beamed and hence may be detected away from the limb of the Moon, it is arguable that the metric should instead be the fraction of the nearside lunar surface area within the FWHM beams, which is 21\% in this pointing configuration. By either of these metrics, the coverage is substantially lower than the figure of 67\% given in the original report. 
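Before turning to the anticoincidence cut, the threshold calculation above can likewise be summarised in a short sketch, using the same assumed scaling form of \eqnref{eqn:Emin} as in \secref{sec:glue} and, for \eqnref{eqn:Fmin}, the assumed conversion \mbox{$\ensuremath{F_{\rm min}} = \ensuremath{\mathcal{E}_{\rm min}}^2 \Delta\nu / Z_0$}; the inputs are rounded values, so the outputs differ from the quoted figures at the few-per-cent level.
\begin{verbatim}
import math

Z0      = 376.73                  # impedance of free space, ohm
n_sigma = math.sqrt(90.0 / 8.0)   # significance threshold, = 3.35
f_c     = (2 * 55 / 20.0)**-0.25  # incoherent combination, = 0.65
alpha   = 1.0 / math.sqrt(5.0)    # loss from 5-sample averaging
eta     = 2.0                     # power split between polarisations
e_rms   = 0.020                   # single-channel noise, uV/m/MHz

e_min = n_sigma * f_c * math.sqrt(eta) * e_rms / alpha
print(round(e_min, 3))            # -> 0.139 (quoted: 0.136)

# Equivalent flux density over the 55 MHz band, in kJy
f_min = (e_min * 1e-12)**2 * 55e6 / Z0 / 1e-26 / 1e3
print(round(f_min))               # -> 280 (quoted: 272)
\end{verbatim}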
The original report of this experiment also neglected the possibility of a lunar-origin pulse being simultaneously detected in both beams, leading to it being excluded by the anticoincidence cut. A pulse was considered to be detected, and hence eligible for the anticoincidence cut, if it exceeded a threshold of \mbox{$S = 20$} or $n_\sigma = \sqrt{20/8} = 1.58$ in the combined power in both polarisations, simultaneously in all four bands. The scaling factor $f_C$ must therefore be calculated as the product of factors corresponding to both methods of combining channels described in \secref{sec:combchan}: one for the incoherent combination of the two polarisation channels, and one for the required coincidence between the four bands. The first of these is $2^{-1/4}$ for the two polarisations, as in the earlier calculation of $\ensuremath{\mathcal{E}_{\rm min}}$ for this experiment. For the second factor \eqnref{eqn:coinc} cannot be used directly, as the channels being combined in coincidence do not have a Gaussian distribution: they have a $\chi^2$~distribution with ten degrees of freedom (for the incoherent sum of two polarisations and five consecutive samples in time), and are in the power domain. Instead, I approximate this distribution with a Gaussian distribution with equal variance, and apply \eqnref{eqn:coinc} with \mbox{$C = 4$} bands and a significance of \mbox{$\sqrt{2 \times 10} \, n_\sigma^2$} (with the factor of 2 for the variance of a $\chi^2$~distribution, the factor of 10 for the number of degrees of freedom, and the square of $n_\sigma$ to convert to the power domain), taking the square root of the result to return it to the voltage domain. This gives a value of $1.04$ for the factor of $f_C$ describing the four-band coincidence requirement, which I multiply by the factor of $2^{-1/4}$ for the combination of the two polarisation channels to find a combined value of \mbox{$f_C = 0.88$}. Finally, a lunar-origin pulse detected at the centre of one beam will be detected in the other beam with its intensity scaled by the power $\mathcal{B}(\theta)$ of the second beam at this point, which is shown in \figref{fig:wsrtbeam} to be 27.5\%.

Applying these values for $n_\sigma$, $f_C$ and $\mathcal{B}(\theta)$ in \eqnref{eqn:Emin}, with \mbox{$\eta = 2$} and \mbox{$\alpha = 1/\sqrt{5}$} as in the calculation of $\ensuremath{\mathcal{E}_{\rm min}}$, I find the maximum detectable pulse strength to be $\ensuremath{\mathcal{E}_{\rm max}} = 0.165$ $\mu$V/m/MHz. As this exceeds $\ensuremath{\mathcal{E}_{\rm min}}$ by a factor of only 1.2, a lunar-origin pulse must have a strength within a quite narrow range for it to be detected without being excluded as RFI, which severely limits the sensitivity of this experiment. As for the GLUE centre-Moon pointing discussed in \secref{sec:glue}, I note that the exclusion threshold will vary across the beam, so it may be less restrictive at some points. The contribution from thermal noise may also assist in some cases by chance, elevating the power of a lunar-origin pulse in one beam by a greater degree than for the other beam, though this effect is limited by the fact that both tied-array beams are derived from the same set of receivers, so their noise will be strongly correlated.
However, these are minor effects which only provide a benefit under limited circumstances, and are detrimental at other times; the parameter values derived above are the best representation of the average sensitivity of this experiment that can be achieved within the framework used here.

\subsubsection{Without anticoincidence cut}
\label{sec:wsrt_redone}

Since the anticoincidence cut so strongly limits the sensitivity of the NuMoon experiment, it is worth considering the sensitivity of this experiment if this cut had not been applied. With the anticoincidence cut omitted, the most significant event remaining has an amplitude of \mbox{$S = 86$} (rather than \mbox{$S = 76$}). Assuming linear behaviour in the signal path, this implies that the threshold for a 50\% detection rate in excess of this amplitude is at \mbox{$S = 102$} (rather than \mbox{$S = 90$}), which leads, through the same procedure as described above, to an electric field threshold of $\ensuremath{\mathcal{E}_{\rm min}} = 0.145$ $\mu$V/m/MHz. All other parameters are identical in this case, except for $\ensuremath{\mathcal{E}_{\rm max}}$, which is not defined. This set of parameters leads to a minor (\mbox{$< 10$}\%) increase in the minimum detectable UHE particle energy, but overall a substantial increase in the effective sensitivity to UHE particles, if the experiment is interpreted without the anticoincidence cut. I therefore use these modified parameters to represent the NuMoon experiment in \tabref{tab:exps} and \secref{sec:nossr}.

\subsection[RESUN]{RESUN}
\label{sec:resun}

The Radio EVLA Search for Ultra-high-energy Neutrinos (RESUN) project conducted lunar radio observations with the Expanded Very Large Array (EVLA) for a total of 200 hours between September and November 2009\gcitep{jaeger2010}. At the time, this telescope consisted of a mix of antennas of the EVLA and of its predecessor, the Very Large Array (VLA), but the receiver systems of the unupgraded antennas were unable to maintain a linear response up to the large amplitudes required to detect an Askaryan pulse, so this experiment was conducted only with the upgraded EVLA antennas. They used three subarrays of four 25~m antennas each, with each subarray pointing at a different point on the lunar limb; given the FWHM beam size of \mbox{$\sim 30$}\ensuremath{^{\prime}}, this achieves coverage of the entire limb. For each antenna there were two 50~MHz bands centred on 1385~MHz and 1465~MHz, in dual circular polarisations, with all four channels converted to baseband and coherently summed; the experiment aimed to detect a coincident pulse with appropriate timing on all four antennas of a single subarray. No such pulses were detected with a significance exceeding \mbox{$n_\sigma = 4.1$}, consistent with the expectation from thermal noise.

The coherent sum between two circular polarisations effectively constructs a single linear polarisation, with its orientation determined by the relative phase of the two input channels. Since this phase was not calibrated in this experiment, the resulting orientation is arbitrary. A pulse with a particular linear polarisation (e.g.\ radial to the Moon, as expected for an Askaryan pulse) will be detected in both circular polarisations with effectively random phases, and so it will not sum coherently when these two channels are combined.
Since the two frequency bands also have arbitrary phase offsets, introduced when they are separately downconverted to baseband, the combination of all four channels (two polarisations in each of two bands) on each antenna is in the third regime described in \secref{sec:combchan}, and there is no advantage in sensitivity over a single channel; i.e.\ \mbox{$f_C = 1$}, and the value for $n_\sigma$ given above is the significance both in the combined signal and in a single channel. If the signals in each channel had been squared before they were summed then the experiment would have been in the fourth regime, improving the sensitivity (in the voltage domain) by a factor of $\sqrt{2}$. Adopting the assumptions from \citet{jaeger2010} of $\ensuremath{T_{\rm sys}} = 120$~K and $\ensuremath{A_{\rm eff}} = 343$~m$^2$ for a single antenna (implying an aperture efficiency of 70\%), the noise level in a single 50~MHz channel is $\ensuremath{\mathcal{E}_{\rm rms}} = 0.0060$ $\mu$V/m/MHz, from \eqnref{eqn:Erms}. The combined baseband signal, which is Nyquist-sampled at 100 Msample/s, is subject to inefficiency in amplitude reconstruction from the finite sampling rate and ambiguity of the pulse phase as described in \secref{sec:alpha}, for which I find \mbox{$\alpha = 0.79$} with the simulation from \appref{app:sim}, with dispersion having a negligible effect over this bandwidth. The four-antenna coincidence requirement at an \mbox{$n_\sigma = 4.1$} level increases the threshold by a factor \mbox{$f_C = 1.24$} by \eqnref{eqn:coinc}. With \mbox{$\eta = 2$} for circular polarisation, applying these factors in \eqnref{eqn:Emin} gives a detection threshold of 0.055 $\mu$V/m/MHz. This is substantially higher than the originally-reported value of 0.017 $\mu$V/m/MHz, which was based on the assumption that the signal would combine coherently between all four channels. Note, however, that the original publication incorporated the effects of the coincidence requirement when determining the resulting limit on the UHE neutrino flux rather than incorporating it into the reported electric field threshold, which explains part of the difference. \subsection{LaLuna} The LaLuna project (Lovell attempts Lunar neutrino acquisition) conducted preliminary observations with the 76~m Lovell telescope in November 2009 and May 2010, with an effective time of 1~hour spent observing the lunar limb\gcitep{spencer2010}. They observed at 1418~MHz with 32~MHz of bandwidth, recording pulses that occurred in either circular polarisation, and discriminated against circularly-polarised RFI by requiring that a pulse should appear in both polarisations simultaneously. However, they detected 6 pulses meeting this criterion, with no further means to determine whether they were of lunar origin and no reported upper limit on their amplitude, so no limit can be set from this experiment on the flux of UHE particles. \citet{spencer2010}\ have proposed improving on this by searching for coincident pulses with additional widely-spaced telescopes, usually used for Very Long Baseline Interferometry (VLBI), similar to the prospective experiment described in \secref{sec:auscope}. 
\subsection[LUNASKA Parkes]{LUNASKA Parkes} \label{sec:lunaska_parkes} In a continuation of the LUNASKA project, further lunar radio observations were conducted with the 64~m Parkes radio telescope in April--September 2010\gcitep{bray2014a,bray2015a}, using the frequency range 1.2--1.5~GHz with the Parkes 21~cm multibeam receiver\gcitep{staveley-smith1996} for an effective observing time of 127 hours. Interpolation and dedispersion were performed in real time with the Bedlam backend\gcitep{bray2012}, based on real-time measurements of ionospheric conditions. Multiple beams were pointed at different points on the limb of the Moon, with a real-time anticoincidence filter to exclude RFI. Further cuts refined the anticoincidence criteria, as well as excluding pulses with excessive width or clustering in their times of arrival. After these cuts, and compensating for the effects described in \secref{sec:alpha}, there were no events with a significance in excess of \mbox{$n_\sigma = 8.6$}, which is consistent with the expected thermal noise. The pointing strategy of this experiment placed two beams slightly off the limb of the Moon to reduce their system temperature by minimising the lunar thermal radiation they received, as shown in \figref{fig:parkesbeam}. For each of these beams, one of their orthogonal linear polarisations was oriented radially to the Moon, to match the expected polarisation of an Askaryan pulse. For 99~hours of the observations an additional beam was placed in a half-limb position, sacrificing sensitivity for slightly improved limb coverage. There were always four beams in total: the remaining one or two were pointed off-Moon to reduce their system temperature and make them more sensitive to RFI, to improve the effectiveness of the anticoincidence filter. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{parkesbeam} \caption[Beam positions for LUNASKA Parkes experiment]{Typical pointing configuration for the LUNASKA Parkes experiment. Crosses in each beam indicate the orientation of the linear polarisations. I assume events in the half-limb beam to be most likely to be multiply detected with one of the adjacent limb beams, and events in the limb beams to be most likely to be multiply detected with the highly sensitive off-Moon beam.} \label{fig:parkesbeam} \end{figure} Due to the real-time processing, the trigger threshold was sufficiently low that any events exceeding \mbox{$n_\sigma = 8.6$} would have been recorded, so it is this significance that determines the sensitivity of the experiment. The reported electric field thresholds based on this significance already include all of the effects considered here, and the limb coverage is determined with the same approach, so I adopt these values unchanged in \tabref{tab:exps}. Note that the calculation in this case involves scaling the sensitivity by the beam power $\mathcal{B}(\theta_L)$ at the closest point on the limb, and the values \mbox{$\eta = 1$} (limb beams) and \mbox{$\eta = 2$} (half-limb beam) have been adopted because of their respective polarisation alignments. The strictest anticoincidence cut was applied at a level of \mbox{$n_\sigma = 4.5$}, which imposes a limit on the strongest event which could be detected without appearing in multiple beams and being excluded as RFI. I consider this limit for each beam to be determined by the most sensitive adjacent beam (see \figref{fig:parkesbeam}), as these will have the most strongly overlapping sidelobes. 
For the limb beams, this means the limit is determined by the off-Moon beam, for which $\ensuremath{\mathcal{E}_{\rm rms}} = 0.00038$ $\mu$V/m/MHz based on \eqnref{eqn:Erms} and the system temperature in this beam. With a sidelobe power of 0.5\%\gcitep{bray2015a}, using \eqnref{eqn:Emin}, this gives a value for $\ensuremath{\mathcal{E}_{\rm max}}$ of 0.0241 $\mu$V/m/MHz. For the half-limb beam, the limit is determined by the adjacent limb beams, for which $\ensuremath{\mathcal{E}_{\rm rms}} = 0.00054$ $\mu$V/m/MHz and hence $\ensuremath{\mathcal{E}_{\rm max}} = 0.0489$ $\mu$V/m/MHz, where I have again used \mbox{$\eta = 2$} to represent the misalignment between the receiver polarisation and the radius of the Moon in the half-limb beam.

\subsection[LOFAR]{LOFAR}
\label{sec:lofar}

\citet{singh2012} have proposed lunar radio observations with the Low Frequency Array (LOFAR), a recently-constructed radio telescope which consists of a network of phased arrays, with all beamforming accomplished electronically rather than with movable antennas. Under their scheme, each of the 24 stations in the core of LOFAR would form a beam covering the entire Moon, and these signals would be combined to form 50 higher-resolution tied-array beams covering the face of the Moon. RFI would be excluded in real time by anticoincidence criteria applied between the tied-array beams. The trigger algorithm would be based on a subset of the frequency channels of the high-band antennas (HBAs), and would trigger the storage of buffered data from the rest of the HBA band and from non-core stations of the telescope, allowing greater sensitivity for confirmation of events. They consider triggering algorithms based on different subsets of the HBA frequency range; I take their `HiB' case, for which the effects of dispersion are minimised, and hence they find the highest detection efficiency. This case corresponds roughly to the highest-frequency 244 channels within the usable HBA band, each of width 195~kHz, and it is this 142--190~MHz frequency range that is shown in \tabref{tab:exps}. Their sensitivity calculation, however, is based on the entire HBA band of 110--190~MHz, and it is this bandwidth that I use as $\Delta\nu$ for the calculation below.

The effective aperture for a single LOFAR HBA is
\begin{equation}
 \ensuremath{A_{\rm eff}} = \min\!\left( \frac{\lambda^2}{3} , \, 1.5625{\rm ~m}^2 \right) \quad \mbox{per antenna}
\end{equation}
or 1.09~m$^2$ at the centre of the HiB band. The core region of LOFAR contains 24 stations, each with 2~HBA fields of 24~tiles each, with each tile consisting of 16 antennas, so its total effective aperture will be 20,025~m$^2$. However, unlike the steerable dish antennas used in the other experiments considered here, the phased arrays of LOFAR maintain a fixed orientation on the ground, and will have a reduced projected area for a source away from zenith. From the LOFAR site, the Moon reaches a maximum elevation of 56\ensuremath{^{\circ}}, at which the projected area is reduced to 16,600~m$^2$. I use this value for the effective aperture, assuming that observations can be scheduled close to transit at the optimum point in the Moon's orbit.
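A minimal sketch of this aperture calculation, with the antenna count and 56\ensuremath{^{\circ}}\ maximum elevation as given above (the exact totals depend on the frequency taken as the band centre, assumed here to be \mbox{$\sim 166$}~MHz):
\begin{verbatim}
import math

C_LIGHT = 299792458.0                # speed of light, m/s

def hba_a_eff(freq):
    """Effective area per HBA antenna (m^2), per the expression above."""
    lam = C_LIGHT / freq
    return min(lam**2 / 3.0, 1.5625)

n_ant = 24 * 2 * 24 * 16             # stations x fields x tiles x antennas
total = n_ant * hba_a_eff(166e6)     # at ~166 MHz, the HiB band centre
projected = total * math.sin(math.radians(56.0))
print(round(total), round(projected))
# -> 20039 16613 (cf. the 20,025 and 16,600 m^2 quoted above)
\end{verbatim}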
The system temperature contains contributions from instrumental noise and Galactic synchrotron emission: \begin{equation} \ensuremath{T_{\rm sys}} = T_{\rm inst} + T_{\rm sky,0} \left( \frac{\lambda}{\rm 1~m} \right)^{2.55} \end{equation} where $T_{\rm inst} = 200$~K and $T_{\rm sky,0} = 60$~K, so the Galactic background sets a sky temperature of 270~K at the centre of the HiB band, for a total system temperature of $\ensuremath{T_{\rm sys}} = 470$~K. In this application, the sky temperature will be influenced by the Moon, which will occult some fraction of the Galactic background and replace it with its own thermal emission, but the Moon will occupy only a small fraction of the beam, and its temperature of 230~K\gcitep{troitskii1970} is similar to that of the Galactic background, so this makes little difference. With the effective aperture and system temperature derived above, $\ensuremath{\mathcal{E}_{\rm rms}}$ can be found by \eqnref{eqn:Erms} to be 0.0018 $\mu$V/m/MHz. The proposed trigger algorithm averages the signal power over a number of consecutive samples with the threshold chosen so that the background trigger rate from thermal noise is one per minute, to minimise the effect of the 5~s of dead time while storing the data after each trigger. \citet{singh2012} find the optimum window length to be 15~samples, finding for this case a detection efficiency of 50\% at a pulse amplitude of \mbox{$n_\sigma = 11.0$}, assuming perfect dedispersion. When there is an uncertainty in the STEC of \mbox{$\pm 1$}~TECU, causing the dispersion to be imperfect, they find their parameter $S_{80}$ (equivalent to $n_\sigma$, but for 80\% detection efficiency) to be increased by 14\%, so I scale $n_\sigma$ by the same ratio, to $12.6$. Achieving this precision in the STEC measurement will require an improvement over that achieved in the LUNASKA Parkes experiment, which found typical uncertainties of \mbox{$\pm 2$}~TECU in retrospective TEC maps based on Global Positioning System (GPS) data, or \mbox{$\pm 4$}~TECU in real-time ionosonde data\gcitep{bray2014a}. This improvement may be achieved by interpolating directly between real-time line-of-sight GPS measurements, which are accurate to better than 0.1~TECU\gcitep{hernandez-pajares2009}, or by measuring the Faraday rotation of polarised lunar radio emission passing through the ionosphere\gcitep{mcfadden2012}. Alternatively, if sufficient processing power is available, multiple copies of the signal could be dedispersed for different STECs and searched independently for pulses as suggested by \citet{romero-wolf2013}, at the cost of an increased trigger threshold required to maintain the same trigger rate from thermal noise. The simulations of \citet{singh2012} are more comprehensive than those in this work for their signal-processing strategy, so I assume all the effects described in \secref{sec:alpha} to be incorporated into the significance threshold given above, and apply no further corrections for the amplitude recovery efficiency (i.e.\ \mbox{$\alpha = 1$}). \citeauthor{singh2012}\ describe a triggering algorithm which operates individually on each polarisation channel, so I take \mbox{$f_C = 1$}. Since the pulse power at this frequency is split between linear polarisations by Faraday rotation, I take \mbox{$\eta = 2$}. Combining these with \eqnref{eqn:Emin}, the trigger threshold is $\ensuremath{\mathcal{E}_{\rm min}} = 0.031$ $\mu$V/m/MHz. 
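The system temperature model and the resulting trigger threshold can be checked with another short sketch, again using the assumed scaling form of \eqnref{eqn:Emin} from \secref{sec:glue} (here with \mbox{$f_C = 1$} and \mbox{$\alpha = 1$}); the small difference from the quoted 0.031~$\mu$V/m/MHz reflects rounding of the inputs.
\begin{verbatim}
import math

C_LIGHT = 299792458.0

def t_sys(freq, t_inst=200.0, t_sky0=60.0):
    """System temperature model quoted above (freq in Hz, result in K)."""
    lam = C_LIGHT / freq
    return t_inst + t_sky0 * lam**2.55

print(round(t_sys(166e6)))         # -> 471 K at the HiB band centre

n_sigma = 11.0 * 1.14              # scaled for +/-1 TECU uncertainty
e_rms   = 0.0018                   # uV/m/MHz, as derived above
e_min   = n_sigma * math.sqrt(2.0) * e_rms  # eta = 2, Faraday rotation
print(round(e_min, 3))             # -> 0.032 (quoted: 0.031)
\end{verbatim}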
Since a trigger causes the storage of buffered data for the entire telescope, which improves over the data available for the trigger by a factor of \mbox{$\sim 2$} in both collecting area and bandwidth, there is ample sensitivity to confirm the detection of an Askaryan pulse in retrospective analysis, so this trigger threshold defines the sensitivity of the experiment. This threshold spectral electric field is equivalent, for the full HBA band, to a flux density threshold of 12~kJy, compared to the value of 26~kJy determined by \citeauthor{singh2012}, although their reported threshold is for a detection efficiency of 80\% and averaged over the FWHM beam, both of which will increase its value.

The anticoincidence criteria applied between the tied-array beams place an upper limit on the power of a pulse which can be detected without appearing in multiple beams, and hence being excluded as RFI. This can be mitigated by applying anticoincidence criteria only between widely-separated beams, to reduce the overlap between their beam patterns. \citet{singh2012} find the beam patterns to be complex, with different variation in azimuth and zenith angles, so I instead represent them with the theoretical mean sidelobe power level corresponding to the incoherent combination of the signals from 24~stations, which is $\mathcal{B}(\theta) = 1/24 = 4.2$\%. Since the stations of the LOFAR core have a much less regular distribution than the antennas of the WSRT, this is likely to be a better approximation than it is for the WSRT tied-array beams in \figref{fig:wsrtbeam}. I assume that RFI can be effectively excluded by setting the anticoincidence significance threshold at half the trigger threshold, consistent with the results from the LUNASKA Parkes experiment\gcitep{bray2014a}, so I take \mbox{$n_\sigma = 6.3$} for the exclusion threshold. The experience of \citet{buitink2010} suggests that this is insufficient to deal with the increased RFI at low observing frequencies, but the use of a two-dimensional array in this case rather than the one-dimensional WSRT may counteract this, as it avoids the strong sidelobes a one-dimensional array has on the RFI-rich horizon. Combining these values with \eqnref{eqn:Emin} gives us $\ensuremath{\mathcal{E}_{\rm max}} = 0.077$ $\mu$V/m/MHz.

Assuming a duration of 200 hours, comparable with previous lunar radio experiments, the 5~s per minute of dead time after each trigger results in an effective observing time of 183 hours. Since LOFAR is steered electronically, and additional beams can be formed with sufficient signal-processing hardware, future upgrades may make it possible to achieve much greater observing times by observing commensally with other projects: the Moon is above 30\ensuremath{^{\circ}}\ in elevation from the LOFAR site for 1,490 hours per year, or 1,360 hours after allowing for dead time.

\subsection[Parkes PAF]{Parkes PAF}
\label{sec:parkes_paf}

\citet{bray2013} have proposed continued observations with the Parkes radio telescope using one of the phased array feed (PAF) receivers developed for the Australian Square Kilometre Array Pathfinder (ASKAP). These receivers\gcitep{schinckel2012} combine the signals from elements in the focal plane to form multiple beams within the field of view of the antenna.
This would allow a lunar radio experiment to improve over the previous LUNASKA Parkes experiment with the 21~cm multibeam receiver (\secref{sec:lunaska_parkes}) by forming beams around the entire limb of the Moon, rather than the limited coverage shown in \figref{fig:parkesbeam}. The frequency range of these receivers is 0.7--1.8~GHz, not all of which will be processed for the 36 antennas of ASKAP, but the use of a single receiver on the 64~m Parkes antenna could justify processing the entire band. The major disadvantage of these receivers is their high system temperature (\mbox{$\sim 50$}~K), but this is less significant for a lunar radio experiment because the total system temperature is dominated by lunar thermal emission. Apart from the new receiver, this experiment would function similarly to the LUNASKA Parkes experiment, with real-time dedispersion and anticoincidence filtering between the beams to exclude RFI. I assume a duration of 200 hours as for LOFAR in \secref{sec:lofar}, but with a duty cycle of only 85\%, consistent with the loss of effective observing time from data storage and false positive rates of anti-RFI cuts in the LUNASKA Parkes experiment. The positioning of the beams relative to the limb is a trade-off between beam power on the limb and lunar thermal noise. Assuming the beams to be positioned slightly away from the Moon, as for the limb beams in \figref{fig:parkesbeam}, approximately 12 beams are required to achieve complete limb coverage. As the base system temperature for the ASKAP PAFs is \mbox{$\sim 25$}~K higher than that of the receiver used for the LUNASKA Parkes experiment, I take the total system temperature to be increased by this amount relative to the limb beams in that experiment, which gives $\ensuremath{T_{\rm sys}} = 80$~K. The effective aperture of the 64~m Parkes antenna with a PAF, given the stated 80\% aperture efficiency of these receivers, is 2,574~m$^2$. By \eqnref{eqn:Erms} the noise level can then be found to be $\ensuremath{\mathcal{E}_{\rm rms}} = 0.00038$ $\mu$V/m/MHz. The pointing assumed above, 4\ensuremath{^{\prime}}\ from the lunar limb, implies a beam power of 77.7\% at the closest point on the limb, assuming an Airy disk and averaging across the band. I assume the native orthogonal linear polarisations of the receiver to be coherently summed with an appropriate phase offset to form channels with linear polarisations aligned radially to the Moon for each beam, implying \mbox{$\eta = 1$}. This neglects the effects of Faraday rotation, which is not very significant for this frequency range: under typical conditions (STEC of 20~TECU; projected geomagnetic field of 50~$\mu$T along the line of sight) the polarisation of a lunar-origin pulse will be subjected to a differential rotation of 23\ensuremath{^{\circ}}\ between the minimum and maximum frequencies, corresponding to a \mbox{$\sim 1$}\% loss of signal power for a receiver oriented to match the polarisation at the centre of the band. Assuming an STEC uncertainty of 1~TECU, as for LOFAR in \secref{sec:lofar}, and also assuming effectively-complete interpolation and formation of the signal envelope, the signal recovery efficiency determined by the simulation in \appref{app:sim} is \mbox{$\alpha = 0.89$}. 
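The differential Faraday rotation figure quoted above can be roughly reproduced with the standard rotation-measure relation \mbox{${\rm RM} = 0.81 \int n_e B_\parallel \, dl$} (in rad/m$^2$, with $n_e$ in cm$^{-3}$, $B_\parallel$ in $\mu$G and $dl$ in pc); this is a hedged cross-check rather than the calculation actually used, with the unit conversions spelled out in the comments and the `typical conditions' as stated above.
\begin{verbatim}
import math

C_LIGHT = 299792458.0

# 20 TECU = 20e16 electrons/m^2 = 20e12 cm^-2;
# convert to cm^-3 pc (1 pc = 3.086e18 cm)
stec  = 20 * 1e12 / 3.086e18   # cm^-3 pc
b_par = 50e-6 * 1e10           # 50 uT along the line of sight, in uG
rm    = 0.81 * stec * b_par    # rotation measure, ~2.6 rad/m^2

def rotation(freq):
    """Faraday rotation angle (degrees) at frequency freq (Hz)."""
    lam = C_LIGHT / freq
    return math.degrees(rm * lam**2)

print(round(rotation(0.7e9) - rotation(1.8e9)))  # -> 23 degrees
\end{verbatim}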
Taking a significance threshold of \mbox{$n_\sigma = 8.8$}, which is the expected maximum level of the thermal noise in 12 channels over the assumed observing time (from Ref.\gcitep{bray2012}, Eq.~46), \eqnref{eqn:Emin} then gives $\ensuremath{\mathcal{E}_{\rm min}} = 0.0043$ $\mu$V/m/MHz. As for the LUNASKA Parkes experiment, partial optimisation of the signal in real time should allow the trigger threshold to be set low enough that any events exceeding this threshold are stored, so that the sensitivity of the experiment is determined by this value of $\ensuremath{\mathcal{E}_{\rm min}}$, derived for a fully-optimised signal. As for other experiments using an anticoincidence filter to exclude RFI, the possibility of a lunar-origin pulse being detected in multiple beams places an upper limit on the detectable pulse strength. I take the sidelobe beam power to be 0.5\%, the same as for the Parkes 21~cm multibeam receiver. As for LOFAR in \secref{sec:lofar}, I assume an anticoincidence significance threshold of half the trigger threshold, or \mbox{$n_\sigma = 4.4$}, consistent with the successful exclusion of RFI in the LUNASKA Parkes experiment. Combining these values with \eqnref{eqn:Emin}, I find $\ensuremath{\mathcal{E}_{\rm max}} = 0.030$ $\mu$V/m/MHz.

\subsection{AuScope} \label{sec:auscope}

The AuScope VLBI array\gcitep{lovell2013} is a recently-completed array of three 12~m antennas with baselines ranging from 2,360~km to 3,432~km. Its primary purpose is geodesy, observing fixed radio sources in order to improve the precision of the terrestrial and celestial reference frames, but it may also be used for observational radio astronomy. It is less heavily subscribed than the other telescopes considered for lunar radio experiments, so longer observation times are possible: during each year the Moon is visible from all three antennas for 2,900 hours, which I take as the observing time. Each antenna is equipped with a combined S- and X-band receiver with dual circular polarisations. Of these bands, only the S band is useful in this application, with a frequency range of 2.2--2.4~GHz. The beam at this frequency is larger than the Moon, with an FWHM Airy disk size of \mbox{$\sim 38$}\ensuremath{^{\prime}}, indicating that the optimum observing strategy is to point at the centre of the Moon in order to achieve equal sensitivity around the entire lunar limb. Using the lunar thermal emission model of \citet{moffat1972} as applied in Ref.\gcitep{bray2014a}, the Moon contributes 69~K to the system temperature in this pointing configuration, for a total system temperature of 154~K when combined with the 85~K base level of the receivers. With the reported aperture efficiency of 60\%, the effective aperture for each antenna is 69~m$^2$, so from \eqnref{eqn:Erms} I find the noise level in a single polarisation channel to be $\ensuremath{\mathcal{E}_{\rm rms}} = 0.0076$ $\mu$V/m/MHz. The simplest way to perform this experiment is to search for coincident pulses on all six channels (two polarisations on each of three antennas). Each antenna would be monitored for a linearly-polarised Askaryan pulse appearing simultaneously in both circular polarisation channels, which would trigger the storage of voltage data for the event. This would eliminate the majority of the RFI, as in the GLUE and LaLuna experiments, so the resulting trigger rate should be dominated by thermal noise.
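As an aside, the Airy-disk beam geometry used here can be reproduced with a short numerical sketch. This is a minimal calculation, assuming an ideal uniformly-illuminated 12~m circular aperture and a lunar angular radius of 16\ensuremath{^{\prime}}\ (both assumptions of the sketch, not values quoted in the text); it reproduces the \mbox{$\sim 38$}\ensuremath{^{\prime}}\ FWHM beam size quoted above and anticipates the limb beam power of 62\% used below.

\begin{verbatim}
# Minimal sketch: Airy-disk beam size and limb beam power for a 12 m
# antenna at 2.3 GHz (centre of the 2.2-2.4 GHz S band), assuming a
# uniformly-illuminated aperture and a lunar angular radius of 16 arcmin.
import numpy as np
from scipy.special import j1

c = 299792458.0                  # speed of light (m/s)
D = 12.0                         # antenna diameter (m)
lam = c / 2.3e9                  # wavelength at band centre (m)

# FWHM of an Airy disk: ~1.02 lambda / D radians
print("FWHM = %.0f arcmin" % (np.degrees(1.02 * lam / D) * 60))

# Beam power at the lunar limb for a beam centred on the Moon
theta_limb = np.radians(16.0 / 60)           # lunar angular radius (rad)
x = np.pi * D * np.sin(theta_limb) / lam
print("B(theta_L) = %.2f" % ((2 * j1(x) / x) ** 2))
\end{verbatim}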
These stored events would then be compared retrospectively, to find any coincident events on all three antennas with relative times of arrival indicating that they originated from the Moon. As RFI sources are unlikely to be simultaneously visible to such widely-separated antennas, this criterion should provide effectively-complete rejection of RFI. To find the effective significance threshold, I consider the trigger rates $R_1$ in a single polarisation channel, $R_2$ for the rate of coincidences between both polarisation channels on a single antenna, and $R_6$ for six-fold coincidences between both polarisations on all three antennas with reconstructed pulse origins on the Moon. The last two of these are related by \begin{equation} R_6 = R_2^3 \, W^2 \end{equation} where $W$ is the time window corresponding to the range of arrival directions across the face of the Moon, typically \mbox{$\sim 30$}--100~$\mu$s over these baselines. Setting $R_6$ equivalent to a single detection in the observing time of the experiment, to obtain the expected level of the thermal noise, I find $R_2$ to be 0.1--0.3~Hz. This is the required trigger rate on each antenna for the sensitivity to be limited by thermal noise rather than by the trigger threshold, and is sufficiently low that the minimal data required on each trigger can be recorded without incurring significant dead time. The relation to the trigger rate $R_1$ in a single polarisation channel is \begin{equation} R_2 = R_1^2 \, \frac{1}{\Delta\nu} , \end{equation} assuming that the delay between the two polarisation channels can be calibrated to a precision comparable to the scale of the inverse of the bandwidth $\Delta\nu$, resulting in typical $R_1$ values in the range 5--8~kHz (see the numerical sketch below). If the inter-polarisation delay can be calibrated to a small fraction of the inverse bandwidth, then the two channels could be summed incoherently (in the fourth regime described in \secref{sec:combchan}) rather than being operated in coincidence, allowing an improvement in sensitivity by a factor $2^{1/4}$, but I do not assume this here. This trigger rate $R_1$ makes it possible to find the trigger threshold in a single polarisation channel for which a single global coincidence is expected from thermal noise, equivalent to the limiting significance threshold $n_\sigma$ of the experiment. I assume effectively-complete interpolation and formation of the signal envelope, implying \mbox{$\alpha = 1$}, given that dispersion is negligible at this observing frequency. The trigger threshold for the signal envelope can then be found (from Ref.\gcitep{bray2012}, Eq.~46) as \mbox{$n_\sigma = 4.8$}, with no significant variation across the range of values found for $R_1$. Given a beam power of \mbox{$\mathcal{B}(\theta_L) = 62$}\% on the limb for an Airy disk centred on the Moon, \mbox{$\eta = 2$} for circular polarisation, and a scaling factor \mbox{$f_C = 1.26$} for the required six-channel coincidence from \eqnref{eqn:coinc}, I find $\ensuremath{\mathcal{E}_{\rm min}}$ from \eqnref{eqn:Emin} to be 0.0083 $\mu$V/m/MHz. The feature that most clearly distinguishes this potential experiment from the others described here is the length of the baselines between the antennas. Apart from improving the efficacy of RFI rejection, this also allows the position on the Moon of the particle cascade responsible for a detected pulse to be determined with high precision, which is a vital piece of information for determining the direction of origin of the primary UHE particle.
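The rate relations above can be checked with a minimal numerical sketch, which takes the quoted range of $R_2$ as an input assumption and assumes the 200~MHz bandwidth of the S band.

\begin{verbatim}
# Minimal sketch: single-channel trigger rate R_1 implied by a two-fold
# coincidence rate R_2 via R_2 = R_1^2 / dnu. The R_2 range of
# 0.1-0.3 Hz is taken from the text as an input assumption.
import math

dnu = 200e6                     # bandwidth of the 2.2-2.4 GHz band (Hz)
for R2 in (0.1, 0.3):           # two-fold coincidence rates (Hz)
    R1 = math.sqrt(R2 * dnu)    # required single-channel trigger rate
    print("R2 = %.1f Hz -> R1 = %.1f kHz" % (R2, R1 / 1e3))

# Gain from incoherently summing the two polarisation channels instead
# of operating them in coincidence: a factor 2^(1/4) in sensitivity.
print("incoherent-sum gain: %.3f" % 2 ** 0.25)
\end{verbatim}

This approximately reproduces the \mbox{$\sim 5$--8~kHz} range of single-channel rates and the $2^{1/4} \approx 1.19$ factor mentioned above.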
The disadvantage of the long baselines is the statistical penalty imposed by the increased search space for a coincident pulse, which leads to a threshold significance (as calculated above) higher than that for the otherwise similar RESUN experiment. An additional concern is that the narrowly-directed Askaryan pulse may not be visible to all of the antennas, which are separated by up to 0.5\ensuremath{^{\circ}}\ as seen from the Moon. However, the angular scale $\Delta\theta$ of the Askaryan radiation pattern at this observing frequency is 2.4\ensuremath{^{\circ}}\ (see Eq.~8 of Ref.\gcitep{alvarez-muniz2006}), larger than the separation between antennas, so this does not pose a significant problem. \section{Sensitivity to ultra-high-energy particles} \label{sec:nossr} The first detailed estimation of the particle aperture of a lunar radio experiment comes from the Monte Carlo simulations of \citet{gorham2001}, which were followed by further simulations by \citet{beresnyak2003}, \citet{scholten2006}, \citet{panda2007} and \citet{james2009b}, and an analytic approach by \citet{gayley2009}. Comparing these models is difficult, because the code for each simulation is generally not published, and reimplementing them from their published descriptions is laborious, but it is possible to compare their published results when several models have been applied to the same experiment. The most detailed simulations to date, those of \citeauthor{james2009b}, find results that are more pessimistic (lower aperture) than those reported for the GLUE experiment\gcitepsim{gorham2004a}{gorham2001} by around an order of magnitude, more pessimistic than those reported for the NuMoon experiment\gcitepsim{buitink2010}{scholten2006} by a similar factor\gcitep{james2011}, and approximately consistent\gcitep{james2009} with those reported for the Kalyazin experiment\gcitepsim{beresnyak2005}{beresnyak2003}. \citeauthor{gayley2009}\ also calculate the aperture for the GLUE experiment with their analytic model, finding results consistent with those of \citeauthor{james2009b}. Perfect agreement between these models is not expected, as they make different physical assumptions regarding the spectrum and angular distribution of Askaryan radiation, the physical properties of the lunar regolith, etc. However, even with these assumptions matched as closely as possible between different simulations, there remain in some cases discrepancies in the results (see App.~A of Ref.\gcitep{james2009}), which may be due to errors in their implementation in software. The analytic model of \citeauthor{gayley2009}\ avoids this problem because its published version includes the complete derivation of its final result, allowing it to be rigorously checked by other researchers. However, it makes several approximations in order to obtain a result in closed form, such as assuming constant elasticity for neutrino-nucleon interactions, and a constant transmission coefficient for radiation passing through the regolith-vacuum boundary, which may affect its accuracy. The use of lunar radio observations was originally suggested by \citet{dagkesamanskii1989} primarily for the detection of neutrinos, and most of the above models were originally developed with this purpose in mind, neglecting the possibility of detecting UHECRs. 
The simulations of \citeauthor{scholten2006}\ and \citeauthor{james2009b}\ have been applied to calculating the aperture for the detection of UHECRs, and the analytic model of \citeauthor{gayley2009}\ has been adapted to this purpose by \citet{jeong2012}. However, none of these models have been compared in this context. In this section, I calculate the sensitivity of the lunar radio experiments listed in \secref{sec:exps} to both neutrinos (\secref{sec:neutrinos}) and UHECRs (\secref{sec:crs}), based on the analytic models of \citeauthor{gayley2009}\ and \citeauthor{jeong2012}\ respectively, with some modifications as described in the corresponding sections. The implementation of these models is described in detail in \appref{app:model}, and the parameters used are listed in \tabref{tab:exps}. For the case of neutrinos, I compare the results with those from the simulations of \citeauthor{james2009b}\ in greater detail than previous work, in \secref{sec:nu_compare}. The models used here do not include any correction for the effects of small-scale lunar surface roughness, which may cause a large (more than an order of magnitude) increase in aperture at high particle energies, at least at high frequencies\gcitep{james2010}. Accordingly, the results in this section may be taken as a comparison of lunar radio experiments, but should not be taken as a precise measure of their absolute sensitivity. Further development of aperture models --- either these analytic models, or simulations --- is strongly motivated. For experiments with only a minimum threshold electric field $\ensuremath{\mathcal{E}_{\rm min}}$, the models described in \appref{app:model} can be applied directly, finding the aperture due to the detection of events with electric field \mbox{$\mathcal{E} > \ensuremath{\mathcal{E}_{\rm min}}$}. For experiments which also have a maximum threshold electric field $\ensuremath{\mathcal{E}_{\rm max}}$, I find the aperture as \begin{equation} A(E) = A(E; \ensuremath{\mathcal{E}_{\rm min}}) - A(E; \ensuremath{\mathcal{E}_{\rm max}}) , \label{eqn:aperture} \end{equation} which excludes events that would be detected with electric field \mbox{$\mathcal{E} > \ensuremath{\mathcal{E}_{\rm max}}$}. When \mbox{$\ensuremath{\mathcal{E}_{\rm min}} > \ensuremath{\mathcal{E}_{\rm max}}$}, as for the centre-pointing configuration of the GLUE experiment, the aperture is zero. The aperture $A_{\rm P}(E)$ can be found separately for each pointing configuration P used in an experiment. The total exposure for an experiment is found by summing the exposure for each pointing, as \begin{equation} X\!(E) = \sum_{\rm P} A_{\rm P}(E) \, t_{\rm obs,P} . \end{equation} The 90\%-confidence model-independent limit set by the experiment on a diffuse isotropic particle flux, assuming zero detected events, is then \begin{equation} \frac{dF_{\rm iso}}{dE} < \frac{ 2.3 }{ E \, X\!(E) } \label{eqn:limit} \end{equation} where the factor of 2.3 is the mean of a Poisson distribution for which there is a 10\% probability of zero detections.

\subsection{Neutrinos} \label{sec:neutrinos}

I find the sensitivity of lunar radio experiments to neutrinos using the model of \citet{gayley2009}, with one modification for consistency with the simulations of \citet{james2009b}. The two models are otherwise consistent in their assumptions, but they differ in the way they treat the composition of the Moon.
\citeauthor{james2009b}\ assume a surface regolith layer of depth 10~m underlaid by a sub-regolith layer of effectively infinite depth, both of which are characterised by their density $\rho$, their refractive index $n_r$, and their electric field attenuation length for radio waves $L_\gamma$, defined in terms of $\lambda$, the radio wavelength in vacuum. Values for these parameters are given in \tabref{tab:regvals}. \citeauthor{gayley2009}\ make the simplifying assumption that all detectable particle cascades occur in the regolith, for which they take the same values as \citeauthor{james2009b}\ for $\rho$ and $n_r$, but for $L_\gamma$ they give an expression equivalent to 29$\lambda$, matching the value used by \citeauthor{james2009b}\ for the sub-regolith layer. I modify the model of \citeauthor{gayley2009}\ by instead taking \mbox{$L_\gamma = 60\lambda$}, matching the value that \citeauthor{james2009b}\ use for the surface regolith layer.

\begin{table*} \centering \begin{threeparttable} \caption[Regolith parameters in different neutrino aperture models]{Regolith parameters in different neutrino aperture models.} \begin{tabular}{llccc} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{Layer} & \multicolumn{1}{c}{$\rho$} & \multicolumn{1}{c}{$n_r$} & \multicolumn{1}{c}{$L_\gamma$} \\ & & \multicolumn{1}{c}{(g\,cm$^{-3}$)} & & \\ \midrule \multirow{2}{*}{\citet{james2009b}} & regolith & 1.8 & 1.73 & 60$\lambda$ \\ & sub-regolith\,\tnote{a} & 3.0 & 2.50 & 29$\lambda$ \\[0.3em] \citet{gayley2009} & regolith & 1.8 & 1.73 & 29$\lambda$ \\[0.3em] this work & regolith & 1.8 & 1.73 & 60$\lambda$ \\ \bottomrule \end{tabular} \begin{tablenotes} \titem{a} Below depth of 10~m. \end{tablenotes} \label{tab:regvals} \end{threeparttable} \end{table*}

This value for $L_\gamma$ corresponds to a loss tangent of \mbox{$1 / (60 \pi n_r) = 0.003$}. The loss tangent of the regolith is determined primarily by the (depth-dependent) density and the abundances of FeO and TiO$_2$, with this value equivalent to a combined abundance of \mbox{$\sim 10$}\% at the surface (see Fig.~6 of Ref.\gcitep{olhoeft1975}), which is a reasonable approximation for the varied abundance over the surface of the Moon\gcitep{shkuratov1999}. At a depth of 10~m or more the loss tangent is roughly doubled, corresponding to the halved value of $L_\gamma$ that \citeauthor{james2009b}\ use for the sub-regolith layer. By matching the parameters used by \citeauthor{james2009b}\ for the surface regolith layer, I should find an equal contribution to the effective aperture from neutrinos interacting in this volume, but I should find a different contribution from the volume represented by the sub-regolith layer. Compared to their work, the value used here for the attenuation length of the sub-regolith layer is 2.1 times larger, leading to a corresponding increase in the detector volume, while the value for the density of this layer is 1.7 times smaller, leading to a corresponding decrease in the neutrino interaction rate; combined, these should lead to the neutrino aperture of the sub-regolith layer being overestimated here by a factor of 1.2. The analytic model used here also neglects the transmission losses at the regolith/sub-regolith interface modelled by \citeauthor{james2009b}, which will cause it to further overestimate the aperture contribution from the sub-regolith layer.
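The competing factors just described follow directly from the parameter values in \tabref{tab:regvals}, as the short numerical sketch below makes explicit; it uses only values from the table and the loss-tangent relation quoted above.

\begin{verbatim}
# Minimal sketch: loss tangent and sub-regolith aperture factors implied
# by the regolith parameters in the table above.
import math

n_r = 1.73                     # regolith refractive index
rho_reg, rho_sub = 1.8, 3.0    # densities (g/cm^3)
L_reg, L_sub = 60.0, 29.0      # attenuation lengths, in units of lambda

# Loss tangent corresponding to L_gamma = 60 lambda
print("loss tangent = %.4f" % (1.0 / (60 * math.pi * n_r)))

# Using surface-regolith values for the sub-regolith volume:
volume = L_reg / L_sub         # longer attenuation length -> more volume
rate = rho_reg / rho_sub       # lower density -> lower interaction rate
print("volume x%.1f, interaction rate x%.1f" % (volume, rate))
print("net sub-regolith aperture overestimate: x%.1f" % (volume * rate))
\end{verbatim}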
These inaccuracies will be most significant for low radio frequencies and high neutrino energies, for which the sub-regolith contributes the largest fraction of the total aperture. \subsubsection{Comparison of analytic and simulation results} \label{sec:nu_compare} \begin{figure} \centering \includegraphics[width=\linewidth]{nu_ap_compare} \caption[Comparison of analytic and simulated neutrino apertures]{Comparison of neutrino apertures from the analytic model used in this work (thin lines) and previously-reported apertures (thick lines) from the simulations of \citet{james2009b}, for the LUNASKA ATCA experiment\gcitep{james2010} (left) and the LUNASKA Parkes experiment\gcitep{bray2015a} (right), for a range of pointings (solid, dashed, dash-dotted). The ratio between apertures from analytic and simulation results (lower plots) shows that, compared to simulations, the analytic model tends to underestimate the aperture at low and high neutrino energies, but is approximately accurate at intermediate energies.} \label{fig:nu_ap_compare} \end{figure} The originally-reported apertures for the LUNASKA ATCA and LUNASKA Parkes experiments are based on the simulations of \citeauthor{james2009b}, so the level of agreement between these and the apertures calculated in this work may be taken as a measure of the accuracy of the simplifying assumptions used in the model of \citeauthor{gayley2009}, and the further assumptions made in my implementation thereof. For the LUNASKA ATCA experiment, this includes the assumption of a flat bandpass made in this work, as a piecewise linear approximation to the bandpass was used in calculating the originally-reported limit; for the LUNASKA Parkes experiment, with a narrower band, a flat bandpass is assumed in both the original report and this work. A comparison of the apertures from the original reports and in this work is shown in \figref{fig:nu_ap_compare}. For both experiments, the apertures derived in this work indicate a higher neutrino energy threshold than those from the original reports, agree approximately at slightly higher energies, and (in most cases) indicate a lower aperture than the original reports at higher energies. The form of this deviation matches that found in a previous comparison\gcitep{gayley2009} for the GLUE experiment, though an absolute comparison is difficult, as no explanation is given by \citeauthor{gayley2009}\ for their choice of the limb coverage parameter $\zeta$. The simplest explanation for the first discrepancy --- the increased energy threshold in the analytic model --- is that it is due to the variable inelasticity of neutrino-nucleon interactions (e.g.\ \cite{connolly2011}): the interactions of lower-energy neutrinos may be detectable only when a large fraction of their energy is manifested in the resulting hadronic particle cascade, rather than the flat rate of 20\% assumed in this work, resulting in a lower detectable neutrino energy threshold for models (such as those of \citeauthor{james2009b}) which include this effect. Alternatively, the first discrepancy may also be due to the charged leptons (electrons, muons and taus) produced by neutrino-nucleon charged-current interactions, which are also neglected in this work. 
These particles typically carry \mbox{$\sim 80$}\% of the energy of the primary neutrino, and are thus capable of initiating a particle cascade which is detectable even when the primary hadronic cascade (with the remaining \mbox{$\sim 20$}\% of the energy) is below the detection threshold; however, muons and taus do not generally initiate a single cascade containing the majority of their energy, and the electromagnetic cascade initiated by a UHE electron is elongated by the LPM effect\gcitep{landau1953,migdal1956}, causing the resulting Askaryan radiation to be directed in a very narrow cone; in either case, the secondary cascade is unlikely to be detected. Consequently, these secondary leptons make only a minor (\mbox{$\sim 10$}\%) contribution\gcitep{james2009b} to the neutrino aperture in the energy range in which the primary hadronic cascade is detectable, but the possibility of detecting the electromagnetic cascade from a charged-current interaction of an electron neutrino provides some minimal sensitivity down to a lower threshold neutrino energy than would otherwise be the case, matching the observed discrepancy in the threshold. This is also consistent with \citeauthor{james2009b}, who find the fractional contribution to the neutrino aperture of these primary electromagnetic cascades to be larger for lower neutrino energies. However, this contribution was omitted from the simulations for the LUNASKA Parkes experiment, so it can only assist in explaining the discrepancy seen for the LUNASKA ATCA experiment. The second discrepancy --- the decreased neutrino aperture at high energies in the analytic model --- is in the wrong direction and probably much too large to be explained by the different treatment of the sub-regolith layer. One possible explanation is that it is a consequence of the small-angle approximations made by \citet{gayley2009}, under the assumption that a particle cascade is only detectable from a point very close to the Cherenkov angle, which becomes less accurate at higher energies. Part of the discrepancy may also be caused by the way the aperture calculation in \eqnref{eqn:aperture} incorporates the maximum threshold $\ensuremath{\mathcal{E}_{\rm max}}$, which is a more significant constraint at higher energies; this is supported by the lesser discrepancy found for the LUNASKA ATCA experiment, which did not apply an anticoincidence filter and therefore had no maximum threshold. Finally, the discrepancy may be largely due to the assumption of a fixed limb coverage parameter $\zeta$: at high energies, particle cascades may be visible outside the fraction of the lunar limb covered by the primary telescope beam, through the beam sidelobes, which is neglected in the analytic model. This explanation is supported by the absence of this discrepancy for the Moon-centre pointing of the LUNASKA ATCA experiment, for which I take \mbox{$\zeta = 100$}\%. Future refinement of the analytic model might benefit from incorporating an energy-dependent limb coverage parameter $\zeta(E)$ to correct for this effect. Note that all of the prospective future experiments considered in \secrefs{sec:lofar}{sec:auscope} have 100\% limb coverage, so this effect should not apply to them. Most importantly, the analytic model of \citet{gayley2009} as implemented in this work produces apertures which are consistent with the simulations of \citet{james2009b} at intermediate energies, around the region of maximum sensitivity to an $E_\nu^{-2}$ neutrino spectrum.
The apertures in this region are consistent within a factor of two, which may be taken as the uncertainty associated with the implementation of this model of the neutrino aperture. This is smaller than the uncertainties associated with the neutrino-nucleon cross-section\gcitep{connolly2011}, or with small-scale lunar surface roughness\gcitep{james2010}.

\subsubsection{Comparison of different experiments} \label{sec:nu_exp_compare}

The neutrino apertures that I calculate for the experiments in \secref{sec:exps} are shown in \figref{fig:nu_aperture}. They show trends that are familiar from previous work, but worth revisiting. The aperture for each experiment increases rapidly above some threshold neutrino energy for which the Askaryan radio pulse is strong enough to detect, and continues to increase, more slowly, at higher energies, both because the increased radio pulse strength allows a cascade to be detected deeper in the regolith, and because the neutrino-nucleon cross-section increases with energy, making down-going neutrinos more likely to interact in the regolith. By comparison with \tabref{tab:exps}, we see that the minimum detectable neutrino energy is determined by $\ensuremath{\mathcal{E}_{\rm min}}$, and the aperture for higher-energy neutrinos is determined by the limb coverage $\zeta$. Lower-frequency experiments (NuMoon and LOFAR) have a larger aperture, as they can detect cascades over a wider range of angles or at greater depths beneath the lunar surface, although this latter effect may be overestimated here due to the optimistic assumptions regarding the sub-regolith layer. The parameter $\ensuremath{\mathcal{E}_{\rm max}}$ has little effect on the aperture, implying that the detectable cascades are dominated by those producing radio pulses with amplitudes only slightly exceeding $\ensuremath{\mathcal{E}_{\rm min}}$. \begin{figure} \centering \includegraphics[width=\linewidth]{nu_aperture} \caption[Neutrino apertures]{Neutrino apertures for the experiments listed in \secref{sec:exps}, calculated with the analytic model used in this work. For experiments which used multiple pointing configurations, on the limb, half-limb or centre of the Moon, the aperture for each pointing is shown individually.} \label{fig:nu_aperture} \end{figure} For past experiments, the corresponding limits on the diffuse neutrino flux are shown in \figref{fig:nu_flux_old}, compared to the limits originally reported for each experiment. For future experiments, limits are shown in \figref{fig:nu_flux_new}, along with predicted neutrino fluxes from the decay of superheavy particles from kinks in cosmic strings in the model of \citet{lunardini2012}. These are the most optimistic predictions not yet excluded by other (non-lunar) neutrino detection experiments; this is the class of models most suited to being tested by lunar radio experiments. For the most optimistic of the fluxes shown in this figure, the LOFAR experiment would expect to detect 5.1~neutrinos in a nominal 200~hours of observing time, or exclude this model with a confidence of 99\% if no neutrinos were detected. \begin{figure} \centering \includegraphics[width=\linewidth]{nu_flux_old} \caption[Neutrino flux limits from past experiments]{Limits on the diffuse neutrino flux set by the past experiments listed in \secref{sec:exps}.
Solid lines show the limits derived in this work based on the parameters in \tabref{tab:exps}, while dotted lines show previously-reported limits for the Parkes\gcitep{james2007}, GLUE\gcitep{gorham2004a}, Kalyazin\gcitep{beresnyak2005}, LUNASKA ATCA\gcitep{james2010}, NuMoon\gcitep{buitink2010}, RESUN\gcitep{jaeger2010} and LUNASKA Parkes\gcitep{bray2015a} experiments. In the case of the Kalyazin experiment, this is a model-dependent limit for an $E_\nu^{-2}$ neutrino spectrum, and has been rescaled from 95\% to 90\% confidence.} \label{fig:nu_flux_old} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{nu_flux_new} \caption[Neutrino flux limits from future experiments]{Limits on the diffuse neutrino flux that may be set by the near-future experiments listed in \secref{sec:exps}, for the nominal observing times given in the text. Dashed lines show the potential limits derived in this work based on the parameters in \tabref{tab:exps}, while solid lines (unlabelled) show the limits set by past experiments from \figref{fig:nu_flux_old}. Dash-dotted lines show models of the potential neutrino flux from kinks in cosmic strings\gcitep{lunardini2012}.} \label{fig:nu_flux_new} \end{figure} The limits found in this work for past experiments, shown in \figref{fig:nu_flux_old}, are generally less constraining than those originally reported for each experiment; in some cases, dramatically so. This may result from differences between the original analysis and the re-analysis in this work either in the calculation of the sensitivity of the experiment to coherent radio pulses, or in the model used to translate this radio sensitivity to a neutrino aperture. To discriminate between these possibilities, \figref{fig:nu_flux_compare} also shows, for selected experiments, neutrino limits calculated with the aperture model used in this work, but with the radio sensitivity from the original reports. For the GLUE experiment, the limits I calculate for this plot are for the limb pointing only, as this is the only configuration for which \citet{williams2004} reports the radio detection threshold (\mbox{$\ensuremath{\mathcal{E}_{\rm min}} = 0.00914$} $\mu$V/m/MHz) --- but this configuration was used for a majority (59\%) of the total observing time for this experiment, and had a lower radio detection threshold than other pointings, so the limit set by this pointing alone is close to that for the entire experiment. For NuMoon, the reported flux density threshold \mbox{$\ensuremath{F_{\rm min}} = 240$}~kJy was converted to a minimum spectral electric field \mbox{$\ensuremath{\mathcal{E}_{\rm min}} = 0.128$} $\mu$V/m/MHz with \eqnref{eqn:Fmin}, using the 55~MHz bandwidth of the experiment, and the limb coverage of \mbox{$\zeta = 0.67$} was taken from the original report\gcitep{buitink2010}. For RESUN, the originally-reported radio detection threshold is \mbox{$\ensuremath{\mathcal{E}_{\rm min}} = 0.017$} $\mu$V/m/MHz\gcitep{jaeger2010}. All other parameters for the radio sensitivity of these experiments are as given in \tabref{tab:exps}. \begin{figure} \centering \includegraphics[width=\linewidth]{nu_flux_compare} \caption[Comparison of neutrino flux limits]{Limits on the diffuse neutrino flux set by selected past experiments, showing versions of each limit calculated with different models, to illustrate the effects of the choice of model at each stage of the calculation. 
As in \figref{fig:nu_flux_old}, solid lines show limits derived in this work, and dotted lines show limits from the original reports\gcitep{gorham2004a,buitink2010,jaeger2010}. Dashed lines show limits calculated with the neutrino aperture model used in this work, but based on the radio pulse detection thresholds from the original reports, as described in the text. The upper solid line for the NuMoon experiment shows the limit after allowing for the effect of the anticoincidence filter between the two on-Moon beams described in \secref{sec:wsrt} (i.e.\ without the modified analysis in \secref{sec:wsrt_redone}).} \label{fig:nu_flux_compare} \end{figure} For the GLUE experiment, the neutrino limit calculated in this work with the originally-reported radio sensitivity is more similar to the limit calculated with the revised radio sensitivity from \secref{sec:glue} than to the limit from the original report. This indicates that the bulk of the discrepancy is due to the relative optimism of the simulations of \citet{gorham2001}, as previously found by \citet{james2009b} and \citet{gayley2009}. The limit calculated here with the radio detection threshold and lunar coverage from the original report of the NuMoon experiment is a factor \mbox{$\sim 6$} less constraining than that reported by \citet{buitink2010}, roughly matching a factor \mbox{$\sim 10$} found by \citet{james2011} in a similar test with their own aperture model. The limit is relaxed by a further factor \mbox{$\sim 5$} when using the revised radio sensitivity derived in \secref{sec:wsrt}, in proportion with the decrease in the estimated lunar coverage, and by a final factor \mbox{$\sim 5$}, or more at higher energies, if the radio sensitivity is calculated with the parameter $\ensuremath{\mathcal{E}_{\rm max}}$ based on the anticoincidence cut applied in this experiment (i.e.\ neglecting the modified analysis in \secref{sec:wsrt_redone}). For the RESUN experiment, the limit from the original report and the limit calculated here based on the same radio detection threshold use almost the same aperture model, but the differences (in the treatment of the regolith, and of thermal noise) cause the latter to be slightly (factor \mbox{$\sim 1.5$}) more constraining. The reduced sensitivity to neutrinos shown for this experiment in \figref{fig:nu_flux_old} is therefore entirely due to the revised radio sensitivity calculated in \secref{sec:resun}. \subsection{Cosmic rays} \label{sec:crs} I estimate the sensitivity of lunar radio experiments to CRs using the model of \citet{jeong2012}, with one simple but highly significant modification. \citeauthor{jeong2012}\ based their model for the CR aperture on the model of \citet{gayley2009} for the neutrino aperture, which correctly took the energy of a neutrino-initiated hadronic particle cascade to be \mbox{$\sim 20$}\% of the original neutrino energy, as described in \secref{sec:nu_compare}. For CRs, however, 100\% of the CR energy goes into a hadronic particle cascade. The result of this correction is to increase the expected radio pulse amplitude, and thus to decrease the detection threshold in the CR energy, by a factor of five. Note that other models\gcitep{scholten2006,james2009b} already assume 100\% of the CR energy to go into a hadronic particle cascade, so no modification is implied to results based on these models. 
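Since the radio pulse amplitude scales linearly with the cascade energy, the size of this correction follows immediately; the trivial sketch below makes the scaling explicit, with the hadronic energy fractions taken from the text.

\begin{verbatim}
# Minimal sketch: effect of the hadronic-cascade energy fraction on the
# CR detection threshold. The pulse amplitude is proportional to the
# cascade energy, so the threshold in primary energy scales inversely
# with the fraction of that energy in the hadronic cascade.
frac_nu = 0.2   # hadronic fraction for neutrino-initiated cascades
frac_cr = 1.0   # hadronic fraction for cosmic rays

gain = frac_cr / frac_nu
print("pulse amplitude increased x%.0f" % gain)
print("CR threshold energy decreased x%.0f" % gain)
\end{verbatim}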
The CR apertures that I calculate for the experiments in \secref{sec:exps} are shown in \figref{fig:cr_aperture}, and display several differences from the neutrino apertures in \figref{fig:nu_aperture}. Because all CRs interact very close to the lunar surface, and at sufficiently high energies they are almost all detectable, the CR aperture increases only slowly at high energies. For experiments with a maximum threshold $\ensuremath{\mathcal{E}_{\rm max}}$, the aperture decreases at high energies, implying that the Askaryan radio pulses from these events are dominated by strong pulses which may be rejected by anticoincidence criteria. As in \figref{fig:nu_aperture}, the low-frequency experiment with LOFAR has a larger maximum aperture than other experiments, though in this case this is purely because a cascade may be detected from a broader range of angles. \begin{figure} \centering \includegraphics[width=\linewidth]{cr_aperture} \caption[Cosmic ray apertures]{CR apertures for the experiments listed in \secref{sec:exps}, calculated with the analytic model used in this work. As in \figref{fig:nu_aperture}, apertures for each pointing configuration are shown individually. Note the characteristic decrease in the aperture at high energies for experiments which apply anticoincidence rejection, and hence have a defined maximum radio threshold $\ensuremath{\mathcal{E}_{\rm max}}$ (see \tabref{tab:exps}).} \label{fig:cr_aperture} \end{figure} The corresponding limits on the diffuse CR flux are shown in \figref{fig:cr_flux}, compared to the only such limit that has been previously published, for the NuMoon experiment\gcitep{terveen2010}. As in \secref{sec:nu_exp_compare}, the limit found for this experiment in this work is significantly less constraining. \begin{figure} \centering \includegraphics[width=\linewidth]{cr_flux} \caption[Cosmic ray flux limits]{Limits on the diffuse CR flux set by the experiments listed in \secref{sec:exps}. Solid lines show the limits derived in this work based on the parameters in \tabref{tab:exps}, while a dotted line shows the previously-reported limit for the NuMoon experiment\gcitep{terveen2010}, the only one of these experiments for which such a limit has been published. Dashed lines show the limits that may be set by near-future experiments, for the nominal observing times given in the text. The measured flux shown is from observations by the Pierre Auger Observatory\gcitep{abraham2010}, with a 22\% systematic uncertainty $\sigma_{\rm sys}$ in the energy scale, and the corresponding limit at higher energies (dotted) is based on its contemporary exposure of 12,790 km$^2$\,sr\,yr (now 66,000 km$^2$\,sr\,yr\gcitep{aab2014b}), with the same definition as the other limits.} \label{fig:cr_flux} \end{figure} Of the past lunar radio experiments shown here, the LUNASKA Parkes experiment came closest to being able to detect the known CR spectrum, with 0.09 events expected to be detected based on a parameterisation of the spectrum\gcitep{abraham2010}, or a range of 0.04--0.19 events corresponding to the 22\% systematic uncertainty in the energy scale; it is therefore unsurprising that this experiment did not detect any events. The prospective Parkes PAF experiment shown here would expect to detect 1.4 events (uncertainty range from energy scale of 0.7--2.8 events) in a nominal 200~hours of observing time. These numbers will, however, depend strongly on the effects of small-scale lunar surface roughness, which are neglected here but will dominate the uncertainty. 
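The event counts quoted in this section and in \secref{sec:nu_exp_compare} translate into detection and exclusion probabilities through elementary Poisson statistics, as in \eqnref{eqn:limit}; a minimal sketch follows, with the expected counts taken from the text.

\begin{verbatim}
# Minimal sketch: Poisson statistics behind the quoted event counts.
import math

# 90%-confidence limit factor: Poisson mean with a 10% chance of zero
print("90%% CL factor: %.1f" % -math.log(0.1))

# Probability of at least one detection for a given expected count
for label, mu in (("LOFAR, optimistic cosmic-string flux", 5.1),
                  ("Parkes PAF, 200 h of UHECR exposure", 1.4),
                  ("Parkes PAF, 1000 h of UHECR exposure", 7.0)):
    p = 1 - math.exp(-mu)
    print("%s: %.1f%% chance of >= 1 event" % (label, 100 * p))
\end{verbatim}

The 5.1-event and 7-event cases reproduce the 99\% and 99.9\% confidence figures quoted in the text; the corresponding probability for the nominal 200-hour Parkes PAF experiment, \mbox{$\sim 75$}\%, is derived here under the same assumption.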
\section{Discussion} \label{sec:discussion} This work indicates that past lunar radio experiments are in some cases less sensitive than initially believed, both in their sensitivity to radio pulses and in their consequent sensitivity to the UHE particle flux. This underscores the need for these experiments to be conducted with a proper appreciation of the specialised requirements for the detection of coherent radio pulses, and for all experimental details to be fully reported so that they can be re-evaluated by other researchers; it remains to be seen whether other effects will be discovered that further affect the sensitivity of the experiments considered here. Ideally, it is also desirable for multiple experiments to be conducted with different techniques, to minimise the possibility that a single oversight will lead to the acceptance of an incorrect result. Previous comparisons between low- and high-frequency lunar radio experiments have generally found the larger particle apertures of the former to be a decisive advantage\gcitep{scholten2006,james2009b,gayley2009}. However, these comparisons have generally assumed frequency-independent radio sensitivity. The comparison in \tabref{tab:exps} indicates that low-frequency experiments, due to a combination of high system temperatures and increased ionospheric dispersion, typically have an increased radio pulse threshold. This is likely to remain the case for the near-future experiments considered here, until the advent of the SKA, for which the extremely large collecting area of its low-frequency component results in sensitivity similar to that of the high-frequency component\gcitep{bray2014b}. The application of existing analytic aperture models indicates that an experiment with 200~hours of observing time on the Parkes radio telescope, using a phased-array feed, would detect an average of 1.4~UHECRs, and an equal observing time with LOFAR could exclude UHE neutrino spectra predicted by exotic-physics models (e.g.\ \cite{lunardini2012}) with up to 99\% confidence for the most optimistic predictions. (The correction applied in \secref{sec:crs} to the model of \citet{jeong2012} reinforces their conclusion that, in the absence of neutrinos from such models, lunar radio experiments will detect UHECRs well before they detect the more confidently expected cosmogenic neutrino flux.) Note that these observing times are nominal values, representing a comparable effort to previous experiments. The likely prospect of the first UHECR detection with this technique, in particular, could justify a longer experiment; ignoring the uncertainties in the detection rate of one UHECR per 140~hours, 1,000~hours of observations with a phased-array feed on the Parkes radio telescope would detect an average of 7~UHECRs, with a 99.9\% probability of at least one detection. Future theoretical work in this field should seek to refine these predictions through further development of CR and neutrino aperture models, either by improving the analytic models used here or through new simulations, in particular to properly represent the effects of small-scale lunar surface roughness. The parameters derived in this work to describe lunar radio experiments allow the easy application of future models to recalculate the sensitivity to UHE particles of past experiments, or to predict the sensitivity of new ones. 
\section*{Acknowledgements} I would like to thank S.\ Buitink, B.\ Stappers and R.\ Smits for further information on the NuMoon observations with the WSRT, as well as T.\,R.\ Jaeger and R.\,L.\ Mutel for further information on the RESUN observations with the EVLA. I would also like to thank J.\,E.\,J.\ Lovell and J.\,M.\ Dickey for information about the AuScope VLBI array, and R.\,D.\ Ekers for general discussions regarding lunar radio experiments. Finally, I would like to thank an anonymous referee for several helpful comments. This research was supported by the Australian Research Council's Discovery Project funding scheme (project number DP0881006) and by the European Research Council's Starting Grant funding scheme (project number 307215, LODESTONE).
\section{Introduction}

The search for new particles has been given a boost with the discovery of a Higgs boson at the Large Hadron Collider (LHC) at CERN \cite{ATLAS:2012ae,Chatrchyan:2012tx}. While we are still awaiting confirmation of whether this is the long-awaited Higgs boson of the Standard Model (SM) or, as it appears at present, a Higgs boson with possibly tantalizing signs of new physics, phenomenologists are trying to predict benchmarks for beyond the Standard Model (BSM) physics that could follow. Some adhere to long-hyped scenarios such as weak-scale supersymmetry (either in its minimal incarnation, or in a non-minimal form), extended gauge structures, extra-dimensional models, or composite Higgs models, and base their analyses on the most likely and telling signatures of these models. Yet there has also been some interest in testing more generic collider features that would have clear signatures and may be common to several models. The advantage is that several models might yield the same collider signals, so that a simple but general model with minimal additions (particles and interactions) to the SM can be useful for obtaining results that are easy to compare with data. A small set of model parameters is usually involved, such as the masses of the new particles and the coupling strengths of their interactions. In this setup, one could think of the SM as a limiting case in which the new physics sector decouples. Adopting this bottom-up approach for new physics phenomenology, the cleanest and clearest results are most likely obtained for exotic particles, {\it i.e.}, particles with quantum numbers unlike those of the SM particle content. In this work, we propose to investigate the possibility of supplementing the Standard Model with a very simple kind of exotic state, which we denote by $X^{++}$ and which is either a scalar, fermion or vector field with two units of electric charge. We keep our analysis as general as possible by allowing the new particles to belong to different $SU(2)_L$ representations, but further restrict ourselves to the simplest ones, namely the singlet, doublet and triplet cases. This goes along the same lines as several recent works investigating doubly-charged particles in more or less generic situations \cite{Cuypers:1996ia,DelNobile:2009st,Rentala:2011mr,Meirose:2011cs,Hisano:2013sn,delAguila:2013yaa}, our work being however the only analysis studying in detail the collider implications of various spin and weak isospin quantum numbers. From a top-down point of view, it must be noted that such doubly-charged particles appear in many BSM scenarios, and are thus of particular interest to model builders. As examples, doubly-charged scalar states, often dubbed doubly-charged Higgs fields, appear in left-right symmetric models \cite{Pati:1974yy, Mohapatra:1974hk, Mohapatra:1974gc,Senjanovic:1975rk, Mohapatra:1977mj, Senjanovic:1978ev,Mohapatra:1979ia} or in see-saw models for neutrino masses with Higgs triplets \cite{Cheng:1980qt, Gelmini:1980re, Zee:1980ai, Han:2005nk, Lee:2005kd, Picek:2009is, Majee:2010ar,Aoki:2011yk, Chen:2011de,Kumericki:2011hf, Aoki:2011pz,Chiang:2012dk,Kumericki:2012bh,Sugiyama:2012yw,Picek:2012ei, Kanemura:2013vxa}.
Doubly-charged fermions can appear in extra-dimensional models including custodian taus \cite{Csaki:2008qq, Chen:2009gy, Kadosh:2010rm, delAguila:2010vg, delAguila:2010es,Delgado:2011iz}, in new physics models inspired by string theories \cite{Cvetic:2011iq}, or as the supersymmetric partners of the doubly-charged scalar fields in supersymmetric extensions of left-right symmetric models \cite{Demir:2008wt, Frank:2007nv,Babu:2013ega,Franceschini:2013aha}. Finally, BSM theories with an extended gauge group often include doubly-charged vector bosons \cite{Frampton:1989fu, Pal:1990xw, Pisano:1991ee, Frampton:1991wf, Frampton:1992wt}, although it is also possible to consider vector states with a double electric charge independently of any gauge-group structure, as in models with a non-commutative geometry or in composite or technicolor theories \cite{Farhi:1980xs, Harari:1982xy, Cabibbo:1983bk, Eichten:1984eu, Pancheri:1984sm, Buchmuller:1985nn, Stephan:2005uj, Gudnason:2006ug, Biondini:2012ny}. Consequently, same-sign dilepton and/or doubly-charged Higgs boson resonances have been the topic of many past accelerator analyses. With the new states usually assumed to be produced either singly or in pairs, no events have been observed by experiments at the Large Electron Positron collider (LEP) \cite{OPAL, OPAL_single, L3, DELPHI}, the Hadron Electron Ring Accelerator (HERA) \cite{H1,H1bis} and the Tevatron \cite{D0, D0bis, CDF, CDFFV}. The most up-to-date bounds have, however, been derived more recently by the LHC experimental collaborations. Both ATLAS and CMS have searched for long-lived doubly-charged states \cite{Aad:2013pqd,Chatrchyan:2013oca}, basing their analyses either on the identification of the new particles through their longer time-of-flight to the outer subdetectors or on their anomalous energy loss along their tracks. Assuming a Drell-Yan-like pair production, the long-lived doubly-charged state masses have been constrained to lie above 685 GeV after analyzing 5 fb$^{-1}$ of LHC collisions at a center-of-mass energy of $\sqrt{S_h}=7$ TeV and 18.8 fb$^{-1}$ of collisions at $\sqrt{S_h}$ = 8 TeV \cite{Chatrchyan:2013oca}. Nevertheless, these limits do not hold for promptly-decaying doubly-charged particles. In this case, dedicated studies only exist for doubly-charged Higgs bosons. Being pair-produced, they are then assumed to decay into a pair of leptons with the same electric charge through Majorana-type interactions \cite{Chatrchyan:2012ya,ATLAS:2012hi}. Assuming a branching fraction of 100\% for decays into leptons, {\it i.e.}, neglecting the possible decays into a $W$-boson pair, the doubly-charged Higgs mass has been constrained to be larger than about 450 GeV. All these existing mass bounds can, however, be easily evaded by relaxing the rather constraining new physics assumptions. We hence follow the approach of most model builders and treat the new particle mass as a free parameter. Our work is organized as follows. In Section \ref{sec:themodel} we describe in detail our model for particles and interactions, organizing the discussion by the spin and weak isospin of the new fields. We also compute analytical expressions for the production cross sections of the new states at hadron colliders and for their decay widths into Standard Model particles.
We then dedicate Section \ref{sec:numerics} to a detailed numerical analysis of doubly-charged particle signals at the LHC and briefly discuss, in Section~\ref{sec:MC}, different kinematical variables that would allow for distinguishing the spin and/or $SU(2)_L$ quantum numbers of the new states. Finally, we summarize our results in Section~\ref{sec:conclusion}.

\section{Simplified models for exotic doubly-charged states} \label{sec:themodel}

Following the approach of the LHC New Physics Working Group \cite{Alves:2011wf}, specific final state topologies are described by means of dedicated simplified models. These consist of minimal extensions of the Standard Model in which the number of new states and operators is maximally reduced. Moreover, the model parameters are translated into relevant products of cross sections and branching ratios, so that LHC data can be easily reinterpreted in terms of constraints on these quantities. In this work, we construct a set of simplified models describing all the mechanisms yielding the production of doubly-charged particles, followed by their subsequent decays into pairs of charged leptons (possibly together with additional neutral states when relevant) with the same electric charges. Only final state signatures with three leptons or more will be considered, as the associated Standard Model background is known to be under good control. We start with the Standard Model field content and the $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge group and then add an exotic doubly-charged, non-colored state lying in a specific representation of the Lorentz group and $SU(2)_L$. Motivated by the most common existing new physics theories, we restrict ourselves to scalar, spin $1/2$ and vector states, which we assume to lie in either the trivial, fundamental or adjoint representation of $SU(2)_L$. However, higher-spin states or higher-dimensional representations of $SU(2)_L$ could also be considered, as in Ref.~\cite{Biondini:2012ny}, where the phenomenology of excited leptons in the $\utilde{\bf 4}$ representation of $SU(2)_L$ is investigated. In addition, the hypercharge quantum numbers of the new multiplet are chosen so that the doubly-charged component is always the state with the highest electric charge. Finally, any interaction allowed by the model symmetries but irrelevant for our study is omitted from the Lagrangians presented in this section. In the rest of this section, we construct simplified models following a classification of the doubly-charged states by their spin. We focus on their signals at the LHC and analytically compute the cross sections and decay widths relevant for the production of final states with three leptons or more, the corresponding numerical analysis being performed in Section \ref{sec:numerics}. This will guide us to a choice of benchmark scenarios to be considered for a more advanced study based on Monte Carlo simulations, as in Section \ref{sec:MC}.

\subsection{Spin $0$ doubly-charged particles}\label{sec:modsc}

In the following, we focus on simplified models describing the dynamics of $SU(2)_L$ multiplets containing a doubly-charged state as the component with the highest electric charge.
We denote by $\phi$, $\Phi$ and $\mathbf{\Phi}$ three complex scalar fields lying in the $\utilde{\bf 1}$, $\utilde{\bf 2}$ and $\utilde{\bf 3}$ representations of $SU(2)_L$, respectively, with hypercharge quantum numbers set to $Y_\phi = 2$, $Y_\Phi = 3/2$ and $Y_{\mathbf\Phi} = 1$, \be\label{eq:scalarfields} \phi \equiv \phi^{++} \ , \qquad \Phi^i \equiv \bpm \Phi^{++} \\ \Phi^+ \epm \qquad\text{and}\qquad \mathbf \Phi^i{}_j \equiv \bpm \frac{\mathbf \Phi^+}{\sqrt{2}} & \mathbf \Phi^{++}\\ \mathbf \Phi^0 & -\frac{\mathbf \Phi^+}{\sqrt{2}} \epm \ . \ee In the last expression, we have employed the matrix representation for triplet fields defined by \be\label{sec:adjfundeq} \mathbf \Phi^i{}_j = \frac{1}{\sqrt{2}} (\sigma_a)^i{}_j\ \mathbf\Phi^a \ , \ee the matrices $\sigma^a$ being the Pauli matrices and $a=1,2,3$ an $SU(2)_L$ adjoint gauge index\footnote{In our notations, we always employ Latin letters of the middle of the alphabet for fundamental indices and Latin letters of the beginning of the alphabet for adjoint indices.}. Diagonalizing the third generator of $SU(2)_L$ in the adjoint representation, the gauge eigenstates $\mathbf\Phi^a$ can be linked to the physical mass-eigenstates $\mathbf\Phi^0$, $\mathbf \Phi^+$ and $\mathbf \Phi^{++}$ by means of \be \mathbf\Phi^1 = \frac1{\sqrt{2}} \Big[\mathbf\Phi^0 + \mathbf\Phi^{++}\Big] \ , \quad \mathbf\Phi^2 = \frac1{\sqrt{2} i} \Big[\mathbf\Phi^0 - \mathbf\Phi^{++}\Big] \quad\text{and}\quad \mathbf\Phi^3 = \mathbf\Phi^+ \ . \ee Kinetic and gauge interaction terms for the three fields of Eq.\ \eqref{eq:scalarfields} are fixed by gauge invariance, \be \lag_{\rm kin} = D_\mu \phi^\dag D^\mu \phi + D_\mu \Phi_i^\dag D^\mu \Phi^i + \ D_\mu \mathbf\Phi_a^\dag D^\mu \mathbf\Phi^a +\ldots \ , \label{eq:Lsck}\ee the covariant derivatives being given by \be\bsp D_\mu\phi =&\ \partial_\mu\phi - 2 i g' B_\mu\phi \ , \\ D_\mu\Phi^i =&\ \partial_\mu\Phi^i - \frac{3}{2} i g' B_\mu\Phi^i - i g\ \frac{(\sigma_a)^i{}_j}{2}\ \Phi^j\ W_\mu^a \ , \\ D_\mu\mathbf\Phi^a =&\ \partial_\mu\mathbf\Phi^a - i g' B_\mu\mathbf\Phi^a + g\ \e_{bc}{}^a\ \mathbf\Phi^c\ W_\mu^b \ , \esp\label{eq:covder}\ee and the dots standing for mass terms. In the expressions above, $g$ and $g'$ are the weak and hypercharge coupling constants, respectively, and we have normalized the structure constants of $SU(2)$ as $\e_{12}{}^3=1$. We have also introduced the electroweak gauge bosons denoted by $B_\mu$ and $W_\mu^a$. The Lagrangian above also allows the components of fields lying in a non-trivial representation of $SU(2)_L$ to decay into each other, together with an accompanying gauge boson, if kinematically allowed. However, in this work we choose to focus on the low-mass region of the parameter space, where the mass splittings induced, {\it e.g.}, by electroweak symmetry breaking are assumed to be smaller than the weak boson masses. In order to allow the extra fields $\phi$, $\Phi$ and $\mathbf\Phi$ to decay, new Yukawa interactions are hence required, \be \lag_{\rm yuk} = \frac12 y^{(1)}\ \phi\ \lbar^c_R l_R + \frac{y^{(2)}}{\Lambda}\ \Phi^i\ \Lbar^c_i \gamma_\mu D^\mu l_R + \frac12 y^{(3)}\ \mathbf \Phi^i{}_j\ \Lbar^c_i L^j + {\rm h.c.} \ , \label{eq:Lscy}\ee where the three quantities $y^{(1)}$, $y^{(2)}$ and $y^{(3)}$ are $3 \times 3$ matrices in generation space and where flavor indices have been omitted for clarity.
Moreover, we have explicitly indicated the chirality of the lepton fields, or equivalently their $SU(2)_L$ representations, so that there is no confusion about the action of the gauge-covariant derivative $D_\mu$. The (four-component) spinorial field $l_R$ hence stands for the right-handed charged lepton singlet of $SU(2)_L$, and the objects \be\label{eq:deflep} L^i = \bpm \nu_L\\ l_L \epm \qquad\text{and}\qquad L_i = \e_{ij}\ L^j = \bpm l_L \\ - \nu_L \epm \ee are two representations of the weak doublet comprised of the left-handed neutrino $\nu_L$ and charged lepton $l_L$ fields. The terms included in the Lagrangian of Eq.\ \eqref{eq:Lscy} are consistent with gauge and Lorentz invariance, which leads to the appearance of the charge conjugation operator denoted by the superscript $^c$. Moreover, care is taken so that each component of the new scalar fields is allowed to decay into Standard Model particles. In particular, this implies the use of a higher-dimensional operator suppressed by an effective scale $\Lambda$ for the coupling of the weak doublet $\Phi$ to the lepton fields, since the four-component spinor product $\xibar^c_R \lambda_L$ vanishes for any fermionic fields. The Lagrangian of Eq.\ \eqref{eq:Lsck} includes couplings of the new fields to the electroweak gauge bosons so that the former can then be produced at hadron colliders either from quark-antiquark scattering or through vector boson fusion. The only processes giving rise to a signature with three charged leptons or more consist of the pair production of two doubly-charged fields or of the associated production of one singly-charged and one doubly-charged state. Considering first the neutral current channels, $q\bar q \to \phi^{++} \phi^{--}$, $q\bar q\to \Phi^{++} \Phi^{--}$ and $q\bar q \to \mathbf{\Phi}^{++} \mathbf{\Phi}^{--}$, the relevant partonic cross sections read, as a function of the partonic center-of-mass energy $\sh$, \be\bsp \hat\sigma^{NC}_1 = &\ \frac{4 \pi \alpha^2 \sh}{9} \big[ 1 - 4x^2_{\phi^{++}}\big]^{\frac32} \bigg[ \frac{e^2_q}{\sh^2} - \frac{ e_q (L_q+R_q) (\sh-M_Z^2)}{2 c_W^2 \sh |\sh_Z|^2} + \frac{L_q^2 + R_q^2}{8 c_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_2 = &\ \frac{4 \pi \alpha^2 \sh}{9} \big[ 1 - 4x^2_{\Phi^{++}}\big]^{\frac32} \bigg[ \frac{e^2_q}{\sh^2} + \frac{ e_q (1-4s_W^2) (L_q+R_q) (\sh-M_Z^2)}{8 c_W^2 s_W^2 \sh |\sh_Z|^2}+ \frac{(1-4s_W^2)^2 (L_q^2 + R_q^2)}{128 c_W^4 s_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_3 = &\ \frac{4 \pi \alpha^2 \sh}{9} \big[ 1 - 4x^2_{\mathbf{\Phi}^{++}}\big]^{\frac32} \bigg[ \frac{e^2_q}{\sh^2} + \frac{ e_q (1-2s_W^2) (L_q+R_q) (\sh-M_Z^2)}{4 c_W^2 s_W^2 \sh |\sh_Z|^2} + \frac{(1-2s_W^2)^2 (L_q^2 + R_q^2)}{32 c_W^4 s_W^4 |\sh_Z|^2} \bigg] \ , \esp\label{eq:xsecscNC}\ee where $\hat\sigma_i$ is associated with the pair production of doubly-charged components of multiplets in the $\utilde{\mathbf{i}}$ representation of $SU(2)_L$. In those expressions, we have introduced the sine and cosine of the weak mixing angle $s_W$ and $c_W$, the quark electric charges $e_q$, their weak isospin quantum numbers $T_{3 q}$ and their $Z$-boson coupling strengths $L_q=2 (T_{3q}-e_q s_W^2)$ and $R_q=-2 e_q s_W^2$. Moreover, we are employing the reduced kinematical variables $x_\phi^2 = \frac{M^2_\phi}{\sh}$ and $\sh_Z = \sh - M_Z^2 + i \Gamma_Z M_Z$, where $M_Z$ and $\Gamma_Z$ are the $Z$-boson mass and width, respectively.
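For illustration, the first of these expressions can be transcribed directly into a short numerical routine. This is a sketch only: the electroweak inputs below are indicative placeholder values rather than a fitted parameter set, and the standard conversion constant $(\hbar c)^2 \simeq 3.894\times10^{8}$~GeV$^2$\,pb is used to express the result in picobarns.

\begin{verbatim}
# Minimal sketch: partonic cross section for q qbar -> phi++ phi--
# (the SU(2)_L singlet scalar), transcribed from the first line of the
# neutral-current formulas above. Electroweak inputs are placeholders.
import math

ALPHA = 1.0 / 128.0       # electromagnetic coupling at the weak scale
SW2 = 0.231               # squared sine of the weak mixing angle
MZ, GZ = 91.19, 2.50      # Z-boson mass and width (GeV)
GEV2_TO_PB = 3.894e8      # (hbar c)^2 in GeV^2 pb

def sigma_nc_singlet(shat, m_phi, e_q, t3_q):
    """Partonic cross section (pb) for squared c.o.m. energy shat (GeV^2)."""
    cw2 = 1.0 - SW2
    L_q = 2.0 * (t3_q - e_q * SW2)   # Z couplings as defined in the text
    R_q = -2.0 * e_q * SW2
    x2 = m_phi ** 2 / shat
    if 4.0 * x2 >= 1.0:              # below pair-production threshold
        return 0.0
    abs_shatZ2 = (shat - MZ ** 2) ** 2 + (GZ * MZ) ** 2
    bracket = (e_q ** 2 / shat ** 2
               - e_q * (L_q + R_q) * (shat - MZ ** 2)
               / (2 * cw2 * shat * abs_shatZ2)
               + (L_q ** 2 + R_q ** 2) / (8 * cw2 ** 2 * abs_shatZ2))
    return (4 * math.pi * ALPHA ** 2 * shat / 9) \
        * (1 - 4 * x2) ** 1.5 * bracket * GEV2_TO_PB

# Example: u ubar initial state, M_phi++ = 200 GeV, sqrt(shat) = 1 TeV
print("%.3g pb" % sigma_nc_singlet(1e6, 200.0, 2.0 / 3.0, 0.5))
\end{verbatim}

The photon, interference and $Z$-exchange terms can be read off directly from the bracket, and the doublet and triplet cases follow by substituting the corresponding couplings from the second and third lines of Eq.~\eqref{eq:xsecscNC}.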
For a new $SU(2)_L$ doublet (triplet) of scalar fields, trilepton signatures can also arise from the charged current production of a doubly-charged state in association with a singly-charged state, $u_i \bar d_j \to \Phi^{++} \Phi^-$ ($u_i \bar d_j \to \mathbf{\Phi}^{++} \mathbf{\Phi}^-$). The corresponding cross sections read \be\bsp \hat\sigma^{CC}_2 = &\ \frac{\pi \alpha^2 \sh}{72 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \lambda^{\frac32}(1,x^2_{\Phi^{++}},x^2_{\Phi^+}) \ , \\ \hat\sigma^{CC}_3 = &\ \frac{\pi \alpha^2 \sh}{36 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \lambda^{\frac32}(1,x^2_{\mathbf\Phi^{++}},x^2_{\mathbf\Phi^+}) \ , \esp\label{eq:xsecscCC}\ee where as for the neutral currents, we have employed the reduced propagator $\sh_W =\sh - M_W^2 + i \Gamma_W M_W$, $M_W$ and $\Gamma_W$ being the $W$-boson mass and width, in addition to the K\"allen function $\lambda(a,b,c) = a^2+b^2+c^2-2ab -2bc -2 ca$ and to the CKM matrix $V^{\rm CKM}$. After electroweak symmetry breaking, the neutral component of the $\mathbf\Phi$ multiplet can develop a vacuum expectation value (vev) $v_{\mathbf\Phi}$, \be\label{eq:vev} \mathbf\Phi^0 \to \frac{1}{\sqrt{2}} \Big[v_{\mathbf\Phi} + \mathbf H^0 + i \mathbf A^0 \Big] \ , \ee where we are now distinguishing the scalar $\mathbf H^0$ and pseudoscalar $\mathbf A^0$ degrees of freedom of the neutral field. In principle, $v_{\mathbf\Phi}$ is constrained to be small by the electroweak $\rho$-parameter, which only slightly deviates from unity \cite{Gunion:1989in, Csaki:2002qg, Chen:2003fm, Csaki:2003si}, in addition to strong constraints arising from the neutrino sector \cite{Gelmini:1980re, Zee:1980ai, Han:2005nk, Lee:2005kd}. We have however decided to consider both scenarios with a very small $v_{\mathbf\Phi}$ value and those with a larger value, as in left-right symmetric models where the vacuum expectation values of the neutral fields are less constrained. In this case, the Lagrangian of Eq.~\eqref{eq:Lsck} allows for the production of a single doubly-charged or singly-charged new state through vector boson fusion \cite{Chiang:2012dk, Rommerskirchen:2007jv} or in association with a weak gauge boson \cite{Akeroyd:2005gt}. Since vector boson fusion processes do not yield final states with more than two charged leptons, we restrict ourselves to the study of the $q \bar q' \to \mathbf{\Phi}^+ Z$ and $q \bar q' \to \mathbf{\Phi}^{++} W$ channels, the corresponding partonic cross sections being given by \be\bsp \hat\sigma^{W\mathbf\Phi^{++}} = &\ \frac{\pi^2 \alpha^3 v^2_{\mathbf\Phi}}{18 s_W^6 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \lambda^\frac12 (1,x^2_{\mathbf\Phi^{++}},x^2_W) \bigg[ \frac{\lambda(1,x^2_{\mathbf\Phi^{++}},x^2_W)}{x_W^2} + 12 \bigg]\ , \\ \hat\sigma^{Z\mathbf\Phi^+} = &\ \frac{\pi^2 \alpha^3 v^2_{\mathbf\Phi}(1+s_W^2)^2}{72 s_W^6 c_W^2 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \lambda^\frac12 (1,x^2_{\mathbf\Phi^{+}},x^2_Z) \bigg[ \frac{\lambda(1,x^2_{\mathbf\Phi^{+}},x^2_Z)}{x_Z^2} + 12 \bigg]\ . \esp\label{eq:xsecscvev}\ee In addition, the neutral ${\bf A^0}$ and ${\bf H^0}$ fields can also decay into multileptonic final states, so that their single- and pair-production should be considered. However, a correct treatment of these fields implies also including their mixings with the components of the Standard Model Higgs doublet. This renders the approach rather non-minimal, so that we omit them from the present analysis. Let us note that including them would not change our conclusions in the following sections and would even increase any possible new physics signal.
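In the same spirit, the first charged-current rate of Eq.~\eqref{eq:xsecscCC} translates into the short helper below, built around the K\"allen function introduced above; the inputs are again illustrative and the snippet is a sketch rather than a reference implementation.
\begin{verbatim}
import numpy as np

MW, GammaW = 80.385, 2.085   # W-boson mass and width [GeV]
sw2 = 1.0 - (MW / 91.1876)**2
alpha = 0.00755              # improved-Born value, cf. the previous sketch

def kallen(a, b, c):
    """Kallen function lambda(a,b,c) = a^2 + b^2 + c^2 - 2ab - 2bc - 2ca."""
    return a*a + b*b + c*c - 2.0*(a*b + b*c + c*a)

def sigma_CC_doublet(sh, Mpp, Mp, Vij2=1.0):
    """u dbar -> Phi++ Phi- for the scalar doublet (first charged-current
    rate above); Vij2 is the squared CKM element of the incoming quarks."""
    lam = kallen(1.0, Mpp**2 / sh, Mp**2 / sh)
    if lam <= 0.0:
        return 0.0
    abs_shW2 = (sh - MW**2)**2 + (GammaW * MW)**2     # |sh_W|^2
    return np.pi * alpha**2 * sh * Vij2 * lam**1.5 / (72.0 * sw2**2 * abs_shW2)
\end{verbatim}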
As mentioned above, the Lagrangian of Eq.\ \eqref{eq:Lscy} allows for the new states to decay. We assume that these states are all close enough in mass so that they cannot decay into each other and compute below the relevant partial widths. Considering the decays of the doubly-charged states into a same-sign lepton pair, we obtain, assuming lepton flavor conservation, \be\bsp \Gamma_{1,\ell}^{++} = &\ \frac{M_{\phi^{++}}|y^{(1)}|^2}{32 \pi} \Big[1-2 x_\ell^2\Big] \sqrt{1-4 x_\ell^2}\ , \\ \Gamma_{2,\ell}^{++} = &\ \frac{M_{\Phi^{++}} M_\ell^2 |y^{(2)}|^2}{8 \pi \Lambda^2} \Big[ 1-2 x_\ell^2\Big] \sqrt{1-4 x_\ell^2} \ , \\ \Gamma_{3,\ell}^{++} = &\ \frac{M_{\mathbf{\Phi}^{++}} |y^{(3)}|^2}{32 \pi} \Big[1 - 2 x_\ell^2\Big] \sqrt{1 -4 x_\ell^2}\ , \esp\label{eq:BRsc++1}\ee using again different subscripts to distinguish the scalar field representations. Whereas flavor-violating effects could have also been considered, we nevertheless neglect them, as they are constrained to be small by lepton decay processes such as $\mu \to 3 e$, $\mu\to e\gamma$, {\it etc.}, which are experimentally bound to be extremely rare~\cite{Swartz:1989qz, Rizzo:1981xx,Gunion:1989in,Kosmas:1993ch,Mohapatra:1992uu}. In addition, the triplet field can also decay into a pair of $W^+$-bosons, the associated width being given by \be\label{eq:BRsc++2} \Gamma_{3,WW}^{++} = \frac{M^3_{\mathbf{\Phi}^{++}} \alpha^2 \pi v_{\mathbf{\Phi}}^2}{4 M_W^4 s_W^4} \sqrt{1 - 4 x_W^2} \Big[1 - 4 x_W^2 + 12 x_W^4\Big] \ . \ee Turning to the singly-charged components of the new multiplets, the partial widths for the leptonic decays $\Phi^+ \to \ell^+ \bar\nu_\ell$ and $\mathbf{\Phi}^+ \to \ell^+ \bar\nu_\ell$ are computed as \be\bsp \Gamma_{2,\ell}^+ = &\ \frac{M_{\Phi^+} M_\ell^2 |y^{(2)}|^2}{16 \pi \Lambda ^2} \Big[1-x_\ell^2\Big]^2\ , \\ \Gamma_{3,\ell}^+ = &\ \frac{M_{\mathbf{\Phi}^+} |y^{(3)}|^2}{32 \pi} \Big[1 - x_\ell^2\Big]^2\ , \esp\label{eq:BRsc+1} \ee while the one for the decay of a triplet field into a pair of weak gauge bosons, $\mathbf{\Phi}^+ \to W^+ Z$, is found to be \be\label{eq:BRsc+2}\bsp \Gamma_{3,WZ}^+ = &\ \frac{ M^3_{\mathbf{\Phi}^+} \alpha^2\pi v_{\mathbf{\Phi}}^2 (1 + s_W^2)^2}{8 c_W^2 s_W^4 M_Z^2 M_W^2 } \Big[\lambda(1,x^2_W,x^2_Z) + 12 x_Z^2 x_W^2\Big] \sqrt{\lambda(1,x^2_W,x^2_Z)} \ . \esp \ee \subsection{Spin $1/2$ doubly-charged particles}\label{sec:femod} \subsubsection{Simplified models with three generations of leptons}\label{sec:femodA} We now turn to the building of Standard Model extensions containing one extra fermionic multiplet and, as in Section \ref{sec:modsc}, we restrict ourselves to states with an electric charge not higher than two. We therefore consider three fermionic fields $\psi$, $\Psi$ and $\mathbf\Psi$ lying in the singlet, doublet and adjoint representation of $SU(2)_L$, respectively, and we fix their hypercharge quantum numbers to $Y_\psi = 2$, $Y_\Psi = 3/2$ and $Y_{\mathbf\Psi} = 1$, \be\label{eq:ferfields} \psi \equiv \psi^{++} \ , \qquad \Psi^i \equiv \bpm \Psi^{++} \\ \Psi^+ \epm \qquad\text{and}\qquad \mathbf \Psi^i{}_j \equiv \bpm \frac{\mathbf \Psi^+}{\sqrt{2}} & \mathbf \Psi^{++}\\ \mathbf \Psi^0 & -\frac{\mathbf \Psi^+}{\sqrt{2}} \epm \ .
\ee The associated kinetic and gauge interaction terms are standard and induced by gauge-covariant derivatives that can be derived from Eq.\ \eqref{eq:covder}, \be \lag_{\rm kin} = i \psibar \gamma^\mu D_\mu \psi + i \Psibar_i \gamma^\mu D_\mu \Psi^i + i \mathbf{\Psibar}_a \gamma^\mu D_\mu \mathbf\Psi^a + \ldots\ , \label{eq:Lfek}\ee where mass terms are included in the dots. As in Section~\ref{sec:modsc}, we assume the new states to be almost mass-degenerate so that they cannot decay into each other. In order to allow for the decays of the $\psi$, $\Psi$ and $\mathbf\Psi$ states, it is thus necessary to introduce at least one additional fermionic particle $N$, which we choose to be a gauge singlet as in massive neutrino models \cite{King:2003jb, Altarelli:2004za, Asaka:2005an}. In order to avoid introducing more new states in our simplified model, three-body decays of the $\psi$, $\Psi$ and $\mathbf \Psi$ fermions into a lepton pair and an $N$ particle are permitted by means of non-renormalizable four-fermion interactions, \bea \lag_{\rm F} &=& \frac{G^{(1,1)}}{2\Lambda^2} \big[\lbar^c_R l_R\big] \big[\Nbar P_L \psi\big] + \frac{G^{(1,2)}}{2\Lambda^2} \big[\lbar^c_R l_R\big] \big[\Nbar P_R \psi\big] + \frac{G^{(2,1)}}{\Lambda^2} \big[\lbar^c_R \Psi^i\big] \big[\Nbar L_i\big] + \frac{G^{(2,2)}}{\Lambda^2} \big[\lbar^c_R N\big] \big[\Lbar^{ic} \Psi_i\big] \non \\ & +& \frac{G^{(3,1)}}{2\Lambda^2} \big[\Lbar^c_i L^j\big] \big[\Nbar P_L \mathbf\Psi^i{}_j\big] + \frac{G^{(3,2)}}{2\Lambda^2} \big[\Lbar^c_i L^j\big] \big[\Nbar P_R \mathbf\Psi^i{}_j\big] + {\rm h.c.} \ . \label{eq:LfeF}\eea In this Lagrangian, we have only considered a set of independent effective operators and omitted all generation indices for clarity. In addition, $P_L$ and $P_R$ are chirality projectors acting on spin space, the left-handed and right-handed fermionic components of the Standard Model fields have been defined in Section \ref{sec:modsc}, and the interaction strengths $G^{(1,1)}$, $G^{(1,2)}$, $G^{(2,1)}$, $G^{(2,2)}$, $G^{(3,1)}$ and $G^{(3,2)}$, suppressed by a new physics scale $\Lambda$, are $3\times 3$ matrices in flavor space. The gauge interactions included in the Lagrangian of Eq.~\eqref{eq:Lfek} imply the possible hadroproduction of pairs of $\psi$, $\Psi$ and $\mathbf \Psi$ states from quark-antiquark scattering.
Focusing on final state signatures with at least three leptons, the partonic cross sections associated with the relevant neutral current processes $q \bar q \to \psi^{++} \psi^{--}$, $q \bar q \to \Psi^{++} \Psi^{--}$ and $q \bar q \to \mathbf{\Psi}^{++} \mathbf{\Psi}^{--}$ are \be\bsp \hat\sigma^{NC}_1 = &\ \frac{16 \pi \alpha^2 \sh}{9} \big[1 + 2 x^2_{\psi^{++}}\big] \sqrt{1 - 4x^2_{\psi^{++}}} \bigg[ \frac{e^2_q}{\sh^2} - \frac{ e_q (L_q+R_q) (\sh-M_Z^2)}{2 c_W^2 \sh |\sh_Z|^2} + \frac{L_q^2 + R_q^2}{8 c_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_2 = &\ \frac{16 \pi \alpha^2 \sh}{9} \big[1 + 2 x^2_{\Psi^{++}}\big] \sqrt{1 - 4x^2_{\Psi^{++}}} \bigg[ \frac{e^2_q}{\sh^2} + \frac{ e_q (1-4s_W^2) (L_q+R_q) (\sh-M_Z^2)}{8 c_W^2 s_W^2 \sh |\sh_Z|^2} + \frac{(1-4s_W^2)^2 (L_q^2 + R_q^2)}{128 c_W^4 s_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_3 = &\ \frac{16 \pi \alpha^2 \sh}{9} \big[1 + 2 x^2_{\mathbf \Psi^{++}}\big] \sqrt{1 - 4x^2_{\mathbf \Psi^{++}}} \bigg[ \frac{e^2_q}{\sh^2} + \frac{ e_q (1-2s_W^2) (L_q+R_q) (\sh-M_Z^2)}{4 c_W^2 s_W^2 \sh |\sh_Z|^2} + \frac{(1-2s_W^2)^2 (L_q^2 + R_q^2)}{32 c_W^4 s_W^4 |\sh_Z|^2} \bigg] \ , \esp\label{eq:xsecfeNC}\ee whereas those related to the charged current processes $u_i\bar d_j\to \Psi^{++}\Psi^-$ and $u_i\bar d_j\to \mathbf{\Psi}^{++}\mathbf{\Psi}^-$ are \be\bsp \hat\sigma^{CC}_2 = &\ \frac{\pi \alpha^2 \sh}{36 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\Psi^{++}},x^2_{\Psi^+})} \Big[ 1-(x_{\Psi^{++}}-x_{\Psi^+})^2\Big] \Big[ 2+(x_{\Psi^{++}}+x_{\Psi^+})^2\Big] \ , \\ \hat\sigma^{CC}_3 = &\ \frac{\pi \alpha^2 \sh}{18 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\Psi^{++}},x^2_{\Psi^+})} \Big[ 1-(x_{\Psi^{++}}-x_{\Psi^+})^2\Big] \Big[ 2+(x_{\Psi^{++}}+x_{\Psi^+})^2\Big] \ . \esp\label{eq:xsecfeCC}\ee We recall that our conventions for the notations have been introduced in Section~\ref{sec:modsc}. Since there are no closed-form expressions for the widths of the new particles as computed from Eq.\ \eqref{eq:LfeF}, we refer to Section~\ref{sec:numerics} for the corresponding numerical analysis. As in the scalar case, flavor-violating effects are again neglected since they are constrained by rare leptonic decays, as illustrated in the left-right supersymmetric case in Ref.~\cite{Frank:2000dw}. \subsubsection{Simplified models with four charged lepton species}\label{sec:femodB} Since the singly-charged components of the $\Psi$ and $\mathbf\Psi$ multiplets have the same quantum numbers as the charged leptons, they could mix, as in $R$-parity violating supersymmetric theories \cite{Akeroyd:1997iq} or in custodian tau models \cite{delAguila:2010es}. This mixing is however highly constrained by LEP data, by measurements of the muon anomalous magnetic moment, as well as by limits on lepton-flavor-violating processes and on conversions in nuclei. We therefore restrict the most general case to a $2\times 2$ mixing with tau leptons, \be \bpm e' \\ \mu' \\ \tau' \\ E' \epm = \bpm 1 &0&0&0\\ 0&1&0&0\\ 0&0&c_\tau&s_\tau\\ 0&0&-s_\tau&c_\tau\epm \bpm e \\ \mu \\ \tau \\ \Psi^+ \epm \qquad\text{and}\qquad \bpm e'' \\ \mu'' \\ \tau'' \\ E'' \epm = \bpm 1 &0&0&0\\ 0&1&0&0\\ 0&0&c_\tau&s_\tau\\ 0&0&-s_\tau&c_\tau\epm \bpm e \\ \mu \\ \tau \\ \mathbf\Psi^+ \epm\ , \ee so that $e'=e$ and $\mu'=\mu$ ($e''=e$ and $\mu''=\mu$), where $c_\tau$ ($s_\tau$) denotes the cosine (sine) of the associated mixing angle.
In these notations, the $E'$ and $\tau'$ fields are related to lepton mixing with the singly-charged component of the $SU(2)_L$ doublet $\Psi$, while the $E''$ and $\tau''$ fields are related to mixing with the $SU(2)_L$ triplet ${\mathbf \Psi}$. In contrast to the model constructed in Section \ref{sec:femodA}, the decays of the new fields are possible via this mixing, without requiring the addition of any extra particle to the theory. Assuming mass-degenerate $\Psi$ states, the doubly-charged component of the $SU(2)_L$ doublet always decays into a $W$-boson and a $\tau$ lepton, which further yields a final state with zero, one or two charged leptons. The associated partial width can be computed from the Lagrangian of Eq.~\eqref{eq:Lfek} and is \be \Gamma^{++}_{2, \tau W} = \frac{M_{\Psi^{++}}^3 \alpha s_\tau^2}{8 M_W^2 s_W^2} \sqrt{\lambda(1,x^2_W,x^2_\tau)} \bigg[ \lambda(1,x^2_W,x^2_\tau) - 3 x_W^2 \Big(x_W^2 - (1-x_\tau)^2\Big)\bigg] \ . \label{eq:decfemix1}\ee Similarly, the $E'$ lepton can decay either to a $\tau Z$ or to a $\nu_\tau W$ final state, the corresponding widths being \be\bsp \Gamma^+_{2, \tau Z} =&\ \frac{M_{E'}^3 \alpha s_\tau^2 c_\tau^2}{32 M_Z^2 c_W^2 s_W^2} \sqrt{\lambda(1,x_Z^2,x_\tau^2)} \bigg[ 5\lambda(1,x_Z^2,x_\tau^2) + 3 x_Z^2 \Big(5 (1 + x_\tau^2 - x_Z^2) - 8 x_\tau\Big) \bigg] \ , \\ \Gamma^+_{2,\nu_\tau W} =&\ \frac{M_{E'}^3 \alpha s^2_\tau}{16 M_W^2 s_W^2} \Big[1-x_W^2\Big]^2 \Big[1+2x_W^2\Big]\ . \esp\label{eq:decfemix2}\ee After accounting for the decays of the Standard Model particles, other production mechanisms, in addition to the one of a pair of doubly-charged states (whose cross section is given by Eq.~\eqref{eq:xsecfeNC}), can induce signatures with at least three leptons. The two processes $q \bar q \to E'^+ E'^-$ and $q \bar q \to E'^{\pm} \tau^\mp$ hence possibly lead to final states containing up to six leptons, the corresponding partonic cross sections being respectively given by \be\bsp \hat\sigma_2^{E'E'} =&\ \frac{4 \pi \alpha^2 \sh}{9} \sqrt{1 - 4x^2_{E'}} \bigg[ \big[1 + 2 x^2_{E'}\big] \frac{e^2_q}{\sh^2} - \frac{e_q (L_q+R_q)}{8c_W^2 s_W^2} \ \frac{\sh-M_Z^2}{ \sh |\sh_Z|^2}\ \big[1 + 2 x^2_{E'}\big] \big[2 + 4 s_W^2 - 3 s^2_\tau\big]\ \\ &\ + \frac{(L_q^2+R_q^2)}{64 c_W^4 s_W^4} \ \frac{1}{ |\sh_Z|^2}\ \Big( \big[1 + 2 x^2_{E'}\big] \big[ 2(1+2 s_W^2)^2 - 6 s_\tau^2 (1+2 s_W^2) \big] + s_\tau^4 (5+7x^2_{E'})\Big) \bigg]\ ,\\ \hat\sigma_2^{E'\tau} = &\ \frac{\pi \alpha^2 (L_q^2+R_q^2) s_\tau^2 c_\tau^2}{288 c_W^4 s_W^4} \frac{\sh}{|\sh_Z|^2}\ \sqrt{\lambda(1,x_\tau^2,x_{E'}^2)} \Big[ 5\big( 3 - 3 x_{E'}^2 -3 x_\tau^2- \lambda(1,x_\tau^2,x_{E'}^2) \big) + 24 x_{E'} x_\tau\Big] \ , \esp\ee whilst the production channels of an associated pair comprised of a singly-charged and a doubly-charged particle, $u_i \bar d_j \to \Psi^{++} E'^-$ and $u_i \bar d_j \to \Psi^{++} \tau^-$, can yield up to five charged leptons. In this case, the production cross sections are \be\bsp \hat\sigma_2^{\Psi^{++}E'} = &\ \frac{\pi \alpha^2 \sh c_\tau^2}{36 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\Psi^{++}},x^2_{E'})} \Big[ 1-(x_{\Psi^{++}}-x_{E'})^2\Big] \Big[ 2+(x_{\Psi^{++}}+x_{E'})^2\Big] \ , \\ \hat\sigma_2^{\Psi^{++} \tau} = &\ \frac{\pi \alpha^2 \sh s_\tau^2}{36 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\Psi^{++}},x^2_\tau)} \Big[ 1-(x_{\Psi^{++}}-x_\tau)^2\Big] \Big[ 2+(x_{\Psi^{++}}+x_\tau)^2\Big] \ .
\esp\ee In the same way, all the components of the $\mathbf\Psi$ multiplet can decay to final states containing up to three leptons, after accounting for subsequent decays of the gauge bosons and tau leptons. The corresponding partial widths are \be\bsp \Gamma^{++}_{3,\tau W} = &\ \frac{M_{\mathbf{\Psi}^{++}}^3 \alpha s_\tau^2}{4 M_W^2 s_W^2}\sqrt{\lambda(1,x^2_W, x_\tau^2)} \Big[ \lambda(1,x^2_W, x_\tau^2) + 3 x_W^2 \big( (1-x_\tau)^2-x_W^2\big)\Big] \ , \\ \Gamma^+_{3,\tau Z} = &\ \frac{ M_{E''}^3 \alpha s_\tau^2 c_\tau^2}{32 M_Z^2 c_W^2 s_W^2} \sqrt{\lambda(1,x^2_Z, x_\tau^2)} \Big[ \lambda(1,x^2_Z, x_\tau^2) + 3 x_Z^2 ( 1+x_\tau^2-x_Z^2) \Big] \ , \\ \Gamma^+_{3,\nu_\tau W} =&\ \frac{M_{E''}^3 \alpha s_\tau^2}{16 M_W^2 s_W^2} \Big[1-x_W^2\Big]^2 \Big[1 + 2 x_W^2\Big] \ , \\ \Gamma^0_{3, \tau W} =&\ \frac{ M_{\mathbf{\Psi}^0}^3 \alpha s_\tau^2}{4 M_W^2 s_W^2} \sqrt{\lambda(1,x^2_W, x_\tau^2)} \Big[ \lambda(1,x^2_W, x_\tau^2) + 3 x_W^2 \big( (1-x_\tau)^2-x_W^2\big)\Big] \ . \esp\label{eq:decfemix3}\ee Therefore, the production of any pair of components of the $\mathbf\Psi$ field can lead to signatures with three or more charged leptons. While the partonic cross section related to the production of two doubly-charged particles is given in Eq.~\eqref{eq:xsecfeNC}, all the other relevant cross sections, associated with the $u_i \bar d_j \to \mathbf{\Psi}^{++} E''^-$, $u_i \bar d_j \to \mathbf{\Psi}^{++} \tau^-$, $q\bar q\to E''^+ E''^-$, $q\bar q\to E''^\pm \tau^\mp$, $u_i \bar d_j \to \mathbf{\Psi}^0 E''^+$, $u_i \bar d_j \to \mathbf{\Psi}^0 \tau^+$ and $q \bar q\to \mathbf{\Psi}^0 \mathbf{\Psi}^0$ modes, are respectively computed as \be\bsp \hat\sigma_3^{\mathbf{\Psi}^{++}E''} = &\ \frac{\pi \alpha^2 \sh c_\tau^2}{18 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\mathbf\Psi^{++}},x^2_{E''})} \Big[ 1-(x_{\mathbf\Psi^{++}}-x_{E''})^2\Big] \Big[ 2+(x_{\mathbf\Psi^{++}}+x_{E''})^2\Big] \ ,\\ \hat\sigma_3^{\mathbf{\Psi}^{++}\tau} = &\ \frac{\pi \alpha^2 \sh s_\tau^2}{18 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\mathbf\Psi^{++}},x^2_\tau)} \Big[ 1-(x_{\mathbf\Psi^{++}}-x_\tau)^2\Big] \Big[ 2+(x_{\mathbf\Psi^{++}}+x_\tau)^2\Big] \ ,\\ \hat\sigma_3^{E'' E''} =&\ \frac{4 \pi \alpha^2 \sh}{9} \sqrt{1 - 4x^2_{E''}} \bigg[ \big[1 + 2 x^2_{E''}\big] \frac{e^2_q}{\sh^2} - \frac{e_q (L_q+R_q)}{8c_W^2 s_W^2} \ \frac{\sh-M_Z^2}{ \sh |\sh_Z|^2}\ \big[1 + 2 x^2_{E''}\big] \big[4 s_W^2 - s_\tau^2\big]\ \\& \ + \frac{(L_q^2+R_q^2)}{64 c_W^4 s_W^4} \ \frac{1}{ |\sh_Z|^2}\ \Big( \big[1 + 2 x^2_{E''}\big] \big[ 8 s_W^4 - 4 s_\tau^2 s_W^2 \big] + s^4_\tau (1-x^2_{E''})\Big) \bigg]\ , \\ \hat\sigma_3^{E''\tau} = & \ \frac{\pi \alpha^2 (L_q^2+R_q^2)s_\tau^2 c_\tau^2}{288 c_W^4 s_W^4} \frac{\sh}{|\sh_Z|^2}\ \sqrt{\lambda(1,x_\tau^2,x_{E''}^2)} \Big[ 3\big( 1 - x_{E''}^2 - x_\tau^2\big) - \lambda(1,x_\tau^2,x_{E''}^2) \Big] \ , \\ \hat\sigma_3^{E'' \mathbf\Psi^0} =&\ \frac{\pi \alpha^2 \sh c_\tau^2}{18 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\mathbf\Psi^0},x^2_{E''})} \Big[ 1-(x_{\mathbf\Psi^0}-x_{E''})^2\Big] \Big[ 2+(x_{\mathbf\Psi^0}+x_{E''})^2\Big] \ ,\\ \hat\sigma_3^{\tau\mathbf\Psi^0} =&\ \frac{\pi \alpha^2 \sh s_\tau^2}{18 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{\mathbf\Psi^0},x^2_\tau)} \Big[ 1-(x_{\mathbf\Psi^0}-x_\tau)^2\Big] \Big[ 2+(x_{\mathbf\Psi^0}+x_\tau)^2\Big] \ ,\\ \hat\sigma_3^{\mathbf\Psi^0\mathbf\Psi^0} =&\ \frac{\pi \alpha^2 (L_q^2+R_q^2)}{18 c_W^4 s_W^4} \frac{\sh}{|\sh_Z|^2}\ \sqrt{1-4x_{\mathbf{\Psi}^0}^2} \Big[ 1 + 2 x_{\mathbf{\Psi}^0}^2 \Big] \ .
\esp\ee \subsection{Spin $1$ doubly-charged particles}\label{sec:modve} In this section, we adjoin to the Standard Model particle content additional complex vectorial fields $V$, $\cal V$ and $\mathbf V$ lying respectively in the singlet, fundamental and adjoint representation of $SU(2)_L$. The hypercharge quantum numbers are again set to $Y_V=2$, $Y_{\cal V} = 3/2$ and $Y_{\mathbf{V}} = 1$, so that the component fields read \be\label{eq:lkvec} V_\mu \equiv V_\mu^{++} \ , \qquad {\cal V}_\mu^i \equiv \bpm {\cal V}_\mu^{++} \\ {\cal V}_\mu^+ \epm \qquad\text{and}\qquad \mathbf V_\mu{}^i{}_j \equiv \bpm \frac{\mathbf V_\mu^+}{\sqrt{2}} & \mathbf V_\mu^{++}\\ \mathbf V_\mu^0 & -\frac{\mathbf V_\mu^+}{\sqrt{2}} \epm \ . \ee Gauge interactions and kinetic terms are described by means of a gauge-covariant version of the Proca Lagrangian \be\bsp \lag_{\rm kin}= &\ -\frac12 \Big[ D_\mu V_\nu^\dag - D_\nu V_\mu^\dag \Big] \Big[ D^\mu V^\nu - D^\nu V^\mu \Big] - \frac12 \Big[ D_\mu {\cal V}_\nu^\dag - D_\nu {\cal V}_\mu^\dag \Big] \Big[ D^\mu {\cal V}^\nu - D^\nu {\cal V}^\mu \Big] \\ &\ - \frac12 \Big[ D_\mu {\mathbf V}_\nu^\dag - D_\nu {\mathbf V}_\mu^\dag \Big] \Big[ D^\mu {\mathbf V}^\nu - D^\nu {\mathbf V}^\mu \Big] + \ldots \ , \esp\ee the dots standing for omitted mass terms and the covariant derivatives being derived from Eq.~\eqref{eq:covder}. As in the previous sections, we again forbid the new states to decay into each other and model the decays of the new vector fields into pairs of charged leptons via the Lagrangian \be {\cal L}_{\rm dec} = \frac{\tilde g^{(1)}}{\Lambda} V_\mu\ \bar l^c_R \sigma^{\mu \nu} D_\nu l_R + \tilde g^{(2)} {\cal V}_\mu^i\ \bar{L}^c_i \gamma^\mu l_R + \frac{\tilde g^{(3)}}{\Lambda} (\mathbf V_\mu)^i{}_j \ \bar L^c_i \sigma^{\mu \nu} D_\nu L^j + {\rm h.c.} \ , \label{eq:ldvec}\ee where the coupling strengths $\tilde g^{(1)}$, $\tilde g^{(2)}$ and $\tilde g^{(3)}$ are $3\times 3$ matrices in flavor space and where $\sigma^{\mu \nu}=\frac{i}{4}[\gamma^\mu, \gamma^\nu]$. Gauge invariance requires the use of the standard gauge-covariant derivatives $D_\mu L$ and $D_\mu l_R$, as well as of the dual object $L_i$ defined in Eq.~\eqref{eq:deflep}. In addition, dimension-four operators such as $\bar l_R^c\gamma^\mu l_R$ identically vanish, so that the use of higher-dimensional operators suppressed by a new physics scale $\Lambda$ is required. The kinetic and gauge interaction Lagrangian above allows for the pair production of the new doubly-charged fields via quark-antiquark scattering processes, $q\bar q\to V^{++} V^{--}$, $q\bar q\to {\cal V}^{++} {\cal V}^{--}$ and $q\bar q\to {\mathbf V}^{++} {\mathbf V}^{--}$, which leads to a four-lepton signature after accounting for the decays permitted by the interactions described in Eq.~\eqref{eq:ldvec}.
The associated partonic cross sections are calculated as \be\bsp \hat\sigma^{NC}_1 = &\ \frac{4 \pi \alpha^2 \sh}{9} \sqrt{1 - 4x^2_{V^{++}}}\ \frac{1-x^2_{V^{++}}-12x^4_{V^{++}}}{x^2_{V^{++}}}\bigg[ \frac{e^2_q}{\sh^2} - \frac{ e_q (L_q+R_q) (\sh-M_Z^2)}{2 c_W^2 \sh |\sh_Z|^2} + \frac{L_q^2 + R_q^2}{8 c_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_2= &\ \frac{4 \pi \alpha^2 \sh}{9} \sqrt{1 - 4x^2_{{\cal V}^{++}}}\ \frac{1-x^2_{{\cal V}^{++}}-12x^4_{{\cal V}^{++}}}{x^2_{{\cal V}^{++}}}\\ &\quad \times \bigg[\frac{e^2_q}{\sh^2} + \frac{ e_q (1-4s_W^2)(L_q+R_q) (\sh-M_Z^2)}{8 s_W^2 c_W^2 \sh |\sh_Z|^2} + \frac{(1-4 s_W^2)^2(L_q^2 + R_q^2)}{128 s_W^4 c_W^4 |\sh_Z|^2} \bigg] \ , \\ \hat\sigma^{NC}_3 = &\ \frac{4 \pi \alpha^2 \sh}{9} \sqrt{1 - 4x^2_{{\mathbf V}^{++}}}\ \frac{1-x^2_{{\mathbf V}^{++}}-12x^4_{{\mathbf V}^{++}}}{x^2_{{\mathbf V}^{++}}}\\ &\quad \times \bigg[\frac{e^2_q}{\sh^2} + \frac{ e_q (1-2s_W^2)(L_q+R_q) (\sh-M_Z^2)}{4 s_W^2 c_W^2 \sh |\sh_Z|^2} + \frac{(1-2 s_W^2)^2(L_q^2 + R_q^2)}{32 s_W^4 c_W^4 |\sh_Z|^2} \bigg] \ , \esp\ee respectively. In the case of fields lying in the fundamental or adjoint representation of $SU(2)_L$, three-lepton final states also result from the production of an associated pair comprised of one doubly-charged state and one singly-charged state, $u_i \bar d_j \to {\cal V}^{++} {\cal V}^-$ and $u_i \bar d_j \to {\mathbf V}^{++} {\mathbf V}^-$, whose respective cross sections are \be\bsp \hat\sigma^{CC}_2 = &\ \frac{\pi \alpha^2 \sh}{288 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{{\cal V}^{++}},x^2_{{\cal V}^+})} \frac{\big[ 1-(x_{{\cal V}^{++}}-x_{{\cal V}^+})^2\big]\big[ 1-(x_{{\cal V}^{++}}+x_{{\cal V}^+})^2\big]}{x_{{\cal V}^{++}}^2 x^2_{{\cal V}^+}} \\ &\quad \times \Big[ \lambda(1,x^2_{{\cal V}^{++}},x^2_{{\cal V}^+}) - 1 +4 (x_{{\cal V}^+}^2 + x_{{\cal V}^{++}}^2 + 3 x_{{\cal V}^+}^2 x_{{\cal V}^{++}}^2)\Big] \ , \\ \hat\sigma^{CC}_3 = &\ \frac{\pi \alpha^2 \sh}{144 s_W^4 |\sh_W|^2} |V_{ij}^{\rm CKM}|^2\ \sqrt{\lambda(1,x^2_{{\mathbf V}^{++}},x^2_{{\mathbf V}^+})} \frac{\big[ 1-(x_{{\mathbf V}^{++}}-x_{{\mathbf V}^+})^2\big]\big[ 1-(x_{{\mathbf V}^{++}}+x_{{\mathbf V}^+})^2\big]}{x_{{\mathbf V}^{++}}^2 x^2_{{\mathbf V}^+}} \\ &\quad \times \Big[ \lambda(1,x^2_{{\mathbf V}^{++}},x^2_{{\mathbf V}^+}) - 1 +4 (x_{{\mathbf V}^+}^2 + x_{{\mathbf V}^{++}}^2 + 3 x_{{\mathbf V}^+}^2 x_{{\mathbf V}^{++}}^2)\Big] \ . \esp\ee The partial widths associated with the decays of the states above into charged leptons are finally given by \be\bsp \Gamma_{1,\ell}^{++} = &\ \frac{M_{V^{++}} M_\ell^2 (\tilde g^{(1)})^2}{96 \pi \Lambda^2} \Big[ 1-4 x_\ell^2\Big]^{3/2}\ , \\ \Gamma_{2,\ell}^{++} = &\ \frac{M_{{\cal V}^{++}} (\tilde g^{(2)})^2}{24 \pi} \Big[1 -4 x_\ell^2\Big]^{3/2}\ , \\ \Gamma_{3,\ell}^{++} = &\ \frac{M_{\mathbf{V}^{++}} M_\ell^2 (\tilde g^{(3)})^2}{96 \pi \Lambda^2} \Big[ 1-4 x_\ell^2\Big]^{3/2}\ , \\ \Gamma_{2,\ell}^+ = &\ \frac{M_{{\cal V}^+} (\tilde g^{(2)})^2}{48 \pi}\Big[2 - 3 x_\ell^2 + x_\ell^6\Big] \ , \\ \Gamma_{3,\ell}^+ = &\ \frac{M_{{\mathbf V}^+} M_\ell^2 (\tilde g^{(3)})^2}{384 \pi \Lambda^2}\Big[1-x_\ell^2\Big]^2 \Big[2 + x_\ell^2\Big] \ . \esp\label{eq:vecwidths}\ee As in the previous sections, flavor-violating decays have not been considered, as the underlying interactions could lead to visible signals in rare leptonic decay experiments.
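As an aside, the widths of the doublet case in Eq.~\eqref{eq:vecwidths} can be encoded in a few lines (a sketch of ours, written for a single lepton flavor); one easily checks that both widths reduce to $M_{\cal V} (\tilde g^{(2)})^2/(24\pi)$ in the massless-lepton limit.
\begin{verbatim}
import numpy as np

def vector_doublet_widths(M, g2, ml=0.0):
    """Leptonic widths of the SU(2)_L doublet vector components for a
    single lepton flavor of mass ml, following the expressions above."""
    x2 = (ml / M)**2
    Gpp = M * g2**2 / (24.0 * np.pi) * (1.0 - 4.0 * x2)**1.5    # V++ -> l+ l+
    Gp = M * g2**2 / (48.0 * np.pi) * (2.0 - 3.0 * x2 + x2**3)  # V+ -> l+ nu
    return Gpp, Gp

# Massless-lepton limit: both widths equal M g2^2 / (24 pi)
print(vector_doublet_widths(500.0, 0.1))
\end{verbatim}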
\section{Multilepton production at the LHC}\label{sec:numerics} In this section, we present numerical predictions of total cross sections for the production of multileptonic final states originating from the pair production and decay of the various new particles introduced in Section \ref{sec:themodel}. For the sake of simplicity, we assume that all new physics masses are equal. We focus on $pp$ collisions as produced at the CERN LHC collider, running at a center-of-mass energy of $\sqrt{S_h}=8$ TeV. Thanks to the QCD factorization theorem, hadronic cross sections are calculated by convolving the partonic cross sections $\hat\sigma$ computed in Section\ \ref{sec:themodel} with the universal parton densities $f_a$ and $f_b$ of partons $a$ and $b$ in the proton, which depend on the longitudinal momentum fractions of the two partons $x_{a,b} = \sqrt{\tau}e^{\pm y}$ and on the unphysical factorization scale $\mu_F$, \be \sigma = \int_{4\tilde M^2/S_h}^1\!\mathrm{d}\tau \int_{\frac{1}{2}\ln\tau}^{-\frac{1}{2}\ln\tau}\mathrm{d} y\ f_a(x_a,\mu_F^2) \ f_b(x_b,\mu_F^2) \ \hat\sigma(x_a x_b S_h) \ . \ee We employ the leading-order set L1 of the CTEQ6 global parton density fit \cite{Pumplin:2002vw}, which includes five light quark flavors and the gluon, and we identify the factorization scale with the average mass $\tilde M$ of the produced final-state particles. For the masses and widths of the electroweak gauge bosons, we use the current values given in the Particle Data Group review \cite{Beringer:2012zz}, {\it i.e.}, $M_Z=91.1876$ GeV, $M_W=80.385$ GeV, $\Gamma_Z=2.4952$ GeV and $\Gamma_W=2.085$ GeV, and we evaluate the CKM-matrix elements by using the Wolfenstein parameterization. The corresponding four free parameters are set to $\lambda=0.22535$, $A=0.811$, $\bar\rho=0.131$ and $\bar\eta=0.345$. The squared sine of the electroweak mixing angle $\sin^2\theta_W=1- M_W^2/M_Z^2$ and the electromagnetic fine structure constant $\alpha= \sqrt{2} G_F M_W^2\sin^2\theta_W/\pi$ are calculated in the improved Born approximation using a value of $G_F=1.16638\cdot 10^{-5}$ GeV$^{-2}$ for the Fermi coupling constant, which is derived from muon lifetime measurements. \begin{figure}[!t] \centering \includegraphics[width=.32\columnwidth]{scalsgl.eps} \includegraphics[width=.32\columnwidth]{scaldbl.eps} \includegraphics[width=.32\columnwidth]{scaltri.eps} \caption{\label{fig:scalar}Doubly-charged particle contributions to the production rate of final states containing three or more charged leptons at the LHC, running at a center-of-mass energy of 8 TeV. We consider non-standard fields lying in the singlet (left), doublet (center) and triplet (right) representations of $SU(2)_L$ and impose that the component with the highest electric charge be doubly-charged. } \end{figure} We start by focusing on mechanisms yielding the production of pairs of components of the $SU(2)_L$ multiplets $\phi$, $\Phi$ and $\mathbf\Phi$ introduced in Section \ref{sec:modsc}. In the first step, we evaluate the different branching ratios of their doubly-charged, singly-charged and neutral states to a given number of electrons, muons and taus by using Eqs.~\eqref{eq:BRsc++1}, \eqref{eq:BRsc++2}, \eqref{eq:BRsc+1} and \eqref{eq:BRsc+2}.
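All the hadronic rates presented below follow from the master formula above. Its structure can be sketched numerically as follows, where the parton densities $f_a$ and $f_b$ are hypothetical callables standing in for the CTEQ6L1 fit (in practice obtained from a dedicated library); the snippet illustrates the convolution only and is not a production-level implementation.
\begin{verbatim}
import numpy as np
from scipy import integrate

def hadronic_xsec(sigma_hat, Mtilde, Sh, f_a, f_b, muF2):
    """Convolution of a partonic cross section sigma_hat(sh) with the
    parton densities f_a(x, muF^2) and f_b(x, muF^2)."""
    tau_min = 4.0 * Mtilde**2 / Sh

    def dsigma(y, tau):
        xa, xb = np.sqrt(tau) * np.exp(y), np.sqrt(tau) * np.exp(-y)
        return f_a(xa, muF2) * f_b(xb, muF2) * sigma_hat(tau * Sh)

    def inner(tau):
        ymax = -0.5 * np.log(tau)  # y in [ (1/2) ln tau, -(1/2) ln tau ]
        return integrate.quad(dsigma, -ymax, ymax, args=(tau,))[0]

    return integrate.quad(inner, tau_min, 1.0)[0]
\end{verbatim}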
We then derive, in Figure~\ref{fig:scalar}, BSM contributions to the production rates of final states containing $N_\ell=3$, $N_\ell = 4$ and $N_\ell = 5$ charged leptons\footnote{From now on, the terminology \textit{charged leptons} refers to electrons and muons.} originating from the production and decay of the new scalar particles after accounting for subsequent decays of tau leptons, $W$-bosons and $Z$-bosons where relevant. For our numerical analysis, we choose to couple the $\phi$, $\Phi$ and $\mathbf\Phi$ fields to charged leptons in a flavor-conserving way, as already mentioned in the previous section of this paper, and we fix the parameters of the Lagrangian of Eq.~\eqref{eq:Lscy} to \be y^{(1)} = y^{(2)} = y^{(3)} = 0.1 \cdot \mathbb{1}\ . \ee Moreover, we consider the effective interactions of the doublet fields to be suppressed by an energy scale $\Lambda = 1$~TeV and allow the neutral component of the triplet field $\mathbf\Phi^0$ to acquire a non-vanishing vacuum expectation value equal to $v_{\mathbf\Phi} = 100$~GeV. Finally, all new particles are assumed mass-degenerate. On the left panel of the figure, we present results for a field lying in the singlet representation of $SU(2)_L$. The unique component of such a field, {\it i.e.}, the doubly-charged particle $\phi^{++}$, always decays into a charged lepton pair or a tau pair. Four-lepton signatures are therefore expected to be copiously produced, although the possible hadronic decays of the tau allow for contributions to final states containing fewer than four electrons and muons. Consequently, doubly-charged particles lying in the singlet representation of $SU(2)_L$ contribute, for moderate masses $M_\phi \lesssim 330$~GeV, to the production of final states with a leptonic multiplicity $N_\ell \geq 3$, with a cross section larger than 1~fb. \begin{figure} \centering \includegraphics[width=.32\columnwidth]{scaltri_3l.eps} \includegraphics[width=.32\columnwidth]{scaltri_4l.eps} \caption{\label{fig:NLscalartri} Different new physics contributions to the production of three (left) and four (right) leptons at the LHC, running at a center-of-mass energy of 8 TeV, in the context of a simplified model containing one additional $SU(2)_L$ triplet of scalar fields whose highest component is a doubly-charged state. } \end{figure} The situation changes when the Standard Model is extended by fields lying in a non-trivial representation of $SU(2)_L$, due to the possibility of production of associated pairs of different new states or of a single new state together with a Standard Model gauge boson. This is illustrated on the central and right panels of Figure~\ref{fig:scalar} for the doublet and triplet cases, respectively. For fields lying in the fundamental representation of $SU(2)_L$, the BSM contributions to the production cross section of final states with $N_\ell \geq 3$ charged leptons are found to be larger than 1~fb only for a rather low mass scale $M_\Phi \lesssim 250$~GeV. For fields in the adjoint representation, the predictions however depend strongly on the size of the vev $v_{\mathbf\Phi}$. For very small $v_{\mathbf \Phi}$ values, the cross sections are sizable and a large mass reach can be foreseen, as extrapolated from the results presented in the region of the right panel of Figure \ref{fig:scalar} located below the dibosonic thresholds, where the value of the vev is irrelevant.
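The impact of the vev above the dibosonic threshold can be quantified with a quick estimate based on Eqs.~\eqref{eq:BRsc++1} and \eqref{eq:BRsc++2}; the sketch below (with illustrative improved-Born inputs) returns the same-sign dilepton branching ratio of the $\mathbf\Phi^{++}$ state summed over three massless lepton flavors, and yields a value at the percent level for our benchmark point.
\begin{verbatim}
import numpy as np

GF, MW, MZ = 1.16638e-5, 80.385, 91.1876
sw2 = 1.0 - MW**2 / MZ**2
alpha = np.sqrt(2.0) * GF * MW**2 * sw2 / np.pi

def triplet_pp_widths(M, y, vev):
    """Widths of the doubly-charged triplet component into a same-sign
    lepton pair (one massless flavor) and into W+ W+, for M > 2 M_W."""
    G_ll = M * y**2 / (32.0 * np.pi)
    xW2 = MW**2 / M**2
    G_WW = (np.pi * alpha**2 * M**3 * vev**2 / (4.0 * MW**4 * sw2**2)
            * np.sqrt(1.0 - 4.0 * xW2) * (1.0 - 4.0 * xW2 + 12.0 * xW2**2))
    return G_ll, G_WW

# Leptonic branching ratio for M = 350 GeV, y = 0.1 and v = 100 GeV
G_ll, G_WW = triplet_pp_widths(350.0, 0.1, 100.0)
print(3.0 * G_ll / (3.0 * G_ll + G_WW))
\end{verbatim}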
However, as soon as the dibosonic decay modes of the $\mathbf\Phi^+$ and $\mathbf\Phi^{++}$ fields are open, sizable vev values render them dominant, so that the production rate of final states with three or more charged leptons drops, the Standard Model weak bosons preferentially decaying into quarks. In our example, where the vev has a value close to the weak scale, the new physics masses have to be below 350~GeV in order to contribute to the production cross sections of multileptonic final states (with $N_\ell\geq 3$) with at least $\sigma = 1$~fb. We show in detail the effects of a large $v_{\mathbf\Phi}$ in Figure~\ref{fig:NLscalartri}, where we split the different production channels contributing to final state signatures with $N_\ell=3$ (left panel) and $N_\ell=4$ (right panel) charged leptons and present the respective contribution of each subprocess. In addition, it should be noted that final states with five and six leptons can also be produced, since both the doubly-charged and singly-charged fields can decay into three leptons after accounting for leptonic $W$-boson and $Z$-boson decays. However, the related rates are very low and render the possible observation of five-lepton or six-lepton events rather unlikely when one takes into account the available integrated luminosity of 20 fb$^{-1}$ recorded at the LHC in 2012. \begin{figure}[!t] \centering \includegraphics[width=.32\columnwidth]{fermsgl3G.eps} \includegraphics[width=.32\columnwidth]{fermdbl3G.eps} \includegraphics[width=.32\columnwidth]{fermtri3G.eps} \caption{\label{fig:fermionA}Doubly-charged particle contributions to the production rate of final states containing three or more charged leptons at the LHC, running at a center-of-mass energy of 8 TeV. We consider extensions of the Standard Model with an extra fermionic field lying in the singlet (left), doublet (center) or triplet (right) representation of $SU(2)_L$ and impose that the component with the highest electric charge be doubly-charged. In addition, its singly-charged component is prevented from mixing with the SM sector. } \end{figure} We now turn to the pair-production of the components of the fermionic fields $\psi$, $\Psi$ and $\mathbf\Psi$ introduced in Section \ref{sec:femod} and calculate their effect on the production of final states containing three leptons or more at the LHC. We first investigate scenarios with three generations of fermions as presented in Section \ref{sec:femodA}. In this case, the new states decay through a four-fermion interaction into a pair of leptons with the same electric charge and a fermionic field $N$ similar to a right-handed neutrino, as described by Eq.~\eqref{eq:LfeF}. In order to compute the related partial widths, we implement the Lagrangian of Eq.~\eqref{eq:LfeF} in the {\sc FeynRules} package \cite{Christensen:2008py,% Christensen:2009jx,Christensen:2010wz,Duhr:2011se,Fuks:2012im,Alloul:2013fw}, then export the Feynman rules to a UFO module \cite{Degrande:2011ua} that is subsequently linked to {\sc MadGraph}~5 \cite{Alwall:2011uj}. For our numerical analysis, we set the effective scale $\Lambda = 1$~TeV and impose the coupling strengths to be flavor-diagonal, \be G^{(1,1)} = G^{(1,2)} = G^{(2,1)} = G^{(2,2)} = G^{(3,1)} = G^{(3,2)} = 0.1 \cdot \mathbb{1} \ .
\ee We also set the mass of the $N$-field to 50 GeV, in order to be compatible with the current limits on the existence of heavy stable neutral leptons \cite{Beringer:2012zz}, and assume, for the sake of simplicity, that all components of the new fermionic multiplet containing the doubly-charged state have the same mass. In Figure~\ref{fig:fermionA}, we show the associated contributions to the production of final states with at least three charged leptons in the singlet (left panel), doublet (central panel) and triplet (right panel) cases. In contrast to the scalar case, we observe that a much larger mass range is expected to give rise to cross sections larger than 1~fb, which is guaranteed for $M_\psi \lesssim 550$~GeV, $M_\Psi\lesssim 650$~GeV and $M_{\mathbf\Psi}\lesssim 725$~GeV for new fermionic fields lying in the singlet, doublet and triplet representations of $SU(2)_L$, respectively. \begin{figure}[!t] \centering \includegraphics[width=.32\columnwidth]{fermdbl.eps} \includegraphics[width=.32\columnwidth]{fermtri.eps} \caption{\label{fig:fermionB}Doubly-charged particle contributions to the production rate of final states containing three or more charged leptons at the LHC, running at a center-of-mass energy of 8 TeV. We consider extensions of the Standard Model with an extra fermionic field lying in the doublet (left) or triplet (right) representation of $SU(2)_L$ and impose that the component with the highest electric charge be doubly-charged. In addition, its singly-charged component mixes with the SM tau lepton. } \end{figure} In the series of fermionic scenarios presented in Section~\ref{sec:femodB}, the Standard Model is only supplemented by a fermionic field lying either in the fundamental or in the adjoint representation of $SU(2)_L$. In this case, the singly-charged component mixes with the Standard Model tau lepton, which allows for the decays of the new states to the Standard Model sector and opens new production mechanisms giving rise to signatures with $N_\ell \geq 3$ charged leptons. In order not to challenge the very precisely measured properties of the tau lepton, inferred from its coupling to the $Z$-boson, we fix the mixing angle to \be \sin\theta_\tau = 0.01 \ . \ee Computing the different branching ratios using the formulas of Eqs.~\eqref{eq:decfemix1}, \eqref{eq:decfemix2} and \eqref{eq:decfemix3}, we then show in Figure~\ref{fig:fermionB} the new physics contributions to the production of final states with at least three leptons. It is found that for masses $M_\Psi \lesssim 470$~GeV and $M_{\mathbf\Psi} \lesssim 550$~GeV in the doublet and triplet cases, respectively, the corresponding cross sections are higher than 1~fb. \begin{figure}[!t] \centering \includegraphics[width=.32\columnwidth]{vectsgl.eps} \includegraphics[width=.32\columnwidth]{vectdbl.eps} \includegraphics[width=.32\columnwidth]{vecttri.eps} \caption{\label{fig:vector}Doubly-charged particle contributions to the production rate of final states containing three or more charged leptons at the LHC, running at a center-of-mass energy of 8 TeV. We consider extensions of the Standard Model with an extra vectorial field lying in the singlet (left), doublet (center) or triplet (right) representation of $SU(2)_L$ and impose that the component with the highest electric charge be doubly-charged. } \end{figure} We finally address the production of the vectorial multiplets $V$, $\cal V$ and $\mathbf V$ defined in Section~\ref{sec:modve}.
These fields lie in the trivial, fundamental and adjoint representation of $SU(2)_L$, respectively, and their component with the highest electric charge is doubly-charged. In order to derive the associated contributions to the production of signatures with three leptons or more, we start by computing the different partial widths of the new fields according to the results of Eq.~\eqref{eq:vecwidths}. As for the other types of models, we choose the coupling strengths of the new fields to the Standard Model leptons and neutrinos, presented in Eq.~\eqref{eq:ldvec}, to be flavor-conserving, \be \tilde g^{(1)} = \tilde g^{(2)} = \tilde g^{(3)} = 0.1 \cdot \mathbb{1} \ , \ee and suppressed, if relevant, by an effective scale fixed to $\Lambda = 1$~TeV. Moreover, all new states are once again assumed mass-degenerate. We show in Figure \ref{fig:vector} the hadronic cross sections related to trilepton and tetralepton production induced by the pair-production of the components of the $V$, $\cal V$ and $\mathbf V$ fields. We find that these cross sections are larger than 1~fb, which possibly implies the observation of some events during the 2012 LHC run, for new physics masses satisfying $M_V \lesssim 400$~GeV, $M_{\cal V} \lesssim 600$~GeV and $M_{\mathbf V} \lesssim 500$~GeV in the singlet, doublet and triplet cases, respectively. \begin{table} \begin{center} \caption{\label{tab:summary}Upper bound on the doubly-charged particle mass scale so that the sum of all BSM contributions to the production rate, at the LHC running at a center-of-mass energy of 8 TeV, of multileptonic final states (with $N_\ell \geq 3$) is larger than 1 fb.} \begin{tabular}{l||c|c|c} & $~~~SU(2)_L$ singlet$~~~$ & $~~~SU(2)_L$ doublet$~~~$ & $~~~SU(2)_L$ triplet$~~~$ \\ \hline Scalar fields & 330 GeV & 257 GeV & 350 GeV\\ Fermionic fields (no mixing with the SM) & 555 GeV & 661 GeV & 738 GeV\\ Fermionic fields (mixing with the SM) & - & 471 GeV & 560 GeV\\ Vector fields & 392 GeV & 619 GeV & 495 GeV\\ \end{tabular} \end{center} \end{table} In Table \ref{tab:summary}, we summarize the different mass ranges expected to give rise to new physics contributions to multilepton production at the LHC collider, running at a center-of-mass energy of 8~TeV, larger than 1~fb. This motivates us to select three benchmark scenarios for a more careful analysis, based on Monte Carlo simulations, of the effects associated with the presence of fields containing a doubly-charged component, with the aim of defining some ways to distinguish their spin and $SU(2)_L$ representation. We hence first choose a series of scenarios where the common mass is fixed to a rather optimistic value of 100~GeV, recalling that no LHC constraints have been derived for promptly decaying fields and/or when non-leptonic decay channels are open. We then define two other classes of scenarios where the new physics mass scale lies well above the dibosonic thresholds. We fix it to 250 GeV and 350 GeV, respectively. \section{Probing spin and $SU(2)_L$ representations with Monte Carlo simulations} \label{sec:MC} In the previous section, we have shown that the mass range for possibly observing doubly-charged particles at the LHC depends on their spin and $SU(2)_L$ representations. In this section, we focus on the study of various kinematical distributions that should allow us to probe the nature of a doubly-charged particle, if one assumes that it is responsible for the observation of excesses in multilepton final states with $N_\ell \geq 3$ charged leptons.
Towards this goal, we implement all the models presented in Section~\ref{sec:themodel} in {\sc MadGraph}~5 \cite{Alwall:2011uj} via {\sc FeynRules} \cite{Christensen:2008py,Christensen:2009jx,Christensen:2010wz,Duhr:2011se,Fuks:2012im,% Alloul:2013fw}. We then present, within the {\sc MadAnalysis}~5 framework \cite{Conte:2012fm}, results that are based on a hadron-level simulation of the signal describing 20~fb$^{-1}$ of collisions at a center-of-mass energy of 8 TeV. For this, the parton-level events as generated by {\sc MadGraph}~5 have been showered and hadronized by means of the {\sc Pythia}~6 package~\cite{Sjostrand:2006za}, tau lepton decays being handled by using the {\sc Tauola} program \cite{Davidson:2010rw}. We start our analysis by preselecting events after imposing a set of basic selections ensuring that most (non-simulated) background contributions are well under control. \begin{itemize} \item We start by removing from the event final states all charged leptons not having a transverse momentum $p_T \geq 10$~GeV and a pseudorapidity satisfying $|\eta| \leq 2.5$. \item Jets are reconstructed by means of the anti-$k_{t}$ algorithm \cite{Cacciari:2008gp} as implemented in {\sc FastJet}~\cite{Cacciari:2005hq,Cacciari:2011ma}, with the radius parameter set to $R=0.4$. We only consider jet candidates with $p_T \geq 20$~GeV and $|\eta| \leq 2.5$ which are not too close to an electron, {\it i.e.}, which lie outside a cone of radius $R=0.1$ centered around the electron. \item Lepton isolation is then enforced by rejecting all leptons lying in a cone of radius $R=0.4$ centered on any of the remaining jets. \item We require the presence in the final state of at least three isolated charged leptons. \end{itemize} While a complete simulation of the Standard Model background goes beyond the scope of this work, we refer to an existing phenomenological study of leptonic final states to demonstrate that the background remaining after the selection above\footnote{Although an additional veto on events containing identified $b$-jets is applied, this does not affect our purely leptonic signal.} is under control~\cite{Alloul:2013fra}. This analysis shows that we can indeed expect about 5500 background events, originating in 99.5\% of the cases from diboson production processes, so that already 230 signal events can induce a $3\sigma$ deviation from the Standard Model expectation. \begin{figure}[!t] \centering \begin{picture}(400,2) \put(80,2){$p p \to N_\ell$ charged leptons at the LHC (8 TeV), with $N_\ell \geq 3$.} \end{picture} \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL1_singlets_100.eps} \includegraphics[width=.33\columnwidth]{PTL1_singlets_250.eps} \includegraphics[width=.33\columnwidth]{PTL1_singlets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL1_doublets_100.eps} \includegraphics[width=.33\columnwidth]{PTL1_doublets_250.eps} \includegraphics[width=.33\columnwidth]{PTL1_doublets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL1_triplets_100.eps} \includegraphics[width=.33\columnwidth]{PTL1_triplets_250.eps} \includegraphics[width=.33\columnwidth]{PTL1_triplets_350.eps} \caption{\label{fig:ptl1}Transverse-momentum spectrum of the leading lepton emerging from a possible doubly-charged particle signal with $N_\ell \geq 3$ charged leptons in the final state. Event generation has been performed in the context of the LHC and for 20~fb$^{-1}$ of collisions at a center-of-mass energy of 8 TeV.
In the top, middle and lower series of graphs, we require that the new states lie in the trivial, fundamental and adjoint representation of $SU(2)_L$, respectively, while their mass is set to 100~GeV, 250~GeV and 350~GeV in the left, central and right columns of the figure. In each subfigure, we show distributions for scalar fields (plain orange curve), vector fields (dashed green curve) and fermionic fields whose singly-charged component is allowed to mix with the Standard Model $\tau$ lepton (dashed blue curve, dubbed scenario B) or not (dot-dashed black curve, dubbed scenario A).} \end{figure} \begin{figure}[!t] \centering \begin{picture}(400,2) \put(80,2){$p p \to N_\ell$ charged leptons at the LHC (8 TeV), with $N_\ell \geq 3$.} \end{picture} \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL2_singlets_100.eps} \includegraphics[width=.33\columnwidth]{PTL2_singlets_250.eps} \includegraphics[width=.33\columnwidth]{PTL2_singlets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL2_doublets_100.eps} \includegraphics[width=.33\columnwidth]{PTL2_doublets_250.eps} \includegraphics[width=.33\columnwidth]{PTL2_doublets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{PTL2_triplets_100.eps} \includegraphics[width=.33\columnwidth]{PTL2_triplets_250.eps} \includegraphics[width=.33\columnwidth]{PTL2_triplets_350.eps} \caption{\label{fig:ptl2}Same as in Figure~\ref{fig:ptl1}, but for the transverse-momentum spectrum of the next-to-leading charged lepton.} \end{figure} Since all experimental analyses focusing on multileptonic signatures further select events by requiring specific thresholds on the transverse-momentum of (at least) the two leading leptons $\ell_1$ and $\ell_2$, we present the related spectra in Figure~\ref{fig:ptl1} and Figure~\ref{fig:ptl2} in the context of all the Standard Model extensions introduced in Section \ref{sec:themodel}. In the top, middle and lower series of graphs shown on the figures, respectively, we focus on fields lying in the singlet, doublet and triplet representations of $SU(2)_L$. In turn, in the left, central and right columns of the figures, the masses of the new states are set to 100~GeV, 250~GeV and 350~GeV, respectively. We recall that these choices have been adopted from the three mass scenarios constructed in the previous section. All the represented spectra exhibit a common global behavior. The distributions start by steeply rising, then peak and are finally extended by a tail up to (in general) moderate $p_T$ values smaller than 400~GeV. It is therefore rather complicated to attribute a given feature to a specific spin and/or $SU(2)_L$ representation when one accounts for the possible different new particle masses. There are however two exceptions. First, events containing very hard leptons with a transverse momentum larger than 500~GeV are expected to be copiously produced in scenarios where the Standard Model is extended by a vectorial field $\cal V$ lying in the doublet representation of $SU(2)_L$, for any mass value. Next, models with additional doubly-charged scalar fields that are singlet under the $SU(2)_L$ gauge group lead to the production of multileptonic events where the $p_T$ spectra of the two leading leptons are depleted in the low and intermediate transverse-momentum regions. This feature nevertheless competes with the low cross sections associated with heavy scalar masses larger than 250--300~GeV (as illustrated in Figure~\ref{fig:scalar}).
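In view of the angular analysis performed next, we note that the angular distance between two leptons can be computed with the minimal helper below (our own sketch, with the azimuthal difference wrapped into $[-\pi,\pi]$).
\begin{verbatim}
import numpy as np

def delta_r(phi1, eta1, phi2, eta2):
    """Angular distance Delta R = sqrt(Delta phi^2 + Delta eta^2),
    with the azimuthal difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(dphi, eta1 - eta2)
\end{verbatim}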
From these considerations on the transverse-momentum spectra, one concludes that the $p_T$ spectra of the two leading leptons $\ell_1$ and $\ell_2$ offer possible means to distinguish the spin and/or $SU(2)_L$ representations of the doubly-charged particles in very specific cases, but not in general. Similar features are found when analyzing the transverse-momentum distribution of the next-to-next-to-leading lepton $\ell_3$, as well as the transverse-mass and the invariant-mass spectra of any pair of leptons present in the event. The corresponding figures have therefore been omitted for brevity. \begin{figure}[!t] \centering \begin{picture}(400,2) \put(80,2){$p p \to N_\ell$ charged leptons at the LHC (8 TeV), with $N_\ell \geq 3$.} \end{picture} \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{DELTARL1L2_singlets_100.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_singlets_250.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_singlets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{DELTARL1L2_doublets_100.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_doublets_250.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_doublets_350.eps}\\ \hspace*{-0.6cm} \includegraphics[width=.33\columnwidth]{DELTARL1L2_triplets_100.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_triplets_250.eps} \includegraphics[width=.33\columnwidth]{DELTARL1L2_triplets_350.eps} \caption{\label{fig:dr12}Same as in Figure~\ref{fig:ptl1}, but for the angular-distance spectrum of the lepton pair comprised of the two leading leptons $\ell_1$ and $\ell_2$.} \end{figure} Spin representations are highly correlated with angular distributions. In this way, scalar, fermionic and vectorial doubly-charged particles are expected to give rise to signals with largely different features when investigating kinematical variables such as angular distances between final-state particles. As an example, in Figure~\ref{fig:dr12} we show the distribution in the angular distance between the two leading leptons $\Delta R(\ell_1 \ell_2) = \sqrt{\Delta\phi^2_{12} + \Delta\eta_{12}^2}$. In this expression, $\Delta \phi_{12}$ stands for the azimuthal angular separation of the two leptons with respect to the beam direction and $\Delta\eta_{12}$ for their pseudorapidity difference. Investigating the shapes of the spectra, we observe that they depend not only on the Lorentz representation of the new fields, but also on their $SU(2)_L$ one. Therefore, these variables offer important discriminating features among the different scenarios and deserve to be studied in the context of a more realistic phenomenological analysis, including detector effects that could alter the spectra. As we wish to keep our considerations as general as possible, this is however beyond the scope of this prospective work. \section{Conclusions} \label{sec:conclusion} In this paper, we have investigated LHC signals related to the presence of doubly-charged particles. We have considered different scenarios for such particles, varying their Lorentz and $SU(2)_L$ representations and constructing associated simplified models allowing for their pair production, followed by their decays into Standard Model particles. We have studied the contributions of doubly-charged states to the production cross sections of final states containing three or more charged leptons, a signature known to suffer from a reduced Standard Model background.
Using analytical and numerical computations, we have deduced that masses ranging up to about 700~GeV are possibly accessible at the LHC for specific models. We have then employed Monte Carlo simulations to probe several kinematical distributions allowing us to possibly distinguish the spin and $SU(2)_L$ representations of a doubly-charged state. The results are encouraging, in particular in the case of variables such as the angular distance $\Delta R$ between two of the final-state leptons. This motivates an extension of this work, including an investigation of the detector effects which could spoil the shapes of the angular variable spectra, possibly in the context of ATLAS or CMS analyses. \acknowledgments The authors are grateful to Claude Duhr and Olivier Mattelaer for discussing fermion-number-violating interactions in {\sc MadGraph}. We acknowledge NSERC of Canada for partial financial support under grant number SAP105354 and the Theory-LHC-France initiative of the CNRS/IN2P3.
\field{A} \title{A Pseudo Multi-Exposure Fusion Method Using Single Image} \authorlist{ \authorentry{Yuma KINOSHITA}{s}{tmu} \authorentry{Sayaka SHIOTA}{m}{tmu} \authorentry{Hitoshi KIYA}{f}{tmu} } \affiliate[tmu]{ Department of Information and Communication Systems, Tokyo Metropolitan University, 191-0065, Tokyo, Japan} \received{2011}{1}{1} \revised{2011}{1}{1} \begin{document} \setlength{\tabcolsep}{3.0pt} \maketitle \begin{summary} This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion is used to produce images without saturated regions, by using photos with different exposures. However, it is difficult to take photos suited for multi-exposure image fusion when we photograph dynamic scenes or record a video. In addition, multi-exposure image fusion cannot be applied to existing images with a single exposure or to videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To produce multi-exposure images, the proposed method utilizes the relationship between exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images with higher quality. Most conventional multi-exposure image fusion methods are also applicable to the proposed pseudo multi-exposure images. Experimental results show the effectiveness of the proposed method by comparing the proposed one with conventional ones. \end{summary} \begin{keywords} Multi-Exposure Image Fusion, Image Enhancement, Contrast Enhancement, Tone Mapping \end{keywords} \section{Introduction} The low dynamic range (LDR) of the imaging sensors used in modern digital cameras is a major factor preventing cameras from capturing images as good as those with human vision. For this reason, interest in high dynamic range (HDR) imaging has recently been increasing. Various research works on HDR imaging have so far been reported \cite{schoberl2013evaluation,chalmers2009high,debevec1997recovering,oh2015robust, kinoshita2016remapping,kinoshita2017fast,kinoshita2017fast_trans,huo2016single}. These research works are classified into two categories. The first one aims to generate HDR images having an extremely wide dynamic range. However, HDR display devices are not yet popular due to the high cost of the technologies. Hence, the second one focuses on tone mapping operations which generate standard LDR images from HDR ones \cite{murofushi2013integer, murofushi2014integer, dobashi2014fixed}. Consequently, in order to generate high quality LDR images via HDR images, it is necessary not only to generate HDR ones but also to map them into LDR ones. To generate LDR images more simply, multi-exposure image fusion methods have been proposed \cite{goshtasby2005fusion,mertens2009exposure,saleem2012image,wang2015exposure, li2014selectively,sakai2015hybrid,nejati2017fast}. The reported fusion methods use a stack of differently exposed images, ``multi-exposure images,'' and fuse them to produce an image with high quality. The advantage of these methods, compared with the ones via HDR images, is that they eliminate three operations: generating HDR images, calibrating a camera response function (CRF), and preserving the exposure value of each photograph.
However, the conventional multi-exposure image fusion methods have several problems due to the use of a stack of differently exposed images. If the scene is dynamic or the camera moves while pictures are being captured, the multi-exposure images in the stack will not line up properly with one another. This misalignment results in ghost-like artifacts in the fused image. Although a number of methods have been proposed \cite{li2014selectively,oh2015robust} to eliminate these artifacts, their effectiveness is limited because it is difficult to apply them to videos. In addition, multi-exposure image fusion methods cannot be applied to existing single-exposure images or videos. Given this situation, this paper proposes a novel pseudo multi-exposure image fusion method using a single image. The proposed method enables us to produce pseudo multi-exposure images from a single image and to improve the image quality by fusing them. To produce multi-exposure images, the proposed method uses the relationship between exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, the use of a local contrast enhancement method improves the quality of the pseudo multi-exposure images. Most conventional multi-exposure image fusion methods are also applicable to the proposed pseudo multi-exposure images. Furthermore, the proposed method is useful both for reducing the number of input images used in conventional fusion methods and for improving the quality of multi-exposure images. We evaluate the effectiveness of the proposed method in terms of the quality of generated images by a number of simulations. In the simulations, the proposed method is compared with existing multi-exposure image fusion methods and typical contrast enhancement methods. The results show that the proposed method can produce images of as high quality as conventional fusion methods applied to genuine multi-exposure images. In addition, the proposed method outperforms typical contrast enhancement methods in terms of color distortion. \section{Preparation} Multi-exposure fusion methods use images taken under different exposure conditions, i.e., ``multi-exposure images.'' Here we discuss the relationship between exposure values and pixel values. For simplicity, we focus on grayscale images. \subsection{Relationship between exposure values and pixel values} Figure \ref{fig:camera} shows the imaging pipeline for a digital camera \cite{dufaux2016high}. The radiant power density at the sensor, i.e., irradiance $E$, is integrated over the time $\Delta t$ the shutter is open, producing an energy density commonly referred to as exposure $X$. If the scene is static during this integration, exposure $X$ can be written simply as the product of irradiance $E$ and integration time $\Delta t$ (referred to as ``shutter speed''): \begin{equation} X(p) = E(p)\Delta t , \label{eq:exposure} \end{equation} where $p=(x,y)$ indicates the pixel at point $(x,y)$. A pixel value $I(p) \in [0, 1]$ in the output image $I$ is given by \begin{equation} I(p) = f(X(p)) , \label{eq:CRF} \end{equation} where $f$ is a function combining sensor saturation and a camera response function (CRF). The CRF represents the processing in each camera that makes the final image $I(p)$ look better.
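As an illustrative sketch (not part of the proposed method itself; it assumes NumPy and a clipped-linear $f$), the following code simulates eqs. (\ref{eq:exposure}) and (\ref{eq:CRF}) and checks that, away from saturation, a shutter-speed change of $+1\,\mathrm{[EV]}$ simply doubles the pixel values:
\begin{verbatim}
import numpy as np

def capture(irradiance, shutter_speed):
    # X(p) = E(p) * dt, then a clipped-linear CRF f (sensor saturation).
    exposure = irradiance * shutter_speed
    return np.clip(exposure, 0.0, 1.0)

rng = np.random.default_rng(0)
E = rng.uniform(0.0, 4.0, size=(4, 4))   # toy irradiance map
dt0 = 0.25                               # shutter speed giving 0 [EV]

I_0ev = capture(E, dt0)
I_p1ev = capture(E, dt0 * 2.0)           # +1 [EV] doubles the exposure

mask = I_p1ev < 1.0                      # unsaturated pixels only
assert np.allclose(I_p1ev[mask], 2.0 * I_0ev[mask])
\end{verbatim}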
\begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{./camera.pdf} \caption{Imaging pipeline of a digital camera} \label{fig:camera} \end{figure} Camera parameters, such as shutter speed and lens aperture, are usually calibrated in terms of exposure value (EV) units, and the proper exposure for a scene is automatically selected by the camera. The exposure value is commonly controlled by changing the shutter speed, although it can also be controlled by adjusting various other camera parameters. Here we assume that all camera parameters except the shutter speed are fixed. Let $0\,\mathrm{[EV]}$ and $\Delta t_{0 \mathrm{EV}}$ be the proper exposure value and shutter speed under the given conditions, respectively. The exposure value $v_i\,\mathrm{[EV]}$ of an image taken at shutter speed $\Delta t_i$ is derived from \begin{equation} v_i = \log_2 \Delta t_i - \log_2 \Delta t_{0 \mathrm{EV}} . \label{eq:EV} \end{equation} From eqs. (\ref{eq:exposure}) to (\ref{eq:EV}), images $I_{0 \mathrm{EV}}$ and $I_i$ exposed at $0\,\mathrm{[EV]}$ and $v_i\,\mathrm{[EV]}$, respectively, are written as \begin{align} I_{0 \mathrm{EV}}(p) &= f(E(p)\Delta t_{0 \mathrm{EV}}), \label{eq:CRFwithExposure}\\ I_i(p) &= f(E(p)\Delta t_i) = f(2^{v_i} E(p)\Delta t_{0 \mathrm{EV}}) . \label{eq:CRFwithExposure2} \end{align} Assuming function $f$ is linear, we obtain the following relationship between $I_{0 \mathrm{EV}}$ and $I_i$: \begin{equation} I_i(p) = 2^{v_i} I_{0 \mathrm{EV}}(p) . \label{eq:relationship} \end{equation} Therefore, the exposure can be varied artificially by multiplying $I_{0 \mathrm{EV}}$ by a constant. This property is used in our proposed pseudo multi-exposure fusion method, which is described in the next section. \section{Proposed pseudo multi-exposure image fusion} In this paper, we propose a novel pseudo multi-exposure image fusion method which fuses multi-exposure images generated from a single image. The outline of the proposed method is shown in Fig. \ref{fig:PMEF}. In the proposed method, local contrast enhancement is applied to the luminance $L$ calculated from the original image $I$, and then pseudo exposure compensation and tone mapping are applied. Next, an image $I'$ with improved quality is produced by multi-exposure image fusion. \begin{figure*}[!t] \centering \includegraphics[clip, width=12cm]{./PMEF.pdf} \caption{Outline of proposed method \label{fig:PMEF}} \end{figure*} \subsection{Local contrast enhancement} If pseudo multi-exposure images are generated from a single image, the quality of an image fused from them will be lower than that of an image fused from genuine multi-exposure images. Therefore, the dodging and burning algorithm is used to enhance the local contrast \cite{huo2013dodging}. The algorithm is given by \begin{equation} L_c(p) = \frac{L^2(p)}{L_a(p)}, \label{eq:dodgingAndBurning} \end{equation} where $L_a(p)$ is the local average of luminance $L(p)$ around pixel $p$. It is obtained by applying a low-pass filter to $L(p)$; here, a bilateral filter is used for this purpose.
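As a quick, hedged illustration of eq. (\ref{eq:dodgingAndBurning}) (an illustrative sketch only; for brevity it substitutes a plain Gaussian low-pass for the bilateral filter, which is defined precisely below and preserves edges better):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_local_contrast(L, sigma=16):
    # Dodging and burning: L_c = L^2 / L_a, where L_a is a local
    # average of the luminance obtained by low-pass filtering.
    L_a = gaussian_filter(L, sigma=sigma)
    return L**2 / np.maximum(L_a, 1e-6)  # guard against division by zero
\end{verbatim}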
$L_a(p)$ is calculated using the bilateral filter \begin{equation} L_a(p) = \frac{1}{c(p)} \sum_{q \in \Omega} L(q) g_{\sigma_1}(q-p) g_{\sigma_2}(L(q) - L(p)), \label{eq:bilateral} \end{equation} where $\Omega$ is the set of all pixels, and $c(p)$ is a normalization term given by \begin{equation} c(p) = \sum_{q \in \Omega} g_{\sigma_1}(q-p) g_{\sigma_2}(L(q) - L(p)), \label{eq:normalizingConst} \end{equation} where $g_{\sigma}$ is a Gaussian function given by \begin{equation} g_{\sigma}(p) = C_{\sigma}\exp \left( -\frac{x^2 + y^2}{\sigma^2} \right), \quad p=(x,y), \label{eq:gaussian} \end{equation} with a normalization factor $C_{\sigma}$. The parameters are set to $\sigma_1 = 16$ and $\sigma_2 = 3/255$ in accordance with \cite{huo2013dodging}. \subsection{Pseudo exposure compensation} The pseudo exposure compensation consists of two steps: estimating luminance $L_{0 \mathrm{EV}}$ from $L_c$, and calculating the luminance $L_i (1 \le i \le N, i \in \mathbb{N})$ of the $i$th image, where $L_{0 \mathrm{EV}}$ is the luminance of the properly exposed image, i.e., the one with $0\,\mathrm{[EV]}$, and $N$ is the number of pseudo multi-exposure images produced by the proposed method. In the first step, there are two approaches, A and B, to estimate the luminance $L_{0 \mathrm{EV}}$. Approach A estimates $L_{0 \mathrm{EV}}$ on the basis of the automatic exposure algorithms in digital cameras, so it enables us to avoid color distortions between the resulting image and the original image. On the other hand, approach B estimates $L_{0 \mathrm{EV}}$ by using all luminance values of the scene, unlike the automatic exposure algorithms, which generally use luminance values in a specific area of the scene. Hence, approach B allows us to strongly enhance the contrast in all image regions. Note that approach A is only available when the exposure value $v\,\mathrm{[EV]}$ of the original image $I$ is known. In contrast, approach B is available regardless of whether the exposure value $v\,\mathrm{[EV]}$ of $I$ is known. \begin{description}[style=nextline,font=\mdseries,leftmargin=0pt,listparindent=2em,parsep=1pt] \item[A. Estimating $L_{0 \mathrm{EV}}$ with exposure value $v$] In approach A, according to eq. (\ref{eq:relationship}), $L_{0 \mathrm{EV}}$ is estimated as \begin{equation} L_{0 \mathrm{EV}}(p) = 2^{-v} L_c(p). \label{eq:knownEV} \end{equation} \item[B. Estimating $L_{0\mathrm{EV}}$ without exposure value $v$] In approach B, we map the geometric mean $\overline{L}_c$ of luminance $L_c$ to the middle-gray of the displayed image, or 0.18 on a scale from zero to one, as in \cite{reinhard2002photographic}, where the geometric mean of the luminance values indicates the approximate brightness of the image. The luminance $L_{0\mathrm{EV}}$ is derived from \begin{equation} L_{0\mathrm{EV}}(p) = \frac{0.18}{\overline{L}_c} L_c(p), \label{eq:unknownEV} \end{equation} where the geometric mean $\overline{L}_c$ of $L_c(p)$ is calculated using \begin{equation} \overline{L}_c = \exp{\left(\frac{1}{|\Omega|} \sum_{p \in \Omega} \log{L_c(p)}\right)}. \label{eq:geoMean} \end{equation} If eq. (\ref{eq:geoMean}) has singularities, i.e., $L_c(p)=0$ at some pixels, $\overline{L}_c$ is calculated by \begin{equation} \overline{L}_c = \exp{ \left(\frac{1}{|\Omega|} \left( \sum_{p \notin B} \log{L_c(p)} + \sum_{p \in B} \log{\epsilon} \right) \right) }, \label{eq:geoMeanEps} \end{equation} where $B = \{p \mid L_c(p)=0\}$ and $\epsilon$ is a small value. \end{description}
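An illustrative sketch of this first step (not part of the formal description; the handling of zero luminances follows eq. (\ref{eq:geoMeanEps})):
\begin{verbatim}
import numpy as np

def estimate_L0ev(L_c, v=None, eps=1e-6):
    # Approach A: if the exposure value v is known, invert it directly.
    if v is not None:
        return 2.0 ** (-v) * L_c
    # Approach B: map the geometric mean of L_c to middle-gray 0.18,
    # replacing zero luminances by eps to avoid log singularities.
    log_mean = np.mean(np.log(np.maximum(L_c, eps)))
    return (0.18 / np.exp(log_mean)) * L_c

L_c = np.array([[0.05, 0.2], [0.6, 0.9]])
L_0ev_A = estimate_L0ev(L_c, v=-1.0)   # known exposure value
L_0ev_B = estimate_L0ev(L_c)           # unknown exposure value
\end{verbatim}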
The second step of the pseudo exposure compensation is carried out according to eq. (\ref{eq:relationship}): the luminance $L_i$ of the $i$th image $I_i$ is obtained by \begin{equation} L_i(p) = 2^{v_i} L_{0\mathrm{EV}}(p), \label{eq:constMultiplication} \end{equation} so that image $I_i$ has exposure value $v_i\,\mathrm{[EV]}$. To generate high quality images, the multi-exposure images should represent the bright, middle and dark regions of the original image $I$, respectively. Since the image with $0\,\mathrm{[EV]}$ clearly represents the middle region, a negative value, zero and a positive value should be used as the parameters $v_i$. In this paper, we use $N = 3$ and $v_i = -1, 0, +1\,\mathrm{[EV]}$. \subsection{Tone mapping} Since the luminance value $L_i(p)$ calculated by the pseudo exposure compensation often exceeds the maximum value of common image formats, pixel values might be lost due to truncation. This problem is overcome by using a tone mapping operation to fit the luminance values into the interval $[0, 1]$. The luminance $L'_i$ of a pseudo multi-exposure image is obtained by applying a tone mapping operator $F_i$ to $L_i$: \begin{equation} L'_i(p) = F_i(L_i(p)). \label{eq:TM} \end{equation} Reinhard's global operator is used here as the tone mapping operator $F_i$ \cite{reinhard2002photographic}. It is given by \begin{equation} F_i(L(p)) = \frac{L(p)\left(1 + \frac{L(p)}{L^2_{white_i}} \right)}{1 + L(p)}, \label{eq:reinhardTMO} \end{equation} where the parameter $L_{white_i} > 0$ is the luminance value that is mapped to $L'(p) = F_i(L(p)) = 1$. Note that Reinhard's global operator $F_i$ is a monotonically increasing function. Here, letting $L_{white_i} = \max_p L_i(p)$, we obtain $L'_i(p) \le 1$ for all $p$; therefore, truncation of the luminance values is prevented. Combining $L'_i$, the luminance $L$ of the original image $I$, and the RGB pixel values $C(p) \in \{R(p), G(p), B(p)\}$ of $I$, we obtain the RGB pixel values $C'_i(p) \in \{R'_i(p), G'_i(p), B'_i(p)\}$ of the pseudo multi-exposure images $I'_i$: \begin{equation} C'_i(p) = \frac{L'_i(p)}{L(p)}C(p). \label{eq:color} \end{equation} \subsection{Fusion of pseudo multi-exposure images} Pseudo multi-exposure images $I'_i$ can be used as input for any multi-exposure image fusion method. While numerous methods for fusing images have been proposed, here we use those of Mertens et al. \cite{mertens2009exposure}, Sakai et al. \cite{sakai2015hybrid}, and Nejati et al. \cite{nejati2017fast}. A final image $I'$ is produced using \begin{equation} I' = \mathscr{F}(I'_1, I'_2, \cdots, I'_N), \label{eq:fusion} \end{equation} where $\mathscr{F}(I_1, I_2, \cdots, I_N)$ indicates a function that fuses $N$ images $I_1, I_2, \cdots, I_N$ into a single image.
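An illustrative sketch of the tone-mapping and color-restoration steps of eqs. (\ref{eq:TM})--(\ref{eq:color}) (the small guard against $L(p)=0$ is an implementation assumption, not part of the formulation):
\begin{verbatim}
import numpy as np

def reinhard_global(L, L_white):
    # Reinhard's global operator: monotone, and F(L_white) = 1.
    return L * (1.0 + L / L_white**2) / (1.0 + L)

def pseudo_exposure_image(L_i, L, rgb):
    # Tone-map the scaled luminance, then restore color by scaling
    # the original RGB channels with the luminance ratio.
    L_prime = reinhard_global(L_i, L_white=L_i.max())
    ratio = L_prime / np.maximum(L, 1e-6)
    return rgb * ratio[..., None]   # broadcast ratio over R, G, B
\end{verbatim}
Each resulting image $I'_i$ can then be passed unchanged to the fusion function $\mathscr{F}$.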
\subsection{Proposed procedure} The procedure for generating an image $I'$ from the original image $I$ by the proposed method is summarized as follows (see Fig. \ref{fig:PMEF}). \begin{enumerate}[nosep] \item Calculate luminance $L$ of the original image $I$. \item Calculate $L_c$ by using eqs. (\ref{eq:dodgingAndBurning}) to (\ref{eq:gaussian}). \item Estimate $L_{0\mathrm{EV}}$ and calculate $L_i$ according to eq. (\ref{eq:constMultiplication}). \begin{enumerate}[label=Approach \Alph*.,leftmargin=*] \item Calculate $L_{0\mathrm{EV}}$ by eq. (\ref{eq:knownEV}). \item Calculate $L_{0\mathrm{EV}}$ by eqs. (\ref{eq:unknownEV}) and (\ref{eq:geoMeanEps}). \end{enumerate} \item Calculate luminance values $L'_i$ of pseudo multi-exposure images $I'_i$ from eqs. (\ref{eq:TM}) and (\ref{eq:reinhardTMO}). \item Generate $I'_i$ according to eq. (\ref{eq:color}). \item Obtain an image $I'$ with a multi-exposure image fusion method $\mathscr{F}$ as in eq. (\ref{eq:fusion}). \end{enumerate} \section{Simulation} Using two simulations, ``Simulation 1'' and ``Simulation 2,'' we evaluated the quality of the images produced by the proposed method, the three fusion methods mentioned above, and typical single-image contrast enhancement methods, i.e., histogram equalization (HE), contrast limited adaptive histogram equalization (CLAHE) \cite{zuiderveld1994contrast}, and contrast-accumulated histogram equalization (CACHE) \cite{wu2017contrast}. \subsection{Comparison with conventional methods} To evaluate the quality of the images produced by each method, objective metrics are needed. Typical metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are not suitable for this purpose because they require a highest-quality target image as a reference. We therefore used TMQI \cite{yeganeh2013objective} and CIEDE2000 \cite{sharma2005ciede2000} as the metrics, since they do not require such a reference. TMQI represents the quality of images tone mapped from an HDR image; the index incorporates structural fidelity and statistical naturalness. An HDR image is used as a reference to calculate structural fidelity, while no reference is needed to calculate statistical naturalness. Since the processes of tone mapping and photographing are similar, TMQI is also useful for evaluating photographs. CIEDE2000 represents the distance between two images in a color space; we used it to evaluate the color distortion caused by the proposed method. \subsection{Simulation conditions} \subsubsection{Simulation 1 (using HDR images)} In Simulation 1, HDR images were used to prepare the input images for the proposed method. The following procedure was carried out to evaluate the effectiveness of the proposed method. \begin{enumerate}[nosep] \item Map an HDR image $I_H$ to three multi-exposure images $I_{Mk}, k = 1,2,3$ with exposure values $v_{Mk} = k-2\,\mathrm{[EV]}$ by using a tone mapping operator (see Fig. \ref{fig:orgImages}). \item Obtain $I'$ from $I$ according to the proposed procedure in Sec. 3.5, with $I=I_{M2}$ having $v_{M2} = 0\,\mathrm{[EV]}$. \item Compute TMQI values between $I'$ and $I_H$. \item Compute CIEDE2000 values as an error measure between $I'$ and $I_{M2}$. \end{enumerate} In step 1), the tone mapping operator corresponds to function $f$ in eqs. (\ref{eq:CRFwithExposure}) and (\ref{eq:CRFwithExposure2}) (see Fig. \ref{fig:camera}). As assumed for eq. (\ref{eq:relationship}), a linear operator was used as the tone mapping operator. In addition, the properly exposed image with $0\,\mathrm{[EV]}$ for each scene was defined as an image in which the geometric mean of the luminance equals 0.18. We used 60 HDR images selected from available online databases \cite{openexrimage,anyherehdrimage}. \subsubsection{Simulation 2 (photographing directly)} In Simulation 2, four photographs taken with a Canon EOS 5D Mark II camera and eight photographs selected from an available online database \cite{easyhdr} were used directly as input images $I_{Mk}$ (see Fig. \ref{fig:estate}). Since there were no HDR images for Simulation 2, the first step of Simulation 1 was not needed. In addition, the structural fidelity in TMQI could not be calculated because no HDR images were available. Thus, we used only the statistical naturalness component of TMQI as a metric.
\begin{figure}[!t] \centering \subfloat[$I_{M1}$ \newline ($v_{M1}=-1\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./dark_memorial.jpg} \label{fig:OrgM1EV}} \subfloat[$I_{M2}$ \newline ($v_{M2}=0\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./normal_memorial.jpg} \label{fig:Org0EV}} \subfloat[$I_{M3}$ \newline ($v_{M3}=+1\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./bright_memorial.jpg} \label{fig:OrgP1EV}}\\ \caption{Examples of multi-exposure images $I_{Mk}$ (Memorial) mapped from $I_H$} \label{fig:orgImages} \end{figure} \begin{figure}[!t] \centering \subfloat[$I_{M1}$ \newline ($v_{M1}=-1.3\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./dark_estate.jpg} \label{fig:estateM1EV}} \subfloat[$I_{M2}$ \newline ($v_{M2}=0\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./normal_estate.jpg} \label{fig:estate0EV}} \subfloat[$I_{M3}$ \newline ($v_{M3}=+1.3\mathrm{[EV]}$)]{ \includegraphics[width=2.6cm]{./bright_estate.jpg} \label{fig:estateP1EV}}\\ \caption{Examples of multi-exposure images $I_{Mk}$ (Estate rsa) for Simulation 2} \label{fig:estate} \end{figure} \subsection{Simulation results} Here, the effectiveness of the proposed method is discussed on the basis of objective assessments. \subsubsection{Simulation 1} Tables \ref{tab:HDRTMQI}, \ref{tab:HDRNaturalness} and \ref{tab:HDRCIEDE} summarize the TMQI, statistical naturalness, and CIEDE2000 scores for Simulation 1, respectively. For TMQI $\in [0, 1]$ (and statistical naturalness $\in [0, 1]$), a larger value means higher quality. For CIEDE2000 $\in [0, \infty)$, a smaller value indicates a smaller color difference between the two images. \begin{description}[style=nextline,font=\mdseries,leftmargin=0pt,listparindent=2em,parsep=1pt] \item[a) Comparison with multi-exposure fusion methods] Table \ref{tab:HDRTMQI} shows the results of evaluating three multi-exposure fusion methods (MEF), three conventional contrast enhancement methods (CE), and the proposed method in terms of TMQI, where the proposed method has six variations. Here, CE and the proposed method used the single image $I_{M2}$ with $0\,\mathrm{[EV]}$ as the input image, while MEF used the three multi-exposure images $I_{M1}, I_{M2}$ and $I_{M3}$ as inputs. Comparing MEF with approaches A and B (e.g., comparing MEF \cite{mertens2009exposure} with the proposed method using \cite{mertens2009exposure}) confirms that both approaches provide higher TMQI scores than MEF, even though the proposed method used a single image as input. The statistical naturalness scores (Table \ref{tab:HDRNaturalness}) show a trend similar to Table \ref{tab:HDRTMQI}. The CIEDE2000 scores in Table \ref{tab:HDRCIEDE} also confirm that approach A has better CIEDE2000 scores than MEF. Figure \ref{fig:results} shows an example of images generated by each method. In this figure, the results of approach A are not shown because there were few visual differences between approaches A and B. This is because, in Simulation 1, the exposure values of the input images were determined in the same way as approach B estimates $L_{0 \mathrm{EV}}$ (eq. (\ref{eq:unknownEV})). From the figure, it is confirmed that the proposed method can produce images almost the same as those fused by MEF. These results demonstrate that the proposed method is as effective as MEF. Moreover, the CIEDE2000 scores indicate that approach A produces images with less color distortion than approach B.
\item[b) Comparison with contrast enhancement methods] Contrast enhancement also allows us to enhance the quality of images from a single image. To clearly show the effectiveness of the proposed method, we compared it with typical contrast enhancement methods. The contrast enhancement methods provided higher TMQI and statistical naturalness scores than the proposed method, as shown in Tables \ref{tab:HDRTMQI} and \ref{tab:HDRNaturalness}. In particular, CACHE, the state-of-the-art method, has the best scores among all methods. However, they have the worst CIEDE2000 scores (see Table \ref{tab:HDRCIEDE}), which means that using a contrast enhancement method can produce serious color distortion. Comparing Fig. \ref{fig:results} with Fig. \ref{fig:orgImages} also confirms that the contrast enhancement methods cause color distortion, e.g., on the carpet on the stairs (boxed in red). In addition, since contrast enhancement methods aim to maximize image contrast, the resulting images sometimes have unnatural contrast due to over-enhancement (see the regions boxed in blue in Fig. \ref{fig:results}). By contrast, the proposed method prevents both the color distortion and the over-enhancement; it therefore outperforms contrast enhancement methods in these two respects. The results of Simulation 1 show that the proposed method enables us to produce high-quality images comparable to conventional MEF, even when a single image is used as input. Besides, the proposed method also outperforms CE in terms of color distortion and over-enhancement. The comparison between approaches A and B demonstrates that approach A provides better CIEDE2000 scores than approach B, although approach B can more strongly enhance the contrast of images, as described later. \end{description} \subsubsection{Simulation 2} In Simulation 2, the statistical naturalness scores show a trend similar to Simulation 1 (see Table \ref{tab:CameraNaturalness}). Besides, Table \ref{tab:CameraCIEDE} shows that the proposed method using approach B (Sec. 3.2) has worse CIEDE2000 scores than CLAHE and CACHE. This is due to the difference between how digital cameras and approach B estimate $L_{0\mathrm{EV}}$: the $L_{0\mathrm{EV}}$ estimated by approach B differs from the one estimated by a digital camera. As a result, the brightness of images produced with approach B differs substantially from that of the input image, as shown in Fig. \ref{fig:resultsCamera}. On the other hand, approach A enables us to avoid color distortions since it estimates $L_{0\mathrm{EV}}$ by using the exposure values calculated by digital cameras; thus, approach A has the lowest CIEDE2000 scores among the methods (see Table \ref{tab:CameraCIEDE}). From Fig. \ref{fig:resultsCamera}, it is also confirmed that the CE methods (HE and CACHE) cause a loss of details in the bright regions boxed in red. This is because these CE methods decrease the number of gradations assigned to bright regions in order to enhance dark regions. By contrast, both approaches A and B can enhance images without loss of details, as conventional MEF does. From these results, the proposed method enables us to generate images with high quality, comparable to conventional MEF, from a single image. In addition, approach A outperforms typical contrast enhancement methods in terms of color distortion.
On the other hand, approach B can strongly enhance the contrast of images without loss of details, unlike conventional CE methods. \begin{figure}[!t] \centering \subfloat[Mertens\cite{mertens2009exposure}]{ \includegraphics[width=2.6cm]{./MEFMertens_memorial_boxed.jpg} \label{fig:Mertens}} \subfloat[Sakai\cite{sakai2015hybrid}]{ \includegraphics[width=2.6cm]{./MEFYoshida_memorial_boxed.jpg} \label{fig:Yoshida}} \subfloat[Nejati\cite{nejati2017fast}]{ \includegraphics[width=2.6cm]{./MEFNejati_memorial_boxed.jpg} \label{fig:Nejati}}\\ \subfloat[HE]{ \includegraphics[width=2.6cm]{./HistogramEqualization_memorial_boxed.jpg} \label{fig:HE}} \subfloat[CLAHE\cite{zuiderveld1994contrast}]{ \includegraphics[width=2.6cm]{./CLAHE_memorial_boxed.jpg} \label{fig:CLAHE}} \subfloat[CACHE\cite{wu2017contrast}]{ \includegraphics[width=2.6cm]{./CACHE_memorial_boxed.jpg} \label{fig:CACHE}}\\ \subfloat[Proposed (B) \newline with \cite{mertens2009exposure}]{ \includegraphics[width=2.6cm]{./PMEFMertensWithoutEV_memorial_boxed.jpg} \label{fig:PMEFMertens}} \subfloat[Proposed (B) \newline with \cite{sakai2015hybrid}]{ \includegraphics[width=2.6cm]{./PMEFYoshidaWithoutEV_memorial_boxed.jpg} \label{fig:PMEFSakai}} \subfloat[Proposed (B) \newline with \cite{nejati2017fast}]{ \includegraphics[width=2.6cm]{./PMEFNejatiWithoutEV_memorial_boxed.jpg} \label{fig:PMEFNejati}}\\ \caption{Images $I'$ generated from image ``Memorial''} \label{fig:results} \end{figure} \begin{figure}[!t] \centering \subfloat{ \includegraphics[width=.4\columnwidth]{./MEFMertens_estate.jpg} } \subfloat{ \includegraphics[width=.4\columnwidth]{./HistogramEqualization_estate.jpg} }\\ \vspace{-2mm} \addtocounter{subfigure}{-2} \subfloat[Mertens\cite{mertens2009exposure}]{ \includegraphics[width=.4\columnwidth]{./MEFMertens_estate_zoom.jpg} \label{fig:MertensCorridor1}} \subfloat[HE]{ \includegraphics[width=.4\columnwidth]{./HistogramEqualization_estate_zoom.jpg} \label{fig:HECorridor1}}\\ \subfloat{ \includegraphics[width=.4\columnwidth]{./CACHE_estate.jpg} } \subfloat{ \includegraphics[width=.4\columnwidth]{./PMEFMertensWithEV_estate.jpg} }\\ \vspace{-2mm} \addtocounter{subfigure}{-2} \subfloat[CACHE\cite{wu2017contrast}]{ \includegraphics[width=.4\columnwidth]{./CACHE_estate_zoom.jpg} \label{fig:CACHECorridor1}} \subfloat[Proposed (A) \newline with \cite{mertens2009exposure}]{ \includegraphics[width=.4\columnwidth]{./PMEFMertensWithEV_estate_zoom.jpg} \label{fig:PMEFMertensACorridor1}}\\ \subfloat{ \includegraphics[width=.4\columnwidth]{./PMEFMertensWithoutEV_estate.jpg} }\\ \addtocounter{subfigure}{-1} \subfloat[Proposed (B) \newline with \cite{mertens2009exposure}]{ \includegraphics[width=.4\columnwidth]{./PMEFMertensWithoutEV_estate_zoom.jpg} \label{fig:PMEFMertensBCorridor1}}\\ \caption{Images $I'$ generated from image ``Estate rsa'' (top) and zoom-in views of their upper right corners (bottom).} \label{fig:resultsCamera} \end{figure} \begin{table*}[!t] \centering \caption{Experimental results for Simulation 1 (TMQI).
``MEF,'' and ``CE'' indicate multi-exposure fusion and contrast enhancement, respectively.} {\footnotesize \begin{tabular}{l|l|lll|lll|ll|ll|ll} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{10mm}{Input image} & \multicolumn{3}{c|}{MEF} & \multicolumn{3}{c|}{CE} & \multicolumn{6}{c}{Proposed}\\\cline{3-14} & & \multicolumn{1}{c}{\cite{mertens2009exposure}} & \multicolumn{1}{c}{\cite{sakai2015hybrid}} & \multicolumn{1}{c|}{\cite{nejati2017fast}} & \multicolumn{1}{c}{HE} & \multicolumn{1}{c}{\cite{zuiderveld1994contrast}} & \multicolumn{1}{c|}{\cite{wu2017contrast}} & \multicolumn{2}{c|}{\cite{mertens2009exposure}} & \multicolumn{2}{c|}{\cite{sakai2015hybrid}} & \multicolumn{2}{c}{\cite{nejati2017fast}}\\ &&&&&&&& \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B}\\ \hdashline AtriumNight & 0.8388 & 0.8514 & 0.8510 & 0.8402 & 0.8536 & 0.8236 & \textbf{0.8710} & 0.8579 & 0.8604 & 0.8576 & 0.8601 & 0.8449 & 0.8473 \\ MtTamWest & 0.7189 & 0.7784 & 0.7785 & 0.7718 & 0.7838 & \textbf{0.8838} & 0.8133 & 0.7990 & 0.8215 & 0.7964 & 0.8182 & 0.7885 & 0.8139 \\ SpheronNapa & 0.7239 & 0.7485 & 0.7483 & 0.7515 & 0.7423 & \textbf{0.7933} & 0.7734 & 0.7633 & 0.7670 & 0.7624 & 0.7660 & 0.7572 & 0.7610 \\ Memorial & 0.8404 & 0.8427 & 0.8429 & 0.8396 & 0.8381 & 0.7872 & 0.8415 & 0.8461 & 0.8522 & 0.8473 & \textbf{0.8538} & 0.8379 & 0.8438 \\ Rend 11 & 0.7932 & 0.8242 & 0.8231 & 0.8142 & 0.8649 & 0.8908 & \textbf{0.8994} & 0.8312 & 0.8563 & 0.8303 & 0.8552 & 0.8207 & 0.8474 \\\hdashline Average & \multirow{2}{*}{0.7830} & \multirow{2}{*}{0.8090} & \multirow{2}{*}{0.8088} & \multirow{2}{*}{0.8034} & \multirow{2}{*}{0.8165} & \multirow{2}{*}{0.8358} & \multirow{2}{*}{\textbf{0.8397}} & \multirow{2}{*}{0.8195} & \multirow{2}{*}{0.8315} & \multirow{2}{*}{0.8188} & \multirow{2}{*}{0.8307} & \multirow{2}{*}{0.8099} & \multirow{2}{*}{0.8227} \\ (5 images) & & & & & & & & & & & & & \\\hdashline Average & \multirow{2}{*}{0.8088} & \multirow{2}{*}{0.8151} & \multirow{2}{*}{0.8151} & \multirow{2}{*}{0.8130} & \multirow{2}{*}{0.8376} & \multirow{2}{*}{0.8248} & \multirow{2}{*}{\textbf{0.8581}} & \multirow{2}{*}{0.8294} & \multirow{2}{*}{0.8355} & \multirow{2}{*}{0.8290} & \multirow{2}{*}{0.8353} & \multirow{2}{*}{0.8236} & \multirow{2}{*}{0.8301} \\ (60 images) & & & & & & & & & & & & & \\\hline \end{tabular} } \label{tab:HDRTMQI} \end{table*} \begin{table*}[!t] \centering \caption{Experimental results for Simulation 1 (Statistical Naturalness) ``MEF,'' and ``CE'' indicate multi-exposure fusion and contrast enhancement, respectively.} {\footnotesize \begin{tabular}{l|l|lll|lll|ll|ll|ll} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{10mm}{Input image} & \multicolumn{3}{c|}{MEF} & \multicolumn{3}{c|}{CE} & \multicolumn{6}{c}{Proposed}\\\cline{3-14} & & \multicolumn{1}{c}{\cite{mertens2009exposure}} & \multicolumn{1}{c}{\cite{sakai2015hybrid}} & \multicolumn{1}{c|}{\cite{nejati2017fast}} & \multicolumn{1}{c}{HE} & \multicolumn{1}{c}{\cite{zuiderveld1994contrast}} & \multicolumn{1}{c|}{\cite{wu2017contrast}} & \multicolumn{2}{c|}{\cite{mertens2009exposure}} & \multicolumn{2}{c|}{\cite{sakai2015hybrid}} & \multicolumn{2}{c}{\cite{nejati2017fast}}\\ &&&&&&&& \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B}\\ \hdashline AtriumNight & 0.1672 & 0.2185 & 0.2176 & 0.1644 & 0.3110 & 0.1398 & \textbf{0.4060} & 0.2411 & 0.2530 & 
0.2398 & 0.2518 & 0.1829 & 0.1931 \\ MtTamWest & 0.1972 & 0.2326 & 0.2328 & 0.2531 & 0.2231 & \textbf{0.7518} & 0.4140 & 0.3027 & 0.3781 & 0.2906 & 0.3612 & 0.2931 & 0.3681 \\ SpheronNapa & 0.0116 & 0.0106 & 0.0105 & 0.0149 & 0.0418 & \textbf{0.1694} & 0.0720 & 0.0367 & 0.0430 & 0.0345 & 0.0403 & 0.0315 & 0.0368 \\ Memorial & 0.2094 & 0.2113 & 0.2122 & 0.1945 & 0.2544 & 0.0444 & 0.2890 & 0.2311 & 0.2609 & 0.2367 & \textbf{0.2684} & 0.1935 & 0.2209 \\ Rend 11 & 0.1637 & 0.2425 & 0.2365 & 0.2054 & 0.4703 & 0.5784 & \textbf{0.7145} & 0.2555 & 0.3645 & 0.2507 & 0.3576 & 0.2129 & 0.3197 \\\hdashline Average & \multirow{2}{*}{0.1498} & \multirow{2}{*}{0.1831} & \multirow{2}{*}{0.1819} & \multirow{2}{*}{0.1665} & \multirow{2}{*}{0.2601} & \multirow{2}{*}{0.3368} & \multirow{2}{*}{\textbf{0.3791}} & \multirow{2}{*}{0.2134} & \multirow{2}{*}{0.2599} & \multirow{2}{*}{0.2105} & \multirow{2}{*}{0.2558} & \multirow{2}{*}{0.1828} & \multirow{2}{*}{0.2277} \\ (5 images) & & & & & & & & & & & & & \\\hdashline Average & \multirow{2}{*}{0.2078} & \multirow{2}{*}{0.2000} & \multirow{2}{*}{0.2002} & \multirow{2}{*}{0.1903} & \multirow{2}{*}{0.3283} & \multirow{2}{*}{0.2683} & \multirow{2}{*}{\textbf{0.4496}} & \multirow{2}{*}{0.2543} & \multirow{2}{*}{0.2839} & \multirow{2}{*}{0.2528} & \multirow{2}{*}{0.2826} & \multirow{2}{*}{0.2278} & \multirow{2}{*}{0.2575} \\ (60 images) & & & & & & & & & & & & & \\\hline \end{tabular} } \label{tab:HDRNaturalness} \end{table*} \begin{table*}[!t] \centering \caption{Experimental results for Simulation 1 (CIEDE2000) ``MEF,'' and ``CE'' indicate multi-exposure fusion and contrast enhancement, respectively.} {\small \begin{tabular}{l|r|rrr|rrr|rr|rr|rr} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{10mm}{Input image} & \multicolumn{3}{c|}{MEF} & \multicolumn{3}{c|}{CE} & \multicolumn{6}{c}{Proposed}\\\cline{3-14} & & \multicolumn{1}{c}{\cite{mertens2009exposure}} & \multicolumn{1}{c}{\cite{sakai2015hybrid}} & \multicolumn{1}{c|}{\cite{nejati2017fast}} & \multicolumn{1}{c}{HE} & \multicolumn{1}{c}{\cite{zuiderveld1994contrast}} & \multicolumn{1}{c|}{\cite{wu2017contrast}} & \multicolumn{2}{c|}{\cite{mertens2009exposure}} & \multicolumn{2}{c|}{\cite{sakai2015hybrid}} & \multicolumn{2}{c}{\cite{nejati2017fast}}\\ &&&&&&&& \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B}\\ \hdashline AtriumNight & 0.000 & 2.872 & 2.816 & 1.628 & 8.769 & 7.536 & 10.127 & 2.231 & 2.511 & 2.208 & 2.490 & \textbf{1.176} & 1.357 \\ MtTamWest & 0.000 & 3.881 & 3.864 & 2.715 & 5.875 & 4.994 & 5.869 & 1.891 & 3.832 & 1.879 & 3.826 & \textbf{1.335} & 2.806 \\ SpheronNapa & 0.000 & 4.565 & 4.561 & 2.821 & 4.204 & 8.724 & 5.024 & 2.346 & 2.627 & 2.334 & 2.617 & \textbf{1.472} & 1.794 \\ Memorial & 0.000 & 2.984 & 2.932 & 3.544 & 6.795 & 9.617 & 9.105 & 1.762 & 2.690 & \textbf{1.742} & 2.682 & 2.443 & 3.213 \\ Rend 11 & 0.000 & 3.447 & 3.403 & 2.947 & 7.418 & 7.343 & 8.766 & 2.892 & 5.582 & 2.862 & 5.560 & \textbf{2.212} & 4.827 \\\hdashline Average & \multirow{2}{*}{0.000} & \multirow{2}{*}{3.550} & \multirow{2}{*}{3.515} & \multirow{2}{*}{2.731} & \multirow{2}{*}{6.612} & \multirow{2}{*}{7.643} & \multirow{2}{*}{7.778} & \multirow{2}{*}{2.224} & \multirow{2}{*}{3.448} & \multirow{2}{*}{2.205} & \multirow{2}{*}{3.435} & \multirow{2}{*}{\textbf{1.727}} & \multirow{2}{*}{2.800} \\ (5 images) & & & & & & & & & & & & & \\\hdashline Average & \multirow{2}{*}{0.000} & \multirow{2}{*}{3.353} & 
\multirow{2}{*}{3.326} & \multirow{2}{*}{2.433} & \multirow{2}{*}{7.527} & \multirow{2}{*}{7.397} & \multirow{2}{*}{8.785} & \multirow{2}{*}{2.417} & \multirow{2}{*}{3.434} & \multirow{2}{*}{2.400} & \multirow{2}{*}{3.424} & \multirow{2}{*}{\textbf{1.912}} & \multirow{2}{*}{2.839} \\ (60 images) & & & & & & & & & & & & & \\\hline \end{tabular} } \label{tab:HDRCIEDE} \end{table*} \begin{table*}[!t] \centering \caption{Experimental results for Simulation 2 (Statistical Naturalness) ``MEF,'' and ``CE'' indicate multi-exposure fusion and contrast enhancement, respectively.} {\footnotesize \begin{tabular}{l|l|lll|lll|ll|ll|ll} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{10mm}{Input image} & \multicolumn{3}{c|}{MEF} & \multicolumn{3}{c|}{CE} & \multicolumn{6}{c}{Proposed}\\\cline{3-14} & & \multicolumn{1}{c}{\cite{mertens2009exposure}} & \multicolumn{1}{c}{\cite{sakai2015hybrid}} & \multicolumn{1}{c|}{\cite{nejati2017fast}} & \multicolumn{1}{c}{HE} & \multicolumn{1}{c}{\cite{zuiderveld1994contrast}} & \multicolumn{1}{c|}{\cite{wu2017contrast}} & \multicolumn{2}{c|}{\cite{mertens2009exposure}} & \multicolumn{2}{c|}{\cite{sakai2015hybrid}} & \multicolumn{2}{c}{\cite{nejati2017fast}}\\ &&&&&&&& \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B}\\ \hdashline Arno & 0.0031 & 0.0264 & 0.0243 & 0.0360 & \textbf{0.2246} & 0.0448 & 0.1291 & 0.0095 & 0.0947 & 0.0092 & 0.0903 & 0.0072 & 0.1200 \\ Cave & 0.0006 & 0.0188 & 0.0174 & 0.0527 & \textbf{0.3231} & 0.0034 & 0.0070 & 0.0004 & 0.0009 & 0.0004 & 0.0011 & 0.0005 & 0.0001 \\ Chinese garden & 0.0772 & 0.1076 & 0.1141 & 0.1341 & \textbf{0.3460} & 0.0880 & 0.2298 & 0.1044 & 0.2267 & 0.1034 & 0.2552 & 0.0904 & 0.1739 \\ Corridor 1 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & \textbf{0.3556} & 0.0000 & 0.0015 & 0.0000 & 0.2112 & 0.0000 & 0.2076 & 0.0000 & 0.2371 \\ Corridor 2 & 0.0000 & 0.0085 & 0.0077 & 0.0053 & \textbf{0.3031} & 0.0006 & 0.0473 & 0.0001 & 0.0854 & 0.0001 & 0.0817 & 0.0000 & 0.1066 \\ Estate rsa & 0.0049 & 0.0458 & 0.0411 & 0.0411 & 0.4502 & 0.1564 & \textbf{0.6606} & 0.0160 & 0.1910 & 0.0149 & 0.1850 & 0.0118 & 0.1641 \\ Kluki & 0.2843 & 0.3584 & 0.3388 & 0.2889 & 0.3526 & 0.4205 & \textbf{0.9720} & 0.3992 & 0.6323 & 0.3852 & 0.6151 & 0.3731 & 0.6129 \\ Laurenziana & 0.4360 & 0.3424 & 0.3261 & 0.3799 & 0.3967 & 0.6133 & \textbf{0.9213} & 0.5328 & 0.8753 & 0.5232 & 0.8799 & 0.4939 & 0.8344 \\ Lobby & 0.0006 & 0.0037 & 0.0032 & 0.0043 & 0.4276 & 0.0031 & 0.0206 & 0.0008 & 0.4635 & 0.0008 & \textbf{0.4733} & 0.0008 & 0.4448 \\ Mountains & 0.2867 & 0.0622 & 0.0563 & 0.0692 & 0.4029 & 0.6072 & \textbf{0.8669} & 0.2741 & 0.1514 & 0.2669 & 0.1483 & 0.3588 & 0.1774 \\ Ostrow tumski & 0.0055 & 0.0199 & 0.0176 & 0.0489 & 0.1545 & 0.0636 & 0.1955 & 0.0119 & 0.3626 & 0.0115 & 0.3478 & 0.0117 & \textbf{0.4887} \\ Window & 0.0020 & 0.0068 & 0.0065 & 0.0070 & 0.2777 & 0.0133 & 0.0397 & 0.0043 & 0.3515 & 0.0042 & 0.3401 & 0.0036 & \textbf{0.4653}\\\hline \end{tabular} } \label{tab:CameraNaturalness} \end{table*} \begin{table*}[!t] \centering \caption{Experimental results for Simulation 2 (CIEDE2000) ``MEF,'' and ``CE'' indicate multi-exposure fusion and contrast enhancement, respectively.} {\small \begin{tabular}{l|r|rrr|rrr|rr|rr|rr} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{10mm}{Input image} & \multicolumn{3}{c|}{MEF} & \multicolumn{3}{c|}{CE} & \multicolumn{6}{c}{Proposed}\\\cline{3-14} & & \multicolumn{1}{c}{\cite{mertens2009exposure}} & 
\multicolumn{1}{c}{\cite{sakai2015hybrid}} & \multicolumn{1}{c|}{\cite{nejati2017fast}} & \multicolumn{1}{c}{HE} & \multicolumn{1}{c}{\cite{zuiderveld1994contrast}} & \multicolumn{1}{c|}{\cite{wu2017contrast}} & \multicolumn{2}{c|}{\cite{mertens2009exposure}} & \multicolumn{2}{c|}{\cite{sakai2015hybrid}} & \multicolumn{2}{c}{\cite{nejati2017fast}}\\ &&&&&&&& \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c|}{B} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B}\\ \hdashline Arno & 0.000 & 8.621 & 8.601 & 10.319 & 12.433 & 8.593 & 12.896 & 3.317 & 12.391 & 3.293 & 12.365 & \textbf{2.289} & 13.228 \\ Cave & 0.000 & 15.858 & 15.826 & 19.969 & 31.178 & 6.045 & 9.757 & 1.353 & 31.862 & \textbf{1.290} & 31.881 & 1.297 & 32.508 \\ Chinese garden & 0.000 & 11.954 & 11.882 & 10.922 & 16.282 & 13.556 & 15.954 & 2.594 & 15.706 & 2.470 & 15.660 & \textbf{2.294} & 15.231 \\ Corridor 1 & 0.000 & 3.794 & 3.785 & 2.551 & 40.235 & 6.738 & 19.344 & 1.347 & 36.950 & 1.335 & 36.948 & \textbf{0.944} & 37.685 \\ Corridor 2 & 0.000 & 22.179 & 22.164 & 19.810 & 30.185 & 9.368 & 24.636 & 3.377 & 27.568 & 3.364 & 27.558 & \textbf{1.812} & 28.086 \\ Estate rsa & 0.000 & 11.064 & 11.025 & 8.969 & 17.134 & 14.656 & 21.380 & 3.916 & 15.092 & 3.877 & 15.071 & \textbf{2.999} & 13.963 \\ Kluki & 0.000 & 11.081 & 11.017 & 5.740 & 3.103 & 12.403 & 12.160 & 2.457 & 5.412 & 2.389 & 5.356 & \textbf{1.870} & 4.945 \\ Laurenziana & 0.000 & 10.809 & 10.789 & 7.449 & 6.372 & 9.849 & 11.054 & 2.097 & 7.696 & 2.032 & 7.667 & \textbf{1.711} & 7.269 \\ Lobby & 0.000 & 8.552 & 8.520 & 8.232 & 33.087 & 7.074 & 16.938 & 1.339 & 31.463 & 1.312 & 31.457 & \textbf{1.022} & 31.529 \\ Mountains & 0.000 & 6.066 & 6.069 & 6.325 & 13.475 & 6.246 & 9.603 & 1.248 & 4.308 & 1.239 & 4.308 & \textbf{0.852} & 4.131 \\ Ostrow tumski & 0.000 & 7.077 & 7.032 & 8.297 & 9.976 & 8.562 & 11.694 & 2.114 & 15.677 & 2.089 & 15.667 & \textbf{1.795} & 17.287 \\ Window & 0.000 & 5.077 & 5.057 & 4.537 & 22.795 & 6.531 & 8.342 & 2.246 & 21.415 & 2.230 & 21.422 & \textbf{1.477} & 21.859\\\hline \end{tabular} } \label{tab:CameraCIEDE} \end{table*} \section{Conclusion} Our proposed method produces pseudo multi-exposure images from a single image, and the use of a local contrast enhancement method improves their quality. The method utilizes the relationship between exposure values and pixel values. Approaches A and B used in the proposed method enable us to avoid color distortions and to strongly enhance the image contrast, respectively. Approach B is available even when the exposure value of an input image is unknown, while approach A is only available when the exposure value is known. Experimental results showed that the proposed method can enhance images as effectively as conventional multi-exposure image fusion methods, without requiring multi-exposure images. In addition, the proposed approach A outperforms typical contrast enhancement methods in terms of color distortion. On the other hand, approach B allows us to strongly enhance the contrast of images without loss of details, unlike conventional contrast enhancement methods. \input{./draft.bbl} \profile{Yuma Kinoshita}{ received his B.Eng. and M.Eng. degrees from Tokyo Metropolitan University, Japan, in 2016 and 2018, respectively. Since 2018, he has been a Ph.D. student at Tokyo Metropolitan University. He received the IEEE ISPACS Best Paper Award in 2016. His research interests are in the area of image processing.
He is a student member of IEEE and IEICE. } \profile{Sayaka Shiota}{ received her B.E., M.E. and Ph.D. degrees in intelligence and computer science, engineering, and engineering simulation from Nagoya Institute of Technology, Nagoya, Japan in 2007, 2009 and 2012, respectively. From February 2013 to March 2014, she worked as an assistant professor. In April 2014, she joined Tokyo Metropolitan University as an Assistant Professor. Her research interests include statistical speech recognition and speaker verification. She is a member of the Acoustical Society of Japan (ASJ), the IEICE, and the IEEE. } \profile{Hitoshi Kiya}{ received his B.Eng. and M.Eng. degrees from Nagaoka University of Technology, Japan, in 1980 and 1982, respectively, and his D.Eng. degree from Tokyo Metropolitan University in 1987. In 1982, he joined Tokyo Metropolitan University as an Assistant Professor, where he became a Full Professor in 2000. From 1995 to 1996, he attended the University of Sydney, Australia as a Visiting Fellow. He was/is the Chair of the IEEE Signal Processing Society Japan Chapter and an Associate Editor for IEEE Trans. Image Processing, IEEE Trans. Signal Processing, and IEEE Trans. Information Forensics and Security. He also serves/served as the President of the IEICE Engineering Sciences Society (ESS), the Editor-in-Chief for IEICE ESS Publications, and the President-Elect of APSIPA. His research interests are in the area of signal and image processing including multirate signal processing and security for multimedia. He received the IWAIT Best Paper Award in 2014 and 2015, the IEEE ISPACS Best Paper Award in 2016, the ITE Niwa-Takayanagi Best Paper Award in 2012, the Telecommunications Advancement Foundation Award in 2011, the IEICE ESS Contribution Award in 2010, and the IEICE Best Paper Award in 2008. He is a Fellow Member of IEEE, IEICE and ITE. } \end{document}
\section{Introduction} How users interact with the Internet has evolved since the birth of Web 2.0. The convenience of storing, publishing and sharing content results in an information overload for users trying to find the information they are interested in. The recommender system, whose purpose is to improve user experiences and help users get information suited to their interests, is one of the commonest modules in web applications. An increasingly influential set of websites such as Delicious, Flickr, Youtube and LinkedIn provide users with the service to tag items such as URL links, movies, photos, etc. The information given by tags reveals users' interests, depicts the items more precisely, and provides more opportunities and resources for data analysis and knowledge discovery. It is therefore natural to exploit the abundant information in tags to recommend interesting items to users in social tagging applications. Although different social tagging applications have different items, they all allow people to store and share interesting content and can be modeled as tripartite graphs. Thus, the tripartite graph, with its three types of nodes, is one of the commonest topological structures in social tagging systems. In social tagging applications, the nodes stand for users, items and tags; users are interested in certain kinds of items, which are annotated with different tags. Figure 1 illustrates the modeled topological structure of the social tagging systems. \begin{figure}[htbp] \centering \includegraphics[width=7cm,height=4.8cm]{Figure1.pdf} \caption{The structure of the social tagging systems.} \end{figure} Recommender systems have been well studied in previous research~\cite{Admavicius.(2005),Aiolli.(2013),Bernardes.(2015),Shi(2014),Bobadilla(2013),Guan(2014),Zeng(2014)}. At the same time, much effort has been devoted to the study of social tagging systems in both the structural and the algorithmic domains~\cite{Schenkel(2008),Ifada(2014),Ramage(2009),Huang(2014)}. The main challenges of constructing a personalized recommender system for social tagging applications are as follows: \begin{itemize} \item \emph{Large volume of data}. In online social tagging systems, there are enormous numbers of users, items and tags. Analysing these data in order to develop a recommender system that improves the user experience places high demands on computational capability, and the time performance of the algorithm needs to be excellent. \item \emph{Diversity and novelty of items and tags}. There are various types of items and tags in social tagging systems such as Delicious and Youtube. It is rarely easy to obtain the semantics of the items and tags, and this difficulty may bring about noisy and inaccurate models \cite{Zhou(2010),Vosecky.(2014)}. \item \emph{Timeliness}. Online applications produce enormous amounts of data continuously. It is necessary to construct models which are fast enough to avoid being overwhelmed by the stream of new information. If the processing time of a model is too long, its results cannot be used because of the timeliness problem. \item \emph{Data sparseness}. In real-world social tagging applications, there are users who have few storing and sharing actions; moreover, many items attract attention only a few times.
Incorporating such data into user modeling methods can result in inaccurate models and slow down the algorithm~\cite{Vosecky.(2014),Lu.(2009)}. \item \emph{Cold start problem}. The cold start problem is the commonest challenge in recommender systems, where recommendations are required for users who have newly signed up to the application. The cold start problems of different online applications have been studied in~\cite{Martins(2013),Schein(2002),Mirbakhsh(2015)}. \end{itemize} In this paper, we address some of the above challenges (e.g., timeliness and data sparseness) by proposing a fast collaborative user model (FCUM) which accelerates ordinary user-based collaborative filtering (UCF) without accuracy loss. We demonstrate the performance of the FCUM by an experimental evaluation on a real-world dataset crawled from the well-known Delicious. In the following paragraphs, we briefly introduce the FCUM and the acceleration method; more details are given in the following sections. \textbf{Fast Collaborative User Model.} The items stored, tagged and shared by users provide rich information about their interests; moreover, the tags themselves also reveal those interests. In the FCUM, we exploit the information from both the items and the tags. If we conducted the recommendation procedure using the information from all users, the UCF model would be noisy and would place high demands on computational capability. Therefore, we extract non-overlapping user clusters and corresponding overlapping item clusters simultaneously and construct the FCUM on top of them. In the experiment, we first compute the scores of the items, which represent how much the users like the items according to the behaviors of other users in the same cluster. Then a ranking procedure is conducted, and we recommend items to the users in each cluster according to the ranklist. Finally, we evaluate the FCUM in terms of accuracy and time performance; the contrastive experiments are described later. \textbf{Acceleration Method.} As mentioned above, we extract non-overlapping user clusters and corresponding overlapping item clusters simultaneously to construct the FCUM for recommending items to users. In this way, we only need to conduct UCF within each cluster separately, which accelerates the ordinary algorithm. During this procedure, we use a K-means-like approach to extract these clusters. First, the users are randomly and evenly distributed among clusters. Then, we update the centroid of each cluster and redistribute the users to clusters in terms of the similarities between the users and the centroids of the clusters. We iterate this procedure only a few times and do not wait for convergence. Finally, the non-overlapping user clusters and corresponding overlapping item clusters are obtained. This coarse clustering procedure separates the useful information from the noise (i.e., redundant information) in the FCUM and accelerates the subsequent UCF. The rest of the paper is organized as follows: Section 2 gives the details of the FCUM, which is the foundation of the fast recommendation algorithm. Section 3 presents the design of our experiments on the Delicious dataset. Section 4 shows the results of the experiments and evaluates the FCUM from several aspects. Finally, in Sec. 5, we conclude our findings and outline future work.
\section{Fast Collaborative User Model} Collaborative filtering has been successfully applied in recommender systems \cite{Admavicius.(2005),Bernardes.(2015),Shi(2014)}. In this section, we introduce the FCUM derived from collaborative filtering, which is an important module of the recommender system for a social tagging application. A social tagging system has a variety of users, items and tags. The behaviors of the users, including resource usages and annotation actions in the applications, can be represented in the user-item-tag triple form. Recommender systems for social tagging applications have been studied in~\cite{Chelmis(2013),Peng(2010),Zhang(2010)}. The main idea of this paper is to construct a FCUM that is \emph{fast} and \emph{accurate} enough for recommendations in social tagging applications. In the following paragraphs, we give the basic notation used in this paper. As shown in Fig. 1, the social tagging system is denoted as a tripartite graph, \begin{equation} G_{urt} = (U,R,T,E_{ur},E_{rt},E_{ut}) \label{eq:1} \end{equation} where $U$, $R$, $T$ stand for the finite sets of users, items and tags, and $E_{ur}$, $E_{rt}$ and $E_{ut}$ (the last of which is projected directly from the original graph) describe the finite sets of edges between users and items, items and tags, and users and tags, respectively. In real-world online social tagging systems, these graphs are very sparse. From the users' perspective, the tripartite graph can be projected into two bipartite graphs, the user-item bipartite graph and the user-tag bipartite graph: \begin{equation} G_{ur} = (U,R,E_{ur}), \label{eq:2} \end{equation} \begin{equation} G_{ut} = (U,T,E_{ut}). \label{eq:3} \end{equation} Thus, the users can be characterized by the resource usage information (i.e., items) and the annotation action information (i.e., tags). The projected bipartite graphs are illustrated in Fig.2(a) and Fig.2(b), respectively. \begin{figure}[htbp] \centering \subfigure[the user-item bipartite graph.]{ \includegraphics[width=0.48\columnwidth]{Figure2a.pdf}} \subfigure[the user-tag bipartite graph.]{ \includegraphics[width=0.465\columnwidth]{Figure2b.pdf}} \caption{The projected bipartite graphs from the tripartite graph.} \end{figure} In other words, the interests of a user can be represented as two vectors: the user-item vector and the user-tag vector. The user-item vector is denoted as \begin{equation} \vv{V_{u_{i}}^{R}} = (e_{u_{i}}^{r_{1}}, e_{u_{i}}^{r_{2}}, ... , e_{u_{i}}^{r_{N_{R}}}) \label{eq:4} \end{equation} where $\vv{V_{u_{i}}^{R}}$ represents the characteristics of user $i$ in terms of items, $N_{R}$ is the total number of items, and $e_{u_{i}}^{r_{j}}$ is defined as \begin{equation} e_{u_{i}}^{r_{j}} = \begin{cases} 1, & \text{user $i$ tagged item $j$,} \\ 0, & \text{otherwise.} \\ \end{cases} \label{eq:5} \end{equation} Similarly, the user-tag vector is denoted as \begin{equation} \vv{V_{u_{i}}^{T}} = (e_{u_{i}}^{t_{1}}, e_{u_{i}}^{t_{2}}, ... , e_{u_{i}}^{t_{N_{T}}}) \label{eq:6} \end{equation} where $\vv{V_{u_{i}}^{T}}$ represents the characteristics of user $i$ in terms of tags, $N_{T}$ is the total number of tags, and $e_{u_{i}}^{t_{j}}$ is defined as \begin{equation} e_{u_{i}}^{t_{j}} = \begin{cases} 1, & \text{user $i$ used tag $j$,} \\ 0, & \text{otherwise.} \\ \end{cases} \label{eq:7} \end{equation} The item nodes and the tag nodes can be modeled in the same way as above.
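As an illustrative sketch (for the data volumes discussed in Sec. 1, a sparse matrix representation would be preferable in practice), the binary matrices corresponding to eqs. (\ref{eq:4})--(\ref{eq:7}) can be built from the edge lists of $G_{ur}$ and $G_{ut}$:
\begin{verbatim}
import numpy as np

def user_vectors(edges_ur, edges_ut, n_users, n_items, n_tags):
    # Rows are users; V_R[i] and V_T[i] are the two interest vectors.
    V_R = np.zeros((n_users, n_items))
    V_T = np.zeros((n_users, n_tags))
    for u, r in edges_ur:      # user u tagged item r
        V_R[u, r] = 1.0
    for u, t in edges_ut:      # user u used tag t
        V_T[u, t] = 1.0
    return V_R, V_T

V_R, V_T = user_vectors([(0, 1), (1, 2)], [(0, 0), (1, 0)],
                        n_users=2, n_items=3, n_tags=2)
\end{verbatim}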
Although the FCUM is applicable to arbitrary tripartite graphs, in this paper we concentrate on the characteristics of the users and on recommending items to them. The model can easily be extended, and the framework of the fast recommendation algorithm for social tagging systems is illustrated in Fig.3. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{Figure3.pdf} \caption{The framework of our fast recommendation algorithm for social tagging systems.} \end{figure} \subsection{Similarity} Memory-based collaborative filtering techniques (e.g. UCF) and clustering algorithms rely on the notion of similarity between pairs of users \cite{Admavicius.(2005),Bernardes.(2015),Nanopoulos(2009)}. The similarity between user $i$ and user $j$ can be measured in many ways, e.g., by the Pearson correlation coefficient \cite{Admavicius.(2005)}, cosine similarity \cite{Admavicius.(2005),Bernardes.(2015),Nanopoulos(2009)} or Euclidean distance \cite{Nanopoulos(2009)}. The influence of high-dimensional and sparse data on Euclidean distances has been studied in \cite{Nanopoulos(2009),Francois(2007)}: when the data is high dimensional and sparse, the Euclidean distances tend to concentrate, so that the distances between all pairs of data elements become very similar. Moreover, the Pearson correlation coefficient is more suitable for scoring systems than for tagging systems. Hence, the cosine similarity is used here, defined as \begin{equation} cos\textrm{-}sim(u_i,u_j) = \frac{\vv{V_{u_i}} \bm\cdot \vv{V_{u_j}}}{\Vert \vv{V_{u_i}} \Vert \Vert \vv{V_{u_j}} \Vert}. \label{eq:8} \end{equation} Both the resource usages and the annotation actions reveal the interests of the users in social tagging systems. In this paper, we jointly take the resource usages and annotation actions into account when calculating the similarity of two users. It is computed as \begin{equation} sim(u_i,u_j) = \beta \frac{\vv{V_{u_i}^R} \bm\cdot \vv{V_{u_j}^R}}{\Vert \vv{V_{u_i}^R} \Vert \Vert \vv{V_{u_j}^R} \Vert} + (1 - \beta) \frac{\vv{V_{u_i}^T} \bm\cdot \vv{V_{u_j}^T}}{\Vert \vv{V_{u_i}^T} \Vert \Vert \vv{V_{u_j}^T} \Vert} \label{eq:9} \end{equation} where $\beta$ is a parameter ranging from 0 to 1. Therefore, we evaluate the similarity of two users by means of the improved cosine similarity in Eq.(\ref{eq:9}). In the FCUM, the value of $\beta$ is set to 0.5 because the two types of cosine similarity follow a similar distribution in the Delicious dataset.
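An illustrative sketch of Eq.(\ref{eq:9}), reusing the matrices $V_R$ and $V_T$ built above (the $\epsilon$ guard for all-zero vectors is an implementation assumption, not part of the formulation):
\begin{verbatim}
import numpy as np

def cos_sim(a, b, eps=1e-12):
    # Cosine similarity with a guard for all-zero vectors.
    return (a @ b) / max(np.linalg.norm(a) * np.linalg.norm(b), eps)

def user_similarity(V_R, V_T, i, j, beta=0.5):
    # Blend the item view and the tag view of the two users.
    return (beta * cos_sim(V_R[i], V_R[j])
            + (1 - beta) * cos_sim(V_T[i], V_T[j]))
\end{verbatim}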
\subsection{Cluster Extraction} The most crucial part of the FCUM is cluster extraction. In order to accelerate the subsequent UCF, we need to extract useful information from the massive data of social tagging applications. In this subsection, we describe the extraction procedure in detail and give its pseudo-code. Clustering algorithms are used in many domains, e.g., to detect clusters in social tagging systems \cite{Ramage(2009),Lu.(2009)}, to handle image segmentation \cite{Isa(2009)}, and to extract social dimensions \cite{Tang.(2010)}. In the FCUM, a coarse clustering algorithm, whose similarity measure is based on Eq.(\ref{eq:9}), is used to extract useful information that accelerates the recommendation procedure without loss of accuracy. It partitions the users of the social tagging application into non-overlapping clusters; subsequently, the items are divided into corresponding overlapping clusters. Figure 4 gives a visual view of the result. \begin{figure}[htbp] \centering \includegraphics[width=9cm,height=5cm]{Figure4.pdf} \caption{The result of the coarse clustering procedure.} \end{figure} The steps of the coarse clustering algorithm are similar to those of the K-means approach. The purpose of the cluster extraction procedure is not to obtain convergent user/item clusters of the social tagging system, but to extract \emph{only} part of the user/item information in order to accelerate the subsequent UCF. Therefore, it is unnecessary to iterate the algorithm to convergence. More concretely, looking back at the K-means approach~\cite{Jain(1999),Kanungo(2002),Kriegel(2009)}, the first step distributes the nodes to arbitrary clusters, and the second step, which is normally iterated many times until convergence, calculates the centroid of each cluster and redistributes each node to a new cluster based on the similarity between the node and the centroids. In the present work, the second step only needs a \emph{small} number of iterations to coarsely cluster the users and items. The experimental results (presented in detail in Sec. 4) show that this shortcut (even with the iteration count set to 2) barely affects the accuracy indicators of the recommender system. In this way, we accelerate the cluster extraction procedure. To keep our description of the K-means-like approach self-contained, we spell out the operating process. Let $K_c$ be the number of user clusters, and let $C_j^U$ $(1\leq j \leq K_c)$ denote the user cluster with index $j$. Each user is characterized by the two vectors in Eq.(\ref{eq:4}) and Eq.(\ref{eq:6}). The centroid of each user cluster therefore consists of two parts: \begin{equation} \vv{cent_{C_j^U}^R} = \frac{1}{N_{C_j^U}}\sum_{u_i \in C_j^U} \vv{V_{u_i}^R}, \label{eq:10} \end{equation} \begin{equation} \vv{cent_{C_j^U}^T} = \frac{1}{N_{C_j^U}}\sum_{u_i \in C_j^U} \vv{V_{u_i}^T}, \label{eq:11} \end{equation} where $N_{C_j^U}$ stands for the number of users in user cluster $j$. Then, the similarity between a user and a centroid is computed as: \begin{equation} sim(u_i,C_j^U) = \gamma \frac{\vv{V_{u_i}^R} \bm\cdot \vv{cent_{C_j^U}^R}}{\Vert \vv{V_{u_i}^R} \Vert \Vert \vv{cent_{C_j^U}^R} \Vert} + (1-\gamma)\frac{\vv{V_{u_i}^T} \bm\cdot \vv{cent_{C_j^U}^T}}{\Vert \vv{V_{u_i}^T} \Vert \Vert \vv{cent_{C_j^U}^T} \Vert}, \label{eq:12} \end{equation} where $\gamma$ is a parameter ranging from 0 to 1. Since $\frac{\vv{V_{u_i}^R} \bm\cdot \vv{cent_{C_j^U}^R}}{\Vert \vv{V_{u_i}^R} \Vert \Vert \vv{cent_{C_j^U}^R} \Vert}$ and $\frac{\vv{V_{u_i}^T} \bm\cdot \vv{cent_{C_j^U}^T}}{\Vert \vv{V_{u_i}^T} \Vert \Vert \vv{cent_{C_j^U}^T} \Vert}$ follow similar distributions, $\gamma$ is set to 0.5. On this basis, we extract the non-overlapping user clusters and the corresponding overlapping item clusters (each item cluster is induced by the user-item bipartite graph, which means that some items appear in several clusters). Algorithm 1 shows the pseudo-code; a Python sketch is given below. The time complexity of Algorithm 1 is $O(T(K_c+N_U)(N_R+N_T))$, where $T$ is the number of iterations, $K_c$ the number of user clusters, and $N_U$, $N_R$, $N_T$ the numbers of users, items and tags.
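As a concrete companion to Algorithm 1 and Eqs.~(\ref{eq:10})--(\ref{eq:12}), the following sketch (reusing \texttt{cos\_sim} from the previous snippet) deliberately runs a small, fixed number of iterations with no convergence test:
\begin{verbatim}
# Minimal sketch: coarse clustering with a fixed, small iteration count.
import numpy as np

def coarse_cluster(V_R, V_T, k_c, iter_time=2, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_users = V_R.shape[0]
    labels = rng.integers(0, k_c, size=n_users)   # random initial clusters
    for _ in range(iter_time):
        sims = np.zeros((n_users, k_c))
        for j in range(k_c):
            members = labels == j
            if not members.any():
                continue                          # empty cluster: sims stay 0
            c_R = V_R[members].mean(axis=0)       # Eq. (10)
            c_T = V_T[members].mean(axis=0)       # Eq. (11)
            for i in range(n_users):              # Eq. (12)
                sims[i, j] = (gamma * cos_sim(V_R[i], c_R)
                              + (1.0 - gamma) * cos_sim(V_T[i], c_T))
        labels = sims.argmax(axis=1)              # reassign every user
    return labels
\end{verbatim}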
In the following, the UCF procedure only recommends items in $C_j^R$ (item cluster) to the associated users in $C_j^U$ (user cluster), which greatly accelerates the recommendation process. \floatname{algorithm}{Algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[!htb] \caption{user and item clusters extraction} \begin{algorithmic}[1] \Require the user-item bipartite graph $G_{ur}$; the user-tag bipartite graph $G_{ut}$; the number of user clusters $K_c$; the number of iteration times $iterTime$; \Ensure the non-overlapping user clusters $C_j^U$; the corresponding overlapping item clusters $C_j^R$; \State assign each user to a random cluster \While {$iterTime > 0$} \State calculate the centroids of the clusters \For {$j = 1 \to K_c$} \State $temp\textrm{-}C_j^U = \varnothing$ \EndFor \For {each user $u_i$} \For {$j = 1 \to K_c$} \State calculate the $sim(u_i,C_j^U)$ \EndFor \State find the index $j$ of the cluster which maximizes $sim(u_i,C_j^U)$ \State $temp\textrm{-}C_j^U$ = $temp\textrm{-}C_j^U \cup \{u_i\}$ \EndFor \For {$j = 1 \to K_c$} \State $C_j^U = temp\textrm{-}C_j^U$ \EndFor \State $iterTime = iterTime - 1$ \EndWhile \For {$j = 1 \to K_c$} \State $C_j^R = \varnothing$ \For {each user $u_i$ in $C_j^U$} \For {each resource usage $u_i\textrm{-}r_k$ in $G_{ur}$} \State $C_j^R = C_j^R \cup \{r_k\}$ \EndFor \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{User Based Collaborative Filtering} UCF has been used to recommend movies, songs, jobs, books and other products in e-commerce systems, online social systems and other types of online applications~\cite{Admavicius.(2005),Aiolli.(2013),Bernardes.(2015)}. In this procedure, each user obtains an item ranklist whose top items the user is likely to store, tag and share. To rank the items, the score function is defined as follows: \begin{equation} score(u_i,r_k | u_i \in C_j^U) = \begin{cases} \sum_{u_s \in C_j^U}f[sim(u_i,u_s)|r_k], & e_{u_i}^{r_k} = 0 \\ -1, & otherwise \\ \end{cases} \label{eq:13} \end{equation} \begin{equation} f[sim(u_i,u_s)|r_k] = \begin{cases} sim(u_i,u_s), & e_{u_s}^{r_k} = 1 \\ 0, & otherwise \\ \end{cases} \label{eq:14} \end{equation} After the calculation, we sort the scores and obtain the ranklist for every user. Because a higher score indicates that the user is more likely to store, tag and share the item, we recommend the items with the highest user-specific scores. The time complexity of plain UCF is $O(N_U(N_UN_R+N_T))$, where $N_U$, $N_R$, and $N_T$ are the numbers of users, items and tags, respectively. In the FCUM, however, the time cost is $O(\sum_{j=1}^{K_c}N_{C_j^U}(N_{C_j^U}N_{C_j^R}+N_{C_j^T}))$, where $N_{C_j^U}$ stands for the number of users in the $j$th user cluster, $N_{C_j^R}$ denotes the number of items in the $j$th item cluster, and $N_{C_j^T}$ is the number of tags associated with the users in the $j$th user cluster. Since $\sum_{j=1}^{K_c}N_{C_j^U} = N_U$, the time cost of the proposed algorithm is lower than that of the ordinary one, even when the time cost of the coarse clustering procedure is included.
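To illustrate Eqs.~(\ref{eq:13}) and (\ref{eq:14}), a sketch of the cluster-restricted scoring (again with our hypothetical helper names; already-used items are simply skipped, which is equivalent to giving them the sentinel score $-1$):
\begin{verbatim}
# Minimal sketch: UCF scoring restricted to one user/item cluster pair.
def score_items(i, cluster_users, item_cluster, V_R, V_T, k=20):
    """Return the top-k item indices recommended to user i."""
    scores = {}
    for r in item_cluster:
        if V_R[i, r] == 1:
            continue                        # Eq. (13): user already has item r
        s = 0.0
        for u in cluster_users:
            if u != i and V_R[u, r] == 1:   # Eq. (14): only users who used r
                s += user_similarity(V_R[i], V_R[u], V_T[i], V_T[u])
        scores[r] = s
    return sorted(scores, key=scores.get, reverse=True)[:k]
\end{verbatim}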
\subsection{Extended Model} In this paper, we propose the foundation of the FCUM, which only uses information from resource usages and annotation actions to construct the model. However, in real-world social tagging applications we can also obtain information from user profiles, explicit user relationships, and implicit relationships among users and items, such as two users both clicking one link without tagging it, or the IP addresses of two users being located in the same city. These features can be represented in vector form, as in Eq.(\ref{eq:4}) and Eq.(\ref{eq:6}), and then added to the FCUM. Thus, the similarity between users can be extended as: \begin{equation} sim(u_i,u_j)= \sum_{k=1}^{N_f}\beta_k\frac{\vv{V_{u_i}^{f_k}} \bm\cdot \vv{V_{u_j}^{f_k}}}{\Vert \vv{V_{u_i}^{f_k}} \Vert \Vert \vv{V_{u_j}^{f_k}} \Vert} \label{eq:15} \end{equation} \begin{equation} \sum_{k=1}^{N_f}\beta_k = 1, \label{eq:16} \end{equation} where $N_f$ is the number of feature vectors, $\vv{V_{u_i}^{f_k}}$ is the $k$-th feature vector of user $u_i$, and $\beta_k$ is a parameter ranging from 0 to 1. Similarly, in the cluster extraction procedure introduced in Sec. 2.2, the centroid of each cluster can be represented as $N_f$ vectors. In this way, a hybrid FCUM \cite{Jin(2011)} can be constructed. \section{Experimental Design} \subsection{Evaluation Indicators} To evaluate whether the recommended items meet the users' interests under the FCUM, we divide the available dataset into training and testing subsets according to the timestamps. Three common evaluation indicators, recall, precision and F1-score, are used \cite{Bernardes.(2015)}: \begin{equation} recall@k = \frac{1}{N_U} \sum_{i=1}^{N_U} \frac{\Vert R_{u_i}^k \cap T_{u_i} \Vert}{\Vert T_{u_i} \Vert} \label{eq:17} \end{equation} \begin{equation} precision@k = \frac{1}{N_U} \sum_{i=1}^{N_U} \frac{\Vert R_{u_i}^k \cap T_{u_i} \Vert}{k} \label{eq:18} \end{equation} \begin{equation} f_1@k = \frac{2 \times precision@k \times recall@k}{precision@k + recall@k} \label{eq:19} \end{equation} where $k$ is the ranklist (i.e., recommendation list) length, $R_{u_i}^k$ is the finite set of items $\{r_{u_i}^1, r_{u_i}^2, ..., r_{u_i}^k\}$ recommended to user $u_i$, and $T_{u_i}$ is the test set for user $u_i$. The recall describes the true positive rate, the precision corresponds to the positive predictive value, and the F1-score combines recall and precision into their harmonic mean.
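A short sketch of how Eqs.~(\ref{eq:17})--(\ref{eq:19}) can be computed (the dictionary layout of the inputs is our assumption):
\begin{verbatim}
# Minimal sketch: recall@k, precision@k and F1@k (Eqs. 17-19).
import numpy as np

def evaluate(ranklists, test_sets, k):
    """ranklists: {user: ordered item list}; test_sets: {user: set of items}."""
    recalls, precisions = [], []
    for u, ranked in ranklists.items():
        hits = len(set(ranked[:k]) & test_sets[u])
        recalls.append(hits / len(test_sets[u]))
        precisions.append(hits / k)
    r, p = np.mean(recalls), np.mean(precisions)
    f1 = 0.0 if r + p == 0.0 else 2.0 * p * r / (p + r)
    return r, p, f1
\end{verbatim}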
\subsection{Dataset and Platform} In real-world social tagging systems, the tripartite graphs are usually sparse: many users have only a few resource usages and annotation actions, and many items are stored, tagged and shared only a few times. This obviously influences both accuracy and time performance. Before conducting the experiments, the original dataset is therefore pre-processed. The original dataset was crawled from a real-world social tagging system, Delicious. It contains 1867 users, 69226 URLs (items), 53388 tags, and 437595 user-URL-tag triples. In the pre-processing, we filter out user, resource and tag nodes whose degrees are lower than a threshold by iteratively removing these nodes and their edges from the graph. For example, when the threshold equals 5, the filtered graph contains 1617 users, 21983 URLs, 5301 tags and 236659 user-URL-tag triples. Note that we also consider other thresholds in the experiments below. After that, according to the timestamps, the filtered dataset is divided into an 80$\%$ training subset and a 20$\%$ testing subset in terms of the user-URL-tag triples. Moreover, in this procedure we make sure that every user appears in the testing subset. Table 1 summarizes the numbers of users, URLs, tags and user-URL-tag triples in the training set, in the testing set, and in total. The experimental platform is a notebook computer with 4 AMD A10-4600M APU cores and 4GB DRAM. Its operating system is ArchLinux with Linux kernel 4.1.5. We use g++ 5.2.0 with the $-O2$ compiler optimization level. \begin{table}[htbp] \centering \begin{tabular}{| m{0.15\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.3\columnwidth}<{\centering}|} \hline & users & URLs & tags & user-URL-tag triples \\ \hline training set & 1617 & 20338 & 5299 & 188671 \\ \hline testing set & 1617 & 8055 & 4758 & 47988 \\ \hline total & 1617 & 21983 & 5301 & 236659 \\ \hline \end{tabular} \caption{The statistics of the filtered dataset (degree threshold = 5).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{| m{0.17\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering} | m{0.1\columnwidth}<{\centering}| m{0.12\columnwidth}<{\centering} | } \hline indicators & $n = 2$ & $n = 4$ & $n = 6$ & $n = 8$ & $n = 10$ & \textbf{UCF} \\ \hline $recall@5$ & \textbf{0.11916} & 0.11676 & 0.11197 & 0.11324 & 0.11524 & 0.11146 \\ $precision@5$ & \textbf{0.05244} & 0.05071 & 0.04799 & 0.04836 & 0.05022 & 0.0491 \\ $F_1@5$ & \textbf{0.07283} & 0.07071 & 0.06718 & 0.06778 & 0.06995 & 0.06817 \\ \hline $recall@10$ & \textbf{0.14772} & 0.14536 & 0.14341 & 0.14367 & 0.14441 & 0.14268 \\ $precision@10$ & \textbf{0.03822} & 0.03643 & 0.0363 & 0.03618 & 0.03643 & 0.03599 \\ $F_1@10$ & \textbf{0.06073} & 0.05825 & 0.05794 & 0.0578 & 0.05818 & 0.05748 \\ \hline $recall@15$ & \textbf{0.16099} & 0.16033 & 0.15899 & 0.15825 & 0.15863 & 0.158 \\ $precision@15$ & \textbf{0.03051} & 0.03039 & 0.02985 & 0.02964 & 0.02997 & 0.02956 \\ $F_1@15$ & \textbf{0.0513} & 0.05109 & 0.05026 & 0.04993 & 0.05042 & 0.0498 \\ \hline $recall@20$ & \textbf{0.16829} & 0.16751 & 0.16709 & 0.16673 & 0.1667 & 0.1676 \\ $precision@20$ & \textbf{0.02557} & 0.02548 & 0.02514 & 0.02523 & 0.02536 & 0.02566 \\ $F_1@20$ & \textbf{0.0444} & 0.04423 & 0.0437 & 0.04383 & 0.04402 & 0.04451 \\ \hline \end{tabular} \caption{The influence of the number of iterations on performance.} \end{table} \section{Results and Discussion} In order to extract the non-overlapping user clusters and corresponding overlapping item clusters within a relatively short time, the number of iterations is set to no more than 10. Moreover, the degree threshold mentioned in Sec. 3.2 is set to 5, given the sparseness of the graph. Firstly, we explore the influence of the number of iterations on the evaluation indicators of the recommender system. In the cluster extraction procedure, the average number of users in each cluster is 90, which yields 18 initial clusters. Note that the experimental results are insensitive to the initial random allocation of users to clusters (see Fig.6). For iteration numbers from n=2 to n=10 (step length 2), the three evaluation indicators of the recommender system are computed as a function of ranklist length and compared with UCF, as shown in Tab. 2 and Fig.5.
\begin{figure}[htbp] \centering \subfigure[recall]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure5a.pdf}} \subfigure[precision]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure5b.pdf}} \subfigure[$f_1$]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure5c.pdf}} \subfigure[time cost@20]{ \includegraphics[width=0.48\columnwidth,height=4.67cm]{Figure5d.pdf}} \caption{(color online) The influence of the number of iterations on performance as a function of ranklist length.} \end{figure} More concretely, Tab. 2 shows that the evaluation indicators are insensitive to the iteration number $n$, as suggested by their similar values at each ranklist length, and that our model does not weaken the accuracy of the recommender system compared with UCF (more details in Fig.5). More importantly, when the iteration number equals 2, the statistical averages of the evaluation indicators show that the FCUM achieves relatively better accuracy (see Fig.5(a)-(c)) for ranklist lengths from 3 to 19, while its time performance is significantly better than that of UCF (see Fig.5(d)): the time cost is reduced by more than 90$\%$. Furthermore, a comprehensive analysis of the indicators in Fig.5 shows that the recall increases and the precision decreases as a function of ranklist length, which makes the F1-score optimal at a ranklist length of 3. \begin{figure}[htbp] \centering \subfigure[recall]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure6a.pdf}} \subfigure[precision]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure6b.pdf}} \subfigure[$f_1$]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure6c.pdf}} \subfigure[time cost]{ \includegraphics[width=0.48\columnwidth,height=4.54cm]{Figure6d.pdf}} \caption{(color online) The influence of the average number of users in each cluster on performance as a function of ranklist length.} \end{figure} As mentioned in the above analysis, we set the average number of users in each cluster to 90 and thereby obtain the initial number of clusters. It is well known that the initial number of clusters affects, to some extent, the final result when a clustering procedure converges. In the FCUM, however, it is unnecessary to let the clustering procedure converge, so we do not care about its convergent result, but only about the performance of the recommender system. Nevertheless, to keep the study self-contained, we still perform experiments to test whether the average number of users in each cluster affects the evaluation indicators. With the iteration number and degree threshold set to 2 and 5, respectively, we vary the average number of users in each cluster from 40 to 100 in steps of 10 and conduct each experiment independently. Figure 6 shows that the average number of users in each cluster has little influence on the evaluation indicators, as suggested by their similar values at each ranklist length, while the time cost remains lower than that of UCF. Through these experiments, we demonstrate that the FCUM not only strongly enhances the efficiency, but also relatively improves the accuracy of the recommender system.
\begin{figure}[htbp] \centering \subfigure[recall]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure7a.pdf}} \subfigure[precision]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure7b.pdf}} \subfigure[$f_1$]{ \includegraphics[width=0.48\columnwidth,height=4.7cm]{Figure7c.pdf}} \subfigure[time cost]{ \includegraphics[width=0.48\columnwidth,height=4.65cm]{Figure7d.pdf}} \caption{(color online) The influence of the degree threshold on performance.} \end{figure} In addition, in the pre-processing procedure we take into account the data sparseness of the social tagging system (or tripartite graph) via the degree threshold of each node. It is therefore worth discussing whether different degree thresholds change the results of the FCUM. We further increase the degree threshold from 6 to 10 and obtain the corresponding filtered tripartite graphs. With the initial settings $n=2$ and $u=90$, a number of contrastive experiments are performed on these filtered tripartite graphs; the results are shown in Fig.7. We find that as the degree threshold increases, the evaluation indicators of the recommendation algorithm improve. However, in these situations only the high-degree nodes are considered, which aggravates the cold start problem. In a real-world recommender system, one needs to trade off the cold start problem, the accuracy of the model and the time complexity; this is application dependent, so we only provide a hint here, and a full discussion of these issues is beyond the scope of this paper. \section{Conclusion} In this paper, we propose a fast and elegant collaborative user model based on cluster extraction to recommend items to users in social tagging systems. The cluster extraction is insensitive to parameters such as the number of iterations and the initial number of clusters. Extensive experiments demonstrate that the recommendation algorithm based on this model is much more efficient, its time cost being reduced by more than 90$\%$, and is relatively more accurate than UCF. Moreover, it exploits both the resource usage and the annotation action information, and can be extended to use further information, represented as vectors, extracted from social tagging applications. As future work, we plan to characterize each user in a social tagging system not only by resource usages and annotation actions, but also by explicit or implicit relationships between users, such as personal attributes, friend relationships and follow relationships, which may help further improve the performance of the recommender system. \section*{Acknowledgements} This work is partially supported by the National Natural Science Foundation of China (Grant Nos. 61370150 and 61433014) and the Special Project of Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2013TD0006). \section*{References}
\section{Introduction} Gravitational microlensing provides a unique window on extrasolar planetary systems, with sensitivity to cool, low-mass planets \citep{1996ApJ...472..660B,2006Natur.439..437B, 2006ApJ...644L..37G, 2008ApJ...684..663B,2008A&A...483..317K, 2010ApJ...710.1641S} that are currently well beyond the reach of other methods. Microlensing is also sensitive to planets orbiting very faint stars, and hence to spectral types not routinely examined with other techniques. Microlensing occurs when a foreground (lens) star passes close to the line of sight to a background (source) star. The gravity of the foreground star acts as a magnifying lens, increasing the apparent brightness of the background star as it gets close to the line of sight. A planetary companion to the lens star will induce a perturbation to the microlensing light curve with a duration that scales with the square root of the planet mass, lasting typically a few hours for an Earth and a few days for a Jupiter \citep{1992ApJ...396..104G,1991ApJ...374L..37M,1964PhRv..133..835L}. Hence planets are now routinely discovered by dense photometric sampling of ongoing microlensing events. The inverse problem, finding the properties of the lensing system from an observed light curve, is a complex non-linear one within a wide parameter space. The planet/star mass ratio and the projected star-planet separation can usually be measured with high precision. However, in the absence of higher order effects such as parallax motion and/or extended source effects, there are in general no direct constraints on the physical masses and orbits of the planetary system. In the least informative case, model distributions for the spatial mass density of the Milky Way, the velocity distribution of potential lens and source stars, and a mass function of the lens stars are required, via a Bayesian analysis, to derive probability distributions for the masses of the planet and the lens star, their distance, as well as the orbital radius and period of the planet. With complementary high angular resolution observations, currently done either with HST or with adaptive optics, it is possible to obtain additional strong constraints on the system parameters and determine masses to about 10\%. This can be done by directly measuring the light coming from the lens and by measuring the lens-source relative proper motion \citep{2006ApJ...647L.171B,2007ApJ...660..781B,2010ApJ...713..837B, 2008Sci...319..927G, 2009ApJ...695..970D,2010ApJ...711..731J}. An extrasolar planet with a best-fit mass ratio of $q \sim 2 \times 10^{-4}$ was discovered in the microlensing event MOA 2007-BLG-192 \citep{2008ApJ...684..663B}. The best-fit microlensing model shows both microlensing parallax and finite source effects. Combining these, we obtained lens masses of $M_l = 0.06^{+0.028}_{-0.021}\,M_\odot$ for the primary and $3.3^{+4.9}_{-1.6}\,M_\oplus$ for the planet. The incomplete light curve coverage of the planetary anomaly led to a significant degeneracy in the lens models, and the lack of strong constraints on the source size resulted in a poorly determined Einstein radius. Together, these resulted in rather large uncertainties in the physical parameter estimates of the system. Additional constraints are required to exclude competing microlens solutions and to refine our knowledge of the physical parameters of the system. Such constraints on the masses and parameters of the system can be obtained from high angular resolution imaging.
Most microlensing events only provide a single parameter, the Einstein ring crossing time $t_E$, which depends on the mass of the lens system $M_L$, its distance $D_L$, the source distance $D_S$ and their relative velocity. However, when the relative lens-source proper motion $\mu_{rel}$ can be determined, this yields the angular Einstein ring radius $\theta_E= \mu_{rel}t_E$. Moreover, $\theta_E$ is linked to the lens system mass by \begin{equation} M_L = {c^2\over 4G} \theta_E^2 {D_S D_L\over D_S - D_L}\ . \label{eq-mdl1} \end{equation} Therefore, since the distance of the source $D_S$ is known from its magnitude and colors, Equation \ref{eq-mdl1} is a mass-distance relation for the lens star. Another constraint is needed to obtain a complete solution to the microlensing event. This can be achieved by directly detecting light from the planetary host star (the lens). Combining this measurement with Equation \ref{eq-mdl1} and a mass-luminosity relation yields the mass of the lens. This has already been done for several microlensing events where the system is composed of a star and a gaseous planet \citep{2004ApJ...606L.155B,2006ApJ...647L.171B,2005ApJ...628L.109U, 2009ApJ...695..970D, 2008Sci...319..927G,2010ApJ...711..731J}.\\
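As a purely numerical illustration of Equation \ref{eq-mdl1} (the input values below are round numbers of the right order for this event, chosen by us for illustration, not fitted results):
\begin{verbatim}
# Minimal sketch: lens mass from the mass-distance relation of Eq. (1).
import math

G, c = 6.674e-11, 2.998e8          # SI units
PC = 3.0857e16                     # metres per parsec
MAS = math.radians(1.0 / 3.6e6)    # radians per milliarcsecond
M_SUN = 1.989e30                   # kg

def lens_mass(theta_E_mas, D_L_pc, D_S_pc):
    theta_E = theta_E_mas * MAS
    D_L, D_S = D_L_pc * PC, D_S_pc * PC
    return c**2 / (4.0 * G) * theta_E**2 * D_S * D_L / (D_S - D_L) / M_SUN

# e.g. theta_E ~ 1 mas, D_L ~ 700 pc, D_S ~ 7510 pc  ->  ~0.09 M_sun
print(lens_mass(1.0, 700.0, 7510.0))
\end{verbatim}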
We observed \event~ in JHK using adaptive optics on the VLT while it was still amplified by a factor of 1.23, and again when the microlensing was over. Here, we combine the NACO JHK flux measurements at these two epochs with the color estimate of the source star \citep{2010ApJ...710.1800G} and the microlensing model \citep{2008ApJ...684..663B} to disentangle the flux coming from the source and from the lens star, and thereby refine the estimates of the parameters of the system. \section{The data set} We obtained JHKs measurements using the NACO AO system mounted on Yepun during the night of 6/7 Sept. 2007, while the source star was still magnified by a factor of 1.23. AO corrections were performed on a natural guide star \footnote{The LGSF, which in theory should have yielded better performance, was not available at that time. However, the pros and cons of LGS vs. NGS have to be evaluated on a case-by-case basis, since in the crowded fields of microlensing targets one often finds suitable NGS references that may give even better corrections than the LGSF, according to the ETC observation preparation software.} and observations with the S27 objective (27" x 27" FOV, pixelscale=0.02715") were conducted in jitter mode with multiple exposures at random offsets within 10" of the target. In the absence of a suitable "empty" sky patch close to the target, this strategy was chosen to ensure an accurate estimation of the sky background and to filter out bad pixels. The second-epoch observations were obtained with the same observing strategy more than 22 months later, with the event at baseline, i.e. when the source was no longer magnified. An overview of the NACO data set is given in Table~\ref{NACOdatable}.\\ To perform absolute calibration of the NACO images, we obtained $90 \times 10$ s dithered images in JHKs with the Japanese/South African IRSF 1.4 m telescope at SAAO (non-AO, ~8' x 8' FOV, pixelscale=0.45") of the \event~field on 29 Aug. 2008, i.e. at a time when the event was at baseline. \subsection{Reduction} Following a "lucky imaging" approach, we visually inspect each of the NACO raw images and remove the ones where the AO correction was obviously poor. The remaining raw frames are then dark-subtracted with darks of exposure times matching the science frames, flatfielded with skyflats, median co-added and sky-subtracted using recipes from the Jitter/Eclipse infrared data reduction package by \cite{1997Msngr..87...19D,1999ASPC..172..333D}. To avoid border effects, we keep only the intersection of the different dither positions of the co-added frames for our photometric analysis.\\ The IRSF data, which are used to calibrate the NACO data, were dark-subtracted, flat-fielded and sky-subtracted using the on-the-mountain pipeline package for the SIRIUS camera \citep{2003SPIE.4841..459N}. \begin{table} \caption{Log of the JHKs NACO data. According to the Paranal night logs, the epoch 1 night was classified as photometric, whereas the epoch 2 observations were taken under clear sky conditions. We give the exposure time, modified Julian Date, airmass and measured full width at half maximum (FWHM) of the co-added frames.}\label{NACOdatable} \centering \begin{tabular}{llllcl} \hline Band & n $\times$~Exp~[s] & MJD & Airmass& FWHM ["] &\\% strehl [$\%$]\\ \hline \multicolumn{6}{c}{\it Epoch 1}\\ \hline J & $6 \times 60$ & 54350.00781250 &1.005& 0.14&\\ H & $20 \times 25$ & 54350.02734375 &1.023& 0.19&\\ Ks & $10 \times 25$ & 54349.98828125&1.002&0.09 &\\ \hline \multicolumn{6}{c}{\it Epoch 2} \\ \hline J & $23 \times 60$& 55036.08593750 & 1.015& 0.34 & \\ H &$22 \times 30$ & 55036.06640625& 1.034 & 0.29 & \\ Ks & $24 \times 30$ &55015.10156250 & 1.088& 0.10 & \\ \hline \end{tabular} \end{table} % \begin{figure} \centering \includegraphics[width=9cm]{IRSF_NACO_CALIB} \caption{{\bf Left:} Extract of the IRSF Ks band image of \event~ used to calibrate the NACO photometry of the $18"\times18"$ intersection field of view of the co-added NACO frames in the Ks band ({\bf right}). \event~is marked with the half cross hair. The stars annotated "1" and "2" serve as psf reference and photometric zeropoint calibrators. The bright stars north of the two references are either too crowded, in the non-linear regime, or too far away from the target. These two reference stars are common to all bands and epochs. } \label{FigNACO_IRSF_CALIB} \end{figure} \begin{figure*} \centering {\includegraphics[ width=9cm]{JK_cmd_IRSF_NACO_mb192_ep1}} { \includegraphics[ width=9cm]{HK_cmd_IRSF_NACO_mb192_ep1} } \caption{{\bf Left}: The (J-Ks, J) CMD in the 2MASS system of the \event~ field, combining the data from the IRSF (within 3' of the target, black points) and NACO (within 18", blue points). In red the photometry of the measured lens+source flux at magnification A=1.23 is displayed, together with the inferred decomposed fluxes of the source (green) and the lens (planetary host star). Overplotted are Marigo et al. (2008) solar metallicity isochrones of ages $\log(t/{\rm yr})= 9.00, 9.88, 10.15$ at a distance modulus of dm=14.38 and estimated extinctions of $A_J=0.72, A_{Ks}=0.29$. {\bf Right}: Same as above but for (H-Ks, H). } \label{FigCMDs_1}% \end{figure*} \begin{figure} \centering { \includegraphics[ width=9cm]{JH_cmd_IRSF_NACO_mb192_ep1}} \caption{Same as Fig.~\ref{FigCMDs_1} but for (J-H,J). } \label{FigCMDs_2} \end{figure} \section{Photometric Analysis} As in our previous analysis of the planetary microlensing event \planetjulia~\citep{2010ApJ...711..731J}, we extract the photometry of the NACO images using Starfinder \citep{2000A&AS..147..335D}. This tool is tailor-made for photometry of AO images of crowded fields.
It creates a numerical PSF template from chosen stars within the frame, which is then used for psf-fitting of all stars in the field. To build our psf reference we chose the star marked "1" in Fig.~\ref{FigNACO_IRSF_CALIB}, based on the following criteria: it is close to the target (within less than 4"), sufficiently bright but well within the linearity regime of the detector, and common to all final reduced JHKs images of both epochs. The IRSF photometry catalog was created with DoPhot \citep{1993PASP..105.1342S}. \subsection{Building a calibration ladder}\label{ladder} The few stars in our small NACO FOV in common with the 2MASS catalog are unfortunately not suited as calibrators for the NACO photometry, since they are either in the non-linear regime of the NACO detector or too blended at the 2MASS plate scale. The data from the IRSF telescope, however, allow us to robustly link the NACO photometry to the 2MASS system via the following route. We first perform the astrometry of the IRSF images with respect to the online 2MASS catalog using GAIA/Skycat and WCSTools. Then, using only stars flagged AAA (highest 2MASS quality flag) in the JHKs bands, we crossmatch the common stars and compute the photometric transformation between the two catalogs by sigma clipping, demanding an astrometric match accuracy better than 0.6" and adopting the filter transformation coefficients derived by \cite{2007PASJ...59..615K}. To minimize the effect of source confusion and blending contamination, we cut off at magnitude 13 for the 2MASS reference stars and sum up the flux of nearby neighbors for the IRSF sources, to account for the much coarser pixel scale of the 2MASS catalog. The psf reference star is contained in the IRSF catalog, as is star "2" (Fig.~\ref{FigNACO_IRSF_CALIB}). We examine their long-term photometric stability in the OGLE database and find that over more than seven years both stars are stable (in the optical I band) at levels of $\lesssim 1\%$, which makes them well suited as zeropoint calibrators of our NACO field. While we adopt star "1" as the final prime photometric calibrator, since star "2" is more crowded, we determine zeropoints from both stars as a consistency check. To account for the different plate scales of NACO and IRSF, we sum up the flux of all NACO sources contained within the IRSF psf. We note that the observing conditions (sky transparency and atmospheric coherence times) for the second-epoch data set were inferior to those of the epoch 1 measurements, so the uncertainties in the absolute zeropoints of epoch 2 are larger. Since we are mainly interested in the relative photometry of the two epochs, however, we can align the epoch 2 photometry with respect to the more accurately calibrated epoch 1. Table \ref{ZPtable} summarizes the transformations determined in this way to calibrate the NACO data with respect to the 2MASS system, and Table \ref{TARGETtable} shows our derived photometry for \event.
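The zeropoint step of this ladder can be summarized by the following sketch (a schematic of sigma-clipped zeropoint fitting, not the actual pipeline code):
\begin{verbatim}
# Minimal sketch: photometric zeropoint from a crossmatched star list,
# with iterative sigma clipping of outliers (blends, variables).
import numpy as np

def zeropoint(m_ref, m_inst, nsigma=3.0, niter=5):
    """m_ref: 2MASS magnitudes; m_inst: instrumental magnitudes, same stars."""
    dm = np.asarray(m_ref) - np.asarray(m_inst)
    keep = np.ones(dm.size, dtype=bool)
    for _ in range(niter):
        zp, sig = dm[keep].mean(), dm[keep].std()
        keep = np.abs(dm - zp) < nsigma * sig
    return dm[keep].mean(), dm[keep].std() / np.sqrt(keep.sum())
\end{verbatim}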
\begin{table} \caption{JHKs NACO photometry for \event, i.e. lens+source (no dereddening applied). The absolute photometry error budget is obtained by adding in quadrature the error on the zeropoint, the formal error reported by Starfinder, and the background error as estimated from the scatter between the epoch 1 and epoch 2 comparison stars. For the epoch 1 J and H bands, we adopt the background error estimate derived from the K band, since the poor epoch 2 quality in J and H would overestimate the epoch 1 errors.}\label{TARGETtable} \centering \begin{tabular}{ccccl} \hline Band & J & H & Ks \\ \hline \multicolumn{4}{c}{ {\it {\bf NACO Epoch 1}} }\\ \hline \hline \multicolumn{4}{l}{\it ~calibrated against IRSF $~~~~$ } \\ \hline & $19.209\pm 0.043 $ & $18.281\pm 0.042 $ &$17.948\pm 0.035$ \\ \hline \multicolumn{4}{c}{\it {\bf NACO Epoch 2}} \\ \hline \multicolumn{4}{l}{\it ~calibrated against IRSF $~~~~$} \\ \hline & $19.324\pm 0.073 $ & $18.548\pm 0.112 $ &$17.989\pm 0.038$ \\ \hline $\Delta {~\rm Epochs} $ & $0.115\pm 0.085 $ & $0.267\pm 0.120 $ &$ 0.041 \pm 0.052$ \\ \hline \hline \multicolumn{4}{l}{\it ~aligned with respect to Epoch 1 } \\ \hline & $19.283\pm 0.071 $ & $18.498\pm 0.087 $ &$18.011\pm 0.042$ \\ \hline $\Delta {~\rm Epochs} $ & $0.074\pm 0.083 $ & $0.217\pm 0.097 $ &$ 0.063\pm 0.055$ \\ \hline \end{tabular} \end{table} \section{Results}\label{sec:results} In Fig.~\ref{FigCMDs_1} and ~\ref{FigCMDs_2} we present the color-magnitude diagrams for the combined IRSF and NACO (epoch 1) data. To estimate the interstellar extinction, we first determine the position of the red clump center by taking the median of the color and magnitude distributions inside a window centered on a first-guess position. Then we fit the tip of the Red Giant Branch as given by the isochrones of \cite{2008A&A...482..883M}, adopting the distance modulus $\rm{dm}=14.38\pm 0.07$ found for the \event~field by \cite{2008ApJ...684..663B}. With a best-fit age of $\log(t/{\rm yr})=9.88$, we find the extinction coefficients $A_J=0.72 \pm 0.10$, $A_H=0.46\pm 0.10$, $A_K=0.29 \pm 0.10$, which for this line of sight is consistent with the extinction maps of \cite{1998ApJ...500..525S} and \cite{2006A&A...453..635M}. \subsection{The case for a luminous lens} The standard general microlens lightcurve model is \begin{equation} F(t) = F_{\rm S} A(t) + F_{\rm B}, \label{lc} \end{equation} where $F$ is the measured flux at the telescope, $F_{\rm S}$ is the intrinsic unmagnified source flux, $A(t)$ is the time-dependent magnification given by the lens model, and the blend flux $F_{\rm B}=F_{ \rm L}+F_{\rm Background }$ contains the lens flux $F_{ \rm L}$ and the flux $F_{\rm Background }$ of any unrelated ambient stars within the aperture. While the source flux $F_{\rm S}$ can be determined with high precision from the light curve modeling of the non-AO data, given a large magnification gradient, the background term normally dominates over the lens term in seeing-limited photometry of the crowded Galactic Bulge fields of microlensing. Hence the benefits of conducting high spatial resolution imaging, as we have done with NACO, are obvious. Eliminating, or at least reducing, the contribution of contaminating background sources provides a better estimate of the lens flux and hence of the physical characteristics of the lens system. In \cite{2010ApJ...711..731J} the lens flux could be estimated by comparing a single NACO AO epoch with an excellent seeing-limited light curve in the same passband, from which the source flux had previously been determined with good accuracy. For \planet~we have no such light curve in the NACO passbands, but we have a well determined measurement of the source flux in the I band (Cousins system), which we can transform into the expected source flux for the JHKs bands. Note that while in theory our two-point NACO "light curve" can be used to solve Eq.~\ref{lc} for the lens and source fluxes directly, the resulting uncertainties are very large \citep{2009ApJ...695..970D} due to the very small magnification "lever arm" for our event, so following the path of \cite{2010ApJ...711..731J} is much more accurate.
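For reference, the algebra of the two-point decomposition reads as follows (the input magnitudes are illustrative values chosen to be consistent with the decomposition quoted later in Sec.~\ref{sec:results}, not the measured table entries, precisely because of the noise amplification just described):
\begin{verbatim}
# Minimal sketch: solving F1 = Fs*A1 + Fb, F2 = Fs*A2 + Fb  (Eq. lc).
import math

def mag_to_flux(m):
    return 10.0 ** (-0.4 * m)

def flux_to_mag(f):
    return -2.5 * math.log10(f)

def decompose(m1, m2, A1=1.23, A2=1.00):
    F1, F2 = mag_to_flux(m1), mag_to_flux(m2)
    Fs = (F1 - F2) / (A1 - A2)      # source flux
    Fb = F1 - Fs * A1               # blend (lens + background) flux
    return flux_to_mag(Fs), flux_to_mag(Fb)

# illustrative Ks-band epoch magnitudes -> source ~18.5, blend ~19.2
print(decompose(17.91, 18.06))
\end{verbatim}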
First, however, the two epochs can be used, without knowledge of the source flux, to check whether there is an indication that we detect light from the lens, as follows. The expected magnification gradient between the two NACO epochs, based on the best-fit model of \cite{2008ApJ...684..663B}, is $\Delta m = 0.230 \pm 0.015$~mag. Note that this gradient is essentially the same for all competing planetary models, since the first epoch was taken close to the baseline of the event, where the single-lens approximation describes the data very well. If the lens is dark and no unrelated source is contaminating our photometry (see Sec.~\ref{alt}), we would then expect to measure this difference in the relative photometry of the two epochs in each band. Since the quality of epoch 1 is superior, we choose epoch 1 as the reference to which we align epoch 2. We compare the photometry between the two epochs for each band using two different alignment procedures. First, we compare the derived absolute photometry (with respect to 2MASS, using the calibration ladder described in Sec.~\ref{ladder}). Then, we align epoch 2 with respect to (calibrated) epoch 1 using all common stars within 4 arcsec of the target (to minimize the effect of psf variations). The resulting magnitude differences for the target and the absolute photometry values are summarized in Table~\ref{TARGETtable}. The difference between the epochs is shown in Fig.~\ref{Fig:NACOEp1Ep2}. For all bands except H (the set with the poorest epoch 2 data quality), and regardless of the alignment method used, the measured difference is less than expected for a dark lens, albeit with different levels of significance. For the K band, the best data set, a dark lens is inconsistent with the measurement at $2 \sigma$ for the absolute alignment and at $3 \sigma$ for the relative alignment. \subsection{Is the blended light from the lens star?}\label{alt} The mean density of stars of comparable brightness and color ($\pm 0.20$~mag) to the detected blend is less than 0.2 per ${\rm arcsec}^2$, as derived from our best and sharpest data set, the Ks band of epoch 1. Given the image quality of $0.09"$ FWHM, this conservatively implies a probability of less than $2\%$ that the blend is unrelated to the microlensing event. Another possibility to consider is that the blend stems from a companion to the source star. Close companions with periods $\lesssim 100~\rm{d}$ can be ruled out by the xallarap signal limits in the light curve, and very wide separation companions at $\gtrsim 700~\rm{AU}$ would be resolved in the Ks NACO data. This still leaves a large range of allowed separations, but taking into account the color difference, the possible fraction of low-mass secondaries should not be larger than $8\%$ according to \cite{1991A&A...248..485D}. However, only future AO or HST images, taken when the source and lens have moved sufficiently far apart to be spatially resolved, will be able to securely rule out such a scenario.
\subsection{Source star constraints} Using a new method to align microlensing lightcurves from different telescopes and filters, \cite{2010ApJ...710.1800G} refined the dereddened source color measurement for \event~ of $(V-I)_{\rm o,s}=1.04 \pm 0.21$ from \cite{2008ApJ...684..663B}, deriving $(V-I)_{\rm o,s}=1.24 \pm 0.06$ in the Cousins photometric system. To compare this value with the NIR bands of this study, we transform this V-I color to J-K in the 2MASS system, using first the dwarf color table of \cite{1988PASP..100.1134B} to find $(J-K)_{\rm o,s}=0.73$ (Cousins system), and then the 2MASS-Bessel$~\&~$Brett filter relation\footnote{http://www.astro.caltech.edu/~jmc/2mass/v3/transformations/} to finally derive $(J-K)_{\rm o,s}=0.70\pm 0.07$. From our NACO "light curve", using Eq.~\ref{lc} and our lens model, we find after dereddening $(J-K)_{\rm o,s}=0.66\pm 0.51$. While the uncertainty derived from the linear regression is large, this independent source color determination is very consistent with the colors found by \cite{2010ApJ...710.1800G} and \cite{2008ApJ...684..663B}, and strengthens the case for the source being a K4-5 dwarf in the Bulge at $7.51 \pm 0.25~\rm{kpc}$. Given the better accuracy of the \cite{2010ApJ...710.1800G} source color, we adopt their value in the following analysis. \subsection{Lens/planetary system constraints} From the \event~ light curve, the I band source flux is well determined to be $I_{s}=21.44 \pm 0.08$ \citep{2008ApJ...684..663B}. Using the source color derived in the previous section and the extinction coefficients determined from the IRSF data, we can translate this I band estimate into the NACO passbands to derive $J_{s}=19.67 \pm 0.12, H_{s}=18.78 \pm 0.10, K_{s}= 18.54\pm 0.10$ (2MASS system). Using our best lens model and Eq.~\ref{lc}, we then derive the following estimates for the unlensed apparent lens flux: $J_{l}=20.98 \pm 0.30, H_{l}=19.91 \pm 0.30, K_{l}= 19.29\pm 0.20$ from epoch 1 and $J_{l}=20.59 \pm 0.40, H_{l}=20.10 \pm 0.50, K_{l}= 19.04\pm 0.20$ from epoch 2. Taking the weighted average, we finally obtain as best estimate for the lens flux: $J_{l}=20.73 \pm 0.32, H_{l}=19.94 \pm 0.35, K_{l}=19.16 \pm 0.20 $.
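The per-band combination of the two epochs amounts to an inverse-variance weighted mean; a naive version is sketched below (the values quoted above include additional error terms, so they differ slightly from this simple combination):
\begin{verbatim}
# Minimal sketch: inverse-variance weighted mean of the epoch estimates.
import numpy as np

def weighted_mean(values, sigmas):
    w = 1.0 / np.asarray(sigmas) ** 2
    return np.sum(w * np.asarray(values)) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

print(weighted_mean([20.98, 20.59], [0.30, 0.40]))  # J band: ~20.84 +/- 0.24
\end{verbatim}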
We can now use mass-luminosity relations to translate the photometric estimates of the apparent lens flux into estimates of the planetary host star mass. We adopt the relations of \cite{2000A&A...364..217D} for M-dwarfs (masses $> 0.10 ~M_{\odot}$) and \cite{2000ApJ...542..464C} for L-dwarfs (masses $<0.10 M_{\odot}$), where the transition between the two relations at $ \sim 0.10~ M_{\odot}$ has been linearly interpolated. The best lens model gives an estimate of the distance and mass of the lens via the measurement of the parallax $\pi_E$ using Equation \ref{eq-mdl1}. In Fig.~\ref{FigLensmassLight} the implied apparent lens brightness, based on the mass-magnitude relations and our constraint on the parallax, is plotted as a function of lens mass. All bands agree that the lens mass is in the range $ 0.07< M_{\rm L}/ M_{\odot} <0.10$, with a best estimate of $M_{\rm L}/ M_{\odot}= 0.087 \pm 0.010$, preferring a stellar over a sub-stellar host. This is consistent with the previous best estimate of $M_{\rm L}/ M_{\odot}= 0.06 \pm 0.04$ \citep{2008ApJ...684..663B}, which however was not able to distinguish between the different host star possibilities. This refined lens mass also affects the planet mass of \planet. This is due to a light curve degeneracy between the planetary mass ratio $q$ and the source star radius crossing time $t_\ast$. The detection of light from the lens star means that it must be massive enough to be above the Hydrogen burning threshold; this constrains $t_\ast < 0.05\,$days and rules out the cusp crossing models (models I-P of Table 1 in \citet{2008ApJ...684..663B}; the remaining surviving models consistent with the NACO data are listed in Table \ref{tabsurvive} in the appendix). This constraint on $t_\ast$ pushes the mass ratio $q$ toward somewhat smaller values. As a result, the range of allowed planetary masses is nearly unchanged. The physical parameters of the star-planet system can be estimated by the same type of Markov Chain Monte Carlo (MCMC) calculations used in \citet{2008ApJ...684..663B} or \citet{2009ApJ...695..970D}, but we now add the constraint that the lens star must satisfy the JHK mass-luminosity relations of \citet{2000A&A...364..217D}, under the assumption that 25\% of the dust responsible for the extinction of the source star is also in the foreground of the lens star plus planet system. The uncertainty in the lens magnitude is taken to be 0.3 mag in each passband. This accounts for the uncertainty in the extinction as well as the uncertainty in the \citet{2000A&A...364..217D} mass-luminosity relations, which becomes large at low masses because of the metallicity dependence of the minimum stellar mass. The parameter values resulting from this calculation are listed in Table~\ref{tab-mcmc}. The planet mass is now $3.2^{+5.2}_{-1.8}M_{\oplus}$, the host star mass is $0.084^{+0.015}_{-0.012} M_{\odot}$, and the two-dimensional star-planet separation during the event is $a = 0.66^{+0.19}_{-0.14} \,$AU. The MCMC lens distance estimate of $D_L = 700^{+210}_{-120} \,$pc agrees with our more direct estimate of $660^{+100}_{-70}\,\rm{pc}$. Since such a nearby lens lies in front of less than half of the total extinction towards the source, our derived lens colors are consistent with a late M spectral type \citep{2010ApJ...710.1627L} for the planetary host. \begin{figure} \centering \includegraphics[width=7.1cm]{Ldwarf_isochrones_K.pdf} \includegraphics[width=7.1cm]{Ldwarf_isochrones_J.pdf} \includegraphics[width=7.1cm]{Ldwarf_isochrones_H.pdf} \caption{Mass-magnitude relations for the K ({\bf top}), J ({\bf middle}) and H ({\bf bottom}) bands, derived from \cite{2000A&A...364..217D} for M-dwarfs (masses $> 0.10 ~M_{\odot}$) and from \cite{2000ApJ...542..464C} for L-dwarfs (masses $<0.10 M_{\odot}$). The transition between the two relations at $ \sim 0.10~ M_{\odot}$ has been linearly interpolated. The black curves show the most likely range of distances for the \planet~system as found by \cite{2008ApJ...684..663B}, and the horizontal lines mark the estimate of the lens flux from the NACO data as well as the upper limit of the lens flux from the measured lens+source flux for a range of possible interstellar extinctions.} \label{FigLensmassLight} \end{figure} \subsection{Additional constraints from future high angular resolution observations} Another improvement can be achieved by measuring the amplitude and direction of the relative proper motion of source and lens, in combination with the microlensing modelling of the parallax signal caused by the Earth's motion. Such physical measurements break a model degeneracy in the projected Einstein radius $\tilde{r}_E$ \citep{2007ApJ...660..781B,2008ApJ...684..663B}.
In the case of MOA-2007-BLG-192 the degeneracy is particularly acute because of a gap in the event coverage, with different but equally significant models requiring widely different projected Einstein radii, and hence directions for the relative proper motion, even though the models yield similar amplitudes: $\sim 5 ~{\rm mas~yr^{-1}}$. The measurement of both $\theta_E$ and $\tilde{r}_E$ yields the lens system mass $M_L = c^2/(4G) ~\theta_E \tilde{r}_E$. Ideally, the relative lens-source proper motion $\mu_{rel}$ is measured by detecting both the lens star and the source star, as done by \cite{2001Natur.414..617A}. The two stars will not be fully resolved for many years. However, due to the unique stability of the HST point spread function (PSF), it is possible to measure source-lens separations (with position angles) much smaller than the width of the PSF. This is accomplished by measuring the elongation of the combined lens-source image, which arises because it is a combination of two point source images rather than one. The lens and source stars of MOA-2007-BLG-192 will be about 25 mas apart already in 2012. Simulations by Bennett et al. (2007) show that measurements of both the amplitude and the orientation are possible for MOA-2007-BLG-192 already in 2012. These measurements, combined with our modelling, will improve the knowledge of the system parameters (masses, orbital separation) to about 10\%. The key point is that the direction of the elongation will give us a measurement of the direction of the relative lens-source proper motion $\mu_{rel}$ and therefore resolve the remaining parallax degeneracies \citep{2007ApJ...660..781B,2008ApJ...684..663B}. \subsection{Properties of the planetary system} The effective temperature of the planet, for the parameters of the parent star and orbit separation given above, is $47^{+7}_{-8}\rm{K}$ for an albedo of zero, and $40^{+6}_{-7}\rm{K}$ for an albedo of 0.5.
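These numbers follow from the standard equilibrium-temperature scaling; the sketch below reproduces them approximately with illustrative late-M-dwarf parameters ($T_{\rm eff}\approx2500$ K, $R\approx0.10 R_\odot$ are our assumed round numbers, not values derived in this paper):
\begin{verbatim}
# Minimal sketch: equilibrium temperature T = T* sqrt(R*/2a) (1-A)^(1/4).
import math

R_SUN, AU = 6.957e8, 1.496e11   # metres

def t_eq(T_star, R_star_rsun, a_au, albedo=0.0):
    return (T_star * math.sqrt(R_star_rsun * R_SUN / (2.0 * a_au * AU))
            * (1.0 - albedo) ** 0.25)

print(t_eq(2500.0, 0.10, 0.66))        # albedo 0   -> ~47 K
print(t_eq(2500.0, 0.10, 0.66, 0.5))   # albedo 0.5 -> ~39 K
\end{verbatim}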
Based on observations of a tenuous atmosphere (20-60 microbars) of nitrogen on Pluto, the temperature of bright surface ices on Pluto at perihelion is estimated to be between 35-40 K \citep{1999Icar..141..299S}. Thus, if nitrogen were available, the surface of this planet might look like that of Pluto on the basis of stellar heating alone. However, the large mass of the planet compared with that of Pluto necessitates examining the possible role of heat from the interior of the planet. The maximum temperature possible with zero albedo, 54 K, remains below the pure nitrogen melting point of 63 K, and well below the methane melting point of 91 K. The present-day terrestrial heat flow ($0.087~W/m^2$) is about 10 times less than the roughly $1~W/m^2$ deposited by the lensing star on its planet at local noon. Thus the average heat flow coming from the planet itself will not raise the surface temperature significantly, even for a fully rocky body three times the mass of the Earth \citep{2010ttt..work...75L}. Of course, we do not know the age of the star; were we to use the Hadean heat flow value for the Earth \citep{2008Natur.456..493H} for the 3.2 Earth mass body, the influx from geothermal heating could exceed the energy received from the star. The surface temperature could then be above the nitrogen melting point, leading to the possibility of liquid nitrogen lakes or seas if the atmospheric pressure were 0.1 bar or more. The lensing star-planet system is likely older than this, however, and hence the planet's heat flow correspondingly less. Because the distribution of heat flow on a terrestrial planet can be strongly heterogeneous, one could imagine places on the surface with much higher heat flow than the planetary average, such that temperatures might exceed the melting point not just of nitrogen but also of methane. Thus, if sufficient quantities of these molecules were present, the planet's surface might have zones resembling the hydrocarbon lakes and seas of Saturn's moon Titan. The possibility of liquid water cannot be discounted, but liquid water would most likely be confined below the surface or to very restricted, volcanically active locales. \section{Conclusions} In this study we have presented the analysis of photometric data in the near-infrared bands JHKs, at two different epochs, of the planetary microlensing event ~\event~ obtained with the AO system NACO mounted on UT4 at ESO. According to the best-fit lens models of \cite{2008ApJ...684..663B}, the difference in the magnification of the source between the two epochs is $0.230 \pm {0.015}$. If the lens were non-luminous, this would be the expected photometric gradient in our data set in the absence of any blended light contribution. Our data set is inconsistent with such a scenario at the $3 \sigma$ level. In fact, the data imply that there is a significant amount of blended light at the location of \event. Assuming that this blend is the lens, the data favor a scenario in which the lens is a nearby ($660^{+100}_{-70}\,\rm{pc}$) late M-dwarf. This is consistent with the estimates for a stellar lens based on the constraints from extended source and parallax effects discussed in \cite{2008ApJ...684..663B}. While the data available at the time of the discovery paper were consistent with a broad range of planetary host masses, the new NACO data presented here support the hypothesis of a stellar host for \planet. It is of course conceivable that the detected blend stems not from the lens but from a stellar companion to the source or the lens, or from an unrelated background star. However, the probabilities for such scenarios are low, and by Ockham's razor the most likely explanation is that the lens is an M-dwarf, which implies a planetary mass of $3.2^{+5.2}_{-1.8}M_{\oplus}$ for \planet, placing it among the least massive known cool planets in orbit around one of the least massive host stars. \planet~ is a landmark exoplanet discovery, suggesting that planet formation occurs down to the very low mass end of the stellar population. It is the first microlensing event for which multi-epoch AO data have been obtained, and it demonstrates the usefulness of this technique for microlensing: for constraining the physical characteristics of microlensing planetary systems, and for providing important experience to optimize future AO observations, which ideally should be carried out in ToO mode for the first epoch to ensure that the source is still significantly magnified. \begin{acknowledgements} We thank the ESO Paranal and Garching teams for their high quality service and support in carrying out the NACO observations, especially ESO staff C. Dumas, C. Lidman and P. Amico. Thanks also to the IRSF observatory staff. D.K. and A.C. especially thank ARI, where part of this work was done during a visit. Special thanks for the support from ANR via the HOLMES project grant, and to David Warren for supporting the work done at Canopus Observatory. D.P.B.\ was supported by grants NNX07AL71G \& NNX10AI81G from NASA and AST-0708890 from the NSF. Work by S.D.
was performed under contract with the California Institute of Technology (Caltech), funded by NASA through the Sagan Fellowship Program. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} CMEs are often observed to have twisted ``flux rope'' magnetic field structures \citep{liu2008,vourlidas2014}. If favorably oriented, these can lead to extended southward excursions of the interplanetary magnetic field (IMF) as the CME passes by, resulting in periods of enhanced reconnection on Earth's dayside and energy input into the magnetosphere. Draping of the IMF around the CME as it moves through the solar wind may also give rise to southward fields. In contrast, northward-directed fields inhibit reconnection, resulting in a weaker magnetospheric response \citep{dungey1961}. While prolonged southward fields are often observed without the presence of a clear structured transient, the additional plasma parameters often associated with CMEs usually make them the most geo-effective events \citep[e.g.,][]{tsurutani1997,zhang2014}. Thus, inferring the direction of the flux rope fields inside a CME before it arrives at Earth would be a major advance in geomagnetic activity prediction. In addition, the CME's initial configuration and its interaction with the inhomogeneous ambient solar wind can lead to deformations, rotations, and deflections of the magnetic field, which are difficult to quantify \citep[e.g.,][]{odstrcil1999,savani2010,nieves2012}. Distortions of CMEs have previously been observed by coronagraphs. However, their influence on the magnetic structure is difficult to estimate because the magnetically-dominated regions of CMEs appear as dark cavities within images, such as those seen by the STEREO spacecraft \citep{howardt2012b}. Therefore, a common approach to predicting magnetic vectors within a CME propagating towards Earth is to use solar observations as inputs into 3D computational simulations. Unfortunately, obtaining realistic magnetic field directions at Earth from such calculations is scientifically challenging and computationally intensive \citep{manchester2014}. Thus, models used for routine CME forecasts by various space weather services do not include magnetic structures within the simulated CMEs \citep[e.g.,][]{Zheng2013, shiota2014}. For example, ENLIL models the propagation of CMEs from $\sim20$ solar radii (Rs) to beyond Earth at 215 Rs and includes the background solar wind magnetic field. However, the CME is simplified to a high pressure plasma pulse with a size and propagation direction estimated from solar imagery \citep{Zheng2013}. CME arrival-time predictions from these models provide lead times of $\sim2-3$ days, and their accuracy has been well investigated \citep{taktakishvili2010, vrsnak2014, colaninno2013}. In contrast, the important magnetic vector information is only revealed when in situ measurements are made by spacecraft upstream of Earth at the first Lagrangian position (L1), $\sim1$ hour prior to the CME arriving at Earth, thereby severely limiting the lead time available for reliable, magnetic-field-based storm warnings. Difficulties in observationally determining the magnetic profile of a CME arriving at Earth from solar imagery alone predominantly lie with the several complex stages that change the initial solar configuration into the final topological structure at Earth. We suggest that for forecasting purposes, statistically significant predictions can be made by simplifying the complex behavior to a core set of parameters. In this paper, we highlight three key components of a proof of concept developed to improve the prediction of a storm's severity: 1.
using the hemispheric helicity rule to provide a robust initial magnetic configuration at the Sun; 2. defining a `volume of influence' of the CME within the heliosphere, through which the Earth's trajectory can be estimated; and 3. incorporating magnetic vectors from a simplified magnetic flux rope model to create a time-series upstream of Earth. From the analysis of eight Earth-directed CMEs between 2010 and 2014, we conclude that the incorporation of magnetic field vectors in this way can significantly improve geomagnetic forecasts by providing a time-varying magnetic profile of the CME. This time-varying magnetic profile is then combined with an experimental technique to create a time-varying Kp index forecast that replicates the forecast deliverables of NOAA. Further details on the geomagnetic indices, and on the uncertainties that follow from the vector estimates, are described by Savani et al. (2015, herein referred to as Paper II). \section{Event Selection} This article discusses the proof-of-concept architecture for estimating the magnetic vectors with the aid of a case study CME event that was released from the Sun on 7th January, 2014 (see Figure \ref{sunA}). The recency of this event allows for comparisons between the results described here and the current processes employed by real-time space weather forecasters and their estimated geomagnetic indices (described further in Paper II). A total of eight Earth-directed CME events between 2010 and 2014 were selected with three driving criteria: 1. an unambiguously defined solar source of the overlying field arcade, from a single or double active region and possibly with an eruptive flare (see section \ref{SoIn} for more details); 2. a clear leading edge structure in multiple remote observations, to unambiguously define the size and orientation of the CME morphology (see section \ref{Rem} for more details); and 3. a significant measurable effect in geomagnetic indices. The eight events described in this paper were chosen from a CME list compiled by \cite{colaninno2013} and S. Patsourakos (personal list), with details of each event displayed in Table 1. Further Earth-directed CME lists with more generic requirements have also been published \cite[e.g.,][]{richardson2010, mostl2014}. The hemispheric solar source region of the CME is identified from solar observations. Figure \ref{sunA} displays a 171\AA{} image from the AIA instrument onboard the SDO spacecraft \citep{Lemen2012} taken at 20:14 UT on 7th January, 2014. This event has an inconclusive Earth-arrival time and in situ profile, and has been chosen to highlight the complexity of forecasting processes. The uncertainty from this event stems from the predicted arrival time being approximately 24 hours earlier than when on-duty forecasters labeled the actual arrival from real-time L1 in situ data. Throughout this period, the solar wind plasma displayed significantly lower velocities than expected, and lacked the strong and distinctive magnetic field rotation of an obstacle. \section{Solar Initiation} \label{SoIn} The helicity and initial orientation of the magnetic flux rope structure within a CME are inferred from the ``Bothmer-Schwenn'' scheme. This relates the flux rope properties to sunspots, the solar cycle, and whether the CME originates on the northern or southern solar hemisphere \citep{bothmer1998}. The reliability of this solar hemispheric rule remains controversial.
Only in late 2013 was the probability of a CME's topology conforming to the hemispheric rule re-confirmed to be $\geq$80$\%$ \citep{wang2013, hale1925}. Thus, the initial helicity and field structure of CMEs can be inferred from this scheme with a reliability that is likely to be $\sim$80$\%$. Ordinarily, a CME is linked to a single active region, where the standard Bothmer-Schwenn scheme should be applied. However, in cases such as this January 2014 event, the magnetic loop structure before eruption has a leading negative polarity spanning over two active regions. Thus, a South-West-North (``SWN'') flux rope field direction from the southern hemisphere of solar cycle 23 is appropriate under the Bothmer-Schwenn scheme. This implies the CME has a right-handed chirality. \cite{harra2007} highlighted the complexity of estimating the orientation of an interplanetary CME from simple solar observations. That work showed that two CMEs released in November 2004 from a similar source location had drastically different final topologies. However, this can be reconciled with the Bothmer-Schwenn scheme if the different polarity of the active region's leading edge is taken into account. In this article, we consider six simpler CMEs released from a single active region and examine whether it is possible to generate more reliable predictions of the field structure at 1 AU. We also investigate two more complicated cases where connected active regions are involved (27th September, 2012 and 7th January, 2014). \section{Remotely Sensed Evolution} \label{Rem} Since deflections, rotations and other interactions may occur during CME propagation to Earth, the initial Bothmer-Schwenn configuration is adjusted using coronagraphic data from the SOHO and STEREO missions \citep{brueckner1995,howard2008}. The final tilt and source region of the magnetic flux rope, after which radial propagation is assumed \citep{nieves2013}, are estimated using the graduated cylindrical shell (GCS) model \citep{thernisien2009} when the CME has reached $\sim 15$ Rs. Figure \ref{gcsmodel} displays images from the COR2 instruments onboard the two STEREO spacecraft (A and B) and the LASCO instrument onboard SOHO that are used to triangulate the CME structure. Where three well-separated observations exist, the GCS model provides relatively well-constrained estimates of the orientation and size of the CME without any ambiguity \citep{liu2010,rodriguez2011}. The GCS model may still be implemented without multi-point observations, in the same way as various other cone-structure methods. However, in such cases the possible degeneracy in the observational morphology limits all methods, highlighting the difficulty of performing reliable forecasts. The outputs from the GCS model, along with estimates of the average CME size \citep{yashiro2004}, are used to create a ``volume of influence'' defined as the volume the CME is expected to traverse as it propagates through the heliosphere. The shaded region in Figure \ref{sunA} displays the projection of this ``volume of influence'' onto the Sun, suggesting that the Earth grazed the northern edge of this case study event. The projected area is calculated from the `shadow' of the CME, which is assumed to be cylindrical in shape with a circular cross-section. Two parameters (flux rope axis length and flux rope width) are required to estimate the projected area.
The axis length, shown as a dashed curve in Figure \ref{sunA}, is estimated from the half angle, $\alpha_{haw}$, given by half the angular width of the CME in a direction parallel to the GCS model axis. The projected width of the CME transverse to this axis is assumed equal to the average CME width, as found from statistical studies \citep{nieves2013}. Any uncertainty in the inferred CME orientation is likely to have only a minimal effect on the predicted magnetic field vectors, since it is likely to be eclipsed by the larger uncertainties arising from the estimate of the magnetic field strength and the assumption of a symmetric cylindrical flux rope, as explained below. These assumptions are tested further in Paper II, relative to predicted estimates of Kp. The coronagraphic images show how the coronal magnetic loops seen in SDO have been deflected to the south west (Figure \ref{sunA}). Here, we use coronagraphic imagery to estimate the final CME radial trajectory, but future work could attempt to increase prediction lead times by, for example, incorporating CME deflections by coronal holes \citep{cremades2004, makela2013}. The shortest (perpendicular) distance between the Earth's projected location and the flux rope axis is indicated on Figure \ref{sunA} with a blue curve. Normalizing this to the total perpendicular distance to the flux rope outer edge (flux rope radius) gives a quantity that is correlated with the impact parameter (Y0), a key parameter for in situ flux rope modelers. The theoretical model of the impact parameter used in this study is displayed in Figure \ref{ImpParam}; the Earth's projected arc distance is displayed in solar radii. This theoretical function is justified by using a simple linear correlation between the Earth's distance and Y0 for the inner core region surrounding the flux rope axis (inner highlighted area in Figure \ref{sunA}). The outer area is correlated with a trigonometric sine function and is designed to physically represent the distortions to the idealized flux rope that occur during propagation, as well as possible draping of the surrounding solar wind magnetic field outside the actual flux rope structure. These distortions to the flux rope are sometimes termed `pancaking' \citep{riley2004a,savani2011a}, with recent studies suggesting the inner core of a CME is likely to maintain a quasi-cylindrical structure while the outer structure may become severely deformed by the ambient medium \citep{demoulin2009, savani2013b}. \section{In Situ Flux Rope} To generate an estimated time-series of the magnetic vector direction passing over a fixed point such as L1, we must employ a methodology to create a 1-D (spacecraft) trajectory through a theoretical structure, and to define the start time of the object at this fixed point. \subsection{Time of Arrival} Improving the time-of-arrival prediction of a CME is beyond the scope of this work, and several advances on this topic have been made elsewhere. Currently there are several procedures to calculate the speeds of remotely-observed CMEs, quantify their deceleration, and forecast their speeds upstream of Earth at L1 \citep[see further literature within, e.g.,][]{owens2004,colaninno2013,tucker2015}. We choose to assume a simple average of the measured CME speed close to the Sun, as determined by the NOAA Space Weather Prediction Center (SWPC), and the predicted speed at Earth.
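As a back-of-envelope illustration of this averaging (our sketch; the two input speeds below are hypothetical placeholders, chosen only so that their average matches the 1300 km/s quoted next):
\begin{verbatim}
# Sketch of the simple arrival-time assumption: average the near-Sun CME
# speed with the predicted speed at Earth, then propagate over ~1 AU.
AU_KM = 1.496e8                                # 1 AU in km (~215 Rs)
v_near_sun, v_predicted_1au = 1500.0, 1100.0   # km/s (illustrative values)
v_avg = 0.5 * (v_near_sun + v_predicted_1au)   # -> 1300 km/s
transit_h = AU_KM / v_avg / 3600.0             # ~32 hours at constant speed
print(f"average speed {v_avg:.0f} km/s, transit ~{transit_h:.0f} h")
\end{verbatim}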
In the case of the 7th January, 2014 event, this gives a CME speed of 1300 km/s and a predicted arrival time of 8th January, 21:45 UT. By combining this information with the flux rope model described below, a time-series of magnetic vectors is created. In order to compare the accuracy of the modeled magnetic vector time-series with data and test the technique in the `research domain', we manually adjust the arrival time of the model fit to the best-guess estimate within the L1 data. The field rotations between the model estimate and data were then manually inspected. However, for a readily implementable process for estimating the magnetic vectors in advance, different forecasting systems can simply employ their best estimate of the arrival time. \subsection{Flux Rope Model} The configuration of the magnetic flux rope is calculated by assuming a constant-alpha force-free (CAFF) flux rope \citep[and references therein]{burlaga1988,zurbuchen2006} and a cylindrical geometry locally around the Earth's predicted trajectory through the CME. Previously, triangulation of the CME direction from remote sensing has provided adequate information as to the expected structure arriving at L1 \citep{liu2010b}. However, a Grad-Shafranov reconstruction technique as used by \cite{liu2010b} would not be appropriate for creating a model to estimate the structure. Future work may consider implementing a more complex model that better represents the distortions occurring to a CME at L1 \citep[e.g.,][]{marubashi1997,hidalgo2002,owens2006}. The magnetic vectors along the Earth's trajectory are generated from a CAFF flux rope model based on the MHD momentum equation under magnetostatic equilibrium, which reduces to $\textbf{j}=\alpha\textbf{B}$. A solution to this equation can be used to generate a cylindrical magnetic flux rope with circular cross-section, with the components of the magnetic field vector expressed by Bessel functions, and $\alpha$ commonly set to 2.41 \citep{savani2013a}. Future work can consider reducing $\alpha$ as a simple way to account for potential flux erosion of the CME during propagation \citep{ruffenach2012}. The CME axis projected onto a 2D plane is described by a single orientation angle ($\phi$) estimated from the GCS model. However, the component of the flux rope axis parallel to the radial direction is estimated theoretically, from the displacement of the Earth's trajectory through the CME away from the CME nose, $L_n$. In practice, this was performed by measuring half the length of the flux rope axis ($R_{ax}$) and the length between the flux rope axis center and the position where the Earth's perpendicular line (Figure \ref{sunA}, blue curve) meets the flux rope axis ($D_E$), thereby defining $L_n \equiv D_E/R_{ax}$. $L_n = 0$ represents the case where the CME nose is propagating directly towards Earth, and there is no radial contribution to the flux rope axis. However, when $L_n = 1$, the flux rope axis is entirely radial in direction, as might be the case when the Earth's trajectory is along a CME leg. Figure \ref{RadComp} displays how the radial contribution to the axis vector is estimated from an angular value ($\lambda$) that varies between $0^{\circ}$ (CME nose) and $90^{\circ}$ (CME leg) in a scheme similar to that expressed by \cite{janvier2013}. Both $\phi$ and $\lambda$ are used to create a 3D flux rope axis direction. The magnitude of the magnetic field along the central flux rope axis is assumed in this case study to be 18.0 nT.
This is calculated by assuming that the maximum estimated magnetic field strength within the plasma pile-up region simulated by the WSA-ENLIL+Cone model (10.3 nT) corresponds to the magnetic field strength at closest approach within the flux rope structure. The impact parameter obtained using the ``volume of influence'' (set at 0.91 for the January 2014 event) is then used to estimate the maximum field strength along the central flux rope axis. In effect, this technique estimates the $|\textbf{B}|$ of a CME from a correlation of the inner heliospheric CME velocity and a simulated background solar wind field strength driven by magnetograms. The flux rope axis direction, chirality, magnetic field magnitude and impact parameter provide a complete set of parameters to generate a time-series of magnetic vectors along a theoretical Earth trajectory (Figure \ref{MagVec}, red curves). \subsection{Magnetic Field Strength} The field strength is inferred from a model currently used for forecasting by CCMC, so this method could be implemented using existing forecasting capabilities. In the future, other methods whose uncertainties have not yet been statistically quantified might be used, for example: estimating the poloidal and total flux content of a CME from flare ribbon brightening \citep{longcope2007}; using the flux rope speed and poloidal flux injection to estimate the field strength \citep{kunkel2010}; using radio emissions from the CME core \citep{tun2013}; and using the shock stand-off distance from remote observations, which has recently been shown to allow estimates of the field strength upstream of a CME \citep{savani2011b, poomvises2012, savani2012b}. Since the ultimate aim of estimating the magnetic vectors is to predict the terrestrial effects with quantifiable uncertainty, the uncertainty in the predicted Kp index was estimated by varying the field strength over the range $|\textbf{B}| = 18.0^{+2\sigma}_{-1\sigma}$ nT, where $\sigma = 6.9$ nT \citep{lepping2006}. The uncertainty in field strength represents the statistical average from 82 flux rope fittings estimated between 1995 and 2003. The magnetic vectors were recalculated for each field strength and used to drive estimates of Kp, which are described in more detail in Paper II. \section{Results} \label{res} In order to test the validity of this proof-of-concept architecture for estimating the magnetic vectors within CMEs, a total of eight CME events have been investigated. By using the same technique as in Figure \ref{sunA}, the solar disc in 171\AA $\,$ AIA and the projected CME ``volume of influence'' for these events between 2010 and 2014 are displayed in Figure \ref{solarsurvey}. Their predicted magnetic vectors are displayed in Figure \ref{Bsurvey}, along with the measured L1 in situ data. Spherical coordinates are used to display the magnetic field rotation in preference to the Cartesian system, as the orientation components then remain independent of the field strength component. The angular rotation ($B_\phi$ and $B_\theta$) in the predicted magnetic field closely follows the broad rotational structure seen within the in situ data, with a negative $B_\theta$ indicating a southward magnetic field excursion. Deviations between the estimated and measured values are discussed below. For the events investigated, it has been noticed that if the overlying magnetic field arcade displayed in solar imagery (e.g., within 171\AA $\,$ AIA) traverses two active regions in close proximity, an adjustment to the standard scheme is required.
In particular, if the solar arcade is between two active regions, the leading polarity is reversed and the initial magnetic structure is defined by the Bothmer-Schwenn scheme from the previous solar cycle. This more complex behavior is shown in the case study event of this article (January 2014, panel (g) in Figures \ref{solarsurvey} and \ref{Bsurvey}), as well as in an event in September 2012 (Figures \ref{solarsurvey} and \ref{Bsurvey}, panel (f)). Therefore, we suggest that the ubiquitous use of the Bothmer-Schwenn scheme with a simplistic flux rope model is capable of generating a zeroth-order characterization of the rotating magnetic field topology within a flux rope CME. Figure \ref{Bsurvey} also illustrates a limitation of a symmetrical flux rope model, one frequently highlighted for a variety of in situ models: the model field strength is by definition strongest near the center of the flux rope, whereas the observed fields occasionally deviate from this pattern. As an example, panels (a), (e) and (f) display the strongest field near the CME leading edge or sheath, which sometimes occurs when a fast CME compresses against the solar wind ahead. The rotating nature of the magnetic field's southward excursion has important consequences for improving predictions of the start time of significant Kp values, and for aiding strength estimates of the Dst storm onset. Panels (a) and (b) of Figure \ref{Bsurvey} show examples of an initial prolonged northward magnetic field component, and would thereby predict a delayed start of large Kp values (see Paper II for more details). There are also processes that can influence the accuracy of the predicted fields, in particular the interaction of CMEs during passage from the Sun to the Earth \citep{shen2012}. As an example, two CME events launched in quick succession were detected as a single strong event within the in situ data, as displayed in panel (c) of Figure \ref{Bsurvey}. In the interim, an experienced observer may be able to manually adjust the computational models in response to such unusual situations, in the heuristic manner used by forecasters. The impact parameter (i.e., the perpendicular distance from the flux rope axis) is an important variable influencing the predicted magnetic vectors. This parameter affects the estimated peak magnetic field strength as well as the expected angular change in the field rotation. The total field rotation ranges from a maximum of $180^\circ$ for a trajectory through the core to a minimum of $0^\circ$ for one grazing the outer edge. As an example, the predicted vectors for panels (c) and (h) in Figure \ref{Bsurvey} display a significantly larger variation in field rotation than was observed. For the case of the January 2014 event, draping of the surrounding solar wind magnetic field is likely to account for a significant portion of the measured terrestrial disturbance, due to the Earth's trajectory through the outer northern edge. To first order, the field rotation from a draped magnetic field measured along a 1-D spacecraft trajectory can be modeled with the minimal rotation produced by a flux rope modeled with a large impact parameter, as described below. For cases as extreme as this, a forecast system that generates a subtle field rotation may be considered more appropriate than generating a `missing-Earth' scenario, but extensive statistical analysis will be required to minimize uncertainty for such cases.
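To make the dependence on impact parameter concrete, the short Python sketch below (our illustration under the stated assumptions, not the authors' operational code) evaluates the Lundquist-type CAFF solution described above along straight-line crossings of a unit-radius rope: a near-core crossing rotates through nearly $180^\circ$, while a grazing crossing such as the January 2014 event (Y0 = 0.91) rotates far less. The final line also checks the field-strength scaling described earlier, mapping a 10.3 nT field at closest approach to an axis field of $\sim$18 nT.
\begin{verbatim}
# Sketch of the constant-alpha force-free (Lundquist) flux rope: axial and
# tangential components follow Bessel functions J0 and J1, with alpha = 2.41
# (~first zero of J0) so the axial field vanishes at the rope edge r = 1.
import numpy as np
from scipy.special import j0, j1

ALPHA = 2.41

def bfield(x, y0, b0=18.0, handedness=+1.0):
    """Field (Bx, By, Bz) at position x along a straight crossing with
    normalized impact parameter y0; the rope axis lies along z."""
    r = np.hypot(x, y0)
    b_tan = handedness * b0 * j1(ALPHA * r)
    return np.array([-b_tan * y0 / r, b_tan * x / r, b0 * j0(ALPHA * r)])

def entry_exit_rotation(y0):
    """Angle (deg) between the field vectors at entry and exit."""
    xc = np.sqrt(1.0 - y0 ** 2)  # half chord length of the crossing
    b_in, b_out = bfield(-xc, y0), bfield(+xc, y0)
    c = np.dot(b_in, b_out) / (np.linalg.norm(b_in) * np.linalg.norm(b_out))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

for y0 in (0.05, 0.5, 0.91):  # core, intermediate, and grazing crossings
    print(f"Y0 = {y0:.2f}: rotation ~ {entry_exit_rotation(y0):.0f} deg")

# Field-strength scaling: |B(Y0)|/B0 = sqrt(J0^2 + J1^2) at ALPHA*Y0, so a
# 10.3 nT pile-up field at Y0 = 0.91 implies B0 ~ 18 nT, as quoted above.
print(f"B0 ~ {10.3 / np.hypot(j0(ALPHA * 0.91), j1(ALPHA * 0.91)):.1f} nT")
\end{verbatim}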
Statistically, a spacecraft trajectory should have no relationship with the CME trajectory, and the frequency distribution of CME--spacecraft distances is expected to be approximately uniform. This has not always been observed with in situ detectors \citep{lepping2010}, but that is likely because trajectories far from the core do not display clear flux rope behavior \citep{demoulin2013}. Therefore, the split behavior of the impact parameter between the central core and the outer regions, as used in this work, is an appropriate choice. A common uncertainty in the impact parameter across various models is approximately $\pm10\%$ \citep{alhaddad2012}. Therefore, in Paper II, changes to the impact parameter over the uncertainty range are used to create an ensemble of predicted vectors in order to investigate their consequences for the Kp index. The estimated magnetic vectors from the CME are quasi-invariant to trajectory variations that are parallel to the flux rope axis, because the simplistic model is an axisymmetric cylinder. The small changes to the estimated vectors that do occur are a result of small adjustments to the radial component of the CME axis direction. The estimated vectors change rapidly once the predicted trajectory approaches the legs of the flux rope axis, as the influence of the radial component is highly non-linear. Under such situations, detection of a CME with in situ data usually becomes inconclusive \citep{owens2012}, and such cases are therefore unlikely to have a major impact for the purposes of predicting large Kp values at Earth. \section{Discussion, Conclusions and Future Work} This article presents a reliable mechanism by which magnetic vectors can be forecast. The current process lays out an organizational structure based on remote sensing and empirical relationships. The example January 2014 event is severely deflected away from the Sun-Earth line and thus highlights the importance of including evolutionary estimates of CMEs from remote sensing when attempting to provide reliable forecasts (as previously suggested by \cite{mcallister2001}). Also, to improve the reliability of the magnetic vector forecast, the initial topological structure determined by the Bothmer-Schwenn scheme must be adjusted for cases where the overlying field arcade clearly traverses two active regions. While the process lays out the organizational structure, in its current format the concept has not yet been statistically proven to improve estimates of the geo-effectiveness of an Earth-directed CME. For this, Paper II describes a first approach to creating an ensemble of magnetic vector predictions that are used to generate predicted geomagnetic indices (e.g., Kp). This approach leads to a time-varying Kp prediction with quantifiable uncertainties. In order to create this proof-of-concept, several assumptions and simplifications have been made. This is both a strength, in that the technique is computationally fast, and a weakness, in that the simplifications cannot always capture the detailed nature of a complicated geomagnetic storm. A detailed statistical investigation is therefore required to further understand the probability distribution of accurate forecasts versus false positives.
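As an illustration of how such an input ensemble might be assembled (our sketch of the idea carried forward into Paper II, using the uncertainty ranges quoted above; this is not Paper II's actual code):
\begin{verbatim}
# Sketch of an input ensemble over the quoted uncertainty ranges: the axis
# field is varied over -1 sigma / +2 sigma (sigma = 6.9 nT), and the impact
# parameter by +/-10% about the January 2014 value of 0.91.
import numpy as np

B0_NOM, SIGMA, Y0_NOM = 18.0, 6.9, 0.91

b0_grid = np.linspace(B0_NOM - SIGMA, B0_NOM + 2.0 * SIGMA, 7)
y0_grid = np.clip(Y0_NOM * np.linspace(0.9, 1.1, 5), 0.0, 0.99)

# Each (b0, y0) member would drive the flux rope model to give one candidate
# magnetic vector time-series, and in turn one candidate Kp profile.
ensemble = [(b0, y0) for b0 in b0_grid for y0 in y0_grid]
print(len(ensemble), "ensemble members")
\end{verbatim}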
The compressed solar wind plasma between supersonic magnetic flux rope obstacles and their driven shock fronts has not been addressed in this article, even though such sheath regions have been shown to be significant drivers of magnetospheric storms \citep{huttunen2004}. Panels (c), (d) and (e) in Figure \ref{Bsurvey} display the more pronounced examples of high-amplitude fluctuations in the magnetic field just prior to the start of the flux rope CME. Therefore, future work may consider approaches that can better forecast these components of geo-effective CMEs. In order to determine the usefulness of predicting the magnetic vectors for the purposes of estimating geomagnetic indices, a standardized procedure against which all future techniques can be tested will be beneficial. A forecast skill score (e.g., the Heidke or Brier skill score) will perhaps be more useful than a traditional RMSE of individual data points between predicted and observed values \citep[e.g.,][]{mays2015}, as this will potentially prevent uncertainty in arrival times from skewing the results. \begin{acknowledgments} This work was supported by NASA grant NNH14AX40I and NASA contract S-136361-Y to NRL. We thank Y-M Wang (NRL) for constructive comments about active region helicity, and M. Stockman (SWPC) and B. Murtagh (SWPC) for clarifying the forecasting policy and procedures at SWPC. The OMNI data were obtained from the GSFC/SPDF OMNIWeb interface at http://omniweb.gsfc.nasa.gov. \end{acknowledgments}
\section{Introduction} Making audio recordings of lectures is cheap (in money and time), and technically straightforward. Together, these mean that it is easy for lecturing staff to create this additional resource without much in the way of support, which in turn makes it easy for them to do so routinely and robustly, with little intellectual or technical buy-in. It is also reasonably easy to distribute the audio to students, and people have in the past done so using VLEs or services such as Apple's iTunes. It is hard to escape the feeling, however, that while it is easy to make recordings, they are hard to exploit fully: there is more value in lecture recordings than is readily accessible. Students can listen to a lecture they missed, or re-listen to a lecture at revision time, but their interaction is limited by the affordances of the replaying technology. Listening to lecture audio is generally solitary, linear, and disjoint from other available media. In this paper, we describe a tool we are developing at the University of Glasgow, which enriches students' interactions with lecture audio. We describe our experiments with this tool in session 2012--13. Our general ambitions are: \begin{itemize} \item to elicit (and share) student-generated content in the form of tags attached to audio instants, and links between the audio and other lecturer- or student-generated material; \item to enable and encourage students to interact with the available material, which helps them reprocess it intellectually through, amongst other things, a type of prompt rehearsal; \item to support that reprocessing with pedagogically well-founded exercises and activities; and \item to enable (`empower') students to interact with institutionally provided materials, on multiple devices (including mobile), in an attractive and up-to-the-minute style. \end{itemize} In practice, the `audiotag' tool organises and distributes lecture recordings, supports tagging instants within the audio, and supports peer `likes' of those tags; see \prettyref{s:audiotag} for more detail. During session 2012--13, the Audiotag team received funding from Glasgow University (i) to formally evaluate the audiotag service in the context of lecture courses across the university, (ii) to evolve it towards greater usability, (iii) to develop teaching techniques to help students exploit the service possibilities, and (iv) to work with a student developer revisiting the interface and imaginatively exploiting the available service ecology, with cross-links to other media. We report below a surprisingly low engagement with the audio lectures, on the part of the students we have worked with, which has frustrated our attempts to devise more interesting pedagogical exercises. We discuss some possible explanations for this. In \prettyref{s:background} we describe some of the motivating background for our current work. In \prettyref{s:audiotag} we describe the software system we have developed to support this work, and in \prettyref{s:experiment} the results of using this tool to support a set of six lecture courses in astronomy. Finally, in \prettyref{s:summary} we reflect on the results we have obtained. \section{Background and motivation} \label{s:background} It is still relatively uncommon for lecturers to make available recordings of their lectures. The latest Digital Natives survey~\cite{gardiner11} shows that 90\% of students `expect' lecture recordings, so there is at least some, possibly somewhat unfocused, demand for them.
Basic audio-recordings of lectures are easy to produce and distribute (creating a podcast is both cost- and time-efficient) so that there are few real technical or cost barriers to making recordings available. Though there is often some scepticism about the practice, in our experience relatively few lecturers are too shy to have their words recorded, or raise, for example, intellectual property concerns. Why, then, is lecture recording not ubiquitous? We can find some explanation by looking more closely at the supply of recordings, the demand for them, and the pedagogical justification for and use of them; we find something resembling a vicious circle. We believe that the supply barriers are deemed significant because the demand is too low, the demand is low (or at least too vague) because the student body is unfamiliar with the possibility and so does not know to ask for a supply, and the pedagogical benefits (which might cause lecturers to create the supply irrespective of demand) are underexplored because too few lecturers use the technique for them to successfully explore the space of possibilities. \paraheader{Supply:} Digital voice recorders are now inexpensive or ubiquitous (they range from \pounds30--\pounds150, and many smartphones have adequate recording capabilities out of the box), most people seem to have reasonably ready access to basic audio-editing software, and they can distribute audio files by uploading them to the university Moodle servers. Several of the current group used the free application `Audacity' to make minimal edits\footnote{See \url{http://audacity.sourceforge.net}}, which took perhaps 15 minutes of effort after a lecture; we do not expect lecturers (or support staff) to do any elaborate post-production beyond, perhaps, top-and-tailing or de-noising, and in particular we do not expect anyone to produce anything more sophisticated than a reasonably audible hour of one individual's monologue. The final step of making a podcast from the audio collection is more intricate, but Moodle, like many similar services, has a podcasting plugin\footnote{The distinction between a podcast and a mere collection of audio files is the presence of a `feed' -- an RSS or Atom file -- which allows a `feed reader' application to be automatically notified of the appearance of new `episodes', so that a user doesn't have to repeatedly re-check the audio source.}. Each of these technical obstacles is by itself relatively minor, but in combination they are a barrier substantial enough that only an enthusiast would currently breast them. There is also a type of `supply' question from the students' side, in the supply of technical expertise which students can already be assumed to possess. Students (or the younger ones at least) have been described as `digital natives', more than 98\% of whom have ready access to a computer, 65\% of whom share photos on social networks, and 20\% of whom even report that they edit audio or video, at some level, on a monthly basis. Given this, it is very tempting to assume that there is little or no effective barrier to students' uptake of reasonably straightforward learning technology. \paraheader{Demand:} It is not particularly surprising that a large fraction of students report -- in both formal and informal feedback -- that they would welcome lecture recordings~\cite{gardiner11}. However, this does not appear to be reflected in actual usage figures when the recordings are made available (see also the usage analysis below in \prettyref{s:results}).
One likely reason for this is that an hour-long recording is not a particularly usable format: it may be useful to provide a `listen-again' opportunity on a long commute, but the devices that students naturally use to listen to podcasts, being primarily targeted at either music or at podcasts patterned after magazine-style radio programmes, are not easy to use for dipping into, or referring to chunks within, a long recording. We discussed the usage of recordings with small groups of students shortly after the corresponding exams, that is, after the students had had the opportunity to investigate or reinvestigate their potential as a revision aid. The students stressed that the recordings were in practice notably more useful for some material than for others. A course, or a section, which was ``just maths and facts'' (to quote one student) might be more effectively revised using printed notes or slides, rather than a recorded oral explanation; in contrast, the recording might be the most valuable resource for more conceptual material. Whatever this implies about learning modalities and strategies, it is clear that it adds an extra variable to what material we should expect to be useful in a particular context. \paraheader{Pedagogic utility:} Despite the lack of an urgent demand from our intended users, we believe that there is a great deal of educational value latent within lecture audio. This arises partly from its pragmatic use as a revision aid, but also, more fundamentally, because it represents a different modality for instruction, which may complement or in extreme cases replace more traditional textual routes for some students. From this position it is natural to investigate the use of our system within a peer-assisted learning technique such as Jigsaw~\cite{aronson13}, which members of our team have already successfully used within the university; in the event, however, we have not yet had the opportunity to verify our intuitions here. In summary, therefore, the supply barriers are overall neither negligible nor notably large; the student demand is diffuse, but still present enough that we believe modest support will elicit it in a more focused form; the pedagogical pressure is still rather vague (in the sense that we as teachers are unsure how best to exploit the resource). Though these observations fail to be positively encouraging, they do not undermine our intuition that a relatively modest technological intervention can have a pronounced and useful -- possibly even transformative -- effect. \section{The Audiotag system} \label{s:audiotag} At the heart of our experiment here is a prototype system, `Audiotag', developed by one of the authors, which supports upload of audio recordings, distribution of recordings via podcasts, and collaborative user tagging of instants within the audio. The system is currently online on-campus, and the code is available at \url{https://bitbucket.org/nxg/audiotag/}, under an open licence. We used versions 0.5 and 0.6 during the course of the session. Some of the authors have used an earlier version of this system in previous years, to make recordings available to students in astronomy, but without laying much stress on the tool, or on the tagging functionality it offers.
The system (i) organises and distributes related recordings into `podcasts'; (ii) supports per-user `tagging' of instants within the audio, in a manner similar to well-known social websites such as Delicious or Flickr; (iii) supports `likes' of tags, therefore supporting student voting on successful or insightful tagging actions; and (iv) is designed to be coupled to other tools (we are wrestling with the pedagogic and user-interface challenges of live tagging via mobile devices, in lectures), so that we can support an `ecology' of applications which link to, and are linked from, the tagged audio instants. There is a video demo of a recent (but not completely up-to-date) version of the system at \url{http://vimeo.com/50070137}. In \prettyref{f:screenshot} we show the user interface to one of the recordings, which starts at 10:04 on 19 September 2012, with two instants within the opening few minutes tagged with, respectively, `moodle' and `axioms'; this panel can be scrolled to left and right, and zoomed in and out to show more or less of the recording. The user can play, skip and rewind the audio using the buttons below the display, and add tags to the `current instant' using the tag box at the bottom. Students can also `like' a tag. The system is integrated with the university-wide IT identity system, so that users do not have to register separately. \begin{figure} \begin{centering} \includegraphics{audiotag-screenshot-small.png}\\ \end{centering} \caption{\label{f:screenshot}Screenshot of the audiotag web-based application} \end{figure} As well as making recordings available to listen through this interface, the system also generates a podcast feed so that users can subscribe to notifications when new recordings are added to a course. The system has a very simple permissions model: each course has an `owner', who is typically the lecturer; only the `owner' can upload recordings, and only logged-in users can add tags, but we have not so far felt it necessary to restrict access to the audio, so that anyone can download the lecture audio, and view all the tags, without authenticating. \section{Delivering lectures to students -- our experimental evidence this year} \label{s:experiment} Two of the authors (NG and NL) have previously used early versions of the Audiotag server to deliver lecture audio to students, in both second year and honours, but without laying much stress on it. Anecdotal evidence suggests that students occasionally used lecture recordings to catch up on lectures they had missed, but most use was at revision time, at the end of the session, when students would listen to complete lectures rather than dropping in to particular instants; several students reported listening to the lectures whilst commuting. There was very little tagging activity in these earlier presentations, but students spontaneously expressed enthusiasm, both informally and in course-monitoring questionnaires, for the idea of making the lectures available. In session 2012--13 we obtained money, from an internal Glasgow University learning development fund, to improve the user interface and to experiment with different ways of integrating the Audiotag server with other pedagogical techniques. Our hope was that we could use the broad insights of the Jigsaw technique (namely its principled approach to multi-modal group work) to help students enrich their learning by creating links between their own lecture notes, pre-distributed lecture notes, and the audio recordings.
First, however, there is a bootstrap problem. Before we can create any dense and multi-modal network of links to tagged audio, we have to have that tagged audio. Our experience of previous years suggested that this was unlikely to happen spontaneously (even though we believed that we had significantly improved the interface), so we resorted to an apparently reliable alternative: bribery. Part of the financial support was intended as `incentives', which in this case took the form of Google Nexus~7 tablet computers as prizes for three of the courses. We studied six one-semester courses, each of which was a coherent block of 10 lectures given by a single lecturer, within a larger full-session course. The collection of courses is indicated in \prettyref{f:courses}. \begin{figure} \begin{tabular}{|rlcccc|} \hline \emph{Code} & \emph{Course} & $N$ & \emph{Sem} & \emph{Year} & \emph{Prize?}\\ \hline a1cos & Astronomy 1: Cosmology & 112 & 2nd & 1 & no \\ sats & Astronomy 2: Stars and their Spectra & 69 & 2nd & 2 & no \\ cos & Honours Astronomy: Cosmology & 58 & 1st & honours & no \\ e1lds1 & Exploring the Cosmos & 264 & 2nd & 1 & yes \\ a2sr & Astronomy 2: Special Relativity & 69 & 1st & 2 & yes \\ grg1 & Honours Astronomy: General Relativity & 38 & 1st & honours & yes \\ \hline \end{tabular} \caption{\label{f:courses}The courses studied. $N$ is the number of students in the class; `sem' is the semester in which the class was studied (of two); `year' is the year of study of the students in the class, where `honours' represents a mixture of third, fourth and fifth-year students; and `prize?' indicates whether one of the discussed incentives was available for students.} \end{figure} Courses `a1cos', `e1lds1' and `cos' were taught by NL, courses `a2sr' and `grg1' by NG, and `sats' by another colleague in astronomy\footnote{We are grateful to Matt Pitkin for his willingness to experiment here.}. There were five other courses this year where lecturers experimented with the system, and uploaded either a complete or partial set of lectures; in none were the results obviously different from the three `no-prize' courses listed above. These courses represent a broad range of students. `Exploring the Cosmos' is a large first-year course often chosen as a filler; while the students generally enjoy it and are challenged by it (sometimes more than they expected, under both headings), it is not an academic priority for many of its students. `Astronomy 1' and `Astronomy 2' are required courses for students aiming for astronomy degrees. The honours courses are both regarded as quite challenging, and are compulsory or optional for different subsets of the honours class; by this stage the honours students are highly motivated and in good command of their learning strategies. In the three `prize' courses, the class was introduced to the system via an in-lecture demonstration or pointer to the vimeo.com video mentioned above, and told that there was a prize -- the tablet computer -- to be awarded for the `best tagger'; after discussion with the class, it was decided that this prize would be awarded to the student whose tags had accumulated the most `likes' by the day of the course's final exam, in May. In the `cos', `a2sr' and `grg1' courses, the lecturer added a number of demonstration tags (7, 20, 27 respectively) to the first lecture. In the three `no-prize' courses, students were introduced to the system, and encouraged once or twice to use it.
None of the classes were prescribed any activities specifically involving the tagging system. \subsection{Results} \label{s:results} From examining the server logs, we discover that the RSS (podcast) feeds for the studied courses were all downloaded on numerous occasions; a single subscription would account for numerous downloads. Unfortunately, the server logging available in this version does not allow us to determine how many unique subscribers there were or what the RSS clients were, and all we can say at this point is that we suspect there was only a single subscriber to the `sats', `cos' and `e1lds1' feeds, or perhaps two (so at most a few percent of the respective classes), but that a substantial fraction of the students in the other courses did subscribe to the podcast feeds. However many students subscribed to the podcasts, only a very small number of students have gone on to add tags. In \prettyref{f:likes}, we list the number of students who added tags, the number of tags that they added, and the number of subsequent tag `likes'. \begin{figure} \begin{tabular}{|lclcc|} \hline \emph{Student} & \emph{Course} & \emph{Tags (in lectures 1--10)} & \emph{Total} & \emph{Likes} \\ \hline KM & e1lds1 & 4, 5, 5, 6, 5, 3, 4, 6, 0, 0 & 38 & 28, 27 \\ HP & e1lds1 & 0, 2, 0, 0, 0, 0, 0, 0, 0, 0 & 2 & 1, 1 \\ GA & a2sr & 0, 9, 0, 0, 28, 16, 0, 0, 0, 0 & 38 & \\ KE & a2sr & 0, 0, 20, 24, 0, 1, 25, 0, 25, 32 & 127 & \\ SL & a2sr & 0, 0, 0, 0, 0, 0, 0, 0, 0, 2 & 2 & \\ MG & grg1 & 0, 0, 0, 0, 0, 0, 0, 0, 21, 15 & 36 & 2\\ MS & grg1 & 0, 33, 2, 41, 43, 34, 0, 40, 25, 15 & 233 & \\ \hline \end{tabular} \caption{\label{f:likes}The students who used the `tagging' feature. `Student' is an (anonymised) label for the student; `course' is one of the courses mentioned in \prettyref{f:courses}; `tags' is the number of tags added by this student, per lecture, which gives a total number in column `total'; and `likes' is the number of times this student's tags were `liked' by one or two other students.} \end{figure} The three students who tagged extensively (KM, KE and MS) did so fairly consistently, and the two students who `liked' most added no tags themselves. The students appear to have added tags fairly promptly after the lectures, with the exception of KE's, MG's and MS's tags on their respective lectures 9 and 10, which were tagged respectively one, one, and four months after the corresponding lectures. There are no obvious patterns amongst the students who added tags (and certainly no patterns significant with so few students). There is perhaps a slight overrepresentation of non-native-English speakers -- this \emph{might} be a function of prior educational styles, or of language difficulties. Our original plan was to use the three first-semester courses to establish a baseline upon which to investigate the effect of other pedagogical interventions in semester two. The surprisingly low response, however, caused us to change our plans, and make the same low-intervention observations again to try to establish a more robust baseline, or to investigate whether there was any difference between the first and second semesters. \section{Discussion} \label{s:summary} As we discussed in \prettyref{s:background}, we were initially confident that a technically modest intervention would produce a significant effect. This confidence seems to have been misplaced: either the barriers are higher than we expected, or our intervention was more modest than is required.
We list some tentative explanations below. \paraheader{Interface -- general:} User interface design is always harder than it appears, and it may be that the interface is simply too hard for users to grasp readily. We think this is rather unlikely, however, since the interface has been considerably simplified from earlier versions of the system, and the informal feedback we have obtained from students has included suggestions for adjustments without giving any impression that there is a major usability problem. \paraheader{Interface -- interaction model:} The implicit interaction model, in the current design, is that a student will either review a lecture shortly after it is delivered, or else return to a lecture at revision time, and work through it adding tags. While this deliberate review technique is often suggested to students, we suspect rather few follow it in fact. It may be that this interaction model is more firmly locked in to the system's current interface than we had thought, so that rather few students are prompted to use it as part of their existing study habits. If so, dealing with it would require either a change in the underlying interaction model, or else the introduction of explicit exercises to force the students into interaction. Over the course of the year, an undergraduate Computing Science student worked on an alternative interaction model, in which students use a mobile device to add tags during a lecture, selecting from a pre-set repertoire of tags\footnote{We thank Melissa Campbell for her contributions to the project.}. These tags might represent key moments marking `I'm lost here' or `exam', and because they are added while the user is already interacting with the lecture audio (as live speech rather than as a recording), they might evade the model-related problems described above. Tags such as `I'm lost' are probably most comfortably kept at least semi-private; this requires a non-trivial server change, and so while this approach is promising, it was not possible to fully develop it in this prototype cycle. One way to align the system's model and the students' is, as above, to change the system. An alternative is to change the students: we have designs for specific exercises which (for example) require the students to make explicit the links between course handouts and lecture audio. This would force an increase in the number of tags, and is intended to create enough value in the set of tags that students interact with them thoroughly enough to cross the threshold to spontaneously adding more. \paraheader{Unfamiliarity:} We have supposed that students would be sufficiently familiar with the concept of tagging online content, through their experience of existing `Web 2.0' services, that tagging audio would require no introduction, little training and only mild encouragement. It is not obvious that this is false, but until we have ruled it out, we must consider the possibility that we simply did not introduce the system clearly enough, so that the students failed to understand what to do. If so, this would be a depressingly simple explanation for the lack of engagement. \paraheader{Incentive:} The incentive we used on this occasion was a deliberately generous prize.
Although the nature of an incentive can sometimes have paradoxical effects on the response, the results above indicate that the courses where there was tagging activity are precisely the courses where a prize was offered, so the prize does seem to have had its intended effect (albeit less pronounced than we expected). Overall, this project was a technical success but so far puzzlingly disappointing in its outcomes. We initially believed we had rather small barriers to overcome, namely the barriers dividing students' current practice and interest from the benefits latent in an easily-obtainable audio resource. We expected that we would see rather natural use of the tagging facilities in the various student populations, so that we could promptly go on to investigate how this use was changed by pedagogically motivated exercises. The results of our investigation suggest one or more of the following: (i) that the barriers are higher than we have described in \prettyref{s:background}, or (ii) that we have a poor model of how audio tagging fits in to students' current practice, or else (iii, which is not a completely separate issue) that the `natural' baseline level and pattern of tagging (that is, without forcing from lecturers' exercises) is significantly lower than the idea of the `digital native' student might suggest. In the coming session we plan to repeat the experiment with a modified interface and a clearer notion of the place of lecturer-driven exercises, in order to better investigate the shape of the barriers between students and the latent value of lecture audio recordings.
\section{Conclusion} \label{sec:conclusion} In this paper we introduced TensorFlow Ranking---a scalable learning-to-rank library in TensorFlow. The library is highly configurable and has easy-to-use APIs for scoring mechanisms, loss functions, and evaluation metrics. Unlike the existing learning-to-rank open source packages, which are designed for small datasets, TensorFlow Ranking can be used to solve real-world, large-scale ranking problems with hundreds of millions of training examples, and scales well to large clusters. TensorFlow Ranking is already deployed in several production systems at Google, and in this paper we empirically demonstrate its effectiveness for Gmail search and Quick Access in Google Drive~\cite{tata2017quick}. Our experiments show that TensorFlow Ranking can (a) leverage listwise loss functions, (b) effectively incorporate sparse features through embeddings, and (c) scale up without a significant drop in metrics. TensorFlow Ranking is available to the open source community, and we hope that it facilitates further academic research and industrial applications. \section{Use Cases} \label{sec:eval} TensorFlow Ranking is already deployed in several production systems at Google. In this section, we demonstrate the effectiveness of our library for two real-world ranking scenarios: \emph{Gmail search}~\cite{Wang+al:2016,Zamani+al:2017} and \emph{document recommendation in Google Drive}~\cite{tata2017quick}. In both cases the model is trained on quantities of click data that are beyond the capabilities of existing open source learning-to-rank packages, e.g., RankLib. In addition, in the Gmail setting, our model contains sparse textual features that cannot be naturally handled by the existing learning-to-rank packages. \vspace{-3pt} \subsection{Gmail Search} In one set of experiments, we evaluate several ranking models trained on search logs from Gmail. In this service, when a user types a query into the search box, five results are shown and user clicks (if any) are recorded and later used as relevance labels. To preserve user privacy, we remove personal information and anonymize data using $k$-anonymization. We obtain a set of features that consists of both dense and sparse features. Sparse features include word- and character-level n-grams derived from queries and email subjects. The vocabulary of n-grams is pruned to retain only n-grams that occur across more than $k$ users. This is done to preserve user privacy, as well as to promote a common vocabulary for learning shared representations across users. In total, we collect about 250M queries and isolate 10\% of those to construct an evaluation set. Losses and metrics are weighted using Inverse Propensity Weighting~\cite{Wang+al:2016}, computed to counter position bias. \begin{table} \caption{Model performance with various loss functions. $\Delta M$ denotes \% improvement in metric $M$ over the \emph{Sigmoid Cross Entropy} baseline.
Best performance per column is in bold.} \vspace{-10pt} \label{tab:gmail} \centering \begin{tabular}{@{}llll@{}} \toprule (a) Gmail Search & $\Delta$MRR & $\Delta$ARP & $\Delta$NDCG \\ \midrule Sigmoid Cross Entropy (Pointwise) & -- & -- & -- \\ Logistic Loss (Pairwise) & +1.52 & +1.64 & +1.00 \\ Softmax Cross Entropy (Listwise) & {\bf +1.80} & {\bf +1.88} & {\bf +1.57} \\ \midrule \midrule (b) Quick Access & $\Delta$MRR & $\Delta$ARP & $\Delta$NDCG \\ \midrule Sigmoid Cross Entropy (Pointwise) & -- & -- & -- \\ Logistic Loss (Pairwise) & +0.70 & +1.86 & +0.35 \\ Softmax Cross Entropy (Listwise) & {\bf +1.08} & {\bf +1.88} & {\bf +1.05} \\ \bottomrule \end{tabular} \end{table} \vspace{-3pt} \subsection{Document Recommendation in Drive} Quick Access in Google Drive~\cite{tata2017quick} is a zero-state recommendation engine that surfaces documents currently relevant to the user when she visits the Drive home screen. We evaluate several ranking models trained on user click data over these recommended results. The set of features consists of mostly dense features, as described in Tata et al.~\cite{tata2017quick}. In total we collected about 30M instances and set aside 10\% of the set for evaluation. \subsection{Model Effectiveness} \subsubsection{Setup and Evaluation} We consider a simple 3-layer feed-forward neural network with ReLU \cite{nair2010rectified} non-linear activation units and dropout regularization \cite{srivastava2014dropout}. We train models using pointwise, pairwise, and listwise losses defined in Section \ref{sec:losses}, and use \texttt{Adagrad} \cite{duchi2011adaptive} to optimize the objective. We set the learning rate to 0.1 for Quick Access, and 0.3 for Gmail Search. The models are evaluated using the metrics defined in Section \ref{sec:metrics}. Due to the proprietary nature of the models, we only report relative improvements with respect to a given baseline. Due to the large size of the evaluation datasets, all the reported improvements are statistically significant. \subsubsection{Effect of listwise losses} Table~\ref{tab:gmail} summarizes the impact of different loss functions as measured by various ranking metrics for the Gmail and Drive experiments, respectively. We observe that a listwise loss performs better than a pairwise loss, which is in turn better than a pointwise loss. This observation confirms the importance of listwise losses for ranking problems over pointwise and pairwise losses, both in search and recommendation settings. \subsubsection{Incorporating sparse features} Unlike other approaches to learning to rank, such as linear models, SVMs, or GBDTs, neural networks can effectively incorporate sparse features such as query or document text. Neural networks handle sparse features by using embedding layers, which map each sparse value to a dense representation. These embedding matrices can be jointly trained along with the neural network parameters, allowing us to learn effective dense representations for sparse features in the context of a given task (e.g., search or recommendation). As prior work shows, these representations can substantially improve model effectiveness on large-scale collections \cite{Xiong+al:2017}. To demonstrate this point, Table~\ref{tab:sparse_features} reports the relative improvements from using sparse features in addition to dense features on Gmail Search. We use an embedding layer of size 20 for each sparse feature. We note that adding sparse features significantly boosts ranking quality across metrics and loss functions.
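As a minimal illustration of this embedding-lookup idea (our sketch, not the library's internal code; the vocabulary size and id values below are hypothetical, and only the embedding dimension of 20 matches the experiments), a bag of sparse n-gram ids can be reduced to a fixed-size dense vector as follows:
\begin{verbatim}
# Minimal sketch of an embedding lookup for a bag of sparse n-gram ids.
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE, EMBED_DIM = 10_000, 20
# In practice this matrix is a trainable parameter, learned jointly with
# the rest of the network; here it is simply randomly initialized.
embedding = rng.normal(scale=0.1, size=(VOCAB_SIZE, EMBED_DIM))

def embed_ngram_bag(ngram_ids):
    """Map a variable-length bag of n-gram ids to one dense vector."""
    return embedding[ngram_ids].mean(axis=0)

query_ngrams = np.array([17, 4923, 88])      # hypothetical hashed ids
dense_input = embed_ngram_bag(query_ngrams)  # shape (20,), fed to the net
print(dense_input.shape)
\end{verbatim}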
This confirms the importance of sparse textual features in large-scale applications, and the effectiveness of TensorFlow Ranking in employing these features. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth,keepaspectratio]{images/scale_speed} \caption{Normalized training speed for Gmail Search as a function of the number of workers.} \label{fig:scale-speed} \vspace{-8pt} \end{figure} \begin{table}[b] \vspace{-3mm} \caption{Model performance with dense and sparse textual features. $\Delta M$ denotes \% improvement in metric $M$ over the corresponding baseline, in which only dense features are used.} \vspace{-10pt} \label{tab:sparse_features} \centering \begin{tabular}{@{}llll@{}} \toprule & $\Delta$MRR & $\Delta$ARP & $\Delta$NDCG \\ \midrule Sigmoid Cross Entropy (Pointwise) & +6.06 & +6.87 & +3.92 \\ Logistic Loss (Pairwise) & +5.40 & +6.25 & +3.51 \\ Softmax Cross Entropy (Listwise) & +5.69 & +6.25 & +3.70 \\ \bottomrule \end{tabular} \end{table} \subsection{Distributed Training} Distributed training is crucial for applications of learning-to-rank to large-scale datasets. Models built using the \texttt{Estimator} class allow for easy switching between local and distributed training, without changing the core functionality of the model. This is achieved via the \texttt{Experiment} class~\cite{cheng2017tensorflow} given a distribution strategy. Distributed training using TensorFlow consists of a number of worker tasks and parameter servers. We investigate the scalability of the model built for Gmail Search, the larger of our two datasets. We examine the effect of increasing the number of workers on the training speed. For the distributed strategy, we use between-graph replication, where each worker replicates the graph on a different subset of the data, and asynchronous training, where gradients from different workers are collected asynchronously to update the model parameters. For each of the following experiments, the number of workers ranges between 1 and 200, while the number of training steps is fixed at 20 million. For robustness, each configuration is run 5 times, and the $95\%$ confidence intervals are plotted. \subsubsection{Effect of scaling on training speed} We look at the impact of increasing the number of workers on the training speed in Figure~\ref{fig:scale-speed}. We measure training speed by the number of training steps executed per second. Due to the proprietary nature of the data, we report training speed normalized by the average training speed for one worker. Training speed scales up linearly in the initial phase; however, when the pool of workers becomes too large, two confounding factors arise. The first is the communication overhead for gradient updates. The second is that more workers sit idle waiting for data to become available. Therefore, the I/O costs begin to dominate, and the training speed stagnates and even decreases, as seen in the $20+$ worker region of Figure~\ref{fig:scale-speed}. \subsubsection{Effect of scaling on metrics} In Figure~\ref{fig:scale-accuracy}, we examine the impact of increasing the number of workers on weighted MRR. Due to the proprietary nature of the data, we report the metric value normalized by the average metric for one worker. We observe that scaling does not generally have a significant impact on MRR, with two exceptions: (a) using a single worker, and (b) using a large worker pool. We believe that the former requires more training cycles to achieve a comparable MRR.
In the latter setting, gradient updates aggregated from too many workers become inaccurate, sending the model in a direction that is not well-aligned with the true gradient. However, in both cases, the overall effect on MRR is very small (roughly $0.03\%$), demonstrating the robustness of scaling with respect to model performance. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth,keepaspectratio]{images/scale_accuracy} \caption{Normalized weighted Mean Reciprocal Rank for Gmail Search as a function of the number of workers.} \label{fig:scale-accuracy} \vspace{-8pt} \end{figure} \section{Introduction} \label{sec:intro} With the high potential of deep learning for real-world data-intensive applications, a number of open source packages have emerged in recent years and are under active development, including TensorFlow~\cite{abadi2016tensorflow}, PyTorch~\cite{paszke2017automatic}, Caffe~\cite{jia2014caffe}, and MXNet~\cite{chen2015mxnet}. Supervised learning is one of the main use cases of these packages. However, compared with their comprehensive support for classification and regression, there is a paucity of support for ranking problems. A ranking problem is defined as deriving an ordering over a list of items that maximizes the utility of the entire list. It is widely applicable in several domains, such as Information Retrieval and Natural Language Processing. Some important practical applications include web search, recommender systems, machine translation, document summarization, and question answering~\cite{li2011learning}. In general, a ranking problem is different from classification or regression tasks. While the goal of classification or regression is to predict a label or a value for each individual item as accurately as possible, the goal of ranking is to optimally sort the entire item list such that, for some notion of relevance, the items of highest relevance are presented first. To be precise, in a ranking problem, we are more concerned with the relative order of the relevance of items than with their absolute magnitudes. Because ranking is a fundamentally different problem, classification and regression metrics and methodologies do not transfer effectively to the ranking domain. To fill this void, a number of metrics and a class of methodologies that are inspired by the challenges in ranking have been proposed in the literature. For example, widely-utilized metrics such as Normalized Discounted Cumulative Gain (NDCG)~\cite{jarvelin2002cumulated}, Expected Reciprocal Rank (ERR)~\cite{Chapelle+al:2009}, Mean Reciprocal Rank (MRR)~\cite{craswell2009mean}, Mean Average Precision (MAP), and Average Relevance Position (ARP)~\cite{zhu2004recall} are designed to emphasize the items that are ranked higher in the list. Similarly, a class of supervised machine learning techniques that attempt to solve ranking problems---referred to as learning-to-rank~\cite{li2011learning}---has emerged in recent years. Broadly, the goal of learning-to-rank is to learn from labeled data a parameterized function that maps feature vectors to real-valued scores. During inference, this scoring function is used to sort and rank items. Most learning-to-rank methods differ primarily in how they define surrogate loss functions over ranked lists of items during training to optimize a non-differentiable ranking metric, and by that measure fall into the \emph{pointwise}, \emph{pairwise}, or \emph{listwise} class of algorithms.
Pointwise methods~\cite{Fuhr:1989:OPR:65943.65944,Chu:2005:PLG:1102351.1102369,Gey:1994:IPR:188490.188560} cast ranking as a classification or regression problem and as such attempt to minimize the discrepancy between individual ground-truth labels and the absolute magnitude of relevance scores produced by the learning-to-rank model. On the other hand, \emph{pairwise}~\cite{burges2005learning,Joachims:2002} or \emph{listwise}~\cite{cao2007learning,xia2008listmle,wang2018lambdaloss} methods either model the pairwise preferences or define a loss over the entire ranked list. Therefore, pairwise and listwise methods are more closely aligned with the ranking task~\cite{li2011learning}. There are other factors that distinguish ranking from other machine learning paradigms. As an example, learning-to-rank must contend with the inherent biases~\cite{Joachims+al:2005,Yue:2010:BPB:1772690.1772793} that exist in labeled data collected through implicit feedback (e.g., click logs). Recent work on unbiased learning-to-rank~\cite{ai2018unbiased,Wang+al:2018, Joachims:WSDM17} explores ways to counter position bias~\cite{Joachims+al:2005} in training data and produce a consistent and unbiased ranking function. These techniques work well with pairwise or listwise losses, but not with pointwise losses~\cite{Wang+al:2018}. From the discussion above, it is clear that a library that supports the learning-to-rank problem has its own unique set of requirements and must offer functionality that is specific to ranking. Indeed, a number of open source packages such as RankLib\footnote{Available at: \url{https://sourceforge.net/p/lemur/wiki/RankLib/}} and LightGBM~\cite{ke2017lightgbm} exist to address the ranking challenge. Existing learning-to-rank libraries, however, have a number of important drawbacks. First, they were developed for small data sets (thousands of queries) and do not scale well to massive click logs (hundreds of millions of queries) that are common in industrial applications. Second, they have very limited support for sparse features and can only handle categorical features with a small vocabulary. Crucially, extensive feature engineering is required to handle textual features. In contrast, deep learning packages like TensorFlow can effectively handle sparse features through embeddings~\cite{mikolov2013word2vec}. Finally, existing learning-to-rank libraries do not support the recent advances in unbiased learning-to-rank. To address this gap, we present our experiences in building a scalable, comprehensive, and configurable industry-grade learning-to-rank library in TensorFlow. Our main contributions are: \begin{itemize}[leftmargin=*] \item We propose an open-source library for training large-scale learning-to-rank models using deep learning in TensorFlow. \item The library is flexible and highly configurable: it provides an easy-to-use API to support different scoring mechanisms, loss functions, example weights, and evaluation metrics. \item The library provides support for unbiased learning-to-rank by incorporating inverse propensity weights in losses and metrics. \item We demonstrate the effectiveness of our library through experiments on two large-scale search and recommendation applications, especially when employing listwise losses and sparse textual features. \item We demonstrate the robustness of our library in a large-scale distributed training setting. \end{itemize} Our current implementation of the TensorFlow Ranking library is by no means exhaustive.
We envision that this library will provide a convenient open platform for hosting and advancing state-of-the-art ranking models based on deep learning techniques, and thus facilitate both academic research and industrial applications. The remainder of this paper is organized as follows. Section~\ref{sec:ltr} formulates the problem of learning to rank and provides an overview of existing approaches. In Section~\ref{sec:overview}, we present an overview of the proposed learning-to-rank platform, and in Section~\ref{sec:components} we present the implementation details of our proposed library. Section~\ref{sec:eval} showcases uses of the library in a production environment and demonstrates experimental results. Finally, Section~\ref{sec:conclusion} concludes this paper. \section{Implementation} \label{sec:lib} Our library is based on TensorFlow. Similar to design patterns in TensorFlow, it provides functions and closures for users to construct models and also allows users to build custom functions if needed. More specifically, we use the TensorFlow Estimator framework \cite{cheng2017tensorflow} to build ranking models; this framework supports both local and distributed training. The \texttt{tf.Estimator} framework has two main components: (1) an \texttt{input\_fn}, which reads in data from persistent storage and outputs tensors for features and labels, and (2) a \texttt{model\_fn}, which processes input features and labels and returns predictions, loss, evaluation metrics, and training ops, depending on the mode (\texttt{TRAIN}, \texttt{EVAL}, \texttt{PREDICT}). We describe how we implement these two components for learning-to-rank. \subsection{Input Format} A ranking problem usually has a context (e.g., a query) and a list of examples. For the sake of simplicity, we assume that all the examples have the same set of features. These features can be sparse or dense. For sparse features, we use embeddings to densify them. Our library allows users to specify features (e.g., dense or sparse) using a TensorFlow library such as \texttt{tf.feature\_column}. We allow closures for customized feature transformations and provide utility functions for these transformations. The input is represented as a 2-D tensor for each context feature and a 3-D tensor for each per-item (candidate) feature. Each input feature has a corresponding Feature Column (\texttt{tf.feature\_column}), which denotes whether the feature is dense or sparse, and the type of elements in that tensor. For sparse features, it also specifies a vocabulary and an embedding lookup to convert sparse features to dense features. The embedding layer can be shared between features with the same vocabulary. \\ \textbf{Feature Transformations}. Transform functions are transformations applied to the output of \texttt{input\_fn}. The neural network in the scoring function can only take dense tensors as input. The library provides standard functions to transform raw tensors to dense tensors (\texttt{ranking.make\_listwise\_transform\_fn}) and to apply shared embeddings over sparse features with the same vocabulary. The user can also define a custom transform function, via \texttt{tfr.feature.make\_transform\_fn}, for any additional feature transformations.
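To make these shape conventions concrete, the following minimal sketch builds dummy context and per-item tensors with the expected ranks; the feature names and sizes used here are hypothetical and chosen purely for illustration. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
import tensorflow as tf

batch_size, list_size = 32, 10  # hypothetical sizes
# Context features: one 2-D tensor per feature,
# shaped [batch_size, feature_size].
context_features = {
    "query_length": tf.ones([batch_size, 1])}
# Per-item features: one 3-D tensor per feature,
# shaped [batch_size, list_size, feature_size].
example_features = {
    "document_score": tf.ones([batch_size, list_size, 1])}
\end{lstlisting} \end{minipage}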
\subsection{Model Building} \begin{figure*} \centering \fbox{ \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\linewidth,keepaspectratio]{images/tf_ranking_architecture} \end{minipage} } \fbox{ \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\linewidth, keepaspectratio]{images/pseudocode} \end{minipage} } \caption{Building a \texttt{model\_fn} using the TensorFlow Ranking library.} \label{fig:model_fn} \end{figure*} The overall flow of building a \texttt{model\_fn} is shown in Figure~\ref{fig:model_fn}. There are two important components: \begin{itemize} \item \textbf{Scoring Function}. We focus on single-item scoring functions and multi-item scoring functions~\cite{wang2018groupwise} in this paper. A single-item scoring function takes all context features and all features for a single example as input and outputs a score, as defined in Equation~\ref{eq:pointwise_scoring}. A multi-item scoring function extends this to a group of examples. Conceptually, we slice the tensor for each example list into a number of tensors with shape \texttt{[batch\_size, group\_size, feature\_size]}, where $\texttt{group\_size} = 1$ for single-item scoring functions. These group features are combined with context features, and passed to the scoring function to generate scores. After the scoring phase, a voting layer is applied to obtain a tensor with shape \texttt{[batch\_size, list\_size]} for scores, as shown in Figure~\ref{fig:model_fn} for a mini-batch. The scoring function is a user-specified closure which is passed to the ranking \texttt{model\_fn} builder. \item \textbf{Ranking Head}. The ranking head structure computes ranking metrics and ranking losses, given scores, labels, and optionally example weights. In our library, both the score and label tensors have the same shape \texttt{[batch\_size, list\_size]}, representing \texttt{batch\_size} example lists. The ranking head also incorporates example weights, described in Section~\ref{sec:item_weights}, which can be per-example with shape \texttt{[batch\_size, list\_size]} or for the entire list with shape \texttt{[batch\_size]}. The ranking head is available in the library via the factory method \texttt{tfr.head.create\_ranking\_head}. \end{itemize} As shown in Figure~\ref{fig:model_fn}, our library provides factory methods to create metrics and losses. Furthermore, our APIs allow users to specify loss functions when creating a ranking head. This enables the user to switch between different loss functions or combine multiple loss functions easily. More importantly, we provide a builder function that takes a scoring function and a ranking head and returns a \texttt{model\_fn} to construct a \texttt{tf.estimator.Estimator}. When \texttt{mode = PREDICT} in \texttt{model\_fn}, the learned scoring function is exported for serving. \section{Learning-to-Rank} \label{sec:ltr} In this section, we provide a high-level overview of learning-to-rank techniques. We begin by presenting a formal definition of learning-to-rank and setting up notation. \subsection{Setup} Let $\mathcal{X}$ denote the universe of items and let $\bm{x} \in \mathcal{X}^n$ represent a list of $n$ items and $x_i \in \bm{x}$ an item in that list. Further denote the universe of all permutations of size $n$ by $\Pi^n$, where $\pi \in \Pi^n$ is a bijection from $[1:n]$ to itself.
$\pi$ may be understood as a total ranking of items in a list where $\pi(i)$ yields the rank according to $\pi$ of the $i^{\text{th}}$ item in the list and $\pi^{-1}(r)$ yields the index of the item at rank $r$, and we have that $\pi^{-1}(\pi(i)) = i$. A ranking function $f: \mathcal{X}^n \rightarrow \Pi^n$ is a function that, given a list of $n$ items, produces a permutation or a ranking of that list. The goal of learning-to-rank, in broad terms, is to learn a ranking function $f$ from training data such that items as ordered by $f$ yield maximal utility. Let us parse this statement and discuss each component in more depth. \subsection{Training Data} We begin with a description of training data. Learning-to-rank, which is an instance of supervised learning, assumes the existence of a ground-truth permutation $\pi^\ast$ for a given list of items $\bm{x}$. In most real-world settings, ground-truth is provided in its more general form: a \emph{set} of permutations or, in other words, a \emph{partial} ranking. In a partial ranking $X_1 \succ X_2 \succ ... \succ X_k$, where the $X_i$s are $k$ disjoint subsets of elements of $\bm{x} \in \mathcal{X}^n$ ($k \leq n$), items in $X_i$ are preferred over those in $X_{j>i}$, but within each $X_i$ items may be permuted freely. When $k=n$, a partial ranking reduces to a total ranking. As a concrete example, consider the task of \emph{ad hoc} retrieval where, given a (textual) query, the ranking algorithm retrieves a relevant list of documents from a large corpus. When constructing a training dataset, one may recruit human experts to examine an often very small subset of candidate documents for a given query and grade the documents' relevance with respect to that query on some scale (e.g., 0 for ``not examined'' or ``not relevant'' to 5 for ``highly relevant''). A relevance grade for document $x_i$ is considered its ``label'' $y_i$. Similarly, in training datasets that are constructed from implicit user feedback such as click logs, documents are either relevant and clicked ($y=1$) or not ($y=0$). In either case, a list of labels $\bm{y}$ induces a partial ranking of documents. In order to simplify notation throughout the remainder of this paper and without loss of generality, we assume the existence of $\pi^\ast$, a correct total ranking. $\pi^\ast$ can be understood as a ranking induced by a list of labels $\bm{y}\in\mathbb{R}^n$ for $\bm{x}\in\mathcal{X}^n$. As such, our training data set of $m$ examples can be defined as $S^m = \{(\bm{x}, \pi^\ast)\, |\, \bm{x}\in\mathcal{X}^n, \pi^\ast\in\Pi^n\}$ or equivalently $S^m = \{(\bm{x}, \bm{y}) \in \mathcal{X}^n\times\mathbb{R}^n\}$. Returning to the case of \emph{ad hoc} retrieval, it is worth noting that each item $x_i \in \bm{x}$ is in fact a query-document pair $(q, d_i)$: it is generally the case that the pair $(q, d_i)$ is transformed into a feature vector $x_i$. \subsection{Scoring Function} \label{sec:scoring-function} Directly finding a permutation $\pi$ is difficult, as the space of all possible permutations is combinatorially large. In practice, a score-and-sort approach is used instead. Let $h : \mathcal{X}^n \rightarrow \mathbb{R}^n$ be a scoring function that maps a list of items $\bm{x}$ to a list of scores $\hat{\bm{y}}$. Let $h(.)|_k$ denote the $k^{\text{th}}$ dimension of $h(.)$. As discussed earlier, $h$ induces a permutation $\pi$ such that $h(\bm{x})|_{\pi^{-1}(r)}$ is monotonically decreasing for increasing ranks $r$.
In its simplest form, the scoring function is univariate and can be decomposed into a per-item scoring function as shown in Equation~\ref{eq:pointwise_scoring}, where $g: \mathcal{X} \rightarrow \mathbb{R}$ maps a feature vector to a real-valued score. \begin{equation} \centering h(\bm{x}) = [g(x_i),\, \forall\, 1\leq i \leq n] = [g(x_1),\, g(x_2),\, ...,\, g(x_n)]. \label{eq:pointwise_scoring} \end{equation} The scoring function $h$ is typically parameterized by a set of parameters $\theta$ and can be written as $h(.; \theta)$. Many parameterization options have been studied in the learning-to-rank literature including linear functions~\cite{joachims2006training}, boosted weak learners~\cite{Jun+Hang:2007}, gradient-boosted trees~\cite{friedman2001greedy, burges2010ranknet}, support vector machines~\cite{joachims2006training, Joachims:WSDM17}, and neural networks~\cite{burges2005learning}. Our library offers deep neural networks as the basis to construct a scoring function. This framework facilitates more sophisticated scoring functions such as multivariate functions~\cite{ai2018groupwise}, where the scores of a group of items are computed jointly. Through a flexible API, the library also enables development and integration of arbitrary scoring functions into a ranking model. \subsection{Utility and Ranking Metrics} \label{sec:metrics} We now turn to the notion of utility. As noted in Section~\ref{sec:intro}, the utility of an ordered list of items is often measured by a number of standard ranking-specific metrics. What makes ranking metrics unique and suitable for this task is that, in ranking, it is often desirable to have fewer errors at higher ranked positions; this principle is reflected in many ranking metrics: \begin{equation} \mathit{RR}(\pi, \bm{y}) = \frac{1}{\min\{j \,|\, y_{\pi^{-1}(j)} > 0\}}, \end{equation} \begin{equation} \mathit{RP}(\pi, \bm{y}) = \frac{\sum_{j=1}^n y_j \pi(j)}{\sum_{j=1}^n y_j}, \end{equation} \begin{equation} \mathit{DCG}(\pi, \bm{y}) =\sum_{j=1}^n \frac{2^{y_j}-1}{\log_2(1 + \pi(j))}, \end{equation} \begin{equation} \mathit{NDCG}(\pi, \bm{y}) = \frac{\mathit{DCG}(\pi, \bm{y})}{\mathit{DCG}(\pi^\ast, \bm{y})}, \end{equation} where $y_i \in \bm{y}$ are ground-truth labels that induce $\pi^\ast$, and $\pi(i)$ is the rank of the $i^{\text{th}}$ item in $\bm{x}$. $\mathit{RR}$ is the reciprocal rank of the first relevant item. $\mathit{RP}$ is the average position of items weighted by their relevance values~\cite{zhu2004recall}. DCG is the Discounted Cumulative Gain~\cite{jarvelin2002cumulated}, and NDCG is DCG normalized by the maximum DCG obtained from the ideal ranked list $\pi^\ast$. Note that, given $m$ evaluation samples $\{(\pi_k, \bm{y}_k)\}_{k=1}^m$, the mean of the above metrics is calculated and reported instead. For example, the mean reciprocal rank (MRR) is defined as: $$\mathit{MRR} = \frac{1}{m} \displaystyle\sum_{k=1}^m \mathit{RR}(\pi_k, \bm{y}_k).$$ Our library supports commonly used ranking metrics and enables easy development and addition of arbitrary metrics. \subsection{Loss Functions} \label{sec:losses} Learning-to-rank seeks to maximize a utility or, equivalently, minimize a cost or loss function. Assuming there exists a loss function $\ell(.)$, the objective of learning-to-rank is to find a ranking function $f^\ast$ that minimizes the empirical loss over training samples: \begin{equation} \displaystyle{f^\ast = \argmin_{f: \mathcal{X}^n \rightarrow \Pi^n} \dfrac{1}{m} \sum_{(\bm{x}, \pi^\ast) \in S^m} {\ell(\pi^\ast, f(\bm{x}))} }.
\end{equation} Replacing $f$ with a scoring function $h$, as is often the case, yields the following optimization problem: \begin{equation} \displaystyle{h^\ast = \argmin_{h: \mathcal{X}^n \rightarrow \mathbb{R}^n} \dfrac{1}{m} \sum_{(\bm{x}, \bm{y}) \in S^m} {\hat{\ell}(\bm{y}, h(\bm{x}))} }, \end{equation} where $\hat{\ell}(.)$ is a loss function equivalent to $\ell(.)$ that acts on scores instead of permutations induced by scores. This setup naturally prefers loss functions that are differentiable. Most ranking metrics, however, are either discontinuous or flat everywhere due to the use of the sort operation and as such cannot be directly optimized by learning-to-rank methods. With a few notable exceptions~\cite{metzler2005directMaximization,Jun+Hang:2007}, most learning-to-rank approaches therefore define and optimize a differentiable surrogate loss instead. They do so by, among other techniques, creating a smooth variant of popular ranking metrics~\cite{Qin:2010:GAF:1842549.1842572,Taylor+al:2008}; deriving tight upper-bounds on ranking metrics~\cite{wang2018lambdaloss}; bypassing the requirement that a loss function be defined altogether~\cite{burges2010ranknet}; or, otherwise, designing losses that are loosely related to ranking metrics~\cite{xia2008listmle,cao2007learning}. Our library supports a number of surrogate loss functions. As an example of a \textbf{pointwise} loss, the sigmoid cross-entropy for binary relevance labels $y_j \in \{0, 1\}$ is computed as follows: \begin{equation} \hat{\ell}(\bm{y}, \hat{\bm{y}}) = - \displaystyle\sum_{j=1}^n \big[ y_j \log(p_j) + (1-y_j) \log(1-p_j) \big], \end{equation} where $p_j = \frac{1}{1 + \exp(-\hat{y}_j)}$ and $\hat{\bm{y}} \triangleq h(\bm{x})$ are the scores computed by the scoring function $h$. As an example of a \textbf{pairwise} loss in our library, the pairwise logistic loss is defined as: \begin{equation} \hat{\ell}(\bm{y}, \hat{\bm{y}}) = \displaystyle\sum_{j=1}^n \displaystyle\sum_{k=1}^n \mathbb{I}(y_j > y_k) \log(1 + \exp(\hat{y}_k - \hat{y}_j)), \label{eq:logistic_loss} \end{equation} where $\mathbb{I}(\cdot)$ is the indicator function. Finally, as an example of a \textbf{listwise} loss~\cite{NIPS2009_3708}, our library provides implementations of Softmax Cross-Entropy, ListNet~\cite{cao2007learning}, and ListMLE~\cite{xia2008listmle}, among others. For example, the Softmax Cross-Entropy loss is defined as follows: \begin{equation} \hat{\ell}(\bm{y}, \hat{\bm{y}}) = - \displaystyle\sum_{j=1}^n y_j \log\left(\frac{\exp(\hat{y}_j)}{\sum_{k=1}^n \exp(\hat{y}_k)}\right). \end{equation} \subsection{Item Weighting} \label{sec:item_weights} Finally, we conclude this section with a note on bias in learning-to-rank. As discussed in Section~\ref{sec:intro}, a number of studies have shown that click logs exhibit various biases including position bias~\cite{Joachims+al:2005}. In short, users are less likely to examine and click items at larger rank positions. Ignoring this bias when training a learning-to-rank model or when evaluating ranked lists may lead to a model with less generalization capacity and inaccurate quality measurements. Unbiased learning-to-rank~\cite{Joachims:WSDM17, Wang+al:2018} looks at handling such biases in relevance labels. One proposed method~\cite{Wang+al:2016, Wang+al:2018} is to compute Inverse Propensity Weights (IPW) for each position in the ranked list. By incorporating these weights in the training process (usually by way of re-weighting items during loss computation), one may produce a better ranking function.
Similarly, IPW-weighted variants of evaluation metrics attempt to counter such biases during evaluation. Our library supports the incorporation of such weights into the training and evaluation processes. \section{Acknowledgements} We thank the members of the TensorFlow team for their advice and support: Alexandre Passos, Mustafa Ispir, Karmel Allison, Martin Wicke, Clemens Mewald and others. We extend our special thanks to our collaborators, interns and early adopters: Suming Chen, Zhen Qin, Chirag Sethi, Maryam Karimzadehgan, Makoto Uchida, Yan Zhu, Qingyao Ai, Brandon Tran, Donald Metzler, Mike Colagrosso, Patrick McGregor and many others at Google who helped in evaluating and testing the early versions of TF-Ranking. \balance \bibliographystyle{ACM-Reference-Format} \section{Platform Overview} \label{sec:overview} Popular use cases of learning-to-rank, such as search or recommender systems~\cite{tata2017quick}, pose several challenges. Models are generally trained over vast amounts of user data, so efficiency and scalability are critical. Features may comprise dense, categorical, or sparse types, and are often missing for some data points. These applications also require fast inference for real-time serving. It is against the backdrop of these challenges that we believe neural networks, as a class of machine learning models, and TensorFlow~\cite{abadi2016tensorflow}, as a machine learning framework, are suitable for practical learning-to-rank problems. Take efficiency and scalability as an example. The availability of vast amounts of training data, along with an increase in computational power and thereby in our ability to train deeper neural networks with millions of parameters in a scalable manner, has led to rapid adoption of neural networks for a variety of applications. In fact, deep neural networks have gained immense popularity, with applications in Natural Language Processing, Computer Vision, Speech Signal Processing, and many other areas~\cite{goodfellow2016deep}. Recently, neural networks have been shown to be effective in applications of Information Retrieval as well~\cite{mitra2017neural}. Neural networks can also process heterogeneous features more naturally. Evidence~\cite{bengio2013representation} suggests that neural networks learn effective representations of categorical and sparse features. These include words or characters in Natural Language Processing~\cite{mikolov2013word2vec}, phonemes in Speech Processing~\cite{silfverberg2018sound}, or raw pixels in Computer Vision~\cite{lecun1995convolutional}. Such ``representation learning'' is usually achieved by way of converting raw, unstructured data into a dense real-valued vector, which, in turn, is treated as a vector of implicit features for subsequent neural network layers. For example, discrete elements such as words or characters from a finite vocabulary are transformed using an embedding matrix---also referred to as an embedding layer or embeddings~\cite{bengio2013representation}---which maps every element of the vocabulary to a learned, dense representation. This particular representation learning is useful for learning-to-rank over documents, web pages, or other textual data. Given these properties, neural networks are excellent candidates for modeling a score-and-sort approach to learning-to-rank. In particular, the scoring function $h(.)$ in Equation~\ref{eq:pointwise_scoring} can be parameterized by a neural network.
In such a setup, feature representations can be jointly learned with the parameters of $h(.;\, \theta)$ while minimizing the objective $\hat{\ell}$ averaged over the training data. This is the general setup we adopt and implement in the TensorFlow Ranking library. TensorFlow Ranking is built on top of TensorFlow, a popular open-source library for large scale training, evaluation and serving of machine learning and deep learning models. TensorFlow supports high-performance tensor (multi-dimensional vector) manipulation and computation via what are referred to as ``computational graphs''. A computational graph expresses the logic of a sequence of tensor manipulations, with each node in the graph corresponding to a single tensor operation (\textit{op}). For example, a node in the computation graph may multiply an ``input'' tensor with a ``weight'' tensor, a subsequent node may add a ``bias'' tensor to that product, and a final node may pass the resultant tensor through the sigmoid function. The concept of a computation graph with tensor operations simplifies the implementation of the \textit{backpropagation}~\cite{chauvin2013backpropagation} algorithm for training neural networks. In a forward pass, the values at each node are computed by composing a sequence of tensor operations, and in a backward pass, gradients are accumulated in the reverse fashion. TensorFlow enables such propagation of gradients through automatic differentiation~\cite{abadi2016tensorflow}: each operation in the computation graph is equipped with a gradient expression with respect to its input tensors. In this way, the gradient of a complex composition of tensor operations can be automatically inferred during a backward pass through the computation graph. This allows for the composition of a large number of operations to construct deeper networks. Another computationally attractive property of the TensorFlow framework is its support, via TensorFlow Estimator \cite{cheng2017tensorflow}, of distributed training of neural networks. TensorFlow Estimator is an abstract library that takes high-level training, evaluation, or prediction logic and hides the execution details from developers. An \texttt{Estimator} object encapsulates two major abstract components that can be further customized: (1) \texttt{input\_fn}, which reads in data from persistent storage and creates tensors for features and labels, and (2) \texttt{model\_fn}, which processes input features and labels and, depending on the mode (\texttt{TRAIN}, \texttt{EVAL}, \texttt{PREDICT}), returns a loss value, evaluation metrics, or predictions. The computation graph expressed within the \texttt{model\_fn} may depend on the mode. This is particularly useful for learning-to-rank because the model may need entire lists of items during training (to compute a listwise loss, for example), but during serving, it may score each item independently. The model function itself can be expressed as a combination of a \texttt{logits} builder and a \texttt{Head} abstraction, where the logits builder generates the values in the forward computation of the graph, and the \texttt{Head} object defines loss objectives and associated metrics. These abstractions, along with a modular design, the ability to distribute the training of a neural network, and the ability to serve on a variety of high-performance hardware platforms (CPUs, GPUs, TPUs), are what make the TensorFlow ecosystem a suitable platform for a neural network-based learning-to-rank library.
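To illustrate these \texttt{Estimator} abstractions, the following deliberately simplified sketch shows a mode-dependent \texttt{model\_fn}; the univariate scorer over a dense feature \texttt{x} and the squared loss are placeholders introduced here for illustration only, not the TensorFlow Ranking implementation. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
import tensorflow as tf

def model_fn(features, labels, mode, params):
  """Skeleton of a mode-dependent model_fn (illustrative only)."""
  # Placeholder scorer: one dense layer over a dense feature "x".
  scores = tf.layers.dense(features["x"], units=1)
  if mode == tf.estimator.ModeKeys.PREDICT:
    # At serving time only the predictions are needed.
    return tf.estimator.EstimatorSpec(mode, predictions=scores)
  # Placeholder squared loss; a ranking loss would be used instead.
  loss = tf.losses.mean_squared_error(labels, scores)
  if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(mode, loss=loss)
  train_op = tf.train.AdagradOptimizer(0.1).minimize(
      loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn)
\end{lstlisting} \end{minipage}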
In the next section, we discuss how the design principles in TensorFlow and the \texttt{Estimator} workflow inspire the design of the TensorFlow Ranking library. \begin{figure*} \centering \fbox{ \includegraphics[width=0.8\linewidth,keepaspectratio]{images/tf_ranking_arch} } \caption{TensorFlow Ranking architecture.} \label{fig:arch} \end{figure*} \section{Components} \label{sec:components} Motivated by design patterns in TensorFlow and the \texttt{Estimator} framework, TensorFlow Ranking constructs computational sub-graphs via callbacks for various aspects of a learning-to-rank model such as the scoring function $h(.)$, losses $\hat{\ell}(.)$, and evaluation metrics (see Section~\ref{sec:ltr}). These sub-graphs are combined within the \texttt{Estimator} framework via a custom callback that creates a \texttt{model\_fn}. The \texttt{model\_fn} itself can be decomposed into a \texttt{logits} builder, which represents the output of the neural network layers, and a \texttt{Head} abstraction, which encapsulates loss and metrics. This decomposition is particularly suitable for learning-to-rank problems in the score-and-sort approach. The \texttt{logits} builder corresponds to the scoring function, defined in Section \ref{sec:scoring-function}, and the \texttt{Head} corresponds to losses and metrics. It is important to note that this decomposition also provides modularity and the ability to switch between various combinations of scoring functions and ranking heads. For example, as we show in the code examples below, switching between a pointwise and a listwise loss, or incorporating embeddings for sparse features into the model, can be expressed as a single-line code change. We found this modularity to be valuable in practical applications, where rapid experimentation is often required. The overall architecture of the TensorFlow Ranking library is illustrated in Figure \ref{fig:arch}. The key components of the library are: (1) data reader, (2) transform function, (3) scoring function, (4) ranking loss functions, (5) evaluation metrics, (6) ranking head, and (7) a \texttt{model\_fn} builder. In the remainder of this section, we will describe each component in detail. \subsection{Reading data using \texttt{input\_fn}} The \texttt{input\_fn}, as shown in Figure~\ref{fig:arch}, reads in data either from persistent storage or from another process, producing dense and sparse tensors of the appropriate type, along with the labels. The library provides support for several popular data formats (\texttt{LIBSVM, tf.SequenceExample}). Furthermore, it allows developers to define customized data readers. Consider the case in which the user would like to construct a custom \texttt{input\_fn}. The user defines a parser for a batch of serialized data points that returns a feature dictionary containing 2-D and 3-D tensors for ``context'' and ``per-item'' features, respectively. This user-defined parser function is passed to a dataset builder, which generates features and labels. Per-item features are those features that are particular to each example or item in the list of items. Context features, on the other hand, are independent of the items in the list. In the context of document search, features extracted from documents are per-item features and other document-independent features (such as query/session/user features) are considered context features.
We represent per-item features by a 3-D tensor (where dimensions correspond to queries, items, and feature values) and context features by a 2-D tensor (where dimensions correspond to queries and feature values). The following code snippet shows how one may construct a custom \texttt{input\_fn}. \noindent \begin{lstlisting}[language=Python,breaklines=true,frame=single]
def _parse_single_datapoint(serialized):
  """User-defined logic to parse a batch of serialized datapoints
  into a dictionary of 2-D and 3-D tensors."""
  return features

def input_fn(file_pattern):
  """Generate features and labels from input data."""
  dataset = tfr.data.build_ranking_dataset_with_parsing_fn(
      file_pattern=file_pattern,
      parsing_fn=_parse_single_datapoint, ...)
  features = dataset.make_one_shot_iterator().get_next()
  label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
  return features, label
\end{lstlisting} \subsection{Feature Transformation with \texttt{transform\_fn}} As discussed in Section~\ref{sec:overview}, sparse features such as words or n-grams can be transformed to dense features using (learned) embeddings. More generally, any raw feature may require some form of transformation. Such transformations may be implemented in \texttt{transform\_fn}, a function that is applied to the output of \texttt{input\_fn}. The library provides standard functions to transform sparse features to dense features based on feature definitions. A feature definition in TensorFlow is a user-defined \texttt{tf.FeatureColumn}~\cite{cheng2017tensorflow}, an expression of the type and attributes of features. Given feature definitions, the transform function produces dense 2-D or 3-D tensors for context and per-item features, respectively. The following code snippet demonstrates an implementation of the \texttt{transform\_fn}. Context and per-item feature definitions are passed to the \texttt{encode\_listwise\_features} function, which in turn returns dense tensors for the context and per-item features. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
def make_transform_fn():
  def _transform_fn(features, mode):
    context_feature_columns = {
        "unigrams": embedding_column(
            categorical_column("unigrams", vocab_list=_VOCAB_LIST),
            dimension=10)}
    example_feature_columns = {
        "utility": numeric_column("utility", shape=(1,),
                                  default_value=0.0, dtype=float32)}
    # 2-D context tensors and 3-D per-item tensors.
    context_features, example_features = tfr.feature.encode_listwise_features(
        features,
        input_size=2,
        context_feature_columns=context_feature_columns,
        example_feature_columns=example_feature_columns)
    return context_features, example_features
  return _transform_fn
\end{lstlisting} \end{minipage} \vspace{-5pt} \subsection{Feature Interactions using \texttt{scoring\_fn}} The library allows users to build arbitrary scoring functions as defined in Section~\ref{sec:scoring-function}. A scoring function takes batched tensors in the form of 2-D context features and 3-D per-item features, and returns a score for a single item or a group of items. The scoring function is supplied to the model builder via a callback. The model uses the scoring function internally to generate scores during training and inference, as shown in Figure~\ref{fig:arch}. The signature of the scoring function is shown below, where \texttt{mode}, \texttt{params}, and \texttt{config} are input arguments for \texttt{model\_fn} that supply model hyperparameters and configuration for distributed training~\cite{cheng2017tensorflow}.
The code snippet below constructs a 3-layer feed-forward neural network with ReLUs~\cite{nair2010rectified}. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
def make_score_fn():
  def _score_fn(context_features, group_features, mode, params, config):
    """Scoring network for feature interactions."""
    # Concatenate flattened per-item (group) and context features.
    net = concat(layers.flatten(group_features.values()),
                 layers.flatten(context_features.values()))
    for i in range(3):
      net = layers.dense(net, units=128, activation="relu")
    logits = layers.dense(net, units=1)
    return logits
  return _score_fn
\end{lstlisting} \end{minipage} \vspace{-5pt} \subsection{Ranking Losses} \label{sec:losses_api} Here, we describe the APIs for building the loss functions defined in Section \ref{sec:losses}. Losses in TensorFlow are functions that take in inputs, labels, and weights, and return a weighted loss value. The library has a pre-defined set of pointwise, pairwise, and listwise ranking losses. The loss key is an enum over supported loss functions. These losses are exposed using the factory function \texttt{tfr.losses.make\_loss\_fn}, which takes a loss key (name) and a weights tensor, and returns a loss function compatible with \texttt{Estimator}. The code snippet below shows how to build the loss function for the softmax cross-entropy loss. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
# Define loss key(s).
loss_key = tfr.losses.RankingLossKey.SOFTMAX_LOSS
# Build the loss function.
loss_fn = tfr.losses.make_loss_fn(loss_key, ...)
# Generate the loss value from the loss function.
loss_scalar = loss_fn(scores, labels, ...)
\end{lstlisting} \end{minipage} \vspace{-5pt} \subsection{Ranking Metrics} \label{sec:metrics_api} The library provides an API to compute the most common ranking metrics defined in Section \ref{sec:metrics}. Similar to loss functions, a metric can be instantiated using a factory function that takes a metric key and a weights tensor, and returns a metric function compatible with \texttt{Estimator}. The metric function itself takes in predictions, labels, and weights to compute a scalar measure. The metric key is an enum over supported metric functions, described in Section \ref{sec:metrics}. During evaluation, the library supports computing both weighted and unweighted metrics (using, for example, the weights defined in Section~\ref{sec:item_weights}), which facilitates evaluation in the context of unbiased learning-to-rank. The code snippet below shows how to build a metric function. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
def eval_metric_fns():
  """Returns a dict from name to metric functions."""
  metric_fns = {
      "metric/ndcg@5": tfr.metrics.make_ranking_metric_fn(
          tfr.metrics.RankingMetricKey.NDCG, topn=5)
  }
  return metric_fns
\end{lstlisting} \end{minipage} \vspace{-5pt} \subsection{Ranking Head} In the \texttt{Estimator} workflow, the \texttt{Head} API is an abstraction that encapsulates losses and metrics: given an \texttt{Estimator}-compatible loss function and metric function, along with scores from a neural network, \texttt{Head} computes the values of the loss and metric and produces model predictions as output. The library provides a \texttt{Ranking} \texttt{Head}: a \texttt{Head} object with built-in support for the ranking losses of Section~\ref{sec:losses_api} and the ranking metrics of Section~\ref{sec:metrics_api}.
The signature of \texttt{Ranking} \texttt{Head} is shown below. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
def _train_op_fn(loss):
  """Defines the train op used in the ranking head."""
  return tf.contrib.layers.optimize_loss(
      loss=loss,
      global_step=tf.train.get_global_step(),
      learning_rate=hparams.learning_rate,
      optimizer="Adagrad")

ranking_head = tfr.head.create_ranking_head(
    loss_fn=tfr.losses.make_loss_fn(_LOSS),
    eval_metric_fns=eval_metric_fns(),
    train_op_fn=_train_op_fn)
\end{lstlisting} \end{minipage} \vspace{-5pt} \subsection{Model Builder} A model builder, \texttt{model\_fn}, is what puts all the different pieces together: the scoring function, the transform function, and the losses and metrics via the ranking head. Recall that the \texttt{model\_fn} returns operations related to predictions, metrics, and loss optimization. The output of \texttt{model\_fn} and the graph constructed depend on the mode: \texttt{TRAIN}, \texttt{EVAL}, or \texttt{PREDICT}. These are all handled internally by the library through \texttt{make\_groupwise\_ranking\_fn}. The signature of a model builder, along with the overall flow to build a ranking \texttt{Estimator}, is shown in Figure~\ref{fig:arch}. The components of the ranking library can be used to construct a ranking model in several ways. The built-in model builder, \texttt{make\_groupwise\_ranking\_fn}, provides a ranking model with a multivariate scoring function and configurable losses and metrics. The user can also define a custom model builder which can use components from the library, such as losses or metrics. \noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=Python,breaklines=true,frame=single]
input_fn = tfr.data.read_batched_sequence_example_dataset(file_pattern, ...)
ranking_estimator = estimator.Estimator(
    model_fn=tfr.model.make_groupwise_ranking_fn(
        group_score_fn=make_score_fn(),
        group_size=group_size,
        transform_fn=make_transform_fn(),
        ranking_head=ranking_head),
    params=hparams)
# Training loop.
for _ in range(num_train_steps):
  ranking_estimator.train(input_fn(TRAIN_FILES), ...)
# Evaluation.
ranking_estimator.evaluate(input_fn(EVAL_FILES), ...)
\end{lstlisting} \end{minipage} \vspace{5pt} Ranking models have a crucial training-serving discrepancy. During training, the model receives a list of items, but during serving it may receive independent items generated by a separate retrieval algorithm. The ranking model builder handles this by generating a graph compatible with serving requirements and exporting it as a \texttt{SavedModel}~\cite{olston2017tensorflow}: a language-agnostic object that can be loaded by serving code written in languages such as C++ or Java. \section{Related Work}
\section{Introduction} Direct detection sensors \cite{gizeli2004biomolecular} aim at continuous, real-time monitoring of the presence and concentration of chemical compounds without the need for a preliminary sample preparation step. Amongst the various direct detection strategies, including electrochemical methods \cite{thevenot2001electrochemical,janata2009principles} and optical methods (surface plasmon resonance, integrated optics or spectroscopies \cite{homola1999surface,taules2012overview}), the use of acoustic waves to probe medium property variations is considered in contexts in which other strategies are not suitable, either because optical setups are too fragile or because the compound being investigated is not electrochemically active. The two broad strategies of acoustic transducers aim at observing either boundary condition variations due to the absorption of a thin film on a substrate in which the acoustic wave is confined (the so-called Quartz Crystal Microbalance -- QCM \cite{su2005qcm,si2007polymer}), or acoustic velocity variations as the boundary conditions are varied by chemical absorption on the surface of the transducer guiding the propagation of a wave confined to the piezoelectric transducer surface (the so-called Surface Acoustic Wave -- SAW). Various wave polarization conditions meet the surface confinement requirements, but only pure shear waves and waves exhibiting acoustic velocities slower than those of the surrounding medium will prevent radiation losses as the sensor is loaded by a liquid: the former approach is implemented in the Love mode transducer concept \cite{moll2007love,tamarin2003study} and the latter in the Lamb wave transducer. All these strategies have been thoroughly investigated in the context of direct detection (bio)sensors. The evolution from the QCM to the SAW strategy has been motivated by the consideration that raising the acoustic frequency lowers the acoustic wavelength and hence magnifies the effect of a chemical species absorbing to form a layer of a given thickness: the gravimetric sensitivity quantifies this notion. Raising the QCM frequency classically means lowering the substrate thickness, and hence making the transducer more fragile. An alternative considered here is to use a thin piezoelectric film over a thick substrate selected for its low acoustic losses, to provide both high acoustic frequency modes and a rugged transducer. The work presented here focuses on the study of the gravimetric sensitivity by modeling the acoustic transducer electrical response with a one-dimensional model. The dependency of the gravimetric sensitivity on the working frequency is demonstrated. The influence of the added layer thickness and acoustic properties on the gravimetric sensitivity is also presented. A discussion is proposed on the gravimetric sensitivity definition, which depends on the considered initial condition. A sensitivity maximum is also obtained for a particular thickness as a function of the acoustic wavelength. The theoretical results are compared with experimental results obtained by considering copper thin film deposition in dry and wet environments. Finally, a way to improve the gravimetric sensitivity is proposed using an appropriate added layer on the sensing surface of the transducer.
\section{Acoustic wave transducers} The High-overtone Bulk-Acoustic Resonator (HBAR) concept has evolved from the bulk-acoustic resonator (QCM) strategy by identifying a technological limitation to how thin a piezoelectric film can be made when aiming at raising the operating frequency $f_0$ \cite{abe2012fabrication,xeco::site}. Since a QCM confines half a wavelength $\lambda$ in the substrate thickness $t$, the resonator frequency is related to the acoustic velocity $v$ by $f_0=v/\lambda=v/(2t)$: reaching low $t$ values has been investigated in the free membrane strategy of the Film Bulk Acoustic Resonator (FBAR) \cite{nirschl2009film,xu2011high}. The HBAR prevents the fragile piezoelectric membrane from collapsing by being supported on a low acoustic loss substrate. This work focuses on the determination of the gravimetric sensitivity (Eq. \ref{sensi1}) of the HBAR to assess the possibility of using such a transducer for direct detection, and the various sensing strategies introduced by the unique spectral properties of the device. Assuming a linear relation between an adsorbed mass $\Delta m$ and the transducer resonance frequency shift $\Delta f$, the gravimetric sensitivity $S$ is defined as the relative frequency shift $\frac{\Delta f}{f_0}$ of the resonance when loading the sensing area $A$ \begin{equation} S=\frac{\Delta f}{f_0} \times \frac{A}{\Delta m}=\frac{\Delta f}{f_0} \times \frac{1}{\rho \times \Delta t} \label{sensi1} \end{equation} since $\Delta m= A\rho\Delta t$ with $\rho$ the absorbed layer density and $\Delta t$ its thickness. Eq. \ref{sensi1} is used throughout this work for computing $S$ out of the modeled acoustic transducer frequency variation due to layers with various properties being added over the transducer surface. {\color{red}However, another practical quantity directly relating frequency shift and absorbed mass is the mass-sensitivity constant $C=\frac{\Delta m}{A\cdot \Delta f}$ in ng.cm$^{-2}$.Hz$^{-1}$: the relationship between these two quantities is $C=\frac{10^9}{S\cdot f_0}$.} The perturbative approach of Sauerbrey \cite{sauerbrey} predicts (Eq. \ref{Sauerbrey2}) a gravimetric sensitivity only dependent on the transducer thickness $t_p$ and the density of the piezoelectric material $\rho_p$, assuming the adsorbed layer is characterized by $\rho=\rho_p$. Hence, a perturbative model hints at a lack of improvement of the gravimetric sensitivity when using high-overtone devices, which are expected to always exhibit the fundamental-mode gravimetric sensitivity. \begin{equation} S= \frac{1}{\rho_p\cdot t_p} \label{Sauerbrey2} \end{equation} which results from considering, in a perturbative approach, that $\frac{\Delta f}{f}=\frac{\Delta \lambda}{\lambda}$ and that the wavelength $\lambda_n$ of the $n$th overtone is related to the substrate thickness by $t_p=\frac{n\lambda_n}{2}$, so that Eq. \ref{sensi1} can be written for each overtone of the QCM as \begin{equation} S=\frac{\Delta f_n}{f_n} \times \frac{2}{\rho_p n \Delta \lambda_n} = \frac{\Delta \lambda_n}{\lambda_n} \times \frac{2}{\rho_p n \Delta \lambda_n} = \frac{2}{\rho_p \times \lambda_1} \label{Sauerbrey_n} \end{equation} with $\lambda_1=n\cdot \lambda_n=2t_p$ the wavelength of the fundamental mode. Numerical modeling will however be considered to finely analyze the gravimetric sensitivity of HBAR overtones beyond these perturbative assumptions, if only because the HBAR is a complex structure yielding behaviours more complex than this expected constant gravimetric sensitivity with overtone number.
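As a numerical sanity check of Eq. \ref{Sauerbrey2} and of the mass-sensitivity constant $C$, the short script below evaluates both quantities for a classical 5~MHz AT-cut quartz QCM; the material values are textbook approximations introduced here for illustration only and are not taken from the devices studied in this work. \noindent \begin{lstlisting}[language=Python,breaklines=true,frame=single]
# Illustrative Sauerbrey estimate for a classical 5 MHz AT-cut
# quartz QCM; material values are textbook approximations.
rho_p = 2.65         # quartz density [g/cm^3]
v = 3.34e5           # shear acoustic velocity [cm/s]
f0 = 5e6             # fundamental resonance frequency [Hz]
t_p = v / (2 * f0)   # plate thickness [cm], here ~334 um
S = 1.0 / (rho_p * t_p)  # gravimetric sensitivity [cm^2/g]
C = 1e9 / (S * f0)       # mass-sensitivity constant [ng/(cm^2.Hz)]
print(t_p * 1e4, S, C)   # ~334 um, ~11.3 cm^2/g, ~17.7 ng/(cm^2.Hz)
\end{lstlisting} The value of about 17.7~ng.cm$^{-2}$.Hz$^{-1}$ recovered for $C$ matches the Sauerbrey constant commonly quoted for 5~MHz quartz microbalances.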
{\color{red}Once the sensitivity is established, the detection limit for a resonator operating at frequency $f_0$ and exhibiting a quality factor $Q$ is given by the phase-to-frequency slope $\frac{d\varphi}{df}=\frac{2Q}{f_0}$. Knowing the smallest detectable phase shift $d\varphi_{min}$, as given in our case in \cite{rabus2013high}, the smallest relative detectable frequency shift is $\frac{df_{min}}{f_0}=\frac{d\varphi_{min}}{2Q}$ and, from the sensitivity definition, $S=\frac{df}{f_0}\times\frac{A}{dm}\Leftrightarrow \frac{dm}{A}=\frac{1}{S}\times \frac{df}{f_0}\Rightarrow \frac{dm_{min}}{A}=\frac{d\varphi_{min}}{2Q}\times\frac{1}{S}$. As an example of a numerical application, considering a quality factor of $Q=10000$, a minimum detectable phase variation \cite{rabus2013high} of $d\varphi_{min}=25\mbox{~m}^\circ$ and $S=150$~cm$^2$/g, then $\frac{dm_{min}}{A}\simeq 4$~ng/cm$^2$.} An HBAR device is a composite resonator including two layers: a thin piezoelectric layer (as a thin QCM) to generate the acoustic wave, and a low acoustic loss substrate used as a cavity to confine the resonances while supporting the thin piezoelectric film. This coupled resonator structure induces a complex admittance spectrum (Fig. \ref{descri_hbar}) with a series of narrow resonances whose amplitudes are modulated throughout the spectrum. \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{description_HBAR} \caption{Principle of the HBAR (left) and global view of the real part of the admittance (right).} \label{descri_hbar} \end{center} \end{figure} The envelope of the HBAR response is defined by the piezoelectric layer thickness, while the frequency spacing between the narrow resonances is defined by the substrate thickness. Considering first only the piezoelectric layer of thickness $t_{p}$, the resonance frequencies $f_{mode}(n)$ are related to the acoustic velocity $c$ by \begin{equation} f_{mode}(n) = n \times \frac{c}{2t_{p}} \label{fmode} \end{equation} at which the envelope of the admittance is maximum, since the piezoelectric thin film pumps a maximum of energy into the substrate by inverse piezoelectric electromechanical conversion. Once the acoustic energy has been coupled to the substrate of thickness $t_{s}$, the frequency spacing $\Delta f_{overtone}$ between narrow resonances is given by \begin{equation} \Delta f_{overtone} \approx \frac{c}{2t_{s}} \label{fharmo} \end{equation} This multitude of modes opens a unique perspective for exploiting the HBAR as a gravimetric transducer: wideband acoustic spectroscopy of the mechanical properties of the adsorbed thin film. However, such an approach can only be exploited quantitatively if the gravimetric sensitivity of each mode is known. Two HBAR geometries are considered. A 3.8~$\mu$m-thick AlN piezoelectric thin film deposited on a 25.3~$\mu$m-thick SiO$_2$ substrate only confines longitudinal waves, exhibiting wavelengths ranging from 2 to 8~$\mu$m when operating at frequencies in the 500~MHz to 5~GHz range.
Since longitudinal waves are not appropriate for sensing in liquid media (acoustic radiative losses), the second geometry combines -- following the IEEE 176-1987 (section 3.6) naming convention -- a lithium niobate LiNbO$_3$ YXl/163$^o$ thin film (selected for its high coupling characteristics) over a YXl/32$^o$ quartz substrate (selected for its low acoustic losses and low temperature sensitivity): the very different technological processes induce thicker layers of 20~$\mu$m \cite{masson2007dispersive} and 450~$\mu$m respectively \cite{ballandras2011high,baron2011rf}. The latter device propagates pure shear waves and is hence compatible with the detection of compounds in liquid phase. \section{Modeling} For modeling the HBAR resonator admittance and determining the gravimetric sensitivity of the various overtones as boundary conditions are varied, a one-dimensional modeling software based on Boundary Element Modeling (BEM) is used \cite{reinhardt2003scattering,ballandras2005periodic}. The free parameters tuned during the modeling process are the layer thicknesses and material properties, while the gravimetric sensitivity is extracted from the application of Eq. \ref{sensi1} when the resonance frequency is monitored as a function of the adlayer geometrical properties, most significantly its thickness $\Delta t$. \subsection{Gravimetric sensitivity dependencies} The study first focuses on the impact of the side of the HBAR selected as the sensitive surface. Although a practical consideration naturally hints at using the side opposite to the piezoelectric layer coated with electrodes as the sensing area, the two sides of the HBAR (the exposed area of the piezoelectric layer or the substrate) exhibit different coupling with the adlayer and hence different gravimetric sensitivities (Fig. \ref{dessus_dessous}). Two adlayer mechanical properties are considered by selecting the material constants of silica or copper. The gravimetric sensitivities are calculated by considering an adsorbed thickness of 5~nm to remain within the perturbative assumption. \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{dessus_dessous3} \caption{Modeled HBAR admittance (solid line) and associated gravimetric sensitivity using two materials (copper and silica) on the top (piezoelectric layer) and bottom (substrate layer) sides of the HBAR.} \label{dessus_dessous} \end{center} \end{figure} Both adlayer characteristics yield similar gravimetric sensitivities, complying with the perturbation requirement of independence of the result from the material properties of the additional thin film. However, the evolution of $S$ is radically different depending on which side of the HBAR is considered. In the first case, in which the adlayer coats the substrate side, the gravimetric sensitivity decreases when the admittance is maximized and a tradeoff must be met between mode coupling and sensitivity: the gravimetric sensitivity is maximized at resonance frequencies below and above the piezoelectric thin film resonance frequency. The trend is opposite when coating the piezoelectric (top) side of the HBAR: in this case, both admittance and gravimetric sensitivity evolve similarly. The gravimetric sensitivities at the resonances of the piezoelectric thin film are the same whether the coating is deposited on the bottom or top sides.
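Since the model outputs the resonance frequency as a function of the adlayer thickness, the extraction of $S$ via Eq. \ref{sensi1} reduces to a difference quotient. The sketch below, in which a toy analytical curve stands in for the BEM output, contrasts the two possible initial conditions -- a secant from the bare device and a local derivative -- discussed in the ``Thick film condition'' subsection below.

\begin{verbatim}
import numpy as np

# Sketch: two discrete estimates of S (in m^2/kg; multiply by 10 for
# cm^2/g) from a modeled frequency-vs-thickness curve f(t).
def S_thick(f, t, rho, f0):
    """Secant estimate: the initial state is the bare transducer."""
    return (f0 - f(t)) / f0 / (rho * t)

def S_thin(f, t, rho, f0, dt=1e-10):
    """Derivative estimate: the initial state is the coated transducer."""
    return (f(t) - f(t + dt)) / f0 / (rho * dt)

# Toy curve standing in for the BEM output: linear mass loading plus a
# periodic term mimicking the adlayer resonances (illustrative only).
f0, rho, lam = 1.324e9, 8960.0, 2.1e-6     # Hz, kg/m^3, m
f = lambda t: f0 * (1 - 6.0 * rho * t - 2e-3 * np.sin(2 * np.pi * t / lam))

t = 1.0e-6
print(S_thick(f, t, rho, f0), S_thin(f, t, rho, f0))  # the two differ
\end{verbatim}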
While the gravimetric sensitivity remains constant within 10\% when loading the substrate side, it varies significantly -- in this case by a factor of 3 -- when loading the piezoelectric layer side: hence, the probed modes must be carefully selected to maximize both sensitivity and signal to noise ratio through efficient electromechanical coupling. \subsection{Thick film condition} In order to match experimental conditions, we shall from now on only consider an adlayer deposited on the substrate side, opposite to the electrodes polarizing the piezoelectric thin film. This strategy is selected so that packaging issues are only related to liquid confinement over the HBAR sensing surface, and no electrical insulation or shielding issues arise when operating with compounds in liquid media. Because of the wide range of operating frequencies, the perturbative assumption is hardly met at the higher end of the frequency range, and based on the previous work presented by Mansfeld in \cite{mansfeld2000theory} we now focus on modeling the behaviour of thick adsorbed films. ``Thick'' is defined as a film exhibiting significant departure from the behaviour predicted by Sauerbrey. Considering a thick film induces an uncertainty as to the definition of the initial condition when computing the sensitivity. On the one hand, the sensitivity is defined as an infinitesimal frequency variation due to an infinitesimal deposited mass: as such, the sensitivity is related to the derivative of the frequency versus adsorbed layer thickness. This case is closely related to the one studied in \cite{mansfeld2000theory}, since a thick film acts as a gas absorbing layer and the sensitivity of the transducer coated with the thick film is considered. On the other hand, if the initial condition is considered to be the layer-free transducer, then the sensitivity is computed as the frequency variation due to a thick adsorbed layer, no longer complying with the derivative approximation only valid for infinitesimal variations. The sensitivity computed by the latter approach is not only lower than the sensitivity derived from the derivative approach, but the thickness at which the sensitivity is maximized is not the same depending on the selected approach, due to the curvature of the frequency vs. thickness curve, as shown in Fig. \ref{diffs}. Such conditions match our experimental assessment of the sensitivity by electro-depositing copper layers on the bare HBAR surface up to thicknesses matching the wavelength. In the following text, we call the former approach the thin film approach, even though we are considering a small increase of an already thick layer, while the latter will be called the thick film approach. \begin{figure}[h!tb] \includegraphics[width=\linewidth]{frequence_vs_epai_rho1_E_3} \caption{Typical curve exhibiting the evolution of the resonance frequency of one mode of the HBAR as a function of adsorbed layer thickness: the sharp rise in the frequency vs. thickness slope is observed for deposited thicknesses equal to multiples of the quarter wavelength. The frequency variation due to an adsorbed layer thickness $t$ depends on whether the initial condition is considered to be the bare transducer or the transducer already coated with a thick film.
The latter approach always yields a larger estimate of the sensitivity than the former, as shown by the dotted lines representing the local slope of the frequency vs. thickness curve.} \label{diffs} \end{figure} Departure from the perturbative assumption is considered by modeling an adlayer thickness of the same order as the wavelength. The results in the thick film approach, for two working frequencies, 1324 and 4000~MHz, corresponding to wavelengths of 2.1 and 0.68~$\mu$m respectively, are presented in Fig. \ref{S(e)}. $S$ is calculated for thicknesses of an adlayer, assumed to meet the material properties of copper, ranging from 50~nm to 2.5~$\mu$m. Both overtones exhibit oscillating gravimetric sensitivities as a function of adlayer thickness following the initial drop, with a period dependent on the overtone wavelength, yet the asymptotic sensitivity value remains the same at about 60~cm$^2$/g. \begin{figure}[h!tb] \includegraphics[width=\linewidth]{S_f-e_2_frequis} \caption{Gravimetric sensitivity calculated as a function of the thickness of the copper adlayer for the 1324 (solid line) and 4000~MHz (circles) resonance frequencies.} \label{S(e)} \end{figure} The same analysis in the thin film approach provides a clearer view of the resonant confinement of the acoustic energy in the thick adsorbed film, as shown in Fig. \ref{S(t)}. The frequency vs. thickness results are the same as those shown in Fig. \ref{S(e)}, but here the initial state for computing the sensitivity value is selected as the infinitesimally thinner layer, hence compatible with the derivative of the frequency vs. thickness computation. Not only are the thicknesses at which the sensitivity is maximum closely equal to multiples of the wavelength, but the actual sensitivity values remain close to the thin film value at odd multiples of the half wavelength -- 120~cm$^2$/g -- as opposed to the thick film approach in which the sensitivity remained consistently lower than the perturbative layer sensitivity. \begin{figure}[h!tb] \includegraphics[width=\linewidth]{sensi_de_df_copper_1324_4000MHz} \caption{Thin film analysis of the gravimetric sensitivity using the same simulation results as those exhibited in Fig. \ref{S(e)}.} \label{S(t)} \end{figure} These results indicate that the various overtones react differently to an adlayer of varying thickness due to the evolution of the energy distribution between the three layers -- adlayer, piezoelectric thin film and substrate -- in a coupled resonator context, making the wideband acoustic spectroscopy analysis non-trivial. An optimum operating frequency can be selected if the adlayer thickness is fixed and known in order to maximize $S$: such a conclusion was already reached in a previous analysis \cite{mansfeld2000theory}. However, Mansfeld \cite{mansfeld2000theory} determined theoretically and experimentally that the adlayer thickness maximizing $S$ would be $\lambda/4$: this conclusion is not validated in the present case. To investigate the cause of the differences, several kinds of adsorbed material (Tab. \ref{table_imp}) used as perturbative layers are considered to assess the dependence of this conclusion on the adlayer properties (Fig. \ref{S_imp}). \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{with3} \caption{Gravimetric sensitivity as a function of adlayer thickness, calculated for a working frequency of 4~GHz. Silica, aluminum, copper, silicon and gold are considered as perturbative layer materials when computing the gravimetric sensitivity.
The dashed line corresponds to the gravimetric sensitivity calculated by Sauerbrey's approximation, considering a silicon resonator with a thickness of 29.1~$\mu$m, equal to the global thickness of the simulated HBAR.} \label{S_imp} \end{center} \end{figure} The validity of the approach is assessed by first considering a silicon adlayer -- the same material the HBAR is made of -- and checking that the resulting sensitivity is indeed equal to the value predicted by the Sauerbrey perturbation theory (Fig. \ref{S_imp}, dashed line and solid line) and independent of the adlayer thickness. For other adlayer materials (copper and gold) with an acoustic impedance higher than that of the silicon substrate, $S$ decreases with increasing thickness, as was previously observed in Fig. \ref{S(e)}. For materials (silica and aluminum) with a lower acoustic impedance than that of the substrate, the gravimetric sensitivity increases when the thickness increases. \begin{table}[h!tb] \caption{Acoustic impedances of the materials used for the gravimetric sensitivity determination of a silicon HBAR.} \begin{tabular}{|c||c|c|} \hline \multirow{2}{*}{{materials}}& ~~~~~~$Z_{ac}$ ~~~~~~ & ~~~ velocity ~~~ \\ & (MRayl) & (m/s) \\ \hline Silica (SiO$_2$) & 13 & 5740 \\ \hline Aluminum (Al) & 14 & 5018 \\ \hline {\bf Silicon (Si) } & {\bf17} & 7483 \\ \hline Copper (Cu) & 24 & 2728 \\ \hline Gold (Au) & 29 & 1480 \\ \hline Aluminum nitride (AlN) & 30 & 11500\\ \hline YAG & 36 & 7801 \\ \hline \end{tabular} \label{table_imp} \end{table} These results demonstrate that the maximum of the gravimetric sensitivity depends on the relative acoustic impedances and on the adlayer thickness to wavelength ratio. Furthermore, an increase of the gravimetric sensitivity can be obtained with an adsorbed material of acoustic impedance lower than that of the substrate, as is classically known from the Love wave configuration: such an approach will be discussed in section \ref{V}. \subsection{Comparison with Mansfeld's theory} Although numerical constants are not provided in \cite{mansfeld2000theory} for a direct comparison with these results, their use of YAG \cite{mezeix2006comparison}, a high acoustic impedance and high acoustic velocity material, as the HBAR substrate, with an organic layer acting as the adlayer, hints at a case in which a low impedance coating is deposited over a high impedance substrate. Such a stack matches the qualitative behaviour identified by our numerical simulation. The quantitative assessment of the layer thickness maximizing the gravimetric sensitivity however requires an in-depth analysis of the sensitivity dependence on material properties. Such considerations are demonstrated in Fig. \ref{S_mansfeld}, which exhibits the acoustic wavelength (normalized to the layer thickness) at which the gravimetric sensitivity is maximized, as a function of the adlayer acoustic impedance. The gravimetric sensitivity is calculated using the thin film approach to be comparable with \cite{mansfeld2000theory}, in which the resonant frequency variation is recorded for an infinitesimal thickness variation of the adlayer due to gas adsorption. The results presented here consider an adlayer material with a constant Young's modulus (13~GPa) and various densities and Poisson coefficients. The elastic constants (C$_{11}$, C$_{12}$, C$_{66}$) of the material are calculated for each density and Poisson coefficient value.
Maximizing the sensitivity for a $\lambda/4$ thickness of the adlayer is consistent with some cases, which present a low Poisson coefficient (less than 0.2), for various acoustic impedances of the adsorbed material. The BEM approach used here takes into account all the elastic constants of the materials and thus yields more rigorous results than the analytical approach. \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{ratio_vs_impedance_blackwhite} \caption{Ratio of the acoustic wavelength in the adlayer to the adlayer thickness maximizing the gravimetric sensitivity. The solid line indicates the value cited by Mansfeld when considering an organic layer over a YAG substrate; the marked lines are the result of our simulation for varying acoustic impedance adlayers and different Poisson coefficients over a silicon HBAR.} \label{S_mansfeld} \end{center} \end{figure} The gravimetric sensitivity dependence on overtone number (and hence wavelength) and on the material properties of the adlayer has been investigated through simulation, demonstrating a non-trivial link between these quantities. A low impedance adlayer is predicted to magnify the gravimetric sensitivity. Moreover, results cited earlier in the literature could be modeled in detail during these investigations, whose results will now be confronted with experimental results. \section{Experimental results} Experimental assessment of the gravimetric sensitivity of HBARs is performed in two distinct steps: on the one hand the irreversible deposition of thin copper films in a cleanroom environment by sputtering, and on the other hand the reversible electrodeposition of copper in a wet environment. All depositions are performed on the substrate side of the HBAR, opposite to the electrodes deposited on the piezoelectric thin film. The admittance of the HBAR is monitored by a network analyzer, either after each deposition step in the case of sputtering, or continuously during the electrochemical oxidation and reduction cycles. Since part of these experiments will be performed in a wet environment, only the lithium niobate over quartz HBAR propagating pure shear waves is considered. The HBAR is characterized in four different frequency ranges (280-310; 410-440; 670-700; 800-830~MHz). Each frequency range presents about seven resonances. As shown in Table \ref{table_sensi}, the gravimetric sensitivity for each resonance is calculated by considering the initial resonant frequency as the resonance frequency obtained with the previous adlayer thickness (thin film approach). The acoustic wavelengths for each frequency range are close, so only the mean value of the gravimetric sensitivities is presented, and all thicknesses are normalized to the acoustic wavelength in the adlayer (Fig. \ref{exp_vs_simu1}). \begin{table}[h!tb] \caption{Experimental gravimetric sensitivity mean value calculated for each frequency range and for each deposited copper thickness.} \footnotesize \hspace*{-0.6cm}\begin{tabular}{|c|c|c|c|c|} \hline Deposited & \multicolumn{4}{c|}{Mean of gravimetric sensitivity (cm$^2$/g)} \\ \cline{2-5} thickness (nm)& 280-310~MHz & 410-440~MHz & 670-700~MHz & 800-830~MHz \\ \hline 196 & 10.5 & 7.2 & 4.5 & 5.1 \\ 381 & 7.7 & 5.5 & 3.5 & 3.7 \\ 541 & 7.0 & 5.0 & 3.3 & 3.4 \\ 726 & 6.3 & 4.5 & 3.1 & 3.4 \\ 891 & 5.8 & 4.2 & 3.2 & 3.9 \\ 1099 & 5.2 & 3.8 & 3.5 & 4.6 \\ 1299 & 4.8 & 3.6 & 4.2 & 4.8 \\ 1514 & 4.4 & 3.4 & 4.6 & 4.5\\ \hline \end{tabular} \label{table_sensi} \end{table} Both experimental and modeled (Fig.
\ref{exp_vs_simu1}) dependences of the gravimetric sensitivity on the adlayer thickness hint at a starting value of about 10~cm$^2$/g and a secondary maximum. The discrepancy between the modeled and experimental results, yielding different adlayer thicknesses maximizing the sensitivity, is attributed to the use of bulk material constants which might not appropriately represent the thin copper film properties. \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{fusion} \caption{Mean value of the measured gravimetric sensitivity for each frequency range (diamonds) as a function of the deposited thickness divided by the acoustic wavelength, and polynomial fit (solid line) as a guide for the eye. Circles: gravimetric sensitivity estimates resulting from modeling the lithium niobate over quartz HBAR stack considered in the experimental section; the different curves are associated with different overtones. Notice that the two charts do not share the same abscissa: the experimental data abscissa is given on top, the model abscissa is given on the bottom. The gravimetric sensitivity (ordinate) is properly modeled and shared by the two charts.} \label{exp_vs_simu1} \end{center} \end{figure} An alternative to cleanroom sputtering of copper is the use of electrochemical deposition on the sensing surface of the HBAR. This approach, already used to characterize the gravimetric sensitivity of QCM \cite{friedt2003simultaneous}, SAW \cite{avs} and HBAR \cite{rabus2012eight} devices, is attractive because it is reversible (allowing for multiple cycles for assessing the reproducibility of the result) and operates in liquid phase, hence being more representative of the behaviour of the sensor used for detecting compounds in aqueous solutions (e.g. biosensing). This method is only usable with devices propagating pure shear waves, due to the viscoelastic coupling of propagating longitudinal waves to the liquid. The chemical reaction is driven by a custom-made potentiostat included in the embedded electronics \cite{rabus2013high} designed to probe simultaneously multiple overtones of the HBAR. This electronics provides a measurement rate large enough to be compatible with the reaction kinetics. The gravimetric sensitivity of an overtone at 327~MHz of the lithium niobate/quartz HBAR is investigated: electrochemical deposition provides an independent estimate of the adlayer mass $m_{Cu}$ needed in Eq. \ref{sensi1}, assuming a 100\% yield, by considering the number of electrons involved in the reduction process through the integral of the current $i(t)$ flowing through the working electrode \begin{equation} m_{Cu} = \frac{M_{Cu}\times\Sigma i(t) \delta t}{N_A \times e \times n_e} \label{masse} \end{equation} where $M_{Cu}$ is the molar weight (g/mol) of the adlayer, $\Sigma i(t) \delta t$ is the charge transferred during electro-deposition, $N_A \times e \simeq 96485$~C is the charge of one mole of electrons (the Faraday constant), and $n_e$ is the number of electrons transferred during the redox reaction (Eq. \ref{formule_reaction}) \begin{equation} Cu^{2+} + 2e^- \leftrightarrow Cu \label{formule_reaction} \end{equation} Fig. \ref{S_electro} exhibits the gravimetric sensitivity measured using the electro-deposition approach and the modeling of the HBAR used, both considered at the same working frequency. This working frequency is fixed and used as the initial resonant frequency (thick film approach).
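Eq. \ref{masse} is a direct application of Faraday's law of electrolysis; a minimal sketch integrating a current trace into a deposited mass and thickness is given below, where the current array and the sensing area are placeholders for the actual potentiostat output and device geometry.

\begin{verbatim}
import numpy as np

# Sketch of Eq. (masse): copper mass from the integrated reduction
# current, assuming a 100% electrochemical yield.
M_CU = 63.55       # g/mol, molar mass of copper
F    = 96485.0     # C/mol, Faraday constant (N_A * e)
N_E  = 2           # electrons per Cu2+ -> Cu reduction

def deposited_mass(i, dt):
    """i: current samples (A, reduction positive); dt: sample period (s)."""
    return M_CU * np.sum(i) * dt / (F * N_E)   # grams

# Placeholder trace: 60 s at 1 mA, sampled every 0.1 s.
i = np.full(600, 1e-3)
m = deposited_mass(i, dt=0.1)
rho, A = 8.96, 0.2              # g/cm^3 and cm^2 (assumed sensing area)
print(m * 1e6, "ug ->", m / (rho * A) * 1e7, "nm of copper")
\end{verbatim}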
The thin film approach for calculating the gravimetric sensitivity could not be used in this case due to the experimental setup, which does not provide the resonant frequency between successive adlayer thicknesses. Knowing the area $A$ of the sensing side of the HBAR over which the electrochemical reaction occurs, Eqs. \ref{sensi1} and \ref{masse} allow for estimating the deposited thickness. Hence, the gravimetric sensitivity is plotted as a function of the deposited thickness. Experiment matches the modeled sensitivities for adlayer thicknesses above 1.2~$\mu$m. Below this value, the calculated sensitivity is 3 to 6 times higher than the model results. The main cause of divergence of the two curves for thin adlayers is attributed to the inhomogeneous deposition, which starts at the center of the HBAR sensing area. In such cases, the estimated adlayer thickness $\Delta t=m_{Cu}/(\rho A)$ is under-estimated since $A$ is over-estimated when using the geometrical area, and the experimental sensitivity is hence over-estimated. \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{new_comp_electro_simu} \end{center} \caption{Gravimetric sensitivity measured (stars) and modeled (solid line) for a lithium niobate over quartz HBAR operating at 327~MHz, as a function of the electrochemically deposited layer thickness.} \label{S_electro} \end{figure} Based on these considerations, the HBAR geometries considered so far exhibit sensitivities consistent with those of bulk QCMs, and hence 10 to 20 times lower than those of SAW devices operating in the hundreds of MHz range. However, a low acoustic impedance adlayer has been shown to increase the gravimetric sensitivity, so we consider whether an additional stack of material over the HBAR might bring some gravimetric sensitivity improvement, aiming at the hundreds of cm$^2$/g range classically found for Love-mode SAW devices \cite{avs}. \section{Gravimetric sensitivity improvement}\label{V} Two ways to improve the gravimetric sensitivity have been theoretically explored. Since the Sauerbrey gravimetric sensitivity depends on the working frequency, which itself depends on the thickness of the QCM, reducing the overall sensor thickness is considered first. Based on this idea, the gravimetric sensitivity is calculated for lithium niobate over quartz HBARs when varying the substrate thickness from 56.25 to 450~$\mu$m (Tab. \ref{table_S_quartz}). \begin{table}[h!tb] \caption{Calculated gravimetric sensitivity for different thicknesses of quartz substrate. Frequency ranges and the number of probed modes are also presented.} \label{table_S_quartz} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline Frequency & \multicolumn{4}{c|}{\multirow{2}{*}{50 - 150}}& \multicolumn{4}{c|}{\multirow{2}{*}{ 300 - 550}} \\ range (MHz) & \multicolumn{4}{c|}{} & \multicolumn{4}{c|}{}\\ \hline Quartz & \multirow{2}{*}{450}& \multirow{2}{*}{225}& \multirow{2}{*}{112.5}& \multirow{2}{*}{56.25}& \multirow{2}{*}{450}& \multirow{2}{*}{225}& \multirow{2}{*}{112.5}& \multirow{2}{*}{56.25} \\ thickness ($\mu$m) & & & & & & & & \\ \hline number of& \multirow{2}{*}{53}& \multirow{2}{*}{28}& \multirow{2}{*}{14}& \multirow{2}{*}{7}& \multirow{2}{*}{51}& \multirow{2}{*}{26}& \multirow{2}{*}{13}& \multirow{2}{*}{8} \\ probed modes & & & & & & & & \\ \hline avg.
sensitivity & \multirow{2}{*}{9}& \multirow{2}{*}{18}& \multirow{2}{*}{37}& \multirow{2}{*}{80}& \multirow{2}{*}{8}& \multirow{2}{*}{16}& \multirow{2}{*}{31}& \multirow{2}{*}{59} \\ (cm$^2$/g) & & & & & & & & \\ \hline theoretical sens.& \multirow{2}{*}{8}& \multirow{2}{*}{17}& \multirow{2}{*}{34}& \multirow{2}{*}{67}& \multirow{2}{*}{8}& \multirow{2}{*}{17}& \multirow{2}{*}{33}& \multirow{2}{*}{67} \\ (Sauerbrey, cm$^2$/g) & & & & & & & & \\ \hline \end{tabular} \end{table} Although this approach trivially scales the sensitivity as the substrate thinning ratio, closely matching the Sauerbrey equation prediction, the transducer ruggedness is impacted and the solution is not satisfactory since it tends towards the fragility of FBARs. A second investigated way of improvement is adding a well-chosen material on the sensitive surface of the HBAR. The gravimetric sensitivity depends on the impedance of the deposited materials (Fig. \ref{S_imp}). Following a strategy proven in the case of the Love-mode SAW transducer, an additional layer is designed to confine the acoustic wave near the sensing surface in order to improve the gravimetric sensitivity. The efficiency of this approach is assessed by modeling the sensitivity of an AlN over silicon HBAR coated with an additional layer of silicon oxide. In this calculation, the gravimetric sensitivity is calculated by considering a 5~nm-thick copper adlayer on the silicon oxide (Fig. \ref{S_SiO2_Cu}). \begin{figure}[h!tb] \begin{center} \includegraphics[width=\linewidth]{ame_sensi_silice3} \caption{Gravimetric sensitivity calculated for an AlN over Si HBAR with (lines with markers) and without (solid line) silicon oxide layer.} \label{S_SiO2_Cu} \end{center} \end{figure} Different thicknesses of the silicon dioxide acoustic field confinement layer are considered (Fig. \ref{S_SiO2_Cu}): the affected overtone varies as a function of the silicon dioxide layer thickness, but in all cases a dramatic sensitivity enhancement is observed, with a doubling of the sensitivity with respect to the bare device. \section{Conclusions} The gravimetric sensitivity of composite HBAR resonators has been studied to determine their potential as direct detection sensors. Two architectures, aluminum nitride over silicon and lithium niobate over quartz, are considered as complementary: the former exhibits high sensitivity -- of the same order of magnitude as that found for 125~MHz Love mode SAW devices -- but propagates longitudinal waves incompatible with sensing compounds in liquid phase, while the latter propagates pure shear waves yet only exhibits sensitivity values around those exhibited by radiofrequency bulk acoustic resonators -- typically 10 times lower than the Love-mode value. The multimode spectral characteristics of these transducers are considered best suited for wideband acoustic spectroscopy of adsorbed layers. However, the complex dependence of the gravimetric sensitivity on the overtone considered yields non-trivial analysis considerations, requiring accurate modeling of the coupled acoustic fields in the various layers. The poor gravimetric sensitivity of the bare device is theoretically improved by adding a low acoustic impedance layer on the sensing area, following a strategy reminiscent of the Love mode guided SAW device.
Working on the electrode-free side of the HBAR solves the classical packaging issue of SAW devices since no structure needs to be located on the acoustic path while electrodes are prevented from being in contact with the medium containing the analyte being investigated. \section*{Acknowledgements} This work was supported by the French RENATECH network and its FEMTO-ST technological facility. Part of this work was funded by the French DGA through the ROHLEX grant and a Defense PhD funding, as well as by the European LOVEFOOD project (FP7-ICT-2011.3.2 grant). \bibliographystyle{IEEEtran}
\section*{\bf \Large Asymptotic complexity} The asymptotic complexity analysis of the method in \cite{zha1999updating} is as follows. We need $O\left(ns^2 + nsk\right)$ FLOPs to form $(I_n-V_kV_k^H)E^H$ and compute its QR decomposition. The SVD of the matrix $Z^HAW$ requires $O\left((k+s)^3\right)$ FLOPs. Finally, the cost to form the approximation of matrices $\widehat{U}_k$ and $\widehat{V}_k$ is equal to $O\left(k^2(m+n) + nsk\right)$ FLOPs. The asymptotic complexity analysis for the ``SV'' variant of the method in \cite{vecharynski2014fast} is as follows. We need $O\left((\mathtt{nnz}(E)+nk)\delta_1+(n+s)\delta_1^2\right)$ FLOPs to approximate the $r$ leading singular triplets of $(I_n-V_kV_k^H)E^H$, where $\delta_1 \in \mathbb{Z}^*$ is greater than or equal to $r$ (i.e., $\delta_1$ is the number of Lanczos bidiagonalization steps). The cost to form and compute the SVD of the matrix $Z^HAW$ is equal to $O\left((k+s)(k+r)^2 + \mathtt{nnz}(E)k+rs\right)$ FLOPs, where the first term stands for the actual SVD and the rest of the terms stand for the formation of the matrix $Z^HAW$. Finally, the cost to form the approximation of matrices $\widehat{U}_k$ and $\widehat{V}_k$ is equal to $O\left(k^2(m+n) + nrk\right)$ FLOPs. The asymptotic complexity analysis of Algorithm \ref{alg1} is as follows. First, notice that Algorithm \ref{alg1} requires no effort to build $W$. For the case where $Z$ is set as in Proposition \ref{pro2}, termed ``Alg. \ref{alg1} (a)'', we also need no FLOPs to build $Z$. The cost to solve the projected problem by unrestarted Lanczos is then equal to $O\left((\mathtt{nnz}(E)+nk)\delta_2+(k+s)\delta_2^2\right)$ FLOPs, where $\delta_2 \in \mathbb{Z}^*$ is greater than or equal to $k$ (i.e., $\delta_2$ is the number of unrestarted Lanczos steps). Finally, the cost to form the approximation of matrices $\widehat{U}_k$ and $\widehat{V}_k$ is equal to $O\left(k^2m+(\mathtt{nnz}(A)+n)k\right)$ FLOPs. For the case where $Z$ is set as in Proposition \ref{pro35}, termed ``Alg. \ref{alg1} (b)'', we need \begin{equation*} \chi = O\left(\mathtt{nnz}(A)\delta_3 +m\delta_3^2\right) \end{equation*} FLOPs to build $X_{\lambda,r}$, where $\delta_3 \in \mathbb{Z}^*$ is greater than or equal to $k$ (i.e., $\delta_3$ is either the number of Lanczos bidiagonalization steps or the number of columns of matrix $R$ in randomized SVD). \begin{table}[ht] \small \centering \caption{\it Detailed asymptotic complexity of Algorithm \ref{alg1} and the schemes in \cite{zha1999updating} and \cite{vecharynski2014fast}. All $\delta$ variables are replaced by $k$. \label{table5}}\vspace{0.01in} \begin{tabular}{ l c c c c} \toprule \toprule Scheme & Building $Z$ & Building $W$ & Solving the projected problem & Other \\ \midrule \rowcolor{white!89.803921568627459!black} \cite{zha1999updating} & - & $ns^2 + nsk$ & $(k+s)^3$ & $k^2(m+n) + nsk$\\ \cite{vecharynski2014fast} & - & $(\mathtt{nnz}(E)+nk)k+(n+s)k^2$ & $(k+s)(k+r)^2 + \mathtt{nnz}(E)k+rs$ & $k^2(m+n) + nrk$\\ \rowcolor{white!89.803921568627459!black} Alg. \ref{alg1} (a) & - & - & $(\mathtt{nnz}(E)+nk)k+(k+s)k^2$ & $k^2m+(\mathtt{nnz}(A)+n)k$ \\ Alg. \ref{alg1} (b) & $\chi$ & - & $(\mathtt{nnz}(E)+(n+r)k)k+(k+r+s)k^2$ & $k^2m+(\mathtt{nnz}(A)+n)k$ \\ \bottomrule \bottomrule \end{tabular} \end{table} The above discussion is summarized in Table \ref{table5} where we list the asymptotic complexity of Algorithm \ref{alg1} and the schemes in \cite{zha1999updating} and \cite{vecharynski2014fast}.
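For a quick practical comparison, the leading-order counts of Table \ref{table5} can be evaluated for given problem sizes; the sketch below is illustrative only (Alg. \ref{alg1} (b) is omitted since its $\chi$ term depends on the choice of $\delta_3$, and the example sizes are arbitrary).

\begin{verbatim}
# Sketch: leading-order FLOP counts from Table 5 (deltas set to k).
def costs(m, n, s, k, r, nnz_A, nnz_E):
    zha  = n*s**2 + n*s*k + (k + s)**3 + k**2*(m + n) + n*s*k
    vech = ((nnz_E + n*k)*k + (n + s)*k**2
            + (k + s)*(k + r)**2 + nnz_E*k + r*s
            + k**2*(m + n) + n*r*k)
    alg_a = (nnz_E + n*k)*k + (k + s)*k**2 + k**2*m + (nnz_A + n)*k
    return {"Zha-Simon": zha, "Vecharynski et al.": vech,
            "Alg. 1 (a)": alg_a}

# Illustrative sizes only:
for name, c in costs(m=100000, n=50000, s=500, k=50, r=10,
                     nnz_A=5000000, nnz_E=25000).items():
    print(name, f"{c:.2e} FLOPs")
\end{verbatim}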
The complexities of the latter two schemes were also verified by adjusting the complexity analysis from \cite{vecharynski2014fast}. To allow for a practical comparison, we replaced all $\delta$ variables with $k$, since in practice these variables are equal to at most a small integer multiple of $k$. Consider now a comparison between Algorithm \ref{alg1} (a) and the method in \cite{zha1999updating}. For all practical purposes, these two schemes return identical approximations to $A_k$. Nonetheless, Algorithm \ref{alg1} (a) requires no effort to build $W$. Moreover, the cost to solve the projected problem is linear with respect to $s$ and cubic with respect to $k$, instead of cubic with respect to the sum $s+k$ as in \cite{zha1999updating}. The only scenario where Algorithm \ref{alg1} can be potentially more expensive than \cite{zha1999updating} is when matrix $A$ is exceptionally dense, and both $k$ and $s$ are very small. Similar observations can be made for the relation between Algorithm \ref{alg1} (b) and the methods in \cite{vecharynski2014fast}, although the comparison is more involved. \section*{\bf \Large Proofs} \subsection*{Proof of Proposition \ref{pro0}} The scalar-vector pair $(\widehat{\sigma}_{i}^2,\widehat{u}^{(i)})$ satisfies the equation $(AA^H-\widehat{\sigma}_{i}^2 I_{m+s})\widehat{u}^{(i)}=0$. If we partition the $i$'th left singular vector as \[ \widehat{u}^{(i)} = \begin{pmatrix} \widehat{f}^{(i)} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix}, \] we can write \begin{equation*} \begin{pmatrix} BB^H-\widehat{\sigma}_{i}^2 I_{m} & BE^H \\[0.3em] EB^H & EE^H-\widehat{\sigma}_{i}^2 I_{s} \\[0.3em] \end{pmatrix} \begin{pmatrix} \widehat{f}^{(i)} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix}=0. \end{equation*} The leading $m$ rows satisfy $(BB^H-\widehat{\sigma}_{i}^2I_m)\widehat{f}^{(i)}=-BE^H\widehat{y}^{(i)}$. Plugging the expression of $\widehat{f}^{(i)}$ into the second block of rows and considering the full SVD $B=U \Sigma V^H$ leads to \begin{align*} 0 & = \left[EE^H -EB^H(BB^H-\widehat{\sigma}_{i}^2I_m)^{-1}BE^H-\widehat{\sigma}_{i}^2I_s\right] \widehat{y}^{(i)} \\ & = \left[E(I_n-B^H(BB^H-\widehat{\sigma}_{i}^2I_m)^{-1}B)E^H-\widehat{\sigma}_{i}^2I_s\right] \widehat{y}^{(i)}\\ & = \left[E(VV^H+V\Sigma^T(\widehat{\sigma}_i^2 I_m - \Sigma \Sigma^T)^{-1} \Sigma V^H)E^H-\widehat{\sigma}_{i}^2I_s\right]\widehat{y}^{(i)}\\ & = \left[EV(I_n+\Sigma^T\left(\widehat{\sigma}_i^2 I_m - \Sigma \Sigma^T\right)^{-1} \Sigma)V^HE^H-\widehat{\sigma}_{i}^2I_s\right]\widehat{y}^{(i)}. \end{align*} The proof concludes by noticing that \begin{equation*} I_n+\Sigma^T\left(\widehat{\sigma}_i^2 I_m - \Sigma \Sigma^T\right)^{-1} \Sigma = \begin{pmatrix} 1+\dfrac{\sigma_1^2}{\widehat{\sigma}_i^2-\sigma_1^2} & & \\[0.3em] & \ddots & \\[0.3em] & & 1+\dfrac{\sigma_{n}^2}{\widehat{\sigma}_i^2-\sigma_{n}^2} \\[0.3em] \end{pmatrix} = \begin{pmatrix} \dfrac{\widehat{\sigma}_i^2}{\widehat{\sigma}_i^2-\sigma_1^2} & & \\[0.3em] & \ddots & \\[0.3em] & & \dfrac{\widehat{\sigma}_{i}^2}{\widehat{\sigma}_i^2 -\sigma_{n}^2} \\[0.3em] \end{pmatrix}, \end{equation*} where for the case $m<n$, we have $\sigma_j=0$ for any $j=m+1,\ldots,n$. In case $\widehat{\sigma}_{i}=\sigma_{j}$, the Moore-Penrose pseudoinverse $(BB^H-\widehat{\sigma}_{i}^2I_m)^\dagger$ is considered instead.
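The secular equation above is easy to verify numerically; the following sketch (a small dense random instance with $m>n$, purely for illustration) checks that the bottom block of each leading left singular vector of $A$ is annihilated by the stated matrix.

\begin{verbatim}
import numpy as np

# Sketch: numerical check of the secular equation of Proposition (pro0).
rng = np.random.default_rng(0)
m, n, s = 8, 6, 3
B = rng.standard_normal((m, n))
E = rng.standard_normal((s, n))
A = np.vstack([B, E])

_, sig, Vh = np.linalg.svd(B)       # full SVD of B: n singular values
U_A, sig_A, _ = np.linalg.svd(A)    # SVD of the updated matrix

for i in range(3):                  # three leading triplets of A
    y = U_A[m:, i]                  # bottom s-block of u_hat^(i)
    d = sig_A[i]**2 / (sig_A[i]**2 - sig**2)
    M = E @ Vh.T @ np.diag(d) @ Vh @ E.T - sig_A[i]**2 * np.eye(s)
    print(i, np.linalg.norm(M @ y)) # machine-precision residual
\end{verbatim}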
\subsection*{Proof of Proposition \ref{pro1}} Since the left singular vectors of $B$ span $\mathbb{C}^m$, we can write \begin{equation*} BE^H\widehat{y}^{(i)} = \sum\limits_{j=1}^m \sigma_{j} u^{(j)} \left(Ev^{(j)}\right)^H\widehat{y}^{(i)}. \end{equation*} The proof concludes by noticing that the top $m\times 1$ part of $\widehat{u}^{(i)}$ can be written as \begin{align*} \widehat{f}^{(i)} & = -(BB^H-\widehat{\sigma}_{i}^2I_m)^{-1}BE^H\widehat{y}^{(i)} \\ &= -U(\Sigma \Sigma^T - \widehat{\sigma}_i^2 I_m)^{-1}\Sigma \left(EV\right)^H\widehat{y}^{(i)}\\ &=-\sum\limits_{j=1}^{\mathtt{min}(m,n)} u^{(j)}\dfrac{\sigma_{j}}{\sigma_{j}^2-\widehat{\sigma}_{i}^2} \left(Ev^{(j)}\right)^H\widehat{y}^{(i)}\\ &=\sum\limits_{j=1}^{\mathtt{min}(m,n)}u^{(j)} \chi_{j,i}. \end{align*} \subsection*{Proof of Proposition \ref{pro2}} We have \begin{align*} \mathtt{min}_{z\in \mathtt{range}(Z)} \|\widehat{u}^{(i)}-z\| & \leq \norm{ \begin{pmatrix} u^{(k+1)},\ldots,u^{(\mathtt{min}(m,n))} \\[0.3em] \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{k+1,i} \\[0.3em] \vdots \\[0.3em] \chi_{\mathtt{min}(m,n),i} \\[0.3em] \end{pmatrix}} \\ & = \norm{\left( \begin{smallmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\ & & \dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\widehat{\sigma}_{i}^2} & & \\ & & & \ddots & \\ & & & & \dfrac{\sigma_{\mathtt{min}(m,n)}}{\sigma_{\mathtt{min}(m,n)}^2-\widehat{\sigma}_{i}^2}\\ \end{smallmatrix} \right)V^HE^H\widehat{y}^{(i)}} \\ & \leq \mathtt{max}\left\lbrace\left|\dfrac{\sigma_{j}}{\sigma_{j}^2-\widehat{\sigma}_{i}^2}\right|\right\rbrace_{j=k+1,\ldots,\mathtt{min}(m,n)} \norm{E^H\widehat{y}^{(i)}}. \end{align*} The proof follows by noticing that due to Cauchy's interlacing theorem we have $\sigma_{k+1}^2\leq \widehat{\sigma}_{i}^2,\ i=1,\ldots,k$, and thus $\left|\dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\widehat{\sigma}_{i}^2}\right| \geq \cdots \geq \left|\dfrac{\sigma_{\mathtt{min}(m,n)}} {\sigma_{\mathtt{min}(m,n)}^2-\widehat{\sigma}_{i}^2}\right|.$ \subsection*{Proof of Lemma \ref{lem1}} We can write \begin{equation*} \begin{aligned} B(\lambda) &=\left(I-U_k U_k^H\right) U \left(\begin{smallmatrix} \sigma_1^2-\lambda & &\\ & \ddots &\\ & & \sigma_m^2-\lambda \end{smallmatrix}\right)^{-1}U^H\\ &= U \left( \begin{smallmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\ & & \dfrac{1}{\sigma_{k+1}^2-\lambda} & & \\ & & & \ddots & \\ & & & & \dfrac{1}{\sigma_m^2-\lambda}\\ \end{smallmatrix} \right)U^{H}, \end{aligned} \end{equation*} where $\sigma_j=0$ for any $j>\mathtt{min}(m,n)$. Let us now define the scalar $\gamma_{j,i} = \dfrac{\widehat{\sigma}_i^2-\lambda}{\sigma_j^2-\lambda}$. Then, \begin{equation*} B(\lambda)\left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & \\[0.3em] & \dfrac{\gamma_{k+1,i}^\rho}{\sigma_{k+1}^2-\lambda} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\gamma_{m,i}^\rho}{\sigma_{m}^2-\lambda} \\[0.3em] \end{pmatrix} U^{H}. \end{equation*} Accounting for all powers $\rho=0,1,2,\ldots$, gives {\small \begin{equation*} B(\lambda)\sum_{\rho=0}^{\infty} \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & \\[0.3em] & \dfrac{\sum_{\rho=0}^{\infty} \gamma_{k+1,i}^\rho}{\sigma_{k+1}^2-\lambda} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\sum_{\rho=0}^{\infty} \gamma_{m,i}^\rho}{\sigma_{m}^2-\lambda} \\[0.3em] \end{pmatrix} U^{H}.
\end{equation*}} Since $\lambda > \widehat{\sigma}_k^2 \geq \sigma_k^2$, it follows that for any $j>k$ we have $|\gamma_{j,i}| < 1$. Therefore, the geometric series converges and $\sum_{\rho=0}^{\infty} \gamma_{j,i}^\rho=\dfrac{1}{1-\gamma_{j,i}}= \dfrac{\sigma_j^2-\lambda}{\sigma_j^2-\widehat{\sigma}_i^2}$. It follows that $\dfrac{1}{\sigma_j^2-\lambda}\sum_{\rho=0}^{\infty} \gamma_{j,i}^\rho =\dfrac{1}{\sigma_j^2-\widehat{\sigma}_i^2}$. We finally have \begin{equation*} \begin{aligned} B(\lambda)\sum_{\rho=0}^{\infty} \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho & = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\[0.3em] & \dfrac{1}{\sigma_{k+1}^2-\widehat{\sigma}_i^2} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{1}{\sigma_{m}^2-\widehat{\sigma}_i^2} \\[0.3em] \end{pmatrix} U^{H} \\ & = \left(I-U_k U_k^H\right)B(\widehat{\sigma}_i^2). \end{aligned} \end{equation*} This concludes the proof. \subsection*{Proof of Proposition \ref{pro34}} \label{proof1} First, notice that \begin{equation*} (BB^H-\widehat{\sigma}_i^2 I_m)^{-1} = U_k U_k^H(BB^H-\widehat{\sigma}_i^2 I_m)^{-1} + (I_m -U_k U_k^H) (BB^H-\widehat{\sigma}_i^2 I_m)^{-1}. \end{equation*} Therefore, we can write \[(BB^H-\widehat{\sigma}_i^2 I_m)^{-1}BE^H\widehat{y}^{(i)} = U_k(\Sigma_k^2-\widehat{\sigma}_i^2 I_k)^{-1} \Sigma_k (EV_k)^H\widehat{y}^{(i)} +(I_m -U_k U_k^H) (BB^H-\widehat{\sigma}_i^2 I_m)^{-1}BE^H\widehat{y}^{(i)}.\] The left singular vector $\widehat{u}^{(i)}$ can be then expressed as \begin{eqnarray*} \widehat{u}^{(i)} & = & \begin{pmatrix} -(BB^H-\widehat{\sigma}_i^2 I_m)^{-1}BE^H \\[0.3em] I_s \\[0.3em] \end{pmatrix}\widehat{y}^{(i)} \\ & = & \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix} - \begin{pmatrix} B(\widehat{\sigma}_i^2)BE^H\widehat{y}^{(i)} \\[0.3em] \\[0.3em] \end{pmatrix}. \end{eqnarray*} The proof concludes by noticing that by Lemma \ref{lem1} we have $B(\widehat{\sigma}_i^2) =B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho$. \subsection*{Proof of Proposition \ref{pro35}} The proof exploits the formula \begin{equation*} (B(\widehat{\sigma}_i^2)-B(\lambda))BE^H = (I-U_kU_k^H)U\left[(\Sigma \Sigma^T-\widehat{\sigma}_i^2 I_m)^{-1}-(\Sigma \Sigma^T-\lambda I_m)^{-1}\right]U^HU\Sigma V^HE^H. \end{equation*} It follows \begin{eqnarray*} \mathtt{min}_{z\in \mathtt{range}(Z)} \|\widehat{u}^{(i)}-z\| & \leq & \norm{\begin{pmatrix} \left[B(\widehat{\sigma}_i^2)-B(\lambda)\right]BE^H\widehat{y}^{(i)}\\[0.3em] \\[0.3em] \end{pmatrix}}\\ & \leq & \norm{ \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\[0.3em] & \dfrac{\sigma_{k+1}(\widehat{\sigma}_i^2-\lambda)} {(\sigma_{k+1}^2-\widehat{\sigma}_{i}^2)\left(\sigma_{k+1}^2-\lambda\right)} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\sigma_{\mathtt{min}(m,n)}(\widehat{\sigma}_i^2-\lambda)} {(\sigma_{\mathtt{min}(m,n)}^2-\widehat{\sigma}_{i}^2)\left(\sigma_{\mathtt{min}(m,n)}^2-\lambda\right)} \\[0.3em] \end{pmatrix}}\norm{E^H\widehat{y}^{(i)}}\\ & \leq & \mathtt{max}\left\lbrace\left|\dfrac{\sigma_{j}(\widehat{\sigma}_i^2-\lambda)} {(\sigma_{j}^2-\widehat{\sigma}_{i}^2)\left(\sigma_{j}^2-\lambda\right)} \right|\right\rbrace_{j=k+1,\ldots,\mathtt{min}(m,n)} \norm{E^H\widehat{y}^{(i)}}.
\end{eqnarray*} \section{Introduction} In several applications it is often required to update the truncated (partial) Singular Value Decomposition (SVD) of matrices whose number of rows (or columns) increases dynamically. One such application is Latent Semantic Indexing (LSI), in which a partial SVD of the term-document matrix needs to be updated after a few new terms/documents have been added to the collection \cite{berry1995using,deerwester1990indexing,zha1999updating}. Another example is the recommendation of a ranked list of items to users in recommender systems (top-N recommendations) \cite{nikolakopoulos2019eigenrec,cremonesi2010performance}. Even when the matrix at hand is not updated over time, it might still be practical to process it one part at a time, therefore leading to the need to update the approximate truncated SVD each time a new part of the matrix is fetched from the system memory. The SVD updating problem can be formulated in terms of linear algebra as follows. Let $B \in \mathbb{C}^{m\times n}$ be a matrix for which we have access to its approximate rank-$k$ truncated SVD $B_k = \sum\limits_{j=1}^k \sigma_j u^{(j)} \left(v^{(j)}\right)^H$, where $\sigma_j,\ j=1,\ldots,\mathtt{min}(m,n)$, denotes the $j$'th largest singular value, and $u^{(j)}$ and $v^{(j)}$ denote the corresponding left and right singular vectors, respectively. Our goal is to obtain an approximate rank-$k$ truncated SVD $A_k = \sum\limits_{j=1}^k \hat{\sigma}_j \hat{u}^{(j)} \left(\hat{v}^{(j)}\right)^H$ of a matrix $A$ obtained after augmenting matrix $B$ with either a new set of rows or columns. In matrix notation this update can be written as \begin{equation*} A = \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix},\ \ E \in \mathbb{C}^{s \times n} \end{equation*} for the case of new rows, and \begin{equation*} A = \begin{pmatrix} B & E \\[0.1em] \end{pmatrix},\ \ E \in \mathbb{C}^{m \times s}, \end{equation*} for the case of new columns. The standard approach to compute the $k$ leading singular triplets of matrix $A$ is by a call to an off-the-shelf SVD solver, i.e., either LAPACK's SVD solver (for dense $A$) \cite{anderson1999lapack}, or an implementation of the Golub-Kahan Lanczos Bidiagonalization procedure (for sparse $A$) \cite{wu2017primme_svds,larsen1998lanczos,hochstenbach2001jacobi,golub1965calculating,wu2015preconditioned,hernandez2005slepc}. The standard approach can take advantage of mature, high-performance SVD software; however, it becomes increasingly impractical as multiple row/column updates take place over time. Additionally, the standard approach is mostly geared towards providing highly accurate solutions, a property that is rarely needed in practice. Since we already have information regarding the $k$ leading singular triplets of $B$, one can think of exploiting the latter to approximate the $k$ leading singular triplets of $A$. This idea has been explored extensively, e.g., to update the full SVD \cite{brand2003fast,moonen1992singular,Gu94astable}, or to update partial SVD computations in the context of LSI \cite{vecharynski2014fast,zha1999updating,berry1995using}. The algorithms proposed in this paper are based on Rayleigh-Ritz projections. More specifically, the problem of updating the SVD is recast into one of computing eigenvalues and eigenvectors (eigenpairs) of either matrix $AA^H$ (for the case where rows are added) or matrix $A^HA$ (for the case where columns are added).
\subsection{Notation} The (full) SVD of matrix $B$ is denoted as $B=U\Sigma V^H$ where $U\in \mathbb{C}^{m\times m}$ and $V\in \mathbb{C}^{n\times n}$ are unitary matrices whose $j$'th column is equal to the left singular vector $u^{(j)}$ and right singular vector $v^{(j)}$, respectively. The matrix $\Sigma \in \mathbb{R}^{m\times n}$ has non-zero entries only along its main diagonal. These entries are equal to the singular values $\sigma_1\geq\cdots\geq \sigma_{\mathtt{min}(m,n)}$. Moreover, we define the matrices $U_{j} = \left[u^{(1)},\ldots,u^{(j)}\right]$, $V_{j} = \left[v^{(1)},\ldots,v^{(j)}\right]$, and $\Sigma_{j} = \mathtt{diag}\left(\sigma_1, \ldots,\sigma_j\right)$. The rank-$k$ truncated SVD of matrix $B$ can be then written as $B_k=U_k\Sigma_kV_k^H =\sum\limits_{j=1}^k \sigma_j u^{(j)} \left(v^{(j)}\right)^H$. We follow the same notation for matrix $A$ with the addition of a ``hat'' on top of each variable, i.e., we denote the singular triplets of matrix $A$ by $(\hat{\sigma}_j,\hat{u}^{(j)}, \hat{v}^{(j)})$, where $\hat{\sigma}_j$ denotes the $j$'th largest singular value, and $\hat{u}^{(j)}$ and $\hat{v}^{(j)}$ denote the corresponding left and right singular vectors, respectively. The routines $\mathtt{numRows}(K)$ and $\mathtt{nnz}(K)$ return the number of rows and the number of non-zero entries of matrix $K$, respectively. \subsection{Updating the SVD of matrices in LSI} The problem of updating the SVD of a matrix has been considered extensively in application-specific fields such as LSI. We next discuss the method of Zha and Simon \cite{zha1999updating}, which is essentially based on the assumption that the rank of matrix $B$ is equal to $k$, i.e., the rank-$k$ truncated SVD of $B$ is also the compact SVD of $B$. The method of Zha and Simon can be also seen as a more accurate but more expensive version of the algorithm in \cite{berry1995using}. Consider first the case $A = \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix}$, and let $(I-V_kV_k^H)E^H=QR$ such that $Q$ has orthonormal columns and $R$ is upper triangular. Matrix $A$ can be then written as \begin{equation*} \begin{aligned} \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix}= \begin{pmatrix} U_k\Sigma_kV_k^H \\[0.3em] E \\[0.3em] \end{pmatrix} & = \begin{pmatrix} U_k & \\[0.3em] & I \\[0.3em] \end{pmatrix} \begin{pmatrix} \Sigma_k & \\[0.3em] EV_k & R^H \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k & Q \\[0.3em] \end{pmatrix}^H \\ &= \left(\begin{pmatrix} U_k & \\[0.3em] & I \\[0.3em] \end{pmatrix}F\right)\Theta \left(\begin{pmatrix} V_k & Q \\[0.3em] \end{pmatrix}G\right)^H \end{aligned} \end{equation*} where the matrix product $F\Theta G^H$ denotes the compact SVD of the matrix $\begin{pmatrix} \Sigma_k & \\[0.3em] EV_k & R^H \\[0.3em] \end{pmatrix}$. The above scheme can be also applied to the case $A=\begin{pmatrix}B&E\end{pmatrix}$.
Indeed, if matrices $Q$ and $R$ are now determined as $(I-U_kU_k^H)E=QR$, we can write \begin{equation*} \begin{aligned} \begin{pmatrix} B & E \end{pmatrix}= \begin{pmatrix} U_k\Sigma_kV_k^H &E \end{pmatrix} & = \begin{pmatrix} U_k & Q \end{pmatrix} \begin{pmatrix} \Sigma_k & U_k^HE \\[0.3em] & R \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k^H & \\[0.3em] & I \\[0.3em] \end{pmatrix}\\ &= \left(\begin{pmatrix} U_k & Q \end{pmatrix} F\right)\Theta\left( \begin{pmatrix} V_k & \\[0.3em] & I \\[0.3em] \end{pmatrix}G\right)^H \end{aligned} \end{equation*} where the matrix product $F\Theta G^H$ denotes the compact SVD of the matrix $\begin{pmatrix} \Sigma_k & U_k^HE \\[0.3em] & R \\[0.3em] \end{pmatrix}$. For general updating problems, replacing $B$ by $B_k$ leads only to an approximation of the $k$ leading singular triplets of $A$. Nonetheless, it has been shown that the above scheme can still provide an exact truncated SVD if $A$ satisfies a ``low-rank plus shift'' structure \cite{zha2000matrices}. In \cite{vecharynski2014fast} it was shown that the complexity of the Zha-Simon scheme is cubic with respect to the update size (number of rows/columns of $E$), and it was suggested to approximate matrices $(I-V_kV_k^H)E^H$ and $(I-U_kU_k^H)E$ by a product of matrices resulting from applying a Golub-Kahan Lanczos bidiagonalization procedure \cite{golub1965calculating}. \section{The eigenvalue viewpoint} \label{Sec2} The $i$'th singular triplet $(\hat{\sigma}_i,\hat{u}^{(i)},\hat{v}^{(i)})$ of matrix $A$ satisfies the equations \begin{equation*} AA^H \hat{u}^{(i)} = \hat{\sigma}_i^2\hat{u}^{(i)},\ \ A^HA \hat{v}^{(i)} = \hat{\sigma}_i^2\hat{v}^{(i)}, \end{equation*} i.e., computing the $i$'th left or right singular vector of $A$ is equivalent to computing the $i$'th eigenvector of a matrix product involving $A$ and its conjugate transpose. The above is generalized to the case of the rank-$k$ truncated SVD: \begin{equation*} AA^H \hat{U}_k = \hat{U}_k\hat{\Sigma}_k^2,\ \ A^HA \hat{V}_k = \hat{V}_k\hat{\Sigma}_k^2. \end{equation*} In practice we need to compute $\hat{\Sigma}_k$ but only one of the matrices $\hat{U}_k$ and $\hat{V}_k$, since these are related by the equations \begin{equation*} \hat{U}_k=A\hat{V}_k\hat{\Sigma}_k^{-1},\ \ \hat{V}_k=A^H\hat{U}_k\hat{\Sigma}_k^{-1}. \end{equation*} The algorithms proposed in this paper are based on the following convention: \begin{enumerate} \item If $ A = \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix}$, we determine $(\hat{\Sigma}_k,\hat{U}_k)$ by computing the $k$ leading eigenpairs of matrix $AA^H$, and set $\hat{V}_k=A^H\hat{U}_k\hat{\Sigma}_k^{-1}$. \item If $ A = \begin{pmatrix} B & E \\[0.3em] \end{pmatrix}$, we determine $(\hat{\Sigma}_k,\hat{V}_k)$ by computing the $k$ leading eigenpairs of matrix $A^HA$, and set $\hat{U}_k=A\hat{V}_k\hat{\Sigma}_k^{-1}$. \end{enumerate} \subsection{The Rayleigh-Ritz method} \label{SecRR} The standard approach to compute a few leading eigenpairs of a symmetric matrix $K$ is by applying the Rayleigh-Ritz (RR) technique onto an ansatz subspace ${\cal Z}$ which (ideally) includes the sought invariant subspace of interest \cite{parlett1998symmetric}. More specifically, let matrix $Z$ represent an orthogonal basis of the subspace ${\cal Z}$. The RR method produces approximate eigenpairs (also known as Ritz pairs) of the form $(\theta_i,Zg_i)$, where the scalar-vector pair $(\theta_i,g_i)$ denotes the $i$'th eigenpair of the matrix $Z^HKZ$.
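A minimal dense sketch of the RR extraction (illustrative only; any mature eigensolver would be used in practice) reads:

\begin{verbatim}
import numpy as np

# Sketch of the Rayleigh-Ritz extraction: Ritz pairs (theta_i, Z g_i)
# of a symmetric K from an orthonormal basis Z of the ansatz subspace.
def rayleigh_ritz(K, Z, k):
    theta, G = np.linalg.eigh(Z.conj().T @ K @ Z)  # small projected problem
    theta, G = theta[::-1][:k], G[:, ::-1][:, :k]  # k leading pairs
    return theta, Z @ G

# Toy usage: residuals vanish only if range(Z) captures the invariant
# subspace of interest.
rng = np.random.default_rng(1)
K = rng.standard_normal((50, 50)); K = K + K.T
Z, _ = np.linalg.qr(rng.standard_normal((50, 10)))
theta, X = rayleigh_ritz(K, Z, k=3)
print(theta, np.linalg.norm(K @ X - X * theta))
\end{verbatim}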
For Hermitian eigenvalue problems, the RR procedure retains several optimality properties, e.g., see \cite{li2015rayleigh}. The convergence of Ritz pairs has been considered in \cite{jia2001analysis}, while bounds on the accuracy of Ritz values are discussed in \cite{knyazev2010rayleigh,saad2011numerical}. Informally, we expect that if the eigenvectors associated with the leading eigenvalues of matrix $K$ are well captured by ${\cal Z}$, then the leading Ritz pairs will be good approximations of the leading eigenpairs of $K$. \begin{algorithm} \caption{RR-SVD (``$AA^H$'' version). \label{alg1}} \begin{algorithmic}[1] \STATE {\bf Input:} $B,E,U_k,\Sigma_k,V_k,Z$ \STATE {\bf Output:} $\hat{U}_k,\hat{\Sigma}_k,\hat{V}_k$ \STATE Solve $[\Theta_k,G_k]=\mathtt{eigs}((A^HZ)^HA^HZ)$ \STATE Set $\hat{U}_k=ZG_k$ and $\hat{\Sigma}_k=\sqrt{\Theta_k}$ \STATE Set $\hat{V}_k = A^H\hat{U}_k \hat{\Sigma}_k^{-1}$ \end{algorithmic} \end{algorithm} Algorithm \ref{alg1} sketches the proposed numerical procedure to approximate the rank-$k$ truncated SVD of matrix $A=\begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix}$ by means of a Rayleigh-Ritz projection. Step 3 computes the $k$ leading eigenpairs of the matrix $(A^HZ)^HA^HZ$, and Step 4 forms the associated Ritz pairs. Finally, the $k$ leading right singular vectors are computed by means of a matrix multiplication with matrix $A^H$ in Step 5. Steps 4 and 5 cost $2\mathtt{nnz}(Z)k$ and $(\mathtt{nnz}(A)+n)k$ Floating Point Operations (FLOPs), respectively. The cost of Step 3 will generally depend on the convergence of the eigenvalue solver. Since the matrix $(A^HZ)^HA^HZ$ is symmetric, its $k$ leading eigenpairs can be computed by the Lanczos algorithm \cite{saad2011numerical}. Under the mild assumption that Lanczos performs $\gamma k$ iterations, $\gamma \in \mathbb{Z}^*$, a rough estimate of the total computational cost of Step 3 is $4\left(\mathtt{nnz}(Z)+\mathtt{nnz}(A)\right)\gamma k+2\mathtt{numRows}(Z^H)(\gamma k)^2$ FLOPs. \begin{algorithm} \caption{RR-SVD (``$A^HA$'' version). \label{alg2}} \begin{algorithmic}[1] \STATE {\bf Input:} $B,E,U_k,\Sigma_k,V_k,Z$ \STATE {\bf Output:} $\hat{U}_k,\hat{\Sigma}_k,\hat{V}_k$ \STATE Solve $[\Theta_k,G_k]=\mathtt{eigs}((AZ)^HAZ)$ \STATE Set $\hat{V}_k=ZG_k$ and $\hat{\Sigma}_k=\sqrt{\Theta_k}$ \STATE Set $\hat{U}_k = A\hat{V}_k \hat{\Sigma}_k^{-1}$ \end{algorithmic} \end{algorithm} Algorithm \ref{alg1} can be adapted to approximate the rank-$k$ truncated SVD of matrix $A=\begin{pmatrix} B & E \end{pmatrix}$ by applying the RR procedure to matrix $A^HA$, followed by a computation of the approximate $k$ left singular vectors. The complete procedure is summarized in Algorithm \ref{alg2}. \section{Building the projection matrix $Z$} Throughout the rest of this section we focus on the update $A=\begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix}$, but the discussion extends to updates of the form $A = \begin{pmatrix} B & E \end{pmatrix}$ in a straightforward manner. Following the discussion in Section \ref{Sec2}, our goal is to take advantage of the information in the truncated SVD of $B$ and build a projection matrix $Z$ such that the distance between the subspace ${\cal Z}$ and the left singular vectors $\hat{u}^{(1)},\ldots,\hat{u}^{(k)}$ is small. In this section we describe and analyze two different choices for $Z$. Throughout the rest of this paper we will assume that matrix $E$ has $s$ rows.
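A dense transcription of Algorithm \ref{alg1} is immediate; the sketch below is illustrative only (at scale, Step 3 would use a Lanczos eigensolver rather than \texttt{eigh}), and the toy usage anticipates the block-diagonal choice of $Z$ analyzed in the following subsection.

\begin{verbatim}
import numpy as np

# Sketch of Algorithm 1 with dense linear algebra.
def rr_svd_rows(A, Z, k):
    W = A.conj().T @ Z                          # A^H Z
    theta, G = np.linalg.eigh(W.conj().T @ W)   # Step 3
    theta, G = theta[::-1][:k], G[:, ::-1][:, :k]
    U_k, S_k = Z @ G, np.sqrt(theta)            # Step 4
    V_k = (W @ G) / S_k                         # Step 5: A^H U_k S_k^{-1}
    return U_k, S_k, V_k

# Toy usage with Z = blkdiag(U_k(B), I_s):
rng = np.random.default_rng(2)
m, n, s, k = 40, 30, 5, 4
B, E = rng.standard_normal((m, n)), rng.standard_normal((s, n))
A = np.vstack([B, E])
Ub = np.linalg.svd(B)[0]
Z = np.zeros((m + s, k + s))
Z[:m, :k], Z[m:, k:] = Ub[:, :k], np.eye(s)
U_k, S_k, V_k = rr_svd_rows(A, Z, k)
print(S_k, np.linalg.svd(A, compute_uv=False)[:k])  # Ritz vs exact
\end{verbatim}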
\subsection{Exploiting the left singular vectors of $B$} \label{choiceZ1} The following proposition presents a closed-form expression of the $i$'th eigenvector of matrix $AA^H$. \begin{proposition}\label{pro0} The left singular vector $\hat{u}^{(i)}$ associated with singular value $\hat{\sigma}_i$ is equal to \begin{equation*} \hat{u}^{(i)}= \begin{pmatrix} -(BB^H-\hat{\sigma}_{i}^2I_m)^{-1}BE^H\hat{y}^{(i)} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix}, \end{equation*} where $\hat{y}^{(i)}$ satisfies the equation \[\left[E\left(\sum\limits_{j=1}^{\mathtt{min}(m,n)} v^{(j)} \left(v^{(j)}\right)^H \dfrac{\hat{\sigma}_{i}^2}{\hat{\sigma}_{i}^2-\sigma_{j}^2} \right)E^H-\hat{\sigma}_{i}^2I_s\right]\hat{y}^{(i)}=0.\] \end{proposition} \begin{proof} The scalar-vector pair $(\hat{\sigma}_{i}^2,\hat{u}^{(i)})$ satisfies the equation $(AA^H-\hat{\sigma}_{i}^2 I_{m+s})\hat{u}^{(i)}=0$. If we partition the $i$'th left singular vector as \[ \hat{u}^{(i)} = \begin{pmatrix} \hat{f}^{(i)} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix}, \] we can write \begin{equation*} \begin{pmatrix} BB^H-\hat{\sigma}_{i}^2 I_{m} & BE^H \\[0.3em] EB^H & EE^H-\hat{\sigma}_{i}^2 I_{s} \\[0.3em] \end{pmatrix} \begin{pmatrix} \hat{f}^{(i)} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix}=0. \end{equation*} The leading $m$ rows satisfy $(BB^H-\hat{\sigma}_{i}^2I_m)\hat{f}^{(i)}=-BE^H\hat{y}^{(i)}$, from which we can determine $\hat{f}^{(i)}$ assuming $\hat{y}^{(i)}$ is provided. Note that in case $\hat{\sigma}_{i}=\sigma_{j}$, the Moore-Penrose pseudoinverse $(BB^H-\hat{\sigma}_{i}^2I_m)^\dagger$ is considered instead. Plugging the expression of $\hat{f}^{(i)}$ into the second block of rows and considering the full SVD $B=U\Sigma V^H$ leads to \begin{equation*} \begin{aligned} 0 & = \left[EE^H -EB^H(BB^H-\hat{\sigma}_{i}^2I_m)^{-1}BE^H-\hat{\sigma}_{i}^2I_s\right]\hat{y}^{(i)}\\ & = \left[E(I_n-B^H(BB^H-\hat{\sigma}_{i}^2I_m)^{-1}B)E^H-\hat{\sigma}_{i}^2I_s\right]\hat{y}^{(i)}\\ & = \left[E(VV^H+V\Sigma^T(\hat{\sigma}_i^2 I_m - \Sigma \Sigma^T)^{-1} \Sigma V^H)E^H-\hat{\sigma}_{i}^2I_s\right]\hat{y}^{(i)}\\ & = \left[EV(I_n+\Sigma^T\left(\hat{\sigma}_i^2 I_m - \Sigma \Sigma^T\right)^{-1} \Sigma)V^HE^H-\hat{\sigma}_{i}^2I_s\right]\hat{y}^{(i)}. \end{aligned} \end{equation*} The proof concludes by noticing that $\Sigma \Sigma^T$ is an $m \times m$ diagonal matrix with its non-zero entries located in its leading principal submatrix of size $\mathtt{min}(m,n)$. \end{proof} As we show next, the left singular vector $\hat{u}^{(i)}$ can be also written as a linear combination of the columns of matrix $\begin{pmatrix} u^{(1)},\ldots,u^{(\mathtt{min}(m,n))} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}$.
\begin{proposition} \label{pro1} The left singular vector $\hat{u}^{(i)}$ associated with singular value $\hat{\sigma}_i$ is equal to \begin{equation*} \hat{u}^{(i)} = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix} + \begin{pmatrix} u^{(k+1)},\ldots,u^{(\mathtt{min}(m,n))} \\[0.3em] \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{k+1,i} \\[0.3em] \vdots \\[0.3em] \chi_{\mathtt{min}(m,n),i} \\[0.3em] \end{pmatrix}, \end{equation*} where \[\chi_{j,i}=-\left(Ev^{(j)}\right)^H\hat{y}^{(i)} \dfrac{\sigma_{j}}{\sigma_{j}^2-\hat{\sigma}_{i}^2}.\] \end{proposition} \begin{proof} Since the left singular vectors of $B$ span $\mathbb{C}^m$, we can write \begin{equation*} BE^H\hat{y}^{(i)} = \sum\limits_{j=1}^m \sigma_{j} u^{(j)} \left(Ev^{(j)}\right)^H\hat{y}^{(i)}. \end{equation*} The proof concludes by noticing that the top $m\times 1$ block of $\hat{u}^{(i)}$, namely $\hat{f}^{(i)}$, is equal to \begin{equation*} \begin{aligned} \hat{f}^{(i)} & = -(BB^H-\hat{\sigma}_{i}^2I_m)^{-1}BE^H\hat{y}^{(i)} \\ &= -U(\Sigma \Sigma^T - \hat{\sigma}_i^2 I_m)^{-1}\Sigma \left(EV\right)^H\hat{y}^{(i)}\\ &=-\sum\limits_{j=1}^m u^{(j)}\dfrac{\sigma_{j}}{\sigma_{j}^2-\hat{\sigma}_{i}^2} \left(Ev^{(j)}\right)^H\hat{y}^{(i)}\\ &=-\sum\limits_{j=1}^{\mathtt{min}(m,n)}u^{(j)} \dfrac{\sigma_{j}}{\sigma_{j}^2-\hat{\sigma}_{i}^2} \left(Ev^{(j)}\right)^H\hat{y}^{(i)}\\ &=\sum\limits_{j=1}^{\mathtt{min}(m,n)}u^{(j)} \chi_{j,i}. \end{aligned} \end{equation*} \end{proof} Proposition \ref{pro1} suggests that if $\hat{\sigma}_{k}\gg \sigma_{k+1}$, then it might be possible to approximate $\hat{u}^{(i)}$ by linear combinations of the columns of matrix $\begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}.$ As we show next, the distance between $\hat{u}^{(i)}$ and the range space of the latter matrix is at most proportional to $\dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\hat{\sigma}_{i}^2}$. \begin{proposition} \label{pro2} Let matrix $Z$ in Algorithm \ref{alg1} be defined as \begin{equation*}\label{Zpro2} Z = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}, \end{equation*} and set $\Omega = \sqrt{\mathtt{min}(m,n)-k}\norm{E^H\hat{y}^{(i)}}$. Then, \begin{equation*} \mathtt{min}_{z\in \cal{Z}} \|\hat{u}^{(i)}-z\| \leq \Omega \left|\dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\hat{\sigma}_{i}^2}\right|. \end{equation*} \end{proposition} \begin{proof} We have \begin{equation*} \begin{aligned} \mathtt{min}_{z\in \cal{Z}} \|\hat{u}^{(i)}-z\| & \leq \norm{ \begin{pmatrix} u^{(k+1)},\ldots,u^{(\mathtt{min}(m,n))} \\[0.3em] \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{k+1,i} \\[0.3em] \vdots \\[0.3em] \chi_{\mathtt{min}(m,n),i} \\[0.3em] \end{pmatrix}} \\ & = \norm{\begin{pmatrix} \chi_{k+1,i} \\[0.3em] \vdots \\[0.3em] \chi_{\mathtt{min}(m,n),i} \\[0.3em] \end{pmatrix}} \\ & \leq \sqrt{\mathtt{min}(m,n)-k}\ \mathtt{max}_{j=k+1,\ldots,\mathtt{min}(m,n)} \left|\dfrac{\sigma_{j}}{\sigma_{j}^2-\hat{\sigma}_{i}^2} \right| \norm{E^H\hat{y}^{(i)}}.
\end{aligned} \end{equation*} The proof follows by noticing that due to Cauchy's interlacing theorem we have $\sigma_{k+1}^2\leq \hat{\sigma}_{i}^2,\ i=1,\ldots,k$, and thus $\left|\dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\hat{\sigma}_{i}^2}\right| \geq \cdots \geq \left|\dfrac{\sigma_{\mathtt{min}(m,n)}} {\sigma_{\mathtt{min}(m,n)}^2-\hat{\sigma}_{i}^2}\right|.$ \end{proof} Proposition \ref{pro2} implies that left singular vectors associated with larger singular values of $A$ might be approximated to higher accuracy. \subsubsection{The structure of matrix $(A^HZ)^HA^HZ$} Setting the projection matrix $Z$ as in Proposition \ref{pro2} gives $Z^HZ=I$ and \begin{equation*} (A^HZ)^HA^HZ = \begin{pmatrix} \Sigma_k V_k^H \\[0.3em] E \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k\Sigma_k & E^H \\[0.3em] \end{pmatrix}. \end{equation*} Each MV product with matrix $(A^HZ)^HA^HZ$ requires two MV products with matrices $\Sigma_k,\ V_k$ and $E$, for a total cost of $4(nk+\mathtt{nnz}(E))$ FLOPs. Moreover, we have $\mathtt{numRows} (Z^H)=s+k$, and thus a rough estimate of the cost of Step 3 in Algorithm \ref{alg1} is $4(nk+\mathtt{nnz}(E))\gamma k + 2(s+k)(\gamma k)^2$ FLOPs. \subsection{Exploiting left singular vectors of $B$ and resolvent expansions} \label{choiceZ2} The choice of $Z$ presented in Section \ref{choiceZ1} computes the exact rank-$k$ truncated SVD of $A$ provided the rank of $B$ is exactly $k$. Nonetheless, when the rank of $B$ is larger than $k$ and the magnitude of the singular values $\sigma_{k+1},\ldots,\sigma_{\mathtt{min}(m,n)}$ is not sufficiently smaller than $\sigma_k$, the overall accuracy returned by Algorithm \ref{alg1} might not be high enough. This section presents an enhanced projection matrix $Z$ so that Algorithm \ref{alg1} returns more accurate approximations. \begin{lemma} \label{lem1} Let $B(\lambda)=(I_m-U_kU_k^H)(BB^H-\lambda I_m)^{-1}$ such that $\lambda \geq \hat{\sigma}_k^2$. Then, for any $i=1,\ldots,k$, we have: \[B(\hat{\sigma}_i^2) =B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho.\] \end{lemma} \begin{proof} We can write \begin{equation*} \begin{aligned} B(\lambda) &=\left(I-U_k U_k^H\right) U \left(\begin{smallmatrix} \sigma_1^2-\lambda & &\\ & \ddots &\\ & & \sigma_m^2-\lambda \end{smallmatrix}\right)^{-1}U^H\\ &= U \left( \begin{smallmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\ & & \dfrac{1}{\sigma_{k+1}^2-\lambda} & & \\ & & & \ddots & \\ & & & & \dfrac{1}{\sigma_m^2-\lambda}\\ \end{smallmatrix} \right)U^{H}, \end{aligned} \end{equation*} where $\sigma_j=0$ for any $j>\mathtt{min}(m,n)$. Let us now define the scalar $\gamma_{j,i} = \dfrac{\hat{\sigma}_i^2-\lambda}{\sigma_j^2-\lambda}$. We can write \begin{equation*} B(\lambda)\left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & \\[0.3em] & \dfrac{\gamma_{k+1,i}^\rho}{\sigma_{k+1}^2-\lambda} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\gamma_{m,i}^\rho}{\sigma_{m}^2-\lambda} \\[0.3em] \end{pmatrix} U^{H}. \end{equation*} Accounting for all powers $\rho=0,1,2,\ldots$, gives {\small \begin{equation*} B(\lambda)\sum_{\rho=0}^{\infty} \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & \\[0.3em] & \dfrac{\sum_{\rho=0}^{\infty} \gamma_{k+1,i}^\rho}{\sigma_{k+1}^2-\lambda} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\sum_{\rho=0}^{\infty} \gamma_{m,i}^\rho}{\sigma_{m}^2-\lambda} \\[0.3em] \end{pmatrix} U^{H}.
\end{equation*}} Since $\lambda \geq \hat{\sigma}_k^2 \geq \sigma_k^2$, it follows that for any $j>k$ we have $|\gamma_{j,i}| < 1$. Therefore, the geometric series converges and $\sum_{\rho=0}^{\infty} \gamma_{j,i}^\rho=\dfrac{1}{1-\gamma_{j,i}}= \dfrac{\sigma_j^2-\lambda}{\sigma_j^2-\hat{\sigma}_i^2}$. It follows that $\dfrac{1}{\sigma_j^2-\lambda}\sum_{\rho=0}^{\infty} \gamma_{j,i}^\rho =\dfrac{1}{\sigma_j^2-\hat{\sigma}_i^2}$. We finally have \begin{equation*} \begin{aligned} B(\lambda)\sum_{\rho=0}^{\infty} \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho & = U \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\[0.3em] & \dfrac{1}{\sigma_{k+1}^2-\hat{\sigma}_i^2} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{1}{\sigma_{m}^2-\hat{\sigma}_i^2} \\[0.3em] \end{pmatrix} U^{H} \\ & = \left(I-U_k U_k^H\right)B(\hat{\sigma}_i^2). \end{aligned} \end{equation*} This concludes the proof. \end{proof} \begin{proposition} The left singular vector $\hat{u}^{(i)}$ associated with singular value $\hat{\sigma}_i$ is equal to \begin{equation*} \hat{u}^{(i)} = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix} - \begin{pmatrix} B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho BE^H\hat{y}^{(i)} \\[0.3em] \\[0.3em] \end{pmatrix}. \end{equation*} \end{proposition} \begin{proof} First, notice that $(BB^H-\hat{\sigma}_i^2 I_m)^{-1} = U_k U_k^H(BB^H-\hat{\sigma}_i^2 I_m)^{-1} + (I_m -U_k U_k^H) (BB^H-\hat{\sigma}_i^2 I_m)^{-1}$. Therefore, we have \[(BB^H-\hat{\sigma}_i^2 I_m)^{-1}BE^H\hat{y}^{(i)} = U_k(\Sigma_k^2-\hat{\sigma}_i^2 I_k)^{-1} \Sigma_k (EV_k)^H\hat{y}^{(i)} +(I_m -U_k U_k^H) (BB^H-\hat{\sigma}_i^2 I_m)^{-1}BE^H\hat{y}^{(i)}.\] The left singular vector $\hat{u}^{(i)}$ can then be written as \begin{equation*} \begin{aligned} \hat{u}^{(i)} & = \begin{pmatrix} -(BB^H-\hat{\sigma}_i^2 I_m)^{-1}BE^H \\[0.3em] I_s \\[0.3em] \end{pmatrix}\hat{y}^{(i)} \\ & = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix} - \begin{pmatrix} B(\hat{\sigma}_i^2)BE^H\hat{y}^{(i)} \\[0.3em] \\[0.3em] \end{pmatrix}. \end{aligned} \end{equation*} The proof concludes by noticing that by Lemma \ref{lem1} we have $B(\hat{\sigma}_i^2) =B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho$. \end{proof} The above representation suggests an approach to enhance the Rayleigh-Ritz projection subspace. Indeed, we can write \begin{equation*} \hat{u}^{(i)} = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & -B(\lambda)BE^H & \\[0.3em] & & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \hat{y}^{(i)} \\[0.3em] \end{pmatrix} - \begin{pmatrix} B(\lambda) \sum\limits_{\rho=1}^\infty \left[(\hat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho BE^H\hat{y}^{(i)} \\[0.3em] \\[0.3em] \end{pmatrix}. \end{equation*} \begin{proposition} \label{pro3} Let matrix $Z$ in Algorithm \ref{alg1} be defined as \begin{equation*} Z = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & -B(\lambda)BE^H & \\[0.3em] & & I_s \\[0.3em] \end{pmatrix} \end{equation*} where $\lambda > \sigma_{k+1}^2$, and set $\Omega = \sqrt{\mathtt{min}(m,n)-k}\norm{E^H\hat{y}^{(i)}}$.
Then, \begin{equation*} \mathtt{min}_{z\in \cal{Z}} \|\hat{u}^{(i)}-z\| \leq \Omega\left|\dfrac{\sigma_{k+1}(\hat{\sigma}_i^2-\lambda)} {(\sigma_{k+1}^2-\hat{\sigma}_{i}^2)\left(\sigma_{k+1}^2-\lambda\right)}\right|. \end{equation*} \end{proposition} \begin{proof} We have \begin{equation*} \begin{aligned} \mathtt{min}_{z\in \cal{Z}} \|\hat{u}^{(i)}-z\| & \leq \norm{\begin{pmatrix} (B(\hat{\sigma}_i^2)-B(\lambda))BE^H\hat{y}^{(i)}\\[0.3em] \\[0.3em] \end{pmatrix}}\\ & \leq \norm{ \begin{pmatrix} \scalebox{2}{$0$}_{k,k} & & & & \\[0.3em] & \dfrac{\sigma_{k+1}(\hat{\sigma}_i^2-\lambda)} {(\sigma_{k+1}^2-\hat{\sigma}_{i}^2)\left(\sigma_{k+1}^2-\lambda\right)} & & \\[0.3em] & & \ddots & \\[0.3em] & & & \dfrac{\sigma_{\mathtt{min}(m,n)}(\hat{\sigma}_i^2-\lambda)} {(\sigma_{\mathtt{min}(m,n)}^2-\hat{\sigma}_{i}^2)\left(\sigma_{\mathtt{min}(m,n)}^2-\lambda\right)} \\[0.3em] \end{pmatrix}V^H}\norm{E^H\hat{y}^{(i)}}\\ & \leq \sqrt{\mathtt{min}(m,n)-k}\ \mathtt{max}_{j=k+1,\ldots,\mathtt{min}(m,n)} \left|\dfrac{\sigma_{j}(\hat{\sigma}_i^2-\lambda)} {(\sigma_{j}^2-\hat{\sigma}_{i}^2)\left(\sigma_{j}^2-\lambda\right)}\right| \norm{E^H\hat{y}^{(i)}}. \end{aligned} \end{equation*} \end{proof} \subsubsection{Truncating $B(\lambda)BE^H$} Setting the matrix $Z$ as in Proposition \ref{pro3} requires the computation of the matrix $-B(\lambda)BE^H$. This can be achieved by applying a block iterative solver for the solution of the consistent linear system with multiple right-hand sides \begin{equation}\label{eqBl1} -(I_m-U_kU_k^H)(BB^H-\lambda I_m)X = (I_m-U_kU_k^H)BE^H. \end{equation} Notice that for any $\lambda>\hat{\sigma}_k^2$, the eigenvalues of the matrix $-(I_m-U_kU_k^H)(BB^H-\lambda I_m)$ are non-negative and thus it is possible to use Conjugate Gradient-type approaches, e.g., see \cite{stathopoulos2010computing,kalantzis2018scalable,kalantzis2013accelerating}. In practice, solving the linear system in (\ref{eqBl1}) might be too costly. An alternative is to consider $-B(\lambda)BE^H\approx X_{\lambda,r}S_{\lambda,r}Y_{\lambda,r}^H$, where $X_{\lambda,r}S_{\lambda,r}Y_{\lambda,r}^H$ denotes the rank-$r$ truncated SVD of matrix $-B(\lambda)BE^H$. The matrix $-B(\lambda)BE^H$ can then be replaced by the matrix $X_{\lambda,r}$, since our goal is to build a subspace for a RR projection and $\mathtt{range}\left(X_{\lambda,r}S_{\lambda,r}Y_{\lambda,r}^H\right) \subseteq\mathtt{range}\left(X_{\lambda,r}\right)$. The $r$ leading left singular vectors of matrix $B(\lambda)BE^H$ can be obtained by Lanczos bidiagonalization. \subsubsection{The RR pencil $((A^HZ)^HA^HZ,Z^HZ)$} Setting the basis matrix $Z$ as \begin{equation*} Z = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & X_{\lambda,r} & \\[0.3em] & & I_s \\[0.3em] \end{pmatrix} \end{equation*} leads to \begin{equation*} (A^HZ)^HA^HZ = \begin{pmatrix} \Sigma_k V_k^H \\[0.3em] X_{\lambda,r}^HB \\[0.3em] E \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k\Sigma_k & B^H X_{\lambda,r} & E^H \end{pmatrix}. \end{equation*} Each MV product with matrix $(A^HZ)^HA^HZ$ requires two MV products with matrices $\Sigma_k,\ V_k,\ E$ and $B^HX_{\lambda,r}$, for a total cost of $4(n(k+r)+\mathtt{nnz}(E))$ FLOPs. Moreover, we have $\mathtt{numRows}(Z^H)=k+r+s$, and thus a rough estimate of Step 3 in Algorithm \ref{alg1} is $4(n(k+r)+\mathtt{nnz}(E))\gamma k + 2(s+k+r)(\gamma k)^2$ FLOPs.
Note that $Z^HZ=I$ since $U_k^HB(\lambda)BE^H=0$ and $\mathtt{range}(X_{\lambda,r}) \subseteq \mathtt{range}(B(\lambda)BE^H)$. \section{Introduction} This paper considers the update of the truncated SVD of a sparse matrix subject to additions of new rows and/or columns. More specifically, let $B \in \mathbb{C}^{m\times n}$ be a matrix for which its rank-$k$ (truncated) SVD $B_k$ is available. Our goal is to obtain an approximate rank-$k$ SVD $A_k$ of matrix \begin{equation*} \label{eq200} A = \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix},\ \ {\rm or}\ \ A = \begin{pmatrix} B & E \end{pmatrix}, \end{equation*} where $E$ denotes the matrix of newly added rows or columns. This process can be repeated several times, where at each instance matrix $A$ becomes matrix $B$ at the next level. Note that a similar problem, not explored in this paper, is to approximate the rank-$k$ SVD of $B$ after modifying its (non-)zero entries, e.g., see \cite{zha1999updating}. Matrix problems such as the ones above play an important role in several real-world applications. One such example is Latent Semantic Indexing (LSI) in which the truncated SVD of the current term-document matrix needs to be updated after a few new terms/documents have been added to the collection \cite{berry1995using,deerwester1990indexing,zha1999updating}. Another example is the update of latent-factor-based models of user-item rating matrices in top-N recommendation \cite{cremonesi2010performance,nikolakopoulos2019eigenrec,sarwar2002incremental}. Additional applications in geostatistical screening can be found in \cite[Chapter 6]{horesh2015reduced}. The standard approach to compute $A_k$ is to disregard any previously available information and apply directly to $A$ an off-the-shelf, high-performance, SVD solver \cite{baglama2005augmented,hernandez2005slepc,wu2015preconditioned,halko2011finding,ubaru2019}. This standard approach might be feasible when the original matrix is updated only once or twice; however, it becomes increasingly impractical as multiple row/column updates take place over time. Therefore, it becomes crucial to develop algorithms which return a reasonable approximation of $A_k$ while taking advantage of $B_k$. Such schemes have already been considered extensively for the case of full SVD \cite{brand2003fast,Gu94astable,moonen1992singular} and rank-$k$ SVD \cite{berry1995using,sarwar2002incremental,vecharynski2014fast,zha1999updating}. Nonetheless, for general matrices it is rather unclear how to enhance their accuracy. \subsection{Contributions.} \begin{enumerate}\itemsep 0pt \item We propose and analyze a projection scheme to update the rank-$k$ SVD of evolving matrices. Our scheme uses a right singular projection subspace equal to $\mathbb{C}^n$, and only determines the left singular projection subspace. \item We propose and analyze two different options to set the left singular projection subspace. A complexity analysis is also presented. \item We present experiments performed on matrices stemming from applications in LSI and recommender systems. These experiments demonstrate the numerical behavior of the proposed scheme and showcase the various tradeoffs in accuracy versus complexity.
\end{enumerate} \section{Background and notation} The (full) SVD of matrix $B$ is denoted as $B=U\Sigma V^H$ where $U\in \mathbb{C}^{m\times m}$ and $V\in \mathbb{C}^{n\times n}$ are unitary matrices whose $j$'th column is equal to the left singular vector $u^{(j)}$ and right singular vector $v^{(j)}$, respectively. The matrix $\Sigma \in \mathbb{R}^{m\times n}$ has non-zero entries only along its main diagonal, and these entries are equal to the singular values $\sigma_1\geq\cdots\geq \sigma_{\mathtt{min}(m,n)}$. Moreover, we define the matrices $U_{j} = \left[u^{(1)},\ldots,u^{(j)}\right]$, $V_{j} = \left[v^{(1)},\ldots,v^{(j)}\right]$, and $\Sigma_{j} = \mathtt{diag}\left(\sigma_1, \ldots,\sigma_j\right)$. The rank-$k$ truncated SVD of matrix $B$ can then be written as $B_k=U_k\Sigma_kV_k^H =\sum_{j=1}^k \sigma_j u^{(j)} \left(v^{(j)}\right)^H$. We follow the same notation for matrix $A$ with the exception that a circumflex is added on top of each variable, i.e., $A_k=\widehat{U}_k\widehat{\Sigma}_k\widehat{V}_k^H =\sum_{j=1}^k \widehat{\sigma}_j \widehat{u}^{(j)} \left(\widehat{v}^{(j)}\right)^H$, with $\widehat{U}_{j} = \left[\widehat{u}^{(1)},\ldots,\widehat{u}^{(j)}\right]$, $\widehat{V}_{j} = \left[\widehat{v}^{(1)},\ldots,\widehat{v}^{(j)}\right]$, and $\widehat{\Sigma}_{j} = \mathtt{diag}\left(\widehat{\sigma}_1, \ldots,\widehat{\sigma}_j\right)$. The routines $\mathtt{nr}(K)$ and $\mathtt{nnz}(K)$ return the number of rows and the number of non-zero entries of matrix $K$, respectively. Throughout this paper $\|\cdot\|$ will stand for the $\ell_2$ norm when the input is a vector, and the spectral norm when the input is a matrix. Moreover, the term $\mathtt{range}(K)$ will denote the column space of matrix $K$, while $\mathtt{span}(\cdot)$ will denote the linear span of a set of vectors. The identity matrix of size $n$ will be denoted by $I_n$. \subsection{Related work.} The problem of updating the SVD of an evolving matrix has been considered extensively in the context of LSI. Consider first the case $A = \begin{pmatrix} B \\ E \\ \end{pmatrix}$, and let $(I-V_kV_k^H)E^H=QR$ such that $Q$ is orthonormal and $R$ is upper trapezoidal. The scheme in \cite{zha1999updating} writes \setlength\arraycolsep{1.4pt}% {\small \begin{eqnarray*} \begin{pmatrix} B \\[0.3em] E \\[0.3em] \end{pmatrix} \approx \begin{pmatrix} U_k\Sigma_kV_k^H \\[0.3em] E \\[0.3em] \end{pmatrix} & = & \begin{pmatrix} U_k & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \Sigma_k & \\[0.3em] EV_k & R^H \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k & Q \\[0.3em] \end{pmatrix}^H \\ &=& \left(\begin{pmatrix} U_k & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}F\right)\Theta \left(\begin{pmatrix} V_k & Q \\[0.3em] \end{pmatrix}G\right)^H \end{eqnarray*}} where the matrix product $F\Theta G^H$ denotes the compact SVD of the matrix $\begin{pmatrix} \Sigma_k & \\ EV_k & R^H \\ \end{pmatrix}$. The above idea can also be applied to $A=\begin{pmatrix}B&E\end{pmatrix}$.
Indeed, if matrices $Q$ and $R$ are now determined as $(I-U_kU_k^H)E=QR$, we can approximate {\small \begin{eqnarray*} \begin{pmatrix} B & E \end{pmatrix}&\approx& \begin{pmatrix} U_k\Sigma_kV_k^H &E \end{pmatrix} \\ & = & \begin{pmatrix} U_k & Q \end{pmatrix} \begin{pmatrix} \Sigma_k & U_k^HE \\[0.3em] & R \\[0.3em] \end{pmatrix} \begin{pmatrix} V_k^H & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}\\ &= & \left(\begin{pmatrix} U_k & Q \end{pmatrix} F\right)\Theta\left( \begin{pmatrix} V_k & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}G\right)^H \end{eqnarray*}} where the matrix product $F\Theta G^H$ now denotes the compact SVD of the matrix $\begin{pmatrix} \Sigma_k & U_k^HE \\ & R \\ \end{pmatrix}$. When $B_k$ coincides with the compact SVD of $B$, the above schemes compute the exact rank-$k$ SVD of $A$, and no access to matrix $B$ is required. Nonetheless, the application of the method in \cite{zha1999updating} can be challenging. For general updating problems, or problems where $A$ does not satisfy a ``low-rank plus shift'' structure \cite{zha2000matrices}, replacing $B$ by $B_k$ might not lead to a satisfactory approximation of $A_k$. Moreover, the memory/computational cost associated with the computation of the QR and SVD decompositions in each one of the above two scenarios might be prohibitive. The latter was recognized in \cite{vecharynski2014fast} where it was proposed to adjust the method in \cite{zha1999updating} by replacing matrices $(I-V_kV_k^H)E^H$ and $(I-U_kU_k^H)E$ with a low-rank approximation computed by applying the Golub-Kahan Lanczos bidiagonalization procedure \cite{golub1965calculating}. Similar ideas have been suggested in \cite{yamazaki2017sampling} and \cite{ubaru2019sampling} where the Golub-Kahan Lanczos bidiagonalization procedure was replaced by randomized SVD \cite{halko2011finding,ubaru2015low} and graph coarsening \cite{ubaru2019sampling}, respectively. \section{The projection viewpoint} \label{Sec2} The methods discussed in the previous section can be recognized as instances of a Rayleigh-Ritz projection procedure and can be summarized as follows \cite{vecharynski2014fast,yamazaki2017sampling}: \begin{enumerate}\itemsep 0pt \item Compute matrices $Z$ and $W$ such that $\mathtt{range}(Z)$ and $\mathtt{range}(W)$ approximately capture $\mathtt{range}(\widehat{U}_k)$ and $\mathtt{range}(\widehat{V}_k)$, respectively. \item Compute $[\Theta_k,F_k,G_k] = \mathtt{svd}(Z^HAW)$ where $\Theta_k,\ F_k$, and $G_k$ denote the $k$ leading singular values and associated left and right singular vectors of $Z^HAW$, respectively. \item Approximate $A_k$ by the product $(ZF_k)\Theta_k (WG_k)^H$. \end{enumerate} Ideally, the matrices $Z$ and $W$ should satisfy \begin{eqnarray*} \mathtt{span}\left(\widehat{u}^{(1)},\ldots,\widehat{u}^{(k)}\right) &\subseteq& \mathtt{range}(Z),\ \ \rm{and} \\ \mathtt{span}\left(\widehat{v}^{(1)},\ldots,\widehat{v}^{(k)}\right) &\subseteq& \mathtt{range}(W). \end{eqnarray*} Moreover, the size of matrix $Z^HAW$ should be as small as possible to avoid high computational costs during the computation of $[\Theta_k,F_k, G_k] = \mathtt{svd}(Z^HAW)$. Table \ref{table0} summarizes a few options to set matrices $Z$ and $W$ for the row updating problem. The method in \cite{vecharynski2014fast} considers the same matrix $Z$ as in \cite{zha1999updating} but sets $W=[V_k,X_r]$ where $X_r$ denotes the $r\in \mathbb{Z}^*$ leading left singular vectors of $(I-V_kV_k^H)E^H$.
The choice of matrices $Z$ and $W$ listed under the option “Algorithm \ref{alg1}" is explained in the next section. Note that the first variant of Algorithm \ref{alg1} uses the same $Z$ as in \cite{zha1999updating} and \cite{vecharynski2014fast} but different $W$. This choice leads to similar or higher accuracy than the scheme in \cite{zha1999updating}, and this accuracy is achieved asymptotically faster. A detailed comparison is deferred to the Supplementary Material. The second variant of Algorithm \ref{alg1} is a more expensive but also more accurate version of the first variant. \begin{table} \centering \caption{{\it Different options to set the projection matrices $Z$ and $W$ for the row updating problem.}} \label{table0} \vspace{0.05in} \begin{tabular}{ l c c } \toprule \toprule Method\phantom{eigenrecAA} & $Z$ & $W$ \\ \midrule \rowcolor{white!89.803921568627459!black} \cite{berry1995using} & & $V_k$ \\ \cite{zha1999updating} & {\small $Z = \begin{pmatrix} U_k & \\ & I_s \\ \end{pmatrix}$} & $[V_k,Q]$\\ \rowcolor{white!89.803921568627459!black} \cite{vecharynski2014fast} & & $[V_k,X_r]$\\ \midrule Alg. \ref{alg1} & {\small $Z = \begin{pmatrix} U_k & \\ & I_s \\ \end{pmatrix}$} & $I_n$ \\ \rowcolor{white!89.803921568627459!black} Alg. \ref{alg1} & {\small $Z = \begin{pmatrix} U_k,X_{\lambda,r} & \\ & I_s \\ \end{pmatrix}$} & $I_n$ \\ \bottomrule \bottomrule \end{tabular} \vspace*{-0.3cm} \end{table} \subsection{The proposed algorithm.} Consider again the SVD update of matrix $A = \begin{pmatrix} B \\ E \\ \end{pmatrix}$, with $E\in \mathbb{C}^{s\times n}$. The right singular vectors of $A$ trivially satisfy $\widehat{v}^{(i)} \in \mathtt{range}(I_n),\ i=1,\ldots,n$. Therefore, we can simply set $W=I_n$ and compute the $k$ leading singular triplets $\left(\theta_i,f^{(i)},g^{(i)}\right)$ of the matrix $Z^HAW = Z^HA$. Indeed, this choice of $W$ is ideal in terms of accuracy while it also removes the need to compute an approximate factorization of matrix $(I-V_kV_k^H)E^H$. On the other hand, the number of columns in matrix $Z^HAW$ is now equal to $n$ instead of $k + s$ in \cite{zha1999updating} and $k+l,\ l\ll s$, in \cite{vecharynski2014fast,yamazaki2017sampling}. This difference can be important when the full SVD of $Z^HAW$ is computed as in \cite{vecharynski2014fast,yamazaki2017sampling,zha1999updating}. Our approach is to compute the singular values of $Z^HA$ in a matrix-free fashion while also skipping the computation of the right singular vectors $G_k$. Indeed, the matrix $G_k$ is only needed to approximate the $k$ leading right singular vectors $\widehat{V}_k$ of $A$. Assuming that approximations $\overline{U}_k$ and $\overline{\Sigma}_k$ of the matrices $\widehat{U}_k$ and $\widehat{\Sigma}_k$ are available, $\widehat{V}_k$ can be approximated as $\overline{V}_k = A^H\overline{U}_k \overline{\Sigma}_k^{-1}$. \begin{algorithm} \caption{RR-SVD (“$AA^H$" version). \label{alg1}} \begin{algorithmic}[1] \State {\bf Input:} $B,U_k,\Sigma_k,V_k,E,Z$ \State {\bf Output:} $\overline{U}_k\approx \widehat{U}_k,\overline{\Sigma}_k\approx \widehat{\Sigma}_k,\overline{V}_k \approx \widehat{V}_k$ \State Solve $[\Theta_k,F_k]=\mathtt{svd}_k(Z^HA)$ \State Set $\overline{U}_k=ZF_k$ and $\overline{\Sigma}_k=\Theta_k$ \State Set $\overline{V}_k = A^H\overline{U}_k \overline{\Sigma}_k^{-1}$ \end{algorithmic} \end{algorithm} \vspace*{-0.3cm} The proposed method is sketched in Algorithm \ref{alg1}.
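As a concrete (and deliberately simplified) MATLAB rendering of these steps, assume a basis matrix $Z$ with orthonormal columns is available explicitly; \texttt{svds} stands in here for the matrix-free Lanczos procedure discussed below:
\begin{verbatim}
% Sketch of Algorithm 1 (row updates); all inputs are assumed
% to fit in memory for this illustration.
function [Ub,Sb,Vb] = rr_svd_update(B,E,Z,k)
  A = [B; E];                   % row-updated matrix
  [F,Theta,~] = svds(Z'*A, k);  % Step 3: k leading triplets of Z'*A
  Ub = Z*F;  Sb = Theta;        % Step 4
  Vb = (A'*Ub)/Sb;              % Step 5: one multiplication with A^H
end
\end{verbatim}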
In terms of computational cost, Steps 4 and 5 require approximately $2\mathtt{nnz}(Z)k$ and $(2\mathtt{nnz}(A)+n)k$ Floating Point Operations (FLOPs), respectively. The complexity of Step 3 will generally depend on the algorithm used to compute the matrices $\Theta_k$ and $F_k$. We assume that these are computed by applying the unrestarted Lanczos method to matrix $Z^HAA^HZ$ in a matrix-free fashion \cite{saad2011numerical}. Under the mild assumption that Lanczos performs $\delta \geq k$ iterations for some $\delta \in \mathbb{Z}^*$, a rough estimate of the total computational cost of Step 3 is $4\left(\mathtt{nnz}(Z^H)+\mathtt{nnz}(A)\right)\delta +2\mathtt{nr}(Z^H)\delta^2$ FLOPs. The exact complexity of Lanczos will depend on the choice of matrix $Z$. A detailed asymptotic analysis of the complexity of Algorithm \ref{alg1} and comparisons with other schemes are deferred to the Supplementary Material. \begin{algorithm} \caption{RR-SVD (“$A^HA$" version). \label{alg2}} \begin{algorithmic}[1] \State {\bf Input:} $B,U_k,\Sigma_k,V_k,E,Z$ \State {\bf Output:} $\overline{U}_k\approx \widehat{U}_k,\overline{\Sigma}_k\approx \widehat{\Sigma}_k,\overline{V}_k \approx \widehat{V}_k$ \State Solve $[\Theta_k,G_k]=\mathtt{svd}_k(Z^HA^H)$ \State Set $\overline{V}_k=ZG_k$ and $\overline{\Sigma}_k=\Theta_k$ \State Set $\overline{U}_k = A\overline{V}_k \overline{\Sigma}_k^{-1}$ \end{algorithmic} \end{algorithm} Algorithm \ref{alg1} can be adapted to approximate $A_k$ for matrices of the form $A=\begin{pmatrix} B & E \end{pmatrix}$. The complete procedure is summarized in Algorithm \ref{alg2}. Note that by combining Algorithms \ref{alg1} and \ref{alg2} we can approximate the $k$ leading singular triplets of matrices in which we add both new rows and columns. Throughout the remainder of this paper we focus on updating the rank-$k$ SVD of matrix $A=\begin{pmatrix} B \\ E \\ \end{pmatrix}$ by Algorithm \ref{alg1}. The discussion extends trivially to updates of matrix $A = \begin{pmatrix} B & E \end{pmatrix}$ by Algorithm \ref{alg2}. \section[Building the projection matrix Z]{Building the projection matrix $Z$} The accuracy of Step 5 in Algorithm \ref{alg1} depends on the accuracy of the approximate leading singular values and associated left singular vectors from Step 3. In turn, these quantities depend on how well $\mathtt{range}(Z)$ captures the singular vectors $\widehat{u}^{(1)},\ldots,\widehat{u}^{(k)}$ \cite{jia2001analysis,nakatsukasa2017accuracy}. Therefore, our focus lies in forming $Z$ such that the distance between the subspace $\mathtt{range}(Z)$ and the left singular vectors $\widehat{u}^{(1)},\ldots,\widehat{u}^{(k)}$ is as small as possible. \subsection[Exploiting the left singular vectors of B]{Exploiting the left singular vectors of $B$.} \label{choiceZ1} The following proposition presents a closed-form expression of the $i$'th left singular vector of matrix $A=\begin{pmatrix} B \\ E \\ \end{pmatrix}$.
\begin{proposition}\label{pro0} The left singular vector $\widehat{u}^{(i)}$ associated with singular value $\widehat{\sigma}_i$ is equal to \begin{equation*} \widehat{u}^{(i)}= \begin{pmatrix} -(BB^H-\widehat{\sigma}_{i}^2I_m)^{-1}BE^H\widehat{y}^{(i)} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix}, \end{equation*} where $\widehat{y}^{(i)}$ satisfies the equation {\small \[\left[E\left(\sum\limits_{j=1}^{n} v^{(j)} \left(v^{(j)}\right)^H \dfrac{\widehat{\sigma}_{i}^2}{\widehat{\sigma}_{i}^2-\sigma_{j}^2} \right)E^H-\widehat{\sigma}_{i}^2I_s\right]\widehat{y}^{(i)}=0,\]} and $\sigma_j=0$ for any $j=m+1,\ldots,n$ (when $n>m$). \end{proposition} \begin{proof} Deferred to the Supplementary Material. \end{proof} The above representation of $\widehat{u}^{(i)}$ requires the solution of a nonlinear eigenvalue problem to compute $\widehat{y}^{(i)}$. Alternatively, we can express $\widehat{u}^{(i)}$ as follows. \begin{proposition} \label{pro1} The left singular vector $\widehat{u}^{(i)}$ associated with singular value $\widehat{\sigma}_i$ is equal to \begin{equation*} \widehat{u}^{(i)} = \begin{pmatrix} u^{(1)},\ldots,u^{(\mathtt{min}(m,n))} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{\mathtt{min}(m,n),i} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix}, \end{equation*} where the scalars $\chi_{j,i}$ are equal to \[\chi_{j,i}=-\left(Ev^{(j)}\right)^H\widehat{y}^{(i)} \dfrac{\sigma_{j}}{\sigma_{j}^2-\widehat{\sigma}_{i}^2}.\] \end{proposition} \begin{proof} Deferred to the Supplementary Material. \end{proof} Proposition \ref{pro1} suggests that setting $Z=\begin{pmatrix} u^{(1)},\ldots,u^{(\mathtt{min}(m,n))} & \\ & I_s \\ \end{pmatrix}$ should lead to an exact (in the absence of round-off errors) computation of $\widehat{u}^{(i)}$. In practice, we only have access to the $k$ leading left singular vectors of $B$, $u^{(1)}, \ldots,u^{(k)}$. The following proposition suggests that the distance between $\widehat{u}^{(i)}$ and the range space of $Z=\begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\ & I_s \\ \end{pmatrix}$ is at worst proportional to the ratio $\dfrac{\sigma_{k+1}}{\sigma_{k+1}^2-\widehat{\sigma}_{i}^2}$. \begin{proposition} \label{pro2} Let matrix $Z$ in Algorithm \ref{alg1} be defined as \begin{equation*}\label{Zpro2} Z = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix}, \end{equation*} and set $\gamma = O\left(\norm{E^H\widehat{y}^{(i)}}\right)$. Then, for any $i=1,\ldots,k$: \begin{equation*} \mathtt{min}_{z\in \mathtt{range}(Z)} \|\widehat{u}^{(i)}-z\| \leq \left|\dfrac{\gamma\sigma_{k+1}}{\sigma_{k+1}^2-\widehat{\sigma}_{i}^2}\right|. \end{equation*} \end{proposition} \begin{proof} Deferred to the Supplementary Material. \end{proof} Proposition \ref{pro2} implies that left singular vectors associated with larger singular values of $A$ are likely to be approximated more accurately. \subsubsection[The structure of matrix ZH A]{The structure of matrix $Z^HA$.} Setting the projection matrix $Z$ as in Proposition \ref{pro2} gives \begin{equation*} Z^HA = \begin{pmatrix} V_k \Sigma_k & E^H \end{pmatrix}^H. \end{equation*} Therefore, each Matrix-Vector (MV) product with matrix $Z^HAA^HZ$ requires two MV products with matrices $\Sigma_k,\ V_k$ and $E$, for a total cost of about $4(nk+\mathtt{nnz}(E))$ FLOPs. Moreover, we have $\mathtt{nr}(Z^H)=s+k$, and thus a rough estimate of the cost of Step 3 in Algorithm \ref{alg1} is $4(nk+\mathtt{nnz}(E))\delta + 2(s+k)\delta^2$ FLOPs. 
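To illustrate the matrix-free application of $Z^HAA^HZ$ for this choice of $Z$, consider the following MATLAB sketch, where \texttt{Vk}, \texttt{Sk} (for $V_k$, $\Sigma_k$), \texttt{E}, \texttt{k}, and \texttt{s} are assumed to be in the workspace and \texttt{eigs} plays the role of the Lanczos solver:
\begin{verbatim}
% Matrix-free products with Z'*A*A'*Z for Z = blkdiag(Uk, I_s):
% Z'*A = [Sk*Vk'; E], so neither B nor Uk is ever accessed.
At = @(x) Vk*(Sk*x(1:k)) + E'*x(k+1:k+s);  % y = (Z'A)' * x
Af = @(y) [Sk*(Vk'*y); E*y];               % x = (Z'A) * y
op = @(x) Af(At(x));                       % (Z'A)(Z'A)' * x
[F,Th2] = eigs(op, k+s, k, 'largestabs', ...
               'IsFunctionSymmetric', true);
Theta = sqrt(Th2);                         % singular values of Z'A
\end{verbatim}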
\subsection{Exploiting resolvent expansions.} \label{choiceZ2} The choice of $Z$ presented in Section \ref{choiceZ1} can compute the exact $A_k$ provided that the rank of $B$ is exactly $k$. Nonetheless, when the rank of $B$ is larger than $k$ and the singular values $\sigma_{k+1},\ldots,\sigma_{\mathtt{min}(m,n)}$ are not small, the accuracy of the approximate $A_k$ returned by Algorithm \ref{alg1} might be poor. This section presents an approach to enhance the projection matrix $Z$. Recall that the top part of $\widehat{u}^{(i)}$ is equal to $\widehat{f}^{(i)}=-(BB^H-\widehat{\sigma}_i^2I_m)^{-1}BE^H\widehat{y}^{(i)}$. In practice, even if we knew the unknown quantities $\widehat{\sigma}_i^2$ and $\widehat{y}^{(i)}$, the application of matrix $(BB^H-\widehat{\sigma}_i^2I_m)^{-1}$ for each $i=1,\ldots,k$, is too costly. The idea presented in this section considers the approximation of $(BB^H-\widehat{\sigma}_i^2I_m)^{-1},\ i=1,\ldots,k$, by $(BB^H-\lambda I_m)^{-1}$ for some fixed scalar $\lambda \in \mathbb{R}$. \begin{lemma} \label{lem1} Let \begin{equation*} B(\lambda)=(I_m-U_kU_k^H)(BB^H-\lambda I_m)^{-1} \end{equation*} such that $\lambda > \widehat{\sigma}_k^2$. Then, we have that for any $i=1,\ldots,k$: \[B(\widehat{\sigma}_i^2) =B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho.\] \end{lemma} \begin{proof} Deferred to the Supplementary Material. \end{proof} Clearly, the closer $\lambda$ is to $\widehat{\sigma}_i^2$, the faster the expansion in Lemma \ref{lem1} converges, and thus the more accurate any truncation of it should be. We can now provide an expression for $\widehat{u}^{(i)}$ similar to that in Proposition \ref{pro1}. \begin{proposition} \label{pro34} The left singular vector $\widehat{u}^{(i)}$ associated with singular value $\widehat{\sigma}_i$ is equal to \begin{eqnarray*} \widehat{u}^{(i)} & = & \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & \\[0.3em] & I_s \\[0.3em] \end{pmatrix} \begin{pmatrix} \chi_{1,i} \\[0.3em] \vdots \\[0.3em] \chi_{k,i} \\[0.3em] \widehat{y}^{(i)} \\[0.3em] \end{pmatrix} \\ & & - \begin{pmatrix} B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho BE^H\widehat{y}^{(i)} \\[0.3em] \\[0.3em] \end{pmatrix}. \end{eqnarray*} \end{proposition} \begin{proof} Deferred to the Supplementary Material. \end{proof} Proposition \ref{pro34} suggests a way to enhance the projection matrix $Z$ shown in Proposition \ref{pro2}. For example, we can approximate $B(\lambda) \sum\limits_{\rho=0}^\infty \left[(\widehat{\sigma}_i^2-\lambda)B(\lambda)\right]^\rho$ by $B(\lambda)$, which gives the following bound for the distance of $\widehat{u}^{(i)}$ from $\mathtt{range}(Z)$. \begin{proposition} \label{pro35} Let matrix $Z$ in Algorithm \ref{alg1} be defined as \begin{equation*} Z = \begin{pmatrix} u^{(1)},\ldots,u^{(k)} & -B(\lambda)BE^H & \\[0.3em] & & I_s \\[0.3em] \end{pmatrix} \end{equation*} and set $\gamma = O\left(\norm{E^H\widehat{y}^{(i)}}\right)$. Then, for any $\lambda\geq \widehat{\sigma}_1^2$ and $i=1,\ldots,k$: \begin{equation*} \mathtt{min}_{z\in \mathtt{range}(Z)} \|\widehat{u}^{(i)}-z\| \leq \left|\dfrac{\gamma\sigma_{k+1}(\widehat{\sigma}_i^2-\lambda)} {(\sigma_{k+1}^2-\widehat{\sigma}_{i}^2)\left(\sigma_{k+1}^2-\lambda\right)}\right|. \end{equation*} \end{proposition} \begin{proof} Deferred to the Supplementary Material. \end{proof} Compared to the bound shown in Proposition \ref{pro2}, the bound in Proposition \ref{pro35} is multiplied by $\dfrac{\widehat{\sigma}_i^2-\lambda}{\sigma_{k+1}^2-\lambda}$.
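These bounds can be sanity-checked numerically. The following MATLAB sketch verifies the simpler bound of Proposition \ref{pro2} on a hypothetical small random instance, using the explicit constant $\Omega=\sqrt{\mathtt{min}(m,n)-k}\,\|E^H\widehat{y}^{(i)}\|$ in place of $\gamma$; for unstructured random data the bound is typically loose, but it must hold:
\begin{verbatim}
% Check dist(u_hat_i, range(Z)) against the bound of Proposition pro2.
m = 200; n = 120; s = 30; k = 10; i = 1;
B = randn(m,n); E = randn(s,n); A = [B; E];
[U,S,~]  = svd(B); sg = diag(S);
[Uh,Sh,~] = svd(A); sh = diag(Sh);
Z = blkdiag(U(:,1:k), eye(s));   % orthonormal columns
u = Uh(:,i); y = u(m+1:end);     % y_hat_i: bottom block of u_hat_i
dist  = norm(u - Z*(Z'*u));      % distance to range(Z)
bound = sqrt(min(m,n)-k)*norm(E'*y)* ...
        abs(sg(k+1)/(sg(k+1)^2 - sh(i)^2));
fprintf('distance %.3e <= bound %.3e\n', dist, bound);
\end{verbatim}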
In practice, due to cost considerations, we choose a single value of $\lambda$ that is more likely to satisfy the above consideration, e.g., $\lambda \geq \widehat{\sigma}_1^2$. \subsubsection[Computing the matrix B(lambda)BE H]{Computing the matrix $B(\lambda)BE^H$.} The construction of matrix $Z$ shown in Proposition \ref{pro35} requires the computation of the matrix $-B(\lambda)BE^H$. The latter is equal to the matrix $X$ that satisfies the equation \begin{equation}\label{eqBl1} -(BB^H-\lambda I_m)X = (I_m-U_kU_k^H)BE^H. \end{equation} The eigenvalues of the matrix $-(BB^H-\lambda I_m)$ are equal to $\{\lambda-\sigma_i^2\}_{i=1,\ldots,m}$, and for any $\lambda > \widehat{\sigma}_1^2$, the matrix $-(BB^H-\lambda I_m)$ is positive definite. It is thus possible to compute $X$ by repeated applications of the Conjugate Gradient method. \begin{proposition} \label{pro46} Let $K=-(BB^H-\lambda I_m)$ and $\|e_j\|_K$ denote the $K$-norm of the error after $j$ iterations of the Conjugate Gradient method applied to the linear system $-(BB^H-\lambda I_m)x = b$, where $b\in \mathtt{range} \left((I_m-U_kU_k^H)BE^H\right)$. Then, \begin{equation*} \|e_j\|_K \leq 2 \left(\dfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^j \|e_0\|_K, \end{equation*} where $\kappa = \dfrac{\sigma_{\mathtt{min}(m,n)}^2-\lambda}{\sigma_{k+1}^2-\lambda}$ and $\lambda > \widehat{\sigma}_1^2$. \end{proposition} \begin{proof} Since $b\in \mathtt{range}((I_m-U_kU_k^H)BE^H)$, the vector $x$ satisfies the equation \begin{equation}\label{eqBl2} -(I_m-U_kU_k^H)(BB^H-\lambda I_m)(I_m-U_kU_k^H)x = b. \end{equation} The proof can then be found in \cite{saad2000deflated}. \end{proof} \begin{corollary} The effective condition number satisfies the inequality $\kappa \leq \dfrac{\lambda}{\lambda-\sigma_{k+1}^2}$. \end{corollary} Proposition \ref{pro46} applies to each one of the $s$ right-hand sides in (\ref{eqBl1}). Assuming that the matrix $(I_m-U_kU_k^H)BE^H$ can be formed and stored, the effective condition number can be reduced even further. For example, solving (\ref{eqBl1}) by the block Conjugate Gradient method leads to an effective condition number $\kappa=\dfrac{\sigma_{\mathtt{min}(m,n)}^2-\lambda}{\sigma_{k+s+1}^2-\lambda}$ \cite{o1980block}. Additional techniques to solve linear systems with multiple right-hand sides can be found in \cite{kalantzis2013accelerating,kalantzis2018scalable,stathopoulos2010computing}. Finally, notice that as $\lambda$ increases, the effective condition number decreases. Thus from a convergence viewpoint, it is better to choose $\lambda\gg \widehat{\sigma}_{1}^2$. On the other hand, increasing $\lambda$ leads to worse bounds in Proposition \ref{pro35}. \subsection[Truncating the matrix B(lambda)BE H]{Truncating the matrix $B(\lambda)BE^H$.} \label{trunc} When the number of right-hand sides in (\ref{eqBl1}), i.e., the number of rows in matrix $E$, is too large, an alternative is to consider $-B(\lambda)BE^H\approx X_{\lambda,r}S_{\lambda,r} Y_{\lambda,r}^H$, where $X_{\lambda,r}S_{\lambda,r}Y_{\lambda,r}^H$ denotes the rank-$r$ truncated SVD of matrix $-B(\lambda)BE^H$. We can then replace $-B(\lambda)BE^H$ by $X_{\lambda,r}$, since $\mathtt{range}\left(X_{\lambda,r}S_{\lambda,r}Y_{\lambda,r}^H\right) \subseteq\mathtt{range}\left(X_{\lambda,r}\right)$. The matrix $X_{\lambda,r}$ can be approximated in a matrix-free fashion by applying a few iterations of Lanczos bidiagonalization to matrix $B(\lambda)BE^H$.
Each iteration requires two applications of Conjugate Gradient to solve linear systems of the same form as in (\ref{eqBl2}). A second approach is to apply randomized SVD as described in \cite{halko2011finding,clarkson2009numerical}. In practice, this amounts to computing the SVD of the matrix $B(\lambda)BE^HEB^HB(\lambda)R$ where $R$ is a real matrix with at least $r$ columns whose entries are i.i.d. Gaussian random variables of zero mean and unit variance. \subsubsection[The structure of matrix ZH A]{The structure of matrix $Z^HA$.} Setting the basis matrix $Z$ as in Proposition \ref{pro35} leads to \begin{equation*} Z^HA = \begin{pmatrix} V_k \Sigma_k & B^HX_{\lambda,r} & E^H \end{pmatrix}^H. \end{equation*} Each MV product with matrix $Z^HAA^HZ$ then requires two MV products with matrices $\Sigma_k,\ V_k,\ E$ and $B^HX_{\lambda,r}$, for a total cost of $4(n(k+r)+\mathtt{nnz}(E))$ FLOPs. Moreover, we have $\mathtt{nr}(Z^H)=k+r+s$, and thus a rough estimate of the cost of Step 3 in Algorithm \ref{alg1} is $4(n(k+r)+\mathtt{nnz}(E))\delta + 2(s+k+r)\delta^2$ FLOPs. \section{Evaluation} Our experiments were conducted in a Matlab environment (version R2020a), using 64-bit arithmetic, on a single core of a computing system equipped with an Intel Haswell E5-2680v3 processor and 32 GB of system memory. \begin{table}[ht] \centering \caption{\it Properties of the test matrices used throughout this section. \label{table1}}\vspace{0.05in} \begin{tabular}{ l c c c c} \toprule \toprule Matrix & rows & columns & $nnz(A)$/rows & Source \\ \midrule \rowcolor{white!89.803921568627459!black} MED & 5,735& 1,033 & 8.9 & \cite{berrydata} \\ CRAN & 4,563 & 1,398 & 17.8 & \cite{berrydata} \\ \rowcolor{white!89.803921568627459!black} CISI & 5,544 & 1,460 & 12.2 & \cite{berrydata} \\ ML1M & 6,040 & 3,952 & 165.6 & \cite{harper2015movielens}\\ \bottomrule \bottomrule \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.89\linewidth]{FIGS/plot_singvals.eps} \caption{{\it Leading $k=100$ singular values.}}\label{fig:15} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=0.24\linewidth]{FIGS/med_1b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_1b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_1b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_1b.eps} \includegraphics[width=0.24\linewidth]{FIGS/med_2b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_2b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_2b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_2b.eps} \caption{{\it Approximation of the leading $k=50$ singular triplets for the single update case. From left to right: MED, CRAN, CISI, and ML1M.}}\label{fig:16} \end{figure*} Table \ref{table1} lists the test matrices considered throughout our experiments along with their dimensions and the source from which they were retrieved. The first three matrices come from LSI applications and represent term-document matrices, while the last matrix comes from recommender systems and represents a user-item rating matrix. The $k=100$ leading singular values of each matrix listed in Table \ref{table1} are plotted in Figure \ref{fig:15}. Throughout this section we focus on accuracy and will be reporting: a) the relative error in the approximation of the $k$ leading singular values of $A$, and b) the norm of the residual $A\widehat{v}^{(i)}-\widehat{\sigma}_i\widehat{u}^{(i)}$, scaled by $\widehat{\sigma}_i$.
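In MATLAB notation, and given reference singular values \texttt{sv} computed by an off-the-shelf solver (an assumption made only for this illustration), the two reported quantities can be obtained as:
\begin{verbatim}
% (Ub,Sb,Vb): approximations returned by Algorithm 1
relerr = abs(diag(Sb) - sv)./sv;            % a) relative errors
res = vecnorm(A*Vb - Ub*Sb)' ./ diag(Sb);   % b) scaled residual norms
\end{verbatim}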
The scalar $\lambda$ is set as $\lambda = 1.01 \widehat{\sigma}_1^2$ where $\widehat{\sigma}_1$ is approximated by a few iterations of Lanczos bidiagonalization. \subsection{Single update.} In this section we consider the approximation of the $k=50$ leading singular triplets of $A = \begin{pmatrix} B \\ E \\ \end{pmatrix}$ where $B=A(1:\ceil{m/2},:)$, i.e., the size of matrices $B$ and $E$ is about half the size of $A$. We run Algorithm \ref{alg1} and set $Z$ as in Propositions \ref{pro2} and \ref{pro35}. For the enhanced matrix $Z$, the matrix $X_{\lambda,r}$ is computed by randomized SVD where $r=k$ and the number of columns in matrix $R$ is equal to $2k$ (recall the discussion in Section \ref{trunc}). The associated linear system with $2k$ right-hand sides is solved by block Conjugate Gradient. Figure \ref{fig:16} plots the relative error and residual norm in the approximation of the $k=50$ leading singular triplets of $A$. As expected, enhancing the projection matrix $Z$ by $X_{\lambda,r}$ leads to higher accuracy. This is especially true for the approximation of those singular triplets with corresponding singular values $\widehat{\sigma}_i^2\approx \lambda$. In all of our experiments, the worst-case (maximum) relative error and residual norm were achieved in the approximation of the singular triplet $(\widehat{\sigma}_{50},\widehat{u}^{(50)},\widehat{v}^{(50)})$. Table \ref{table2} lists the relative error and residual norm associated with the approximation of the singular triplet $(\widehat{\sigma}_{50},\widehat{u}^{(50)},\widehat{v}^{(50)})$ as $r$ varies from ten to fifty in increments of ten. As a reference, we list the same quantity for the case {\footnotesize $Z = \begin{pmatrix} U_k & \\ & I_s \\ \end{pmatrix}$}. Consistent with the analysis, increasing $r$ further improves the accuracy achieved by the enhanced projection matrix $Z$. \begin{table*}[t] \centering \caption{\it Relative error and residual norm associated with the approximation of the singular triplet $(\widehat{\sigma}_{50},\widehat{u}^{(50)},\widehat{v}^{(50)})$. \label{table2}} \vspace{0.05in} \begin{tabular}{l @{\hskip 0.3in} c c c c c c c c c c c c} \toprule \toprule \multirow{2}{*}{} & & \multicolumn{2}{c}{\textbf{MED}} & & \multicolumn{2}{c}{\textbf{CRAN}} & & \multicolumn{2}{c}{\textbf{CISI}} & & \multicolumn{2}{c}{\textbf{ML1M}}\\ \cmidrule[0.4pt](lr{0.125em}){3-4} \cmidrule[0.4pt](lr{0.125em}){6-7} \cmidrule[0.4pt](lr{0.125em}){9-10} \cmidrule[0.4pt](lr{0.125em}){12-13} & $r$ & err. & res. & & err. & res. & & err. & res. & & err. & res.
\\ \midrule \multirow{5}{*}{{\footnotesize $Z = \begin{pmatrix} U_k & X_{\lambda,r} & \\ & & I_s \\ \end{pmatrix}$}} & \tikzmarkin[hor=style mygrey]{r10}$r=10$ & 0.036 & 0.234 & & 0.026 & 0.176 & & 0.025 & 0.214 & & 0.031 & 0.156\tikzmarkend{r10} \\ & $r=20$ & 0.031 & 0.184 & & 0.021 & 0.155 & & 0.023 & 0.189 & & 0.012 & 0.143 \\ & \tikzmarkin[hor=style mygrey]{r30}$r=30$ & 0.021 & 0.114 & & 0.017 & 0.134 & & 0.017 & 0.161 & & 0.008& 0.121\tikzmarkend{r30}\\ & $r=40$ & 0.009 & 0.091 & & 0.013 & 0.111 & & 0.012 & 0.134 & & 0.005 & 0.112 \\ & \tikzmarkin[hor=style mygrey]{r50}$r=50$ & 0.004 & 0.053 & & 0.007 & 0.098 & & 0.007 & 0.081 & & 0.003 & 0.076\tikzmarkend{r50} \\ \midrule {\footnotesize $Z = \begin{pmatrix} U_k & \\ & I_s \\ \end{pmatrix}$} & N/A & 0.045 & 0.269 & & 0.045 & 0.199 & & 0.287 & 0.250 & & 0.041& 0.173\\ \bottomrule \bottomrule \end{tabular} \end{table*} \begin{figure*}[ht] \centering \includegraphics[width=0.24\linewidth]{FIGS/med_3b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_3b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_3b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_3b.eps} \includegraphics[width=0.24\linewidth]{FIGS/med_5b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_5b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_5b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_5b.eps} \caption{{\it Relative error in the approximation of the $k=50$ leading singular values of $A$ for the multiple updates case. From left to right: MED, CRAN, CISI, and ML1M.}}\label{fig:17} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.24\linewidth]{FIGS/med_4b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_4b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_4b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_4b.eps} \includegraphics[width=0.24\linewidth]{FIGS/med_6b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cran_6b.eps} \includegraphics[width=0.24\linewidth]{FIGS/cisi_6b.eps} \includegraphics[width=0.24\linewidth]{FIGS/1m_6b.eps} \caption{{\it Residual norm of the approximation of the $k=50$ leading singular triplets of $A$ for the multiple updates case. From left to right: MED, CRAN, CISI, and ML1M.}}\label{fig:18} \end{figure*} \subsection{Sequence of updates.} In this experiment the rows of matrix $E$ are now added in batches, i.e., we first approximate the $k$ leading singular triplets of matrix $A^{(0)} = \begin{pmatrix} B \\ A(\ceil{m/2}+1:\ceil{m/2}+t,:) \\ \end{pmatrix}$, then of matrix $A^{(1)} = \begin{pmatrix} A^{(0)} \\ A(\ceil{m/2}+t+1:\ceil{m/2}+2t,:) \\ \end{pmatrix}$, etc. Here, $t=\ceil{m/2}/\phi$ denotes the step-size and $\phi \in \mathbb{Z}^*$ denotes the total number of updates. Note that after the first update, the matrices $U_k$ and $V_k$ no longer denote the exact $k$ leading left and right singular vectors of the $B\equiv A^{(j-1)}$ submatrix of matrix $A^{(j)}$. We set $\phi=12$ and plot the accuracy achieved after one, six, and twelve updates, in Figures \ref{fig:17} and \ref{fig:18}. Notice that enhancing $Z$ by $X_{\lambda,r}$ leads to similar accuracy for all updates, while without the enhancement the accuracy deteriorates as the updates accumulate. On a separate note, the accuracy of the $k$ leading singular triplets of $A$ is higher when matrix $E$ is added to $B$ in batches rather than in a single update as in the previous section.
\begin{table*}[!t] \centering \caption{\it Maximum relative error and residual norm of the approximation of the $k$ leading singular triplets of $A$ for the multiple updates case as $k$ varies. \label{table3}} \vspace{0.05in} \begin{tabular}{l @{\hskip 0.3in} r c c c c c c c c c c c} \toprule \toprule \multirow{2}{*}{} & & \multicolumn{2}{c}{\textbf{MED}} & & \multicolumn{2}{c}{\textbf{CRAN}} & & \multicolumn{2}{c}{\textbf{CISI}} & & \multicolumn{2}{c}{\textbf{ML1M}}\\ \cmidrule[0.4pt](lr{0.125em}){3-4} \cmidrule[0.4pt](lr{0.125em}){6-7} \cmidrule[0.4pt](lr{0.125em}){9-10} \cmidrule[0.4pt](lr{0.125em}){12-13} & Method & err. & res. & & err. & res. & & err. & res. & & err. & res. \\ \midrule \multirow{2}{*}{$k=10$} & \cite{zha1999updating} & 0.046 & 0.172 & & 0.043 & 0.192 & & 0.054 & 0.274 & & 0.002 & 0.058\\ & \tikzmarkin[hor=style mygrey]{el10} Alg. \ref{alg1} & 0.001 & 0.045 & & 0.008 & 0.090 & & 0.002 & 0.054 & & $3.0\mathtt{e}$-5 & 0.007 \tikzmarkend{el10} \\ \midrule \multirow{2}{*}{$k=20$} & \cite{zha1999updating} & 0.067 & 0.212 & & 0.064 & 0.255 & & 0.075 & 0.224 & & 0.022 & 0.131 \\ & \tikzmarkin[hor=style mygrey]{el20} Alg. \ref{alg1} & 0.004 & 0.073 & & 0.005 & 0.076 & & 0.003 & 0.053 & & 0.002 & 0.040 \tikzmarkend{el20}\\ \midrule \multirow{2}{*}{$k=30$} & \cite{zha1999updating} & 0.076 & 0.384 & & 0.060 & 0.290 & & 0.084 & 0.330 & & 0.023 & 0.123 \\ & \tikzmarkin[hor=style mygrey]{el30} Alg. \ref{alg1} & 0.006 & 0.067 & & 0.008 & 0.088 & & 0.004 & 0.070 & & 0.001 & 0.041 \tikzmarkend{el30}\\ \bottomrule \bottomrule \end{tabular} \end{table*} Table \ref{table3} lists the maximum relative error and residual norm associated with the approximation of the $k$ leading singular triplets of $A$ by Algorithm \ref{alg1} and the method in \cite{zha1999updating}. The number of sought singular triplets $k$ was varied from ten to thirty. Comparisons against the method in \cite{vecharynski2014fast} were also performed but not reported since the latter was always less accurate than \cite{zha1999updating}. Overall, Algorithm \ref{alg1} provided higher accuracy, especially for those singular triplets whose corresponding singular value squared was closer to $\lambda$. \section{Conclusion} This paper presented an algorithm to update the rank-$k$ truncated SVD of evolving matrices. The proposed algorithm adopts a projection viewpoint and aims at building a pair of subspaces which approximate the linear span of the $k$ leading singular vectors of the updated matrix. Two different options to set these subspaces were considered. Experiments performed on matrices stemming from applications in LSI and recommender systems verified the effectiveness of the proposed scheme in terms of accuracy. \vfill\eject \bibliographystyle{siam}
\section{Introduction} Horizontal cylindrical rotating tumblers that are partially filled with granular materials are considered a canonical system for the study of flowing granular materials. They are commonly used in industrial processes for mixing, coating, and granulation. Although granular flow in cylindrical tumblers has long been studied \cite{GDRMidi04,OttinoKhakhar00,MeierLueptow07}, much research has focused on quasi-two-dimensional circular tumblers \cite{OrpeKhakhar04,JainOttino02,OrpeKhakhar07,ClementRajchenbach95,FelixFalk02,FelixFalk07}, where friction at the endwalls plays an important role \cite{PohlmanOttino06}. Even in longer tumblers, the flow near the flat endwalls of the tumbler is strongly affected by the friction between the endwalls and the flowing particles \cite{PohlmanOttino06,SantomasoOlivi04,ChenOttino08,DOrtonaThomas18}. Moreover, the endwall boundary affects the mixing of monodisperse particles \cite{SantomasoOlivi04} and plays a role in initializing axial segregation bands of bidisperse particles \cite{BridgwaterSharpe69,HillKakalios94,FiedorOttino03} near the endwalls. In this paper, we consider the role of wall roughness of both the cylindrical wall and the endwalls for granular flow in cylindrical tumblers. In long cylindrical tumblers with geometrically smooth but frictional walls, not only is the streamwise velocity near the tumbler endwall slower than that at the center of the tumbler\ \cite{ManevalHill05}, but the resulting reduced mass transport induces a local axial flow near the endwalls \cite{SantomasoOlivi04,ChenOttino08}. Particles near the endwall flow down the slope more slowly than particles far from the endwall. As a consequence, to conserve mass they flow axially away from the endwall in the upper portion of the flowing layer and back toward the endwall in the downstream portion of the flowing layer \cite{ManevalHill05,PohlmanOttino06,PohlmanMeier06,DOrtonaThomas18}. For a half-filled tumbler, the region that is affected by endwall friction (where particle trajectories are curved) extends about a radius of the tumbler from the endwall \cite{PohlmanMeier06,ChenOttino08}. A second effect of the endwall friction is the existence of a pair of recirculation cells next to the tumbler endwalls, and, if the tumbler is long enough, a second pair of counter-rotating cells in between the endwall cells and the midlength of the tumbler \cite{DOrtonaThomas18}. These cells are called ``central cells,'' even though for very long tumblers they do not extend to the center of the tumbler but remain adjacent to the endwall cells. Analogous recirculation cells also appear in spherical and double-cone tumblers, except that only one recirculation cell appears on either side of the equator \cite{ZamanDOrtona13,DOrtonaThomas15}. Recent studies of granular flow in a spherical tumbler indicate that the wall roughness, either a geometrically smooth wall or a wall made up of particles, can strongly influence the recirculation cells for monodisperse particle flows \cite{DOrtonaThomas15} and the segregation pattern for bidisperse particle flows \cite{ChenLueptow10,DOrtonaThomas16}. Regardless of wall roughness, particles drift axially toward the pole near the surface of the flowing layer with a return flow toward the equator that occurs deeper in the flowing layer resulting in a global circulation of granular material. The recirculation cells are quite difficult to observe.
The axial drift induced by the recirculation cells in a 0.14~m diameter tumbler with 2~mm flowing particles is typically only about 1-2 millimeters each time a particle passes through the flowing layer. Consequently, the axial drift and resulting recirculation cells are buried in the noise associated with collisional diffusion as particles flow. Nevertheless, axial drift affects segregation pattern formation in spherical tumblers with size-bidisperse particles \cite{DOrtonaThomas16,YuLueptow20}. Both experiments and DEM simulations indicate that the recirculation cells in spherical and conical tumblers result from the combination of wall friction and tumbler geometry. Specifically, the roughness of the tumbler wall, varied by comparing a geometrically smooth wall with a wall constructed of particles, affects the degree of axial drift and the thickness of the flowing layer near the walls \cite{DOrtonaThomas15}. The shape of the wall (spherical versus conical) also alters the axial drift as a result of the variation in the length of the flowing layer \cite{ZamanDOrtona13} and the resulting variation in flux injected into the flowing layer from the bed of particles in solid body rotation. In other flow geometries, secondary flows organized as recirculation cells also occur. In a horizontal channel with moving lateral walls, two longitudinal counter-rotating recirculation cells form \cite{KrishnarajNott15}. Due to these cells, the particles rise at the center of the channel and move down next to the wall. In a granular cylindrical Couette cell, only one recirculation cell appears, and it is localized next to the inner cylinder. The particles move down along the inner cylindrical wall for both smooth and rough walls \cite{KrishnarajNott15}. In an inclined channel with smooth walls, two or four recirculation cells are obtained depending on the thickness of the flow and the tilt angle \cite{BroduRichard13,BroduDelannay15}. For wide channels and steeper inclines (more than 30$^\circ$), recirculation cells appear without any lateral wall \cite{ForterrePouliquen01,ForterrePouliquen02,BorzsonyiEcke09}. Finally, for a bidisperse granular flow of particles with different sizes and densities down an incline, segregation and the Rayleigh-Taylor instability combine to induce recirculation cells analogous to Rayleigh-B\'enard convection cells \cite{DOrtonaThomas20}. In this paper, we study recirculation cells in a cylindrical tumbler with different degrees of wall roughness using the discrete element method (DEM) \cite{CundallStrack79,Ristow00,SchaferDippel96}. In the cylindrical tumbler geometry, the flux of particles entering and leaving the flowing layer is nearly constant along the entire length of the tumbler, so the impact of its variation on the recirculation cell that occurs in spherical and double-cone tumblers is eliminated. In this way, the effect of wall roughness alone can be clarified. Our goal is to examine the effect of wall roughness on the axial drift and the resulting recirculation cells, which is likely crucial to understand the mechanisms of mixing of monodisperse particles \cite{SantomasoOlivi04} and the initiation of axial bands of segregated bidisperse particles \cite{BridgwaterSharpe69,DonaldRoseman62,DasguptaKhakhar91,Nakagawa94,HillKakalios94,HillKakalios95,HillCaprihan97,FiedorOttino03} in the endwall regions.
\section{DEM Simulations}
For the DEM simulations, a standard linear-spring and viscous damper force model \cite{ChenOttino08,SchaferDippel96,Ristow00,CundallStrack79} is used to calculate the normal force between two contacting particles: ${\bm F}_n^{ij}=[k_n\delta - 2 \gamma_n m_{\rm eff} ({\bm V}_{ij} \cdot {\bm{\hat r}_{ij}})]{\bm{\hat r}_{ij}}$, where $\delta$ and $\bm V_{ij}$ are the particle overlap and the relative velocity $(\bm V_i - \bm V_j)$ of contacting particles $i$ and $j$, respectively; $\bm{\hat r}_{ij}$ is the unit vector in the direction between particles $i$ and $j$; $m_{\rm eff} = m_i m_j/(m_i + m_j)$ is the reduced mass of the two particles; $k_n = m_{\rm eff} [( \pi/\Delta t )^2 + \gamma^2_n]$ is the normal stiffness; and $\gamma_n = \ln e/\Delta t$ is the normal damping, where $\Delta t$ is the collision time and $e$ is the restitution coefficient \cite{ChenOttino08,Ristow00}. A standard tangential force model \cite{SchaferDippel96,CundallStrack79} with elasticity is implemented: $\bm F^t_{ij}= -\min(|\mu F^n_{ij}|,|k_s\zeta|){\rm sgn}(V^s_{ij})\,\bm{\hat s}$, where $V^s_{ij}$ is the relative tangential velocity of the two particles \cite{Rapaport02}, $k_s$ is the tangential stiffness, $\mu$ is the Coulomb friction coefficient, $\zeta(t) = \int^t_{t_0} V^s_{ij} (t') dt'$ is the net tangential displacement after contact is first established at time $t = t_0$, and ${\bm{\hat s}}$ is the unit vector in the tangential direction. The velocity-Verlet algorithm \cite{Ristow00,AllenTildesley02} is used to update the position, orientation, and linear and angular velocity of each particle. Tumbler walls (cylindrical wall and endwalls) are modeled either as smooth frictional surfaces (smooth wall) or as a monolayer of bonded particles of different diameters to vary the roughness (rough walls). Both wall conditions have infinite mass for calculation of the collision force between the tumbling particles and the wall.

The horizontal cylindrical tumblers considered here have a radius $R=0.07$~m and lengths varying from $L=0.07$ to 0.42~m{\color{black}, corresponding to $R/d=35$ and $L/d=35$ to 210, where $d$ is the particle diameter, and $L/2R=0.5$ to 3}. They are filled to volume fractions (fill levels) from 20\% to 50\% with $d=2$~mm particles whose properties correspond to cellulose acetate: density $\rho = 1308$~kg~m$^{-3}$ and restitution coefficient $e = 0.87$ \cite{DrakeShreve86,FoersterLouge94,SchaferDippel96}. The particles are initially randomly distributed in the tumbler with a total number of particles ranging from about $2 \times 10^4$ to $2.7 \times 10^6$. To avoid a close-packed structure, the particles have a uniform size distribution ranging from 0.95$d$ to 1.05$d$. The friction coefficient between particles and between particles and walls is set to $\mu = 0.7$. Gravitational acceleration is $g = 9.81$~m~s$^{-2}$, and the collision time is $\Delta t = 10^{-4}$~s, consistent with previous simulations \cite{TaberletNewey06,ChenLueptow11,ZamanDOrtona13} and sufficient for modeling hard spheres \cite{Ristow00,Campbell02,SilbertGrest07}. These parameters correspond to a stiffness coefficient $k_n = 7.32\times 10^4$~N~m$^{-1}$ \cite{SchaferDippel96} and a damping coefficient $\gamma_n = 0.206$~kg~s$^{-1}$. The integration time step is $\Delta t/50 = 2\times 10^{-6}$~s to meet the requirement of numerical stability \cite{Ristow00}.
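As a concrete illustration of the contact model, the sketch below (in Python; a minimal sketch in our own notation, not the simulation code used for this study) evaluates the normal and tangential forces for a single contact. Here $\gamma_n$ is treated as a positive rate by taking the magnitude of $\ln e$, so that the dashpot term dissipates energy; all function and variable names are ours.

\begin{verbatim}
import numpy as np

dt_coll = 1.0e-4                     # collision time Delta t [s]
e = 0.87                             # restitution coefficient
gamma_n = abs(np.log(e)) / dt_coll   # damping rate (magnitude) [1/s]

def normal_stiffness(m_eff):
    # k_n = m_eff [ (pi/Delta t)^2 + gamma_n^2 ]
    return m_eff * ((np.pi / dt_coll)**2 + gamma_n**2)

def normal_force(m_eff, delta, v_ij, r_hat):
    # F_n = [ k_n delta - 2 gamma_n m_eff (V_ij . r_hat) ] r_hat
    k_n = normal_stiffness(m_eff)
    return (k_n * delta
            - 2.0 * gamma_n * m_eff * np.dot(v_ij, r_hat)) * r_hat

def tangential_force(mu, f_n_mag, k_s, zeta, v_s, s_hat):
    # F_t = -min(|mu F_n|, |k_s zeta|) sgn(V_s) s_hat, with zeta the
    # tangential displacement accumulated since the contact formed.
    return -min(abs(mu * f_n_mag), abs(k_s * zeta)) * np.sign(v_s) * s_hat
\end{verbatim}

In a full DEM code, this per-contact evaluation is embedded in a neighbor search and the velocity-Verlet update of particle positions and velocities.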
\begin{figure}[htbp]
\includegraphics[width=0.99\linewidth]{figure01}
\caption{(a) Granular flow in a 0.14~m long {\color{black}($L/d=70$, $L/2R=1$)} and 0.14~m diameter cylindrical tumbler {\color{black}($R/d=35$)} filled to 30\% by volume with {\color{black}$d=2$}~mm particles randomly colored red and gray. The tumbler has smooth frictional walls, and the rotation speed is 15~rpm {\color{black}($Fr=\omega^2 R/g= 0.018$)}. (b-c) Top view of the two recirculation cells obtained in the left half of the tumbler. The vertical dashed line indicates the center of the tumbler, and the horizontal dotted line denotes the axis of rotation. The endwall cell trajectory (green) is integrated over 200~s and corresponds to about 3 orbits through the cell, while for the adjacent central cell (blue), 500~s are required for only one orbit. {\color{black} In (b), the horizontal axis of the endwall recirculation cell is stretched compared to the vertical axis to clearly show the recirculation cells. In (c), two passes in the flowing layer are highlighted in red for the outer trajectory and in black for the inner trajectory. The corresponding arrows in (b, c) show the recirculation direction. (Only the arrows are shown for the central recirculation cell.)}}
\label{figintro}
\end{figure}

{\color{black}The tumbler typically rotates at 15~rpm, corresponding to a Froude number $Fr=\omega^2 R/g= 0.018$, although a range of rotation speeds, 2.5 to 30~rpm ($0.0005\le Fr\le 0.070$), is also considered. This range of $Fr$ corresponds to the continuous-flow rolling regime for tumbler flow, characterized by a steady flowing layer with a surface that is essentially flat~\cite{MeierLueptow07}.}

Figure~\ref{figintro}(a) shows a typical simulation of monodisperse granular flow. Particles are randomly colored red and gray to improve the visualization. The $x$-axis is the axis of rotation, with the $z$-axis opposite to $\bm g$ and the $y$-axis perpendicular to $x$ and $z$. The origin is at the midlength of the tumbler, though it is shown as offset in Fig.~\ref{figintro}(a). The velocity throughout the entire domain is obtained by binning particles in a 3D grid and averaging their velocity over 50~s of physical time ($2.5 \times 10^7$ integration time steps) to ensure an adequately smooth velocity field. The first 20~s of simulation are omitted to ensure that the flow is steady before averaging. Mean particle trajectories are obtained by integrating the particle velocity based on the velocity field or by averaging individual particle trajectories based on particle positions stored every 0.1~s. The two methods provide similar results {\color{black} and are consistent with measurements using an x-ray system to track the location of a single x-ray opaque tracer particle in an analogous experimental setup \cite{DOrtonaThomas15}}. Figure~\ref{figintro}(b) shows two trajectories integrated for 50 tumbler rotations (green-left) or 125 rotations (blue-right), corresponding to the two left recirculation cells among the four cells of the tumbler \cite{DOrtonaThomas18}. For the endwall trajectory (green), the particle drift is toward the endwall for particles near the flowing layer surface (outer spiral of the cell trajectory), and it is toward the midlength of the tumbler for particles deeper in the flowing layer (inner spiral of the cell trajectory at the core of the cell). The trajectory is opposite in the central cell (blue).
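To illustrate the averaging procedure just described, the following sketch (again hypothetical: nearest-cell binning and a simple Euler integrator, rather than the exact binning and interpolation used for the figures) constructs the time-averaged velocity field and integrates a mean trajectory through it.

\begin{verbatim}
import numpy as np

def mean_velocity_field(pos, vel, lo, hi, n):
    # pos, vel: (N, 3) positions/velocities pooled over all snapshots;
    # lo, hi: domain corners (3,); n: number of cells per direction.
    n = np.asarray(n)
    idx = np.clip(((pos - lo) / (hi - lo) * n).astype(int), 0, n - 1)
    vsum = np.zeros((*n, 3))
    cnt = np.zeros(tuple(n))
    for (i, j, k), v in zip(idx, vel):
        vsum[i, j, k] += v
        cnt[i, j, k] += 1
    return vsum / np.maximum(cnt, 1)[..., None]  # cell-averaged velocity

def mean_trajectory(vfield, lo, hi, n, x0, dt=1.0e-3, t_end=2.5):
    # Euler integration through the averaged field (nearest cell).
    n = np.asarray(n)
    x = np.asarray(x0, float).copy()
    path = [x.copy()]
    for _ in range(int(t_end / dt)):
        i, j, k = np.clip(((x - lo) / (hi - lo) * n).astype(int), 0, n - 1)
        x = x + vfield[i, j, k] * dt
        path.append(x.copy())
    return np.array(path)
\end{verbatim}

The 2.5~s integration time matches the single-pass trajectories of Fig.~\ref{cyl14traj} below; much longer integrations yield multi-orbit cell trajectories like those in Fig.~\ref{figintro}(b).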
The recirculation cells have previously been described in detail \cite{DOrtonaThomas18} {\color{black} and confirmed experimentally using colored bands of particles and colored tracer particles in spherical tumblers \cite{ZamanDOrtona13}.} {\color{black} By considering individual particle trajectories, it is possible to quantify the trajectory fluctuations due to collisional diffusion. For a typical system (0.14~m long tumbler filled to 30\% and rotating at 15~rpm), the standard deviation of the single-pass axial displacement is 3.5~mm, whereas the maximal mean drift is only around 2.5~mm and the maximal curvature is around 10~mm; the mean drift is thus smaller than the fluctuations, which is why averaged trajectories are needed to reveal the cells.}

\section{Effect of wall roughness}
\subsection{Particle trajectories and wall roughness}
\label{link2}
To study the axial drift of particles along the length of the tumbler, mean trajectories are integrated from the velocity field obtained from the DEM simulations {\color{black} for cases where both the endwalls and cylindrical wall are smooth or both are rough}.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure02}
\caption{Mean trajectories of 2~mm particles {\color{black} starting from the positions marked with a *, which are} equally spaced every 5~mm along a 0.14~m {\color{black}($L/2R=1$) tumbler with} (a) smooth {\color{black} endwalls and cylindrical} wall or (b) 2~mm rough {\color{black} endwalls and cylindrical} wall, filled to 30\% and rotating at 15~rpm {\color{black}($Fr= 0.018$)} (top view). Green trajectories have drift toward the endwall and blue toward the center. Only the left half of the tumbler is shown. Note that the horizontal and vertical axes are scaled differently. All trajectories are integrated for 2.5~s to ensure that they start and end in the static zone. {\color{black} The red arrow indicates the curvature, and the black arrow indicates the drift.}}
\label{cyl14traj}
\end{figure}

Figure~\ref{cyl14traj} shows average particle trajectories for one pass through the flowing layer for different initial axial locations. The trajectories start from the static zone where particles are in solid body rotation (marked with a star), 3~mm above the bottom cylindrical wall, and in the $y=0$ plane that includes the axis of rotation. Particles starting at this location follow a trajectory such that they are very near the surface when in the flowing layer. While in the static zone, particle trajectories follow a straight vertical line until they reach the flowing layer (uppermost point) and then flow down the slope following a curved trajectory, except at the center of the tumbler ($x=0$), which is a symmetry plane. When they re-enter the static zone (bottommost point), they again follow a straight vertical line. The {\sl curvature} of the trajectory is defined as the maximum axial displacement from the starting point (red arrow in Fig.~\ref{cyl14traj}(a)). It is important to note that the start {\color{black} (marked by a *)} and end points for a single pass through the flowing layer do not coincide. For particles near the endwall (green trajectories), the particles have a net axial displacement, or {\sl drift}, toward the endwall {\color{black}(black arrow in Fig.~\ref{cyl14traj}(a))}, while particles further from the endwall of the tumbler (blue trajectories) drift toward the center (midlength) of the tumbler. The trajectories in Fig.~\ref{cyl14traj} correspond to the motion of particles near the surface of the flowing layer. Particles deeper in the flowing layer have a net drift in the opposite direction to conserve mass.
That is, particles deep in the flowing layer near the endwall drift toward the center of the tumbler, while particles near the center of the tumbler drift toward the endwall. In both cases, the axial drift of particles deep in the flowing layer balances the drift of particles near the surface, resulting in the recirculation cells shown in Fig.~\ref{figintro}(b) \cite{DOrtonaThomas18}.

The general pattern of the trajectory curvature and the axial drift is similar regardless of whether the tumbler walls (cylindrical wall and endwalls) are smooth (Fig.~\ref{cyl14traj}(a)) or both the cylindrical wall and endwalls are formed from a monolayer of 2~mm particles (Fig.~\ref{cyl14traj}(b)). In both cases, the trajectories for particles near the surface of the flowing layer are curved, with a maximum curvature near the endwalls, concave to the endwalls. The main difference between rough and smooth wall tumblers is the location of the transition between trajectories drifting toward the endwall (green) and trajectories drifting toward the center (blue). The point where the drift changes sign indicates the boundary between the two recirculation cells at the surface. It is clear that the endwall recirculation cells are smaller for 2~mm rough walls than for a tumbler with smooth walls.

To characterize more precisely the dependence of the drift and the curvature on the roughness of the wall, both are measured along the length of a 0.14~m long tumbler with wall roughnesses varying from a smooth wall (sw) to 4~mm rough walls (cylindrical wall and endwalls made of 4~mm particles). The drift is strongly affected by the roughness of the wall (Fig.~\ref{driftcurve14}(a)). Each $x$-axis crossing corresponds to the boundary between two recirculation cells, with the middle $x$-axis crossing corresponding to the symmetry plane at $x=0$.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure03}
\caption{(a) Axial drift near the surface and (b) trajectory curvature measured along the length of a 30\% full, 0.14~m {\color{black}($L/2R=1$)} long tumbler rotating at 15~rpm {\color{black}($Fr=0.018$)} for particles starting 3~mm from the cylindrical wall, and thereby close to the free surface while in the flowing zone. The walls are either smooth (sw) or rough, made of a monolayer of 1, 2, or 4~mm particles.}
\label{driftcurve14}
\end{figure}

The axial length of the endwall cell decreases with increasing roughness of the wall, while the central cell axial length increases. The maximum amplitude of the drift is approximately proportional to the size of the recirculation cell. The curvature of the trajectory, measured as the maximum axial displacement, increases only slightly with the roughness of the walls (Fig.~\ref{driftcurve14}(b)). This is different from a spherical tumbler, where both drift and curvature are strongly dependent on the wall roughness \cite{DOrtonaThomas15}.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure04}
\caption{Axial drift near the surface measured along tumblers of increasing length ($L=0.07$, 0.105, 0.14, 0.21 and 0.28~m, labeled in cm in the figure{\color{black}, corresponding to $L/2R=0.5$, 0.75, 1, 1.5, 2}) with a fill level of 30\%. The tumblers have (a) smooth walls or (b) 2~mm rough walls and rotate at 15~rpm {\color{black}($Fr= 0.018$)}.
In the left part of both figures, the corresponding drift in the 0.14~m long tumbler (dashed blue), shifted to the left wall, is shown for comparison.}
\label{driftvslength}
\end{figure}

Both the length of the tumbler and the wall roughness impact the axial drift and the size of the recirculation cells (Fig.~\ref{driftvslength}). For either wall roughness, the magnitude of the drift and the size of the endwall cells are similar regardless of the tumbler length (the endwall cell size is the distance from the endwall to the first zero value of the drift). However, rough walls result in reduced drift for the endwall cells and larger axial drift for the central cells compared to smooth walls. The endwall cells are also, surprisingly, shorter in length for rough walls than for smooth walls. For both roughnesses, the longest tumbler ($L=0.28$~m) has a cell at each endwall, with an adjacent counter-rotating central cell, and a zone of negligible axial drift at the center of the tumbler (Fig.~\ref{driftvslength}). We have considered even longer tumblers ($L=0.35$ and 0.42~m), but no additional cells appear near the center of the tumbler. Instead, the central zone with no drift widens as the tumbler length increases, and the central counter-rotating cells in Fig.~\ref{figintro} remain next to the endwall cells, with a nearly constant size as the tumbler length increases. For all long tumblers, the endwall cells are similar, independent of the tumbler length. This is made evident by shifting the axial drift plot for the 0.14~m long tumbler (blue dashed curve) toward the left endwall so it overlays the 0.28~m long tumbler drift plot. The two curves coincide perfectly from the endwalls to the first crossing with the $x$-axis and, in the case of the rough wall, even further toward the center of the tumbler. Thus, for very long tumblers, the character of the central cells is independent of the tumbler length.

As the length of the tumbler decreases, the central cell is affected more than the endwall cell. First, the central zone with no drift disappears (Fig.~\ref{driftvslength}). Then, the size of the central cells decreases and, for very short tumblers, the drift amplitude in the central cell decreases and its size is constrained. The central recirculation cell disappears altogether for the 0.07~m and 0.105~m long smooth tumblers, while it persists for rough walls, though with substantially decreased drift and shorter length. For the 0.105~m long smooth tumbler, there is a short region with no drift around the center of the tumbler, corresponding to an intermediate case between two cells (as for the 0.07~m long tumbler) and four cells (as for the 0.14~m long tumbler).

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure05}
\caption{Trajectory curvature for tumblers of increasing length ($L=0.07$ to 0.28~m{\color{black}, $L/2R=0.5$ to 2}) with a fill level of 30\% and rotating at 15~rpm {\color{black}($Fr=0.018$)} with a smooth wall (dashed curves) and 2~mm rough wall (solid curves).}
\label{curvelong}
\end{figure}

The curvature of the trajectories is similar for smooth and rough walls, though it is slightly higher for rough walls (Fig.~\ref{curvelong}). It is generally understood that the trajectory curvature is induced by the endwall friction \cite{PohlmanOttino06,SantomasoOlivi04,ChenOttino08,ManevalHill05,PohlmanMeier06,DOrtonaThomas18}. Endwall friction reduces the flux of particles along the endwall.
To accommodate this flux reduction, flowing particles must curve away from the endwall in the upper part of their trajectory and back toward the endwall in the lower part.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure06}
\caption{Profile of the streamwise flux of material in a tumbler with rough (solid) and smooth (dashed) walls. The flux is measured either in the static zone or in the flowing layer. The tumbler is 0.14~m {\color{black}($L/2R=1$)} long, filled to 30\%, and rotates at 15~rpm {\color{black}($Fr=0.018$)}.}
\label{flux}
\end{figure}

In light of this mechanism, what is surprising is that a much rougher endwall (2~mm rough wall) increases the curvature only slightly. To investigate this further, Fig.~\ref{flux} compares the streamwise flux in the flowing layer to that in the static zone in solid body rotation. The flux (and streamwise velocity) profiles have two local maxima that seem to be related to the existence of four cells. Note that short tumblers with no central cells have a velocity profile with a single maximum, although the tumbler length for the transition between two and four cells does not exactly coincide with the transition between one and two maxima (see Fig.~S1 in Supplemental Material \cite{Supplemental}). With rough walls, the flux in the flowing layer is reduced near the endwalls and increased about 0.03~m away from the endwalls. At the same time, the flux in the solid body rotation zone increases near the endwalls, as the thickness of the flowing zone decreases (as is evident in Fig.~\ref{displacementmap}, discussed shortly). To adapt to the flux difference, particles have to follow curved trajectories. Thus, the curvature increases as the flux difference increases, but only slightly, because the fluxes in the static zone and in the flowing layer are quite similar for rough and smooth endwalls (Fig.~\ref{flux}). This result is different from the case of a rotating spherical tumbler, in which the curvature increases substantially (by a factor of 3) with the wall roughness. Of course, in a sphere, the fluxes are influenced both by the wall friction and the wall geometry \cite{DOrtonaThomas15}.

\subsection{Geometry of the recirculation cells}
Further information about the nature of the recirculation cells is gained from a vector map of particle displacement between two successive passes through a plane perpendicular to the free surface and including the rotation axis (Fig.~\ref{displacementmap}). Particle mean trajectories start from this plane, make a complete circuit through the flowing layer and static zone, and then cross the plane again. Each arrow in the displacement vector map is drawn to show the direction (but not the magnitude) of the displacement between the starting and the ending position. The color intensity indicates the magnitude of the displacement.

\begin{figure}[htbp]
\includegraphics[width=0.99\linewidth]{figure07}
\caption{Displacement maps for (a) a smooth wall tumbler and (b) a 2~mm rough wall tumbler that is 0.14~m {\color{black}($L/2R=1$)} long, filled to 30\%, and rotates at 15~rpm {\color{black}($Fr=0.018$)}. The dashed red lines indicate the boundary between the endwall recirculation cell and the central recirculation cell. The upper horizontal curve (blue) shows the free surface based on a volume concentration of 0.3, and the bottom horizontal curve (green) indicates the lower boundary of the flowing layer based on a null velocity in the laboratory reference frame. All vectors have the same length.
The color map gives the displacement amplitude in meters.}
\label{displacementmap}
\end{figure}

Only the flowing layer is shown because the displacement vector map of a recirculation cell in the solid body rotation zone is simply a stretched mirror of that cell in the flowing layer (see Fig.~S2 in Supplemental Material for complete displacement maps \cite{Supplemental}). Regardless of wall roughness, the displacement map is consistent with the endwall cell trajectory in Fig.~\ref{figintro}, with a surface drift toward the endwall and a drift deeper in the flowing layer toward the center of the tumbler. The boundary between endwall and central cells is vertical for smooth walls but tilted for rough walls. This tilt significantly reduces the size of the endwall cell at the surface of the flowing layer, consistent with the decrease of the endwall cell length measured near the surface (Fig.~\ref{driftcurve14}(a)). The thickness of the flowing layer very near the endwall decreases with increasing wall roughness (the thickness is taken between the blue curve indicating the top of the flowing layer based on volume concentration and the green curve indicating the bottom based on a null velocity). Furthermore, the displacement amplitude is greater near the endwall for rough than for smooth endwalls. This comes about because the thickness of the static zone in solid body rotation (below the green curve in Fig.~\ref{displacementmap}) is increased near the endwall, especially for rough walls, with the consequence of a higher flux in the solid body rotation zone near the endwall. Although it is not shown here, the dynamic angles of repose are nearly identical for both smooth and rough walls in spite of the differences in the two displacement maps.

\subsection{Rotation speed}
\label{vitesse}

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure08}
\caption{Mean particle trajectories (top view) starting from two initial positions (marked by a star), close to the left endwall, for 6 different rotation speeds (2.5, 5, 10, 15, 20 and 30~rpm{\color{black}, corresponding to $Fr=0.0005$ to 0.070}). The 0.14~m {\color{black}($L/2R=1$)} long tumblers have (a) smooth walls or (b) 2~mm rough walls. The fill level is 30\%.}
\label{cyltrajvsvit}
\end{figure}

Rotation speed is an important parameter in tumbler flows. For bidisperse flow in cylindrical tumblers, below some velocity threshold, axial segregation may disappear \cite{HillKakalios94,HillKakalios95}. In spherical tumblers, increasing the rotation speed may (depending on the fill level) induce a transition in the segregation patterns \cite{ChenLueptow09,DOrtonaThomas16} as a result of the curvature of the particle trajectories. For monodisperse flows in a spherical tumbler, increasing the rotation speed increases the trajectory curvature \cite{DOrtonaThomas15,DOrtonaThomas16}. Figure~\ref{cyltrajvsvit} shows mean trajectories of particles starting at two initial locations near the left endwall of a 14~cm long tumbler for rotation speeds ranging from 2.5 to 30~rpm. For both tumbler roughnesses and both initial positions, the trajectory curvature increases with increasing rotation speed. In the case of a smooth wall tumbler, the drift increases with increasing rotation speed, while it decreases in the case of a 2~mm rough tumbler.
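Before examining these trends quantitatively, note that the drift and curvature values reported throughout follow the definitions of section \ref{link2}. A minimal sketch of how both are extracted from a single mean-trajectory pass (the array layout is our own assumption, not the analysis code used for this study) is:

\begin{verbatim}
import numpy as np

def drift_and_curvature(traj):
    # traj: (M, 3) mean trajectory for one pass through the flowing
    # layer, starting and ending in the static zone; column 0 is the
    # axial coordinate x.
    x = traj[:, 0]
    drift = x[-1] - x[0]                 # net axial displacement
    excursion = x - x[0]
    curvature = excursion[np.argmax(np.abs(excursion))]
    return drift, curvature              # both signed, in meters
\end{verbatim}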
Considering the drift along the entire length of a 28~cm long tumbler (to avoid any influence of the tumbler length on the results) in Fig.~\ref{driftvsvit} demonstrates a consistent change with rotation speed for the smooth wall tumbler, with an increase of the endwall cell size with increasing rotation speed.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure09}
\caption{Drift measured along the length of the left half of a 28~cm {\color{black}($L/2R=2$)} tumbler for various rotation speeds (5 to 30~rpm{\color{black}, $Fr=0.002$ to 0.070}). The tumbler has (a) smooth walls or (b) 2~mm rough walls and is filled to 30\%. See Fig.~S3 in Supplemental Material for 14~cm long tumblers \cite{Supplemental}.}
\label{driftvsvit}
\end{figure}

The rough wall tumbler displays slightly less variation with rotation speed, though the tendency is opposite to that of the smooth tumbler, as discussed in section \ref{link}. Results for 14~cm long tumblers are very similar to those shown in Fig.~\ref{driftvsvit} for 28~cm long tumblers, except that there is no central zone with no drift and the central cells are slightly reduced in size due to the symmetry plane in the center of the tumbler ($x=0$) for the smooth case (see Fig.~S3 in Supplemental Material \cite{Supplemental}).

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure10}
\caption{Curvature measured along the 28~cm {\color{black}($L/2R=2$)} tumbler for various rotation speeds (5 to 30~rpm{\color{black}, $Fr=0.002$ to 0.070}). The tumbler has (a) smooth walls or (b) 2~mm rough walls and is filled to 30\%. Curvature for only the left half of the tumbler is shown. See Fig.~S4 in Supplemental Material for 14~cm long tumblers \cite{Supplemental}.}
\label{curvevsvit}
\end{figure}

The curvature is also affected by the rotation speed (Fig.~\ref{curvevsvit}). For both roughnesses, increasing the rotation speed increases the curvature everywhere in the tumbler. As observed in Figs.~\ref{driftcurve14}(b) and \ref{curvelong}, the curvature decreases more rapidly moving toward the center ($x=0$) for the rough tumbler cases.

For brevity, the effect of rotation speed on displacement maps is only discussed here with reference to Supplemental Material Figs.~S5 and S6 \cite{Supplemental}. For both smooth and 2~mm rough tumblers, the displacement maps are similar to those in Fig.~\ref{displacementmap} except for the behavior of the boundary between the endwall recirculation cell and the central recirculation cell. Increasing the rotation speed tilts the boundary (thick dashed lines in Fig.~\ref{displacementmap}) progressively further clockwise. In the smooth wall tumbler (left half of the tumbler), the tilt is leftward for rotation speeds lower than that of Fig.~\ref{displacementmap} and rightward for higher speeds. Thus, increasing the rotation speed enlarges the endwall cell at the free surface, consistent with Fig.~\ref{driftvsvit}. In the rough wall case, the tilt is slightly more leftward for lower rotation speeds and slightly closer to vertical for higher rotation speeds, although it never becomes vertical even at 30~rpm. In addition, the tilt is accompanied by a small displacement of the boundary toward the endwall, thereby reducing the endwall cell size at the free surface. Far from the endwall, drift and curvature seem directly linked (Figs.~\ref{driftvsvit} and \ref{curvevsvit}). In the rough wall case, the drift and curvature vanish around $x\simeq -0.05$~m.
For the smooth wall tumbler, they vanish closer to the middle of the tumbler, between $x=-0.04$~m and $-0.02$~m.

\subsection{Fill level}
Finally, we consider the effect of the fill level. In a spherical tumbler, the curvature of trajectories is increased by a factor of 2 in a smooth sphere and by 50\% in a rough sphere when reducing the fill level from 50\% to 25\%, and the axial drift is also strongly modified \cite{DOrtonaThomas15}. Furthermore, the segregation pattern for bidisperse particles in a spherical tumbler is completely reversed by changing the fill level \cite{ChenLueptow09,DOrtonaThomas16}.

\begin{figure}[htbp]
\vspace{2mm}
\includegraphics[width=0.95\linewidth]{figure11}
\caption{Trajectory curvature measured along a 28~cm {\color{black}($L/2R=2$)} tumbler for various fill levels ranging from 25\% to 50\%. The tumbler has (a) smooth or (b) 2~mm rough walls and rotates at 15~rpm {\color{black}($Fr=0.018$)}. Only the left half of the tumbler is shown.}
\label{curvefill}
\end{figure}

\begin{figure}[htbp]
\vspace{2mm}
\includegraphics[width=0.95\linewidth]{figure12}
\caption{Drift measured along a 28~cm tumbler {\color{black}($L/2R=2$)} for various fill levels ranging from 25\% to 50\%. The tumbler has (a) smooth or (b) 2~mm rough walls and rotates at 15~rpm {\color{black}($Fr=0.018$)}. Only the left half of the tumbler is shown.}
\label{driftvsfill}
\end{figure}

In the cylindrical tumbler, the fill level also modifies the particle trajectories, but in a much weaker way. In a smooth tumbler, the maximum curvature is reduced by nearly 25\% near the endwall when increasing the fill level from 25\% to 50\% (Fig.~\ref{curvefill}). The reduction in curvature is somewhat smaller in a rough tumbler. Along the length of the tumbler, the dependence of the curvature on the fill level is more complex. For the rough wall case in Fig.~\ref{curvefill}(b), the curvature curves cross each other, so that in the region $-0.11$~m~$<x<-0.04$~m the largest curvature is associated with the greatest fill level, opposite to the behavior near the endwall. For the smooth wall case, the curvature curves simply merge moving closer to the center of the tumbler (Fig.~\ref{curvefill}(a)).

For a smooth tumbler, increasing the fill level has little effect on the drift, except to slightly shorten the endwall cell and slightly reduce the magnitude of the drift (Fig.~\ref{driftvsfill}). By contrast, for a 2~mm rough tumbler, the drift varies more and in the opposite way: the endwall cell increases in size with the fill level. The differing dependence on fill level will be discussed in section \ref{link}. Far from the endwall ($x\gtrsim-0.08$~m), drift and curvature vary with $x$ in a similar way for smooth and rough walls. Similar results for the curvature and drift are obtained for 14~cm long tumblers (Supplemental Material, Figs.~S7 to S9 \cite{Supplemental}). In addition, the boundary between cells in the displacement maps rotates in opposite directions for smooth and rough walls with increasing fill level, reflecting the different dependence of the cells themselves on fill level (Supplemental Material, Figs.~S10 and S11 \cite{Supplemental}).

\section{Endwall versus cylindrical wall roughness}
Clearly, wall roughness has a surprisingly strong effect on the flow in a cylindrical tumbler. To better understand the relative effects of the roughness of the endwall and cylindrical wall, the roughness of each wall can be modified independently.
Several studies have modified the sense of rotation or the friction of the tumbler endwalls, but the cylindrical wall remains smooth in all cases \cite{ChenOttino08,ChenLueptow11,HuangLiu13}. Here, four combinations of cylindrical wall and endwall roughnesses are considered in order to tease out how wall roughness affects the flow.

\subsection{Mixed wall roughness}
Figure~\ref{trajmixed} shows four typical trajectories starting from the same location in tumblers with the four possible combinations of wall roughnesses. Several differences are immediately evident. First, endwall roughness favors a large curvature, whereas cylindrical wall roughness decreases the curvature. Second, the drift is mainly controlled by the cylindrical wall roughness. In the case of a rough cylindrical wall, trajectories differ depending on the endwall roughness, but finish with exactly the same drift. A similar result occurs for the smooth cylindrical wall, where the axial drifts are nearly the same, but are noticeably larger than those for the rough cylindrical wall.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure13}
\caption{Mean trajectories of particles (top view) starting from an initial position (marked by a star) 3~mm from the cylindrical wall, close to the left endwall, for 4 different combinations of wall roughnesses: smooth (red), 2~mm rough (blue), mixed smooth cylindrical and rough endwalls (green), and mixed rough cylindrical and smooth endwalls (black). The tumbler is 0.14~m {\color{black}($L/2R=1$)} long and rotates at 15~rpm {\color{black}($Fr=0.018$)}. The $x$-axis is stretched compared to the $y$-axis.}
\label{trajmixed}
\end{figure}

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure14}
\caption{Drift measured along 0.14~m {\color{black}($L/2R=1$)} long tumblers for 6 different wall roughness combinations of smooth, 0.75~mm rough, and 2~mm rough walls. The tumblers rotate at 15~rpm {\color{black}($Fr=0.018$)} and are filled to 30\%.}
\label{driftmixed}
\end{figure}

The drift along the entire length of the tumbler is nearly identical for the same cylindrical wall roughness regardless of the endwall roughness (Fig.~\ref{driftmixed}). The drift for an intermediate roughness (cylindrical wall made of 0.75~mm particles) falls between the smooth and 2~mm rough wall cases. Consistent with Fig.~\ref{driftcurve14}, the amplitude of the drift decreases in the endwall cell with increased cylindrical wall roughness but increases in the central cell.

\begin{figure}[htbp]
\includegraphics[width=0.93\linewidth]{figure15}
\caption{Curvature measured along 0.14~m {\color{black}($L/2R=1$)} long tumblers for 4 combinations of wall roughnesses: smooth (red), 2~mm rough (blue), 2~mm rough endwalls and smooth cylindrical wall (green), and smooth endwalls and 2~mm rough cylindrical wall (black). The tumblers rotate at 15~rpm {\color{black}($Fr=0.018$)} and are filled to 30\%.}
\label{curvaturehyb}
\end{figure}

On the other hand, the curvature of particle trajectories differs for the four wall roughness combinations shown in Fig.~\ref{curvaturehyb}. In fact, the roughness of the endwalls and that of the cylindrical wall have opposite effects. Rough endwalls and smooth cylindrical walls result in more curvature. Near the endwalls, the greatest curvatures are obtained with rough endwalls, consistent with Fig.~\ref{driftcurve14}(b). Moving toward the center, rough cylindrical walls induce a slightly larger decrease of the curvature.
As we will show later, the role of the endwall roughness is to induce trajectory curvature near the endwall, while the role of the cylindrical wall roughness is to reduce the curvature moving away from the endwalls. The curves for 0.75~mm cylindrical walls are not shown in Fig.~\ref{curvaturehyb}, but they fall between the smooth and 2~mm rough wall cases, as expected.

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure16}
\caption{Displacement maps for tumblers with (a) smooth endwalls and a 2~mm rough cylindrical wall and (b) 2~mm rough endwalls and a smooth cylindrical wall. {\color{black} The tumblers are 0.14~m ($L/2R=1$) long, filled to 30\%, and rotate at 15~rpm ($Fr=0.018$)}. The dashed lines indicate the boundary between the endwall cell and the central cell. All vectors have the same length. The color map gives the displacement amplitude in meters; the colors are identical to Fig.~\ref{displacementmap}, extended up to 0.008~m with a darker red.}
\label{maphybbyhsurf}
\end{figure}

Comparing the displacement maps for the four combinations (Figs.~\ref{displacementmap} and~\ref{maphybbyhsurf}) provides additional information about the character of the recirculation cells in these cases. The shape of the recirculation cell clearly depends almost entirely on the cylindrical wall roughness, as indicated by the orientation of the boundary between cells (thick dashed lines), which is vertical for a smooth cylindrical wall and angled for a rough cylindrical wall, regardless of the endwall roughness. The cylindrical wall roughness thus controls the cells throughout the thickness of the flowing layer, not just at the surface. However, the amplitude of the vertical displacement near the endwall depends only on the endwall roughness (visible as the color near the left endwall in Figs.~\ref{displacementmap} and \ref{maphybbyhsurf}).

\subsection{Connecting wall roughness and drift}
\label{link}
It is quite clear that the cylindrical wall and endwall roughnesses alter the particle trajectories, but the question is how these are linked. The answer appears to lie in the topography of the free surface. Figure~\ref{freesurfmix} shows sections of the free surface in several $x-z'$ planes perpendicular to the free surface, where $(x,y',z')$ is a reference frame tilted to be aligned with the free surface at the center of the tumbler at $x=0$. The angles at which the reference frame is tilted for the four roughness configurations differ by less than $0.4^\circ$. The example shown in Fig.~\ref{freesurfmix}, used in the subsequent discussion, corresponds to a 14~cm long rough tumbler. Most notable are bumps in the topography near the endwalls for upstream locations ($y'>0$). Further, note that even though the profile at $y'=0.01$~m presents small bumps, the depressions at the endwalls have a larger amplitude. Further downstream, the bumps do not occur at all, and instead the surface level simply decreases near the endwall. When comparing corresponding curves (for example, $y'=-0.03$~m and $y'=0.03$~m), it is evident that near the endwall ($x\lesssim -0.06$~m) each depression has a larger amplitude than its corresponding bump. Both the upstream surface bumps and the downstream surface depressions diminish moving away from the endwall. {\color{black} This topography comes about due to friction at the endwall.
The same number of particles enters the flowing layer along the entire length of the tumbler, but particles near the endwall flow down the slope more slowly than particles far from the endwall \cite{ManevalHill05,PohlmanOttino06,PohlmanMeier06,DOrtonaThomas18}. As a result, particles pile up near the endwall in the upstream portion of the flowing layer before they can flow axially away from the endwall, causing the slight bump. In the downstream portion of the flowing layer, particles near the endwall are depleted before they can flow back toward the endwall, resulting in the depression.}

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure17}
\caption{Free surface along a 14~cm {\color{black}($L/2R=1$)} long rough tumbler in $x-z'$ planes perpendicular to the free surface. The tumbler is filled to 30\% and rotates at 15~rpm {\color{black}($Fr=0.018$)}.}
\label{freesurfmix}
\end{figure}

\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{figure18}
\caption{Topography of the free surface at the center ($x=0$~m) and near the left endwall ($x=-0.067$~m) in a $y'-z'$ plane, where the $y'$ axis is tilted such that it is parallel to the free surface at the center of the tumbler. The inset indicates the slicing planes. The tumblers are 0.14~m {\color{black}($L/2R=1$)} long and have 4 different wall roughness combinations: smooth (red), 2~mm rough (blue), 2~mm rough endwalls and a smooth cylindrical wall (green), and smooth endwalls and a 2~mm rough cylindrical wall (black). The tumbler is filled to 30\% and rotates at 15~rpm {\color{black}($Fr=0.018$)}.}
\label{surflib30hyb}
\end{figure}

The difference in the topography of the free surface at the center of the tumbler and at the endwalls is shown more clearly in Fig.~\ref{surflib30hyb}, which presents $y'-z'$ cross sections of the free surface for the four different roughness conditions at the center of the tumbler ($x=0$) and very near the left endwall ($x=-0.067$~m). In the tilted reference frame in Fig.~\ref{surflib30hyb}, the surface topography is horizontal and nearly identical at the center of the tumbler for all four combinations of surface roughness. However, near the endwall, the average angle of the free surface is steeper. {\color{black} The steep free surface near the endwall again comes about due to friction at the endwall. The bumps near the endwall correspond to the topography near the endwall being above that for $x=0$ in the upstream portion, and the depressions correspond to the topography being below that for $x=0$ in the downstream portion. The steepness is greater} for tumblers with rough endwalls than for smooth endwalls. As already observed in Fig.~\ref{freesurfmix}, near the endwall ($x=-0.067$~m), the depression at the bottom of the trajectory is larger than the bump at the top. Axial variations of this topography induce axial displacements of particles: toward the tumbler center in the upstream portion of the flowing layer, since the surface near the endwall is higher than at the center of the tumbler, and toward the endwall in the downstream portion of the flowing layer, since the surface is lower at the endwall than at the center. Since the difference between the surface heights at the endwalls and the center is smaller in the upstream portion of the flowing layer than in the downstream portion, there is a net drift directed toward the endwall, which we denote $D_{ew}$.
Consistent with the sign convention in Fig.~\ref{driftcurve14}, this drift is negative near the left endwall, positive near the right endwall, and negligible near the center for long tumblers. Comparing the four tumbler roughness conditions in Fig.~\ref{surflib30hyb}, it is clear that the topography curves superimpose for similar endwall roughnesses. Thus, the endwall roughness is primarily responsible for the surface topography near the endwall. Only a small difference is induced by the cylindrical wall roughness, and that difference is limited to the very upper part of the flowing layer ($y'>0.04$~m in Fig.~\ref{surflib30hyb}).

To disentangle the effects of the endwall roughness and the cylindrical wall roughness on the trajectory drift, we hypothesize that the drift due to the endwall, $D_{ew}$, is linked to the topography variation along the length of the tumbler. To quantify the dependence of this topography on the position $x$ along the length of the tumbler, we fit the difference of the free surface heights taken at $y'=0.05$~m and $y'=-0.02$~m to a hyperbolic cosine $a\cosh(bx)+c$. This is most easily thought of as the difference between the curves for these two values of $y'$ in Fig.~\ref{freesurfmix}. The variation of the topography difference along the length of the tumbler, which we call the `topography gradient,' is taken as the negative of the $x$-derivative of the fit, $-a\,b\sinh(bx)$. We further assume that $D_{ew}(x)$ has a variation with $x$ {\color{black} proportional} to that of this topography gradient.

We also assume that the cylindrical wall has a separate and additional effect on the drift because it affects the axial displacement at the top and bottom of the flowing layer. The impact of the cylindrical wall roughness on the drift is evident in Fig.~\ref{trajmixed}, where the particle trajectories are directed further toward the endwall for a smooth cylindrical wall than for a rough wall. Cylindrical wall roughness opposes axial displacement in the lower part of the trajectory because the trajectories are directed toward the cylindrical wall at a slight angle due to the trajectory curvature. The effect of cylindrical wall roughness in the upper part of the trajectory is more subtle: in the upper part of the particle trajectories in Fig.~\ref{trajmixed}, the particles move a small amount axially toward the center of the tumbler while still rising in the bed of particles for a smooth cylindrical wall, whereas a rough cylindrical wall prevents any axial displacement before the particles enter the flowing layer. Although the effects of cylindrical wall roughness in the upper and lower portions of the flowing layer are opposite in direction, the stronger effect is in the lower portion of the trajectory. The combination of the upper and lower axial displacements creates a drift related to the cylindrical wall in the same direction as the curvature, i.e., toward the center of a cylindrical tumbler (e.g., compare the axial drift for smooth and rough cylindrical walls but the same endwall roughness in Fig.~\ref{trajmixed}, recalling that at this $x$ location the endwall drift is dominant). We call this drift $D_{cyl}$; it is positive in the left part of the tumbler and negative in the right part. $D_{cyl}$ acts opposite to the drift associated with the endwall friction, $D_{ew}$. Large cylindrical wall roughness favors a large $D_{cyl}$. Since friction is enhanced by more tangential trajectories, we assume that $D_{cyl}$ is proportional to the curvature, with a coefficient $R$ depending on the cylindrical wall roughness.
$D_{cyl}$ may also depend on other parameters, including the rotation speed and the fill level. The value of the coefficient $R$ is unknown and needs to be estimated. With these definitions for $D_{ew}$ and $D_{cyl}$, we now assume that the total drift is $D=D_{cyl}+D_{ew}$. Near the endwall, $D_{ew}$ is dominant, and $D$ is directed toward the endwall. Moving toward the center of the tumbler, $D_{ew}$ decreases more rapidly than $D_{cyl}$, so that there is a position $x$ where $D$ is zero, which corresponds to the boundary between the two cells. Moving further toward the center, $D_{cyl}$ is dominant and $D$ is directed toward the center.

\begin{figure}[htbp]
\includegraphics[width=\linewidth]{figure19}
\caption{Drift, free surface topography, and its hyperbolic cosine fit (left $y$-axis), together with $R\,\times$~curvature and the topography gradient (right $y$-axis), for 28~cm {\color{black}($L/2R=2$)} long tumblers with (a) smooth and (b) 2~mm rough walls. The tumbler is filled to 30\% and rotates at 15~rpm {\color{black}($Fr=0.018$)}. Arrows indicate the vertical axis associated with each curve.}
\label{topocosh}
\end{figure}

Figure~\ref{topocosh} illustrates this approach in the case of 28~cm long smooth and 2~mm rough tumblers. A long tumbler is preferred to avoid the influence of the central symmetry plane. The total drift $D$, the free surface topography, and the hyperbolic cosine fit to the topography are read on the left $y$-axis. On the right $y$-axis, we represent the measured curvature multiplied by the coefficient $R$ (adjustable){\color{black}, lumping together the two proportionality coefficients linking $D_{cyl}$ to curvature and linking $D_{ew}$ to topography. We also represent the topography gradient on the right $y$-axis. These two curves correspond to the evolution of $D_{cyl}$ and $-D_{ew}$, respectively.} To obtain the coefficient $R$, we assume that the endwall drift $D_{ew}$ exactly counterbalances the cylindrical wall drift $D_{cyl}$ at the $x$-location of the boundary between the endwall cell and the central cell. Thus, the two curves cross at the location where $D=D_{cyl}+D_{ew}=0$, thereby setting the value for $R$. For the two cases shown in Fig.~\ref{topocosh}, the resulting value of $R$ is larger for the 2~mm rough wall tumbler ($R=27.0$) than for the smooth cylindrical wall tumbler ($R=13.1$). The value of $R$ is mainly linked to the cylindrical wall roughness, but, as we will show later, $R$ also depends on fill level and rotation speed. Thus, $R$ should be viewed as the combined effect of cylindrical wall roughness and the way particle trajectories interact with the cylindrical wall (velocity, inclination, etc.).

\begin{figure}[htbp]
\includegraphics[width=\linewidth]{figure20}
\caption{Drift, topography fit, topography gradient, and $R$ times curvature in the case of 28~cm {\color{black}($L/2R=2$)} long tumblers with a smooth cylindrical wall and increasing endwall roughnesses: 1, 2, 3, and 4~mm. The tumbler is filled to 30\% and rotates at 15~rpm {\color{black}($Fr=0.018$)}. Arrows indicate the $y$-axis associated with each curve.}
\label{topoCourb}
\end{figure}

Based on the previous assumptions (and the same rotation speed, fill level, etc.), mixed roughness tumblers having the same cylindrical wall roughness but different endwall roughness should have the same value of the coefficient $R$.
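A minimal numerical sketch of this fitting procedure (our own implementation, assuming the drift, curvature, and free-surface height difference are available as arrays sampled along $x$; the initial guess passed to the fit is arbitrary) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.interpolate import interp1d

def fit_topography(x, dh):
    # Fit the height difference dh(x) to a*cosh(b*x) + c.
    popt, _ = curve_fit(lambda x, a, b, c: a * np.cosh(b * x) + c,
                        x, dh, p0=(1e-3, 20.0, 0.0))
    return popt   # (a, b, c)

def estimate_R(x, drift, curvature, a, b):
    # Locate the endwall-cell / central-cell boundary as the first zero
    # crossing of the measured drift, then choose R so that R*curvature
    # equals the topography gradient -a*b*sinh(b*x) at that location.
    i = np.flatnonzero(np.sign(drift[:-1]) != np.sign(drift[1:]))[0]
    x0 = x[i] - drift[i] * (x[i+1] - x[i]) / (drift[i+1] - drift[i])
    grad = -a * b * np.sinh(b * x0)
    return grad / float(interp1d(x, curvature)(x0))
\end{verbatim}

With $a$ and $b$ fixed by the topography fit and the curvature measured independently, $R$ is then the only free parameter in $D = D_{cyl} + D_{ew}$.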
To demonstrate that $R$ is indeed independent of the endwall roughness, Figure~\ref{topoCourb} shows $D$, $D_{cyl}$ ($R$ times curvature), and $D_{ew}$ (topography gradient) for four tumblers having the same smooth cylindrical wall and endwalls with increasing roughness. The values of $R$ are 13.6, 13.1, 14.2, and 14.2 for endwall roughnesses of 1, 2, 3, and 4~mm, respectively. In addition, $R=12.9$ and 13.6 were measured for 0.5 and 1.5~mm rough endwalls, respectively, while $R=13.1$ for a fully smooth tumbler. The mean value is $R=13.5$ for a smooth cylindrical wall with the various endwall roughnesses. Since the relative standard deviation for the different endwall roughnesses is quite small (less than 4\%), $R$ can be considered independent of the endwall roughness, as assumed in the model.

\begin{figure}[htbp]
\includegraphics[width=\linewidth]{figure21}
\caption{Drift, topographies and their fits (labeled `topo.+fit'), the topography gradient that is common to all tumblers, and $R$ times curvature in the case of long tumblers with smooth endwalls and increasing cylindrical wall roughness: smooth, 0.5, 1, and 2~mm. The tumbler is filled to 30\% and rotates at 15~rpm {\color{black}($Fr=0.018$)}. Arrows indicate the $y$-axis associated with each curve.}
\label{topobyh}
\end{figure}

In the reverse situation, tumblers with smooth endwalls and increasing cylindrical wall roughness, we expect different values of $R$. As the topographies are nearly the same, a common fit can be used, leading to a single common topography gradient curve representing the endwall drift $D_{ew}$ (Fig.~\ref{topobyh}). Determining the values of $R$ so that the intersections of $D_{ew}$ (topography gradient) with each $D_{cyl}$ ($R$ times curvature) match the boundary between the endwall cell and the central cell yields increasing values of $R=13.2$, 15.2, 19.2, and 26.3 for increasing cylindrical wall roughnesses of smooth, 0.5, 1, and 2~mm, respectively. The value of $R$ and, accordingly, the cylindrical drift $D_{cyl}$ increase significantly with the cylindrical wall roughness, as expected. A plot of $R$ versus the cylindrical wall and the endwall roughnesses is provided in Supplemental Material (Fig.~S12 \cite{Supplemental}).

When tumblers are short enough, there is only one cell in each half-tumbler, and the $R$ coefficient cannot be estimated. Nevertheless, if the $R$ values obtained for smooth and rough longer tumblers are used, the topography gradient and $R$ times curvature ($D_{ew}$ and $D_{cyl}$) curves do not cross, as expected since there is no central cell. For smooth and rough tumbler lengths corresponding to the four-cell to two-cell transition, the two curves are nearly tangent, also as expected (see Fig.~S13 in Supplemental Material \cite{Supplemental}).

There are some important points to make about the drift approach that we use to explain the effects of tumbler wall roughness. First, the model is robust: using the $R$ coefficient obtained for long tumblers, the drift curves are tangent or do not cross for shorter tumblers, consistent with the absence of a central cell. Second, the approach is applicable near the boundary between the endwall cell and the central cell, but not over the entire length of the tumbler. This is evident in Fig.~\ref{topocosh}, where the difference between the topography gradient and the curvature effect deviates substantially from the measured drift. Hence, we do not expect this approach to provide quantitative predictions of the dependence of the recirculation cell structure on wall roughness, rotation speed, fill level, and the mixed wall roughness conditions considered in the next section.
Nevertheless, it aids in understanding the underlying mechanism.

\subsection{Discussion}
Accounting for the drifts ($D_{ew}$ and $D_{cyl}$) induced by the endwalls and cylindrical wall separately, each of them dominating in one cell, provides a means to understand the mechanisms affecting the formation of the cells. It also decouples the recirculation in the central cell from the recirculation in the endwall cell, which is quite different from hydrodynamic systems where adjacent vortical cells are coupled at their interface. The proposed mechanism can explain why various configurations (larger drift in endwall cells or in central cells) are possible, even though the origin of the drift for all cells is the endwall friction. For mixed roughness tumblers, increasing only the cylindrical wall roughness increases $R$ and consequently $D_{cyl}$, leading to a smaller endwall cell (Fig.~\ref{driftmixed}). Increasing only the endwall roughness increases both the curvature (Figs.~\ref{curvaturehyb} and \ref{topobyh}) and the topography difference (Fig.~\ref{surflib30hyb}), thereby increasing both $D_{cyl}$ and $D_{ew}$ in a similar manner and leaving the size of the cells almost unchanged (Figs.~\ref{driftmixed} and \ref{topoCourb}). Increasing the roughness of both walls increases $D_{cyl}$ more than $D_{ew}$, reducing the endwall cell size and the drift amplitude near the endwall. Indeed, in that case, the increase of $D_{cyl}$ has two contributions: one due to the curvature, similar to the increase of $D_{ew}$, and another due to the increase of $R$.

{\color{black}It is also important to note that the effects of endwall and cylindrical wall roughness, corresponding to $D_{ew}$ and $D_{cyl}$, respectively, are strongest at the surface of the flowing layer and progressively decrease below the surface. After all, $D_{ew}$ is based directly on the surface topography gradient, and $D_{cyl}$ is based on the trajectory curvature resulting from interactions of the flowing particles with the cylindrical wall, here measured for particles near the surface of the flowing layer (the curvature measurement is based on trajectories that start only 3~mm from the cylindrical wall in the fixed bed and thereby remain near the surface of the flow, as described in section \ref{link2}; particle trajectories away from the free surface also have some degree of curvature and drift). As a result, topography and roughness only weakly affect the deeper portion of the recirculation cells, where $D_{ew}$ and $D_{cyl}$ are smaller. This is evident in comparing Figs.~\ref{displacementmap}(a,b) and \ref{maphybbyhsurf}(a,b). In both cases, changing from a smooth cylindrical wall to a rough one shifts the boundary between the recirculation cells from vertical to diagonal as a result of the position of the boundary at the surface moving toward the endwall, while the position of the boundary at the bottom of the flowing layer barely changes at all. The largest effect on the bottom boundary results from the rotation speed, which shifts the position of the bottom boundary toward the endwall for both roughnesses (see Figs.~S5 and S6 in Supplemental Material \cite{Supplemental}). This demonstrates how several related effects define the spatial organization of the recirculation cells.
A complete description of trajectory drift and curvature requires further modeling and is beyond the scope of this paper.}

Linking endwall drift to the topography gradient and cylindrical wall drift to the curvature allows us to consider the effects of rotation speed and fill level. For a smooth wall tumbler, increasing the rotation speed increases the size of the endwall cell and reduces the size of the central cell; the reverse occurs for a rough wall tumbler (Fig.~\ref{driftvsvit}). In both cases, increased rotation speed induces increased curvature (Fig.~\ref{curvevsvit}) and, hence, increased $D_{cyl}$. The topography difference also increases with rotation speed, corresponding to an increase in $D_{ew}$. The difference between smooth and rough walls comes from the dependence of $R$ on rotation speed: $R$ increases only moderately with rotation speed for a smooth cylindrical wall (from $R=12.1$ to 14.4 for 5~rpm to 30~rpm) compared to a larger change for the rough case (from $R=20.9$ to 34.4 for 5~rpm to 30~rpm; see Figs.~S14 and S15 \cite{Supplemental}). Consequently, the increase in $D_{cyl}$ dominates the increase in $D_{ew}$ for rough walls, so the endwall cell shrinks with increasing rotation speed, but the opposite occurs for the smooth tumbler. The strong increase of $R$ with rotation speed for rough cylindrical walls indicates an increasing effect of the rough cylindrical wall on particle trajectories. This is likely due to the fact that particle trajectories are almost perpendicular to the cylindrical wall, while they are tangent to the endwalls.

The effect of the fill level is more complicated. When increasing the fill level, the endwall cell shrinks moderately for a smooth wall tumbler and grows for a rough wall tumbler (Fig.~\ref{driftvsfill}). For both cases, the surface topographies are almost unaffected by the fill level. Thus, $D_{ew}$ does not change with fill level. The curvature decreases with increasing fill level near the endwall for both the smooth and rough wall tumblers (Fig.~\ref{curvefill}). As the fill level increases from 25\% to 50\%, the $R$ coefficient increases from 12.8 to 16.2 for smooth walls but decreases from 30.3 to 23.2 for rough walls (see Figs.~S16 and S17 \cite{Supplemental}). For smooth walls, the increase in $R$ dominates the decrease in curvature, giving an increase of $D_{cyl}$ and a reduction of the endwall cell size. For rough walls, the decrease of both $R$ and the curvature with fill level reduces $D_{cyl}$ and thus increases the endwall cell size. The variation in $R$ may come about because of the combination of the angle at which particles in the flowing layer impact the downstream wall, which is perpendicular for a 50\% fill level and oblique for smaller fill levels, and the roughness of the wall, which tends to hold the particles in place strongly for thin layers as they arrive at the downstream end of the flowing layer. These two effects oppose one another, and both vary with different amplitudes for rough and smooth tumblers. Friction with particles at rest is probably the dominant effect for rough tumblers and can explain the decrease in $R$ with increasing fill level. The inclination of the trajectory dominates for a smooth tumbler, leading to an increase of $R$ with increasing fill level.

The case of mixed wall roughness confirms the complex dependence of $D_{cyl}$ on fill level (Figs.~S18 to S20 \cite{Supplemental}). Similar to fully smooth and fully rough tumblers, the topography is unaffected by fill level variations, so $D_{ew}$ does not vary with fill level.
For a rough cylindrical wall and smooth endwalls, the curvature is almost unaffected by the fill level (Fig.~S19 \cite{Supplemental}). Hence, the slight increase of the endwall cell size with decreasing fill level is due solely to an increase of the $R$ coefficient. The same increase of $R$ is expected for the fully rough tumbler, but as the rough endwall induces an increase of the curvature with decreasing fill level, both effects combine to produce a strong decrease in the size of the endwall cell. For a tumbler with a smooth cylindrical wall and rough endwalls, the curvature variation with fill level is similar to that of a fully smooth tumbler (Fig.~S19 \cite{Supplemental}). The variation of the $R$ coefficient with fill level is small for a tumbler with rough endwalls and a smooth cylindrical wall (Fig.~S17 \cite{Supplemental}). As a consequence, the decrease of the endwall cell size with fill level is small and similar to that of a fully smooth tumbler (Fig.~S18 \cite{Supplemental}). \section{Conclusions} Recirculation cells in cylindrical tumblers are a consequence of the axial displacement and the associated drifts created by the friction on both the tumbler endwalls and the tumbler cylindrical wall. Endwall friction induces trajectory curvature and drift of surface particles toward the endwall, and increasing endwall roughness induces more curved particle trajectories, while cylindrical wall roughness reduces the curvature (Fig.~\ref{trajmixed}). On the other hand, axial particle drift is mainly controlled by the cylindrical wall roughness. For the endwall cells, a smooth cylindrical wall enhances drift, while a rough cylindrical wall reduces drift (Fig.~\ref{trajmixed}). For long enough tumblers, the opposite occurs for central cells. In either case, this drift induces recirculation cells, with the endwall cell having a surface flow directed toward the endwall, and a central recirculation cell having surface flow directed toward the center of the tumbler (Figs.~\ref{displacementmap} and~\ref{maphybbyhsurf}). The concept of two opposing drifts, endwall-induced drift linked to the free surface topography and cylindrical wall-induced drift linked to both the trajectory curvature and a coefficient accounting for the cylindrical wall roughness, provides a framework for understanding the effects of wall roughness, rotation speed, and fill level on the recirculation cells. Endwall drift dominates near the endwall, inducing the recirculation cell with a surface flow directed toward the endwall. Both drifts decrease moving toward the tumbler center, but the endwall effect decreases more rapidly than the cylindrical wall effect. For long enough tumblers, drift due to the cylindrical wall can become dominant, resulting in a new pair of recirculation cells adjacent to the endwall cells. Since endwall cells are due to the endwall drift dominating and central cells are due to the cylindrical drift dominating, there is no hydrodynamic coupling between them and no additional cells appear in longer tumblers. In spherical and double-cone tumblers, the trajectory curvatures are reversed compared to the cylindrical tumbler case, so both drifts are directed toward the pole. As a result, only one pair of cells is observed in these geometries. These results have implications for both studies of and applications for cylindrical tumblers.
While it has been known that curved particle trajectories near the endwalls extend about $D/2$ from the endwalls \cite{ChenOttino08,PohlmanMeier06} and that recirculation cells related to axial drift can occur that extend even further from the endwalls \cite{DOrtonaThomas18}, here we confirm those results in terms of axial drift and particle trajectory curvature, considering the additional effect of wall roughness. The implication is that wall roughness and tumbler length matter in studies of granular flows in tumblers as well as in their application in industry. In fact, cylindrical wall roughness is sometimes intentionally added to industrial tumblers in the form of ``lifters," which are axially oriented bars that protrude slightly from the inner wall of the cylinder to prevent slip of the bed of particles with respect to the tumbler. The results of this study provide useful guidelines for investigations of granular flow in tumblers, both in experimental work and in DEM simulations, as well as for the practical implications of smooth and rough walls in industrial applications of tumblers. \section*{Acknowledgments} RML and UDO thank CNRS-PICS for financial support. Centre de Calcul Intensif d'Aix-Marseille University is acknowledged for granting access to its high performance computing resources.
\subsection*{DATA AVAILABILITY} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} Coordinated multi-point transmission and reception (CoMP) has emerged as one of the key technologies for fifth generation (5G) and beyond 5G communications. In a downlink CoMP system, the base stations (BSs) in a CoMP cluster jointly assign dedicated resources to cell-edge users and prohibit the use of the same resources by other non-CoMP users \cite{yogi}. Thus, the throughput of the network is reduced by joint transmission CoMP \cite{yogi}. However, the loss in throughput due to CoMP can be compensated by considering non-orthogonal multiple access (NOMA). In NOMA, two users associated with a BS with a suitable difference in channel gains can be paired in the power domain. Superposition coding (SC) and successive interference cancellation (SIC) are used in NOMA at the transmitter and receiver, respectively \cite{mouni}. In an ultra-dense network (UDN), a large number of small cells is deployed, which in turn reduces the distance between users and BSs. This BS densification improves the overall spectral efficiency at the cost of increased interference from neighbouring BSs \cite{l20}. Several existing works have considered NOMA \cite{l3,l5,l15} and CoMP \cite{l19}, individually, for UDNs. However, CoMP and NOMA together have not been studied in detail for UDNs. There are key implementation issues, such as user grouping, user pairing, and the order in which NOMA and CoMP are implemented, which are non-obvious. As mentioned in \cite{aaa}, a CoMP user cannot act as both a strong and a weak user when paired with multiple non-CoMP users. Given such conditions, pairing of CoMP users and non-CoMP users is a non-trivial task. Recently, in \cite{ourpaper}, a user grouping and pairing scheme for a CoMP--NOMA-based system has been considered for typical user and BS densities. However, the performance also depends on the order of implementation of CoMP and NOMA (NOMA--CoMP, CoMP--NOMA, etc.). Motivated by this, we investigate the effect of the CoMP and NOMA implementation order on the UDN by proposing multiple user grouping and pairing schemes. The main contributions of this paper are as follows: \begin{enumerate} \item We propose user grouping and pairing schemes that differ in the order of implementation of NOMA and CoMP and in the types of permissible user pairs. \item We analyze the CoMP and NOMA based systems using a proportionally fair scheduler. We believe this is the first paper that considers proportionally fair scheduling for a CoMP and NOMA based UDN. \item We investigate the effect of the average cluster size and the CoMP signal-to-interference-plus-noise ratio (SINR) threshold on the average throughput of the proposed system. \item We present detailed simulation results comparing the performance of the proposed schemes with state-of-the-art schemes for various user and BS densities. \end{enumerate} The organization of the paper is as follows. Section \ref{system_model} describes the system model in detail. Section \ref{pairing} presents the user grouping and pairing schemes proposed in this paper. The simulation results are discussed in Section \ref{results}. The paper is concluded in Section \ref{conclusion}. \section{System Model} \label{system_model} Consider a UDN with users and BSs deployed randomly with densities $\lambda_u$ and $\lambda_b$, respectively, following a homogeneous Poisson point process (PPP) \cite{ppp}, as shown in Fig.~\ref{fig:system_model}.
Let $\pazocal{M}=\lbrace 1,2,...,M \rbrace$ and $\pazocal{B}=\lbrace 1,2,...,B \rbrace$ be the sets of subchannels and BSs, respectively. The users are associated with a BS $b$ based on the maximum received power rule \cite{yogi}. \subsection{Channel Model} Assuming time division duplexing, the downlink SINR of user $i$ on subchannel $m$ from BS $b$ for a maximum transmit power $P^b$ is given as \begin{equation} \label{eq1} \gamma_i^{b,m}=\frac{P^{b,m} g_i^{b,m}}{\sum\limits_{\substack {\hat{b} \neq b \\ \hat{b} \in \pazocal{B}}}P^{\hat{b},m}g_{i}^{\hat{b},m} + \sigma^2 } \, , \end{equation} where $P^{b,m} = P^{b}/M$ is the power transmitted per subchannel $m$, $\forall \ m \in \pazocal{M}$, $M$ is the total number of subchannels, $g_i^{b,m}$ is the channel gain between user $i$ and BS $b$, ${\sum\limits_{\substack {\hat{b} \neq b \\ \hat{b} \in \pazocal{B}}}P^{\hat{b},m}g_{i}^{\hat{b},m}}$ is the interference on subchannel $m$ from neighbouring BSs, and $\sigma^2$ is the noise power. For a distance $d_i^b$ between user $i$ and BS $b$, the channel gain can be represented as \begin{equation} \label{eq2} g_i^{b,m}=10^{\frac{-pl(d_i^b)+g_t+g_r-f_s-v}{10}}, \end{equation} where $pl(d_i^b)$, $g_t$, $g_r$, $f_s$, and $v$ are the path loss of user $i$ at a distance $d^b_i$, the transmitter gain, the receiver gain, the shadowing loss, and the penetration loss, respectively. The link rate of user $i$ with respect to BS $b$ is given as \begin{equation} \label{eq3} r_i^{b}=\frac{\eta(\gamma_i^{b,m}) sc_o sy_o}{t_{sc}} M, \end{equation} where $\eta(\gamma_i^{b,m})$ is the spectral efficiency of user $i$ obtained from the adaptive modulation and coding scheme as in \cite{yogi}. Further, $sc_o$, $sy_o$, and $t_{sc}$ represent the number of subcarriers per subchannel, the number of symbols per subcarrier, and the subframe duration (in seconds), respectively. \subsection{CoMP} We consider $\pazocal{C}=\lbrace 1,2,...,C \rbrace$ as the set of CoMP clusters in the area under consideration. We use the \textit{K-means} approach for cluster formation \cite{kmeans}; however, any other clustering approach can also be used. Let the set of BSs in CoMP cluster $c$ be denoted by $\pazocal{B}_c=\lbrace 1,2,...,B_c \rbrace$. For a cluster $c$, the CoMP and non-CoMP users are decided based on the SINR threshold ($\gamma_{th}$). If $\gamma_i^{b,m}<\gamma_{th}$, then user $i$ is designated as a CoMP user; otherwise, it is treated as a non-CoMP user. The SINR of a CoMP user $i$ in a cluster $c$ is given by \cite{yogi} \begin{equation} \label{eq4} \gamma_{i}^{c,m}=\dfrac{\sum\limits_{\substack{l \in \pazocal{B}_{c}}}P^{l,m}g_{i}^{l,m}}{\sum\limits_{\substack{\hat{l} \in \pazocal{B} \\ \hat{l} \not\in \pazocal{B}_{c}}}P^{\hat{l},m}g_{i}^{\hat{l},m} + \sigma^{2}}, \, \end{equation} where ${\sum\limits_{\substack{\hat{l} \in \pazocal{B} \\ \hat{l} \not\in \pazocal{B}_{c}}}P^{\hat{l},m}g_{i}^{\hat{l},m}}$ is the interference from the BSs of neighbouring clusters and $\sum\limits_{\substack{l \in \pazocal{B}_{c}}}P^{l,m}g_{i}^{l,m}$ is the received power of user $i$ from all BSs in cluster $c$.
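For concreteness, the per-subchannel SINR computations in (\ref{eq1}) and (\ref{eq4}) reduce to a few lines of numpy, as sketched below. The channel gains are arbitrary placeholders; the transmit power, number of subchannels, and subchannel bandwidth follow the simulation setup of Table~\ref{table}. This is an illustrative sketch, not the exact simulator used for the results.
\begin{verbatim}
import numpy as np

def oma_sinr(g, serving, p_sub, noise):
    """OMA SINR of a user on one subchannel: g holds the linear channel
    gains to all BSs, `serving` is the serving-BS index, p_sub the
    per-subchannel transmit power (W), and noise the noise power (W)."""
    interference = p_sub * (g.sum() - g[serving])
    return p_sub * g[serving] / (interference + noise)

def comp_sinr(g, cluster, p_sub, noise):
    """CoMP SINR: all BSs in `cluster` (an index array) transmit jointly
    to the user, while the remaining BSs contribute interference."""
    in_c = np.zeros(g.size, dtype=bool)
    in_c[cluster] = True
    return p_sub * g[in_c].sum() / (p_sub * g[~in_c].sum() + noise)

# Toy numbers: 5 BSs, 24 dBm total power over M = 100 subchannels,
# -174 dBm/Hz noise over a 180 kHz subchannel (simulation-setup values).
g = np.array([2e-10, 8e-11, 5e-11, 1e-11, 6e-12])
p_sub = (10 ** (24 / 10) * 1e-3) / 100
noise = 10 ** (-174 / 10) * 1e-3 * 180e3
print(oma_sinr(g, serving=0, p_sub=p_sub, noise=noise))
print(comp_sinr(g, cluster=np.array([0, 1, 2]), p_sub=p_sub, noise=noise))
\end{verbatim}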
Let $\theta_c$ be the time duration for which CoMP user $i$ jointly receives information from all BSs in cluster $c$ \cite{yogi}; then the resultant downlink rate, denoted by $\lambda_i^{c}$, is given by \begin{equation}\label{eq6} \lambda_{i}^{c} = \theta_{c}\beta_{i}^{c}r_{i}^{c}, \forall i \in \pazocal{I}_c,\, \end{equation} where $\pazocal{I}_c$ is the set of CoMP users in cluster $c$, $\beta_{i}^{c}$ is the downlink time fraction for which the scheduler assigns all $M$ subchannels to user $i$, and $r_{i}^{c}$ is the link rate of CoMP user $i$. Similarly, the downlink rate for a non-CoMP user is \begin{equation}\label{61} \lambda_{i}^{b} = (1-\theta_{c})\beta_{i}^{b}r_{i}^{b}, \forall i \in \pazocal{I}_{nc}\;\text{and}\;\forall b \in \pazocal{B}_{c},\, \end{equation} where $\pazocal{I}_{nc}$ is the set of non-CoMP users in a cluster, $\beta_{i}^{b}$ is the user scheduling time fraction for BS $b$, and $r_{i}^{b}$ is as in (\ref{eq3}). The optimal user scheduling time fractions for CoMP and non-CoMP users, $\beta_i^{c}$ and $\beta_{i}^{b}$, respectively, are computed as in \cite{yogi}. \begin{figure}[t] \centering \includegraphics[width=8.5cm,height=10cm,keepaspectratio]{system_model.eps} \caption{System Model} \label{fig:system_model} \end{figure} \subsection{NOMA} We consider power-domain NOMA along with CoMP for an ultra-dense network. Two users are paired as per the NOMA scheme based on the minimum SINR criterion as in \cite{mouni}. Let $\gamma_w^{b,m}$ and $\gamma_s^{b,m}$ be the OMA SINRs of the weak user $w$ and the strong user $s$, respectively, computed using (\ref{eq1}). Then, \begin{equation} \label{eq7a} \hat{\gamma}_s^{b,m}=\frac{\zeta_s P^{b,m} g_s^{b,m}}{\sum\limits_{\substack{\hat{b} \in \pazocal{B} \backslash b}}P^{\hat{b},m}g_s^{\hat{b},m} + \sigma^{2}}, \end{equation} \begin{equation} \label{eq7b} \hat{\gamma}_w^{b,m}=\frac{(1-\zeta_s) P^{b,m} g_w^{b,m}}{\zeta_s P^{b,m}g_w^{b,m}+\sum\limits_{\substack{\hat{b} \in \pazocal{B} \backslash b}}P^{\hat{b},m}g_w^{\hat{b},m} + \sigma^{2}}, \end{equation} where $\hat{\gamma}_s^{b,m}$ and $\hat{\gamma}_w^{b,m}$ are the SINRs of the strong user with perfect SIC and of the weak user, respectively, after NOMA pairing, $\zeta_s$ is the power fraction allocated to the strong user, which is computed as in \cite{mouni}, $P^{b,m}$ is the total power assigned to the NOMA pair, and $g_s^{b,m}$ and $g_w^{b,m}$ are the channel gains of the strong and weak user, respectively. In this paper, we consider the adaptive user pairing (AUP) algorithm proposed in \cite{mouni}. Next, we explain the proposed NOMA and CoMP schemes. \section{NOMA and CoMP for UDN} \label{pairing} There are three kinds of NOMA pairs possible based on the users present in a cluster: CoMP--CoMP, (non-CoMP)--CoMP, and (non-CoMP)--(non-CoMP). We propose two pairing schemes to analyze the performance of a CoMP and NOMA based UDN. We also use the scheme proposed in \cite{ourpaper} to study the CoMP and NOMA based UDN. \subsection{\textit{Scheme A}} While NOMA increases the throughput of the system, CoMP increases the SINR/throughput of cell-edge users \cite{yogi}. Motivated by this, in this scheme, we implement NOMA first, $\forall b \in \pazocal{B}_c$, to enhance the throughput of the system, and then implement CoMP for the unpaired users to enhance their SINR. We pair the users in cluster $c$ using AUP as given in \cite{mouni}.
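The pair SINRs in (\ref{eq7a}) and (\ref{eq7b}) translate directly into code. In the sketch below, the power fraction $\zeta_s$ is supplied as an input (in our setup it is computed as in \cite{mouni}), and the inter-cell interference terms are assumed to be pre-summed over the neighbouring BSs:
\begin{verbatim}
def noma_pair_sinr(g_s, g_w, i_s, i_w, zeta_s, p_sub, noise):
    """SINRs of a strong/weak NOMA pair served by one BS.
    g_s, g_w: serving-BS channel gains of the strong and weak user;
    i_s, i_w: inter-cell interference powers (already summed over the
    neighbouring BSs); zeta_s: power fraction of the strong user."""
    # Strong user with perfect SIC: intra-pair interference removed.
    sinr_s = zeta_s * p_sub * g_s / (i_s + noise)
    # Weak user: the strong user's signal remains as interference.
    sinr_w = (1 - zeta_s) * p_sub * g_w / (zeta_s * p_sub * g_w + i_w + noise)
    return sinr_s, sinr_w
\end{verbatim}
The weak user does not perform SIC, which is why the $\zeta_s$ term remains in the denominator of its SINR.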
After the implementation of NOMA, we consider all the unpaired OMA users (if any) as CoMP users, irrespective of their SINRs (\textit{we do not follow the CoMP SINR threshold criterion to designate CoMP users in this scheme}). We then pair the CoMP users using the same AUP algorithm, resulting in the formation of CoMP--CoMP NOMA pairs and CoMP OMA users (if any). Thus, in this particular scheme, we have (non-CoMP)--(non-CoMP) NOMA pairs, CoMP--CoMP NOMA pairs, and CoMP OMA users (if any). As all unpaired users are considered as CoMP users, no non-CoMP OMA user exists in this scheme. The SINRs of the strong and weak user in a (non-CoMP)--(non-CoMP) NOMA pair are computed using (\ref{eq7a}) and (\ref{eq7b}), respectively. Similarly, the SINR expressions for the strong user with perfect SIC ($\Bar{\gamma}_{s}^{c,m}$) and the weak user ($\Bar{\gamma}_{w}^{c,m}$) in a CoMP--CoMP NOMA pair are, respectively, given as \begin{equation} \label{c1} \Bar{\gamma}_{s}^{c,m}= \frac{\sum\limits_{\substack{t \in \pazocal{B}_{c}}}\zeta_t P^{t,m} g_{s}^{t,m}}{\sum\limits_{\substack{\hat{l} \in \pazocal{B} \\ \hat{l} \not\in \pazocal{B}_{c}}}P^{\hat{l},m}g_{s}^{\hat{l},m} + \sigma^{2}}, \end{equation} \begin{equation} \label{c2} \Bar{\gamma}_{w}^{c,m}= \frac{\sum\limits_{\substack{t \in \pazocal{B}_{c}}} (1-\zeta_t) P^{t,m} g_{w}^{t,m}}{\sum\limits_{\substack{t \in \pazocal{B}_{c}}} \zeta_t P^{t,m}g_{w}^{t,m} + \sum\limits_{\substack{\hat{l} \in \pazocal{B} \\ \hat{l} \not\in \pazocal{B}_{c}}}P^{\hat{l},m}g_{w}^{\hat{l},m} + \sigma^{2}}, \end{equation} where $\zeta_t$ is the fraction of the power assigned to the strong user by BS $t$ in a CoMP--CoMP NOMA pair and $\sum\limits_{\substack{t \in \pazocal{B}_{c}}} \zeta_t P^{t,m}g_{w}^{t,m}$ is the interference due to the strong user. The SINR of a CoMP OMA user is computed using (\ref{eq4}). The CoMP--CoMP NOMA pairs and CoMP OMA users are scheduled in the time fraction $\Bar{\theta}_c$, whereas the (non-CoMP)--(non-CoMP) NOMA pairs of each BS are scheduled in the time fraction ($1-\Bar{\theta}_c$). The users are scheduled in their respective time fractions using a proportionally fair scheduler \cite{yogi}. Let $\Bar{\beta}_i^{c}$ be the scheduling time fraction for CoMP--CoMP NOMA pairs and CoMP OMA users and $\Bar{\beta}_{i}^{b}$ be the scheduling time fraction of the (non-CoMP)--(non-CoMP) NOMA pairs of BS $b$. We use \cite{yogi} to compute $\Bar{\theta}_c$, $\Bar{\beta}_i^{c}$, and $\Bar{\beta}_{i}^{b}$, which are as follows \begin{figure}[t] \centering \includegraphics[width=9cm,height=10.5cm,keepaspectratio]{flowchart.eps} \caption{Illustration of \textit{Scheme A}.}\vspace{-0.1in} \label{fig:scheme_a} \end{figure} \begin{equation} \Bar{\theta}_c = \frac{|\Bar{\pazocal{I}}_c| + |\Bar{\pazocal{I}}_c^{oma}|}{|\Bar{\pazocal{I}}_c| + |\Bar{\pazocal{I}}_c^{oma}| + |\Bar{\pazocal{I}}_{nc}|},\, \end{equation} \begin{equation} \Bar{\beta}_{i}^{c} = \frac{1}{|\Bar{\pazocal{I}}_c| + |\Bar{\pazocal{I}}_c^{oma}|},\, \mbox{and} \, \Bar{\beta}_{i}^{b} = \frac{1}{|\Bar{\pazocal{I}}_{nc}^{b}|},\, \end{equation} where $|X|$ represents the cardinality of set $X$, $\Bar{\pazocal{I}}_c$, $\Bar{\pazocal{I}}_c^{oma}$, and $\Bar{\pazocal{I}}_{nc}$ are the sets of CoMP NOMA pairs, CoMP OMA (unpaired) users, and non-CoMP NOMA pairs, respectively, in cluster $c$, and $\Bar{\pazocal{I}}_{nc}^{b}$ is the set of (non-CoMP)--(non-CoMP) pairs formed with the users associated with BS $b$. These results are derived in \cite{yogi} for a purely CoMP system; hence, they may be sub-optimal for this scheme.
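These cardinality-based fractions are straightforward to evaluate; a minimal sketch, with the set sizes passed in as integers:
\begin{verbatim}
def scheme_a_fractions(n_comp_pairs, n_comp_oma, n_noncomp_pairs_per_bs):
    """Scheme A time fractions: n_comp_pairs = |I_c|, n_comp_oma =
    |I_c^oma|, and n_noncomp_pairs_per_bs is a list with |I_nc^b|
    for each BS b of the cluster."""
    n_c = n_comp_pairs + n_comp_oma
    n_nc = sum(n_noncomp_pairs_per_bs)
    theta_c = n_c / (n_c + n_nc)
    beta_c = 1.0 / n_c if n_c else 0.0
    beta_b = [1.0 / n if n else 0.0 for n in n_noncomp_pairs_per_bs]
    return theta_c, beta_c, beta_b
\end{verbatim}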
A detailed schematic of \textit{Scheme A} is presented in Fig.~\ref{fig:scheme_a}(a). \subsection{\textit{Scheme B}} In this scheme, we implement CoMP first for a cluster $c$ to bring the cell-edge users under coverage and then implement NOMA to boost their rates. To avoid complexities while pairing and to ensure that a CoMP user paired with multiple users acts as either the strong or the weak user with all of them \cite{aaa}, in this scheme we consider (non-CoMP)--CoMP pairs such that the CoMP user is always the weak user in the pair formed. This also spares the CoMP user from performing SIC. Similar pairing can also be performed with the CoMP user as the strong user, at the cost of increased receiver complexity. Therefore, in this scheme, we consider only (non-CoMP)--CoMP pairs with the CoMP user as the weak user and (non-CoMP)--(non-CoMP) NOMA pairs to study the CoMP and NOMA based UDN. Firstly, all the users in cluster $c$ are divided into groups $\textbf{G1}$ and $\textbf{G2}_b$, where $\textbf{G1}$ contains the SINRs of the CoMP users in cluster $c$, $\textbf{G1}=\lbrace \gamma_{1,c}^m, \gamma_{2,c}^m,...,\gamma_{i,c}^m \rbrace$, and $\textbf{G2}_b=\lbrace \gamma_1^{b,m}, \gamma_2^{b,m},...,\gamma_i^{b,m} \rbrace$ contains the SINRs of the non-CoMP users associated with BS $b$ in cluster $c$. $\textbf{G2}_b$ is formed for every BS $b$ in cluster $c$. To apply the AUP algorithm, we need to form two user groups, namely, a weak user group and a strong user group. The SINR of every user in the weak user group should be less than that of every user in the strong user group. Therefore, in this scheme, the necessary condition for pairing the CoMP users in $\textbf{G1}$ is that there should exist at least one user in $\textbf{G1}$ whose SINR is less than the maximum of the SINRs of all users in $\textbf{G2}_b$. The CoMP users which satisfy this condition are eligible to be paired with users in $\textbf{G2}_b$ for a BS $b$. After verifying this condition, we first form a new group $\textbf{G1}_b^{'}$ with those users in $\textbf{G1}$ that satisfy the previously mentioned condition. After forming $\textbf{G1}_b^{'}$, the group $\textbf{G2}_b^{'}$ is formed for each BS $b$ by picking those users from $\textbf{G2}_b$ whose SINR is greater than the maximum SINR of all users in $\textbf{G1}_b^{'}$. The aforementioned procedure is carried out for every BS present in the cluster in an iterative manner until there exists no CoMP user whose SINR is less than that of at least one non-CoMP user in every $\textbf{G2}_b$ formed. Then, the users in both groups are paired using AUP. Therefore, a CoMP user can be a part of NOMA pairs of multiple BSs simultaneously. The unpaired CoMP users are served as OMA users. The non-CoMP users associated with a BS $b$ that are not paired with CoMP users are paired among themselves using the same AUP at every BS. The SINR of the weak CoMP user ($\Tilde{\gamma}_{w}^{c,m}$) in a (non-CoMP)--CoMP NOMA pair is computed using (\ref{eq4}) and (\ref{eq7b}) and is given as follows.
\begin{equation} \label{eq10} \Tilde{\gamma}_{w}^{c,m}= \frac{\sum\limits_{\substack{k \in \Tilde{\pazocal{B}}_c}} (1-\zeta_k) P^{k,m} g_{w}^{k,m} + \sum\limits_{\substack{q \in \pazocal{B}_c \backslash \Tilde{\pazocal{B}}_c}} P^{q,m} g_{w}^{q,m}}{\sum\limits_{\substack{k \in \Tilde{\pazocal{B}}_c}} \zeta_k P^{k,m}g_{w}^{k,m} + \sum\limits_{\substack{\hat{l} \in \pazocal{B} \\ \hat{l} \not\in \pazocal{B}_{c}}}P^{\hat{l},m}g_{w}^{\hat{l},m} + \sigma^{2}}, \end{equation} where $\Tilde{\pazocal{B}}_c$ is the subset of BSs in $\pazocal{B}_c$ with which a CoMP user has formed pairs. Similarly, the SINR of the strong non-CoMP user ($\Tilde{\gamma}_{s}^{k,m}$) in a (non-CoMP)--CoMP NOMA pair with perfect SIC is given as follows. \begin{equation} \label{eq11} \Tilde{\gamma}_{s}^{k,m}= \frac{\zeta_k P^{k,m} g_{s}^{k,m}}{\sum\limits_{\substack{\hat{k} \in \pazocal{B} \backslash k}}P^{\hat{k},m}g_{s}^{\hat{k},m} + \sigma^{2}}, \forall k \in \pazocal{B}_c, \end{equation} where $\zeta_k$ is the power fraction allocated by BS $k$ in the cluster $c$. In addition, one more kind of pairing considered in this scheme is (non-CoMP)--(non-CoMP). The SINRs of the users involved in (non-CoMP)--(non-CoMP) pairing can be computed using (\ref{eq7a}) and (\ref{eq7b}). \begin{figure}[t] \centering \includegraphics[width=9cm,height=11cm,keepaspectratio]{scheme_1.eps} \caption{User grouping and pairing based on \textit{Scheme B}.}\vspace{-0.19in} \label{fig:scheme_1} \end{figure} Note that a CoMP user can be paired with more than one non-CoMP user (each non-CoMP user associated with a different BS in cluster $c$), thus forming a NOMA-CoMP cluster as shown in Fig.~\ref{fig:scheme_1}. The (non-CoMP)--CoMP NOMA pairs and CoMP OMA users (if any) are scheduled in the CoMP time fraction $\Tilde{\theta}_c$. Each pair is scheduled using a proportionally fair scheduler. For illustration, suppose that a CoMP user $i \in \pazocal{I}_{c}$ is paired with three non-CoMP users associated with three different BSs in the cluster $c$. The CoMP user $i$ is served by all the BSs in the cluster in the duration $\Tilde{\theta}_c$ within the scheduled time fraction $\Tilde{\beta}_{i}^{c}$. In the same duration, the three non-CoMP users are served by their respective BSs. During the remaining $(1-\Tilde{\theta_c})$ duration, each BS in cluster $c$ schedules its respective non-CoMP user pairs and non-CoMP OMA users (if any) using a proportionally fair scheduler within the scheduled time fractions \{$\Tilde{\beta}_{i}^{b}$\}. The non-CoMP users that are paired with CoMP users are not served in the duration $(1-\Tilde{\theta_c})$. The expressions for $\Tilde{\theta}_c$, $\Tilde{\beta}_{i}^{c}$, and $\Tilde{\beta}_{i}^{b}$, obtained using \cite{yogi}, are given, respectively, as follows.
\begin{equation}\label{eq:comp_tf1} \Tilde{\theta}_c = \frac{|\Tilde{\pazocal{I}}_c| + |\Tilde{\pazocal{I}}_{c}^{oma}|}{|\Tilde{\pazocal{I}}_c| + |\Tilde{\pazocal{I}}_{c}^{oma}| + |\Tilde{\pazocal{I}}_{nc}| + |\Tilde{\pazocal{I}}_{nc}^{oma}| }, \end{equation} \begin{equation}\label{eq:beta_1} \Tilde{\beta}_{i}^{c} = \frac{1}{|\Tilde{\pazocal{I}}_c| + |\Tilde{\pazocal{I}}_{c}^{oma}|},\, \mbox{and}\, \Tilde{\beta}_{i}^{b} = \frac{1}{|\Tilde{\pazocal{I}}_{nc}^{b}| + |\Tilde{\pazocal{I}}_{nc}^{b,oma}|},\, \end{equation} where $\Tilde{\pazocal{I}}_c$ is the set of (non-CoMP)--CoMP pairs in cluster $c$, $\Tilde{\pazocal{I}}_{c}^{oma}$ is the set of OMA CoMP users in cluster $c$, $\Tilde{\pazocal{I}}_{nc}$ is the set of (non-CoMP)--(non-CoMP) pairs in cluster $c$, $\Tilde{\pazocal{I}}_{nc}^{oma}$ is the set of OMA non-CoMP users in cluster $c$, $\Tilde{\pazocal{I}}_{nc}^{b}$ is the set of (non-CoMP)--(non-CoMP) pairs associated with BS $b$, and $\Tilde{\pazocal{I}}_{nc}^{b,oma}$ is the set of OMA non-CoMP users associated with BS $b$. \subsection{\textit{Scheme Proposed in \cite{ourpaper}}} The scheme in \cite{ourpaper} differs from \textit{Scheme B} in the types of user pairs formed. Further, the scheme in \cite{ourpaper} was proposed for a typical cellular scenario, whereas we evaluate its performance for a UDN in this paper. In this scheme, only CoMP--CoMP and (non-CoMP)--(non-CoMP) NOMA pairs are formed. The SINRs of the weak and strong CoMP users in a CoMP--CoMP NOMA pair are given in \cite{ourpaper}. Similarly, the SINRs of the strong and weak users of a (non-CoMP)--(non-CoMP) NOMA pair can be computed using (\ref{eq7a}) and (\ref{eq7b}), respectively. In this scheme, there is no scope for a NOMA-CoMP cluster as in \textit{Scheme B}. The CoMP--CoMP pairs are scheduled in the duration $\Hat{\theta}_c$. Each CoMP--CoMP NOMA pair or OMA CoMP user (if any) is given a time fraction $\Hat{\beta}_{i}^{c}$, whereas the (non-CoMP)--(non-CoMP) NOMA pairs are served by their respective BSs in the duration $(1-\Hat{\theta}_c)$ with a proportionally fair scheduler, and each pair is given a time fraction $\Hat{\beta}_{i}^{b}$. The expressions for $\Hat{\theta}_c$, $\Hat{\beta}_{i}^{c}$, and $\Hat{\beta}_{i}^{b}$ are given in \cite{ourpaper}. A detailed schematic of this scheme is presented in Fig.~\ref{fig:scheme_a}(b). \section{Results and Discussions} \label{results} \begin{table}[t] \caption{Simulation Setup} \begin{center} \begin{tabular}{| m{4.4cm} | m{3.3cm}|} \hline \textbf{\textit{Parameter}}& \textbf{\textit{Value}} \\ \hline Area under consideration ($\text{km}^2$) & 1\\ \hline AWGN power spectral density (dBm/Hz) & $-174$ \\ \hline Base station density, $\lambda_{b}$ (/$\text{km}^2$) & $100,200,300,400,500$\\ \hline Number of subcarriers per subchannel, $sc_o$ & $12$ \\ \hline Number of symbols per subcarrier, $sy_o$ & $14$ \\ \hline Average cluster size & $5,8,10$ \\ \hline Number of iterations & $10^5$ \\ \hline Standard deviation of shadowing random variable (dB) & 8\\ \hline Subchannel bandwidth (kHz) & $180$ \\ \hline Total number of subchannels, $M$ & $100$ \\ \hline Transmission power, $P^{b}$ (dBm) & $24$ \\ \hline User density, $\lambda_u$ (/$\text{km}^2$) & $100,200,400,600,1000$\\ \hline \end{tabular}\vspace{-0.15in} \label{table} \end{center} \end{table} The parameters considered for simulation are summarized in Table \ref{table}. In this paper, we consider the expressions for average throughput and coverage as in \cite{yogi}. Fig.
\ref{fig:throughput_1} shows the variation of the average throughput with respect to $\lambda_u$ for $\lambda_b = 500/\text{km}^2$. It is observed that for $\lambda_u \leq 600$/km$^2$, \textit{Scheme A} outperforms \textit{Scheme B} and the scheme proposed in \cite{ourpaper}, as well as the conventional schemes. This is due to the formation of very few NOMA pairs (or even no NOMA pairs) at the initial stage of NOMA implementation, particularly for lower $\lambda_u$. As we do not use $\gamma_{th}$ to separate CoMP users in \textit{Scheme A}, a large number of unpaired OMA users are considered as CoMP users, unlike in \textit{Scheme B} and the scheme in \cite{ourpaper}. The pairing of such CoMP users using NOMA results in an increase in the average throughput of \textit{Scheme A}. For a given $\lambda_b$, at relatively higher $\lambda_u$, the number of users per BS increases, which leads to an increase in the number of non-CoMP NOMA pairs and a decrease in the number of unpaired non-CoMP OMA users. Therefore, at higher $\lambda_u$, NOMA performs better than \textit{Scheme A}. \begin{figure}[t] \centering \includegraphics[width=9.5cm,height=12cm,keepaspectratio]{average_throughput_1.eps} \vspace{-0.15in} \caption{Variation of throughput with $\lambda_u$ for $\lambda_{b} = 500/\text{km}^2$.} \label{fig:throughput_1} \vspace{-0.25in} \end{figure} We present the variation of the average throughput of the system for various $\lambda_b$ and for $\lambda_u = 400$/km$^2$ in Fig.~\ref{fig:throughput_2}. For a given $\lambda_u$, at comparatively lower $\lambda_b$, more users can be associated with a single BS. Therefore, there is a higher possibility of non-CoMP user NOMA pairs being formed. However, with an increase in $\lambda_b$, the number of users per BS decreases. Therefore, the possibility of non-CoMP NOMA pairs being formed decreases. Hence, the number of unpaired non-CoMP OMA users increases with an increase in $\lambda_b$. Therefore, the average throughput of \textit{Scheme A} is superior for higher $\lambda_b$ as compared to lower $\lambda_b$. The relatively lower performance of \textit{Scheme B} and the scheme proposed in \cite{ourpaper} in Fig.~\ref{fig:throughput_1} and Fig.~\ref{fig:throughput_2} can be attributed to the threshold-based CoMP implementation in the initial stage. Due to this SINR threshold based CoMP, the time fraction available to the users is reduced, because $\hat{\theta}_c$ and $\Tilde{\theta}_c$ start increasing for lower values of $\lambda_b$. The (non-CoMP)--CoMP pairing in \textit{Scheme B} further reduces the time fraction available for some of the non-CoMP users. Hence, its average throughput performance is the worst when compared to all other schemes. At higher $\lambda_b$, the number of users per BS decreases, which in turn decreases the number of non-CoMP NOMA pairs. Therefore, for \textit{Scheme B} and the scheme proposed in \cite{ourpaper}, the average throughput is similar to that of the CoMP-only system.
\begin{figure}[t] \centering \includegraphics[width=8.5cm,height=10.5cm,keepaspectratio]{average_throughput_2.eps} \vspace{-0.15in} \caption{Variation of throughput with $\lambda_b$ for $\lambda_{u} = 400/\text{km}^2$.} \label{fig:throughput_2} \vspace{-0.1in} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8.5cm,height=10.5cm,keepaspectratio]{average_throughput_3.eps} \vspace{-0.15in} \caption{Variation of throughput with $\gamma_{th}$ for $\lambda_{u} = 200/\text{km}^2$ and $\lambda_{b} = 200/\text{km}^2$.} \vspace{-0.12in} \label{fig:throughput_3} \end{figure} \begin{figure}[t] \centering \vspace{-0.11in} \includegraphics[width=8.5cm,height=10.5cm,keepaspectratio]{average_throughput_4.eps} \caption{Variation of throughput with the average cluster size for $\lambda_{u} = 200/\text{km}^2$ and $\lambda_{b} = 200/\text{km}^2$.} \vspace{-0.15in} \label{fig:throughput_4} \end{figure} Fig.~\ref{fig:throughput_3} presents the variation of the average throughput with respect to $\gamma_{th}$. We observe that \textit{Scheme A}, the benchmark, and the NOMA-only system maintain a constant average throughput for all values of $\gamma_{th}$, whereas a downward trend is observed for \textit{Scheme B}, for the scheme proposed in \cite{ourpaper}, and for the CoMP-only system with an increase in $\gamma_{th}$. However, the drop in the average throughput for the CoMP-only system is not as steep as it is for \textit{Scheme B} and for the scheme proposed in \cite{ourpaper}. The reason for such a marginal drop is that, at such high $\lambda_b$, an increase in the CoMP threshold may not increase the number of CoMP users significantly enough to make a large difference in throughput. However, there is a marginal increase in the number of CoMP users, because of which there is a marginal drop in the average throughput. Nevertheless, for \textit{Scheme B} and for the scheme proposed in \cite{ourpaper}, this marginal increase in the number of CoMP users leads to an increase in the number of (non-CoMP)--CoMP and CoMP--CoMP NOMA pairs, respectively. Therefore, $\Hat{\theta}_c$ and $\Tilde{\theta}_c$ increase, due to which the drop in the average throughput is much steeper than that of the CoMP-only system. The variation of the average throughput with respect to the average CoMP cluster size is shown in Fig.~\ref{fig:throughput_4}. As the average cluster size increases, the performance of \textit{Scheme A} starts deteriorating. This is due to the increase in the number of NOMA pairs as well as unpaired OMA users that are considered as CoMP users in \textit{Scheme A}. With the increase in the number of unpaired OMA users, $\Bar{\theta}_c$ gradually increases. Hence, the performance of \textit{Scheme A} becomes worse than that of the scheme proposed in \cite{ourpaper} as the cluster size increases. \begin{figure}[t] \centering \vspace{-0.11in} \includegraphics[width=8.5cm,height=10.5cm,keepaspectratio]{coverage_1.eps} \caption{Variation of coverage with $\lambda_b$.}\vspace{-0.15in} \label{fig:coverage} \end{figure} Fig.~\ref{fig:coverage} presents the variation of the coverage of the various schemes with respect to $\lambda_b$. We can observe that the coverage of the CoMP and NOMA based systems is less than that of the benchmark and CoMP-only systems at lower $\lambda_b$. However, their coverage is comparable to or slightly greater than that of the NOMA-only system. \textit{Scheme A} performs better than the other schemes in terms of throughput under certain conditions, but its coverage is lower.
\textit{Scheme B}'s performance in terms of throughput is the worst. However, its coverage is slightly better than that of NOMA and the other two schemes. Thus, the proposed schemes offer a trade-off between coverage and throughput. \section{Conclusion} \label{conclusion} In this paper, we have proposed multiple user grouping and pairing schemes to study the performance of a CoMP and NOMA based UDN. The proposed schemes differ not only in the order of implementation of NOMA and CoMP but also in the kinds of permitted NOMA pairs. We have compared the performance of these schemes with the conventional OMA-based benchmark, CoMP-only, and NOMA-only systems, and with the state-of-the-art. Among the proposed schemes, \textit{Scheme A} results in enhanced throughput when compared to its counterparts for lower $\lambda_u$ and higher $\lambda_b$. The coverage of all three schemes is less than that of the benchmark and CoMP-only systems for lower $\lambda_b$. \textit{Scheme B} performs marginally better than the other schemes and the NOMA-only system in terms of coverage. The proposed schemes can be used by cellular network planners to appropriately deploy UDNs. \section{Acknowledgement} This work was supported in part by the Indo-Norwegian Collaboration in Autonomous Cyber-Physical Systems (INCAPS)--project: 287918 of the INTPART program, the Low-Altitude UAV Communication and Tracking (LUCAT)--project: 280835 of the IKTPLUSS program from the Research Council of Norway, the Department of Science and Technology (DST), Govt. of India (Ref. No. INT/NOR/RCN/ICT/P-01/2018), and DST NMICPS through the TiHAN Faculty Fellowship of Dr. Abhinav Kumar.
\section{Introduction} \label{sec:emp_intro} \input{emp_intro} \input{emp_body} \section{Conclusion} \label{sec:emp_conclu} \input{emp_conclu} \section*{Acknowledgements} We thank the anonymous referee for insightful feedback, Alina Böcker, Christoph Engler, and Mattis Magg for useful discussions and inputs. Special thanks go to Michael Hayden for providing the raw data that allowed us to reproduce the MDFs from \citet{Hayden:2015aa} and Sven Buder for providing the analysis routine and raw data from the GALAH survey Data Release 3 to compare the MDFs. The authors acknowledge financial support from DFG via the Collaborative Research Center (SFB 881, Project-ID 138713538) `The Milky Way System' (subprojects A1, B1, B2, B8). SCOG and RSK also acknowledge funding from the Heidelberg cluster of excellence (EXC 2181 - 390900948) `STRUCTURES: A unifying approach to emergent phenomena in the physical world, mathematics, and complex data', and from the European Research Council in the ERC synergy grant `ECOGAL -- Understanding our Galactic ecosystem: From the disk of the Milky Way to the formation sites of stars and planets' (project ID 855130). The analysis was carried out on the ISAAC and VERA computers at the Max Planck Computing and Data Facility (MPCDF). We also acknowledge the HPC resources and data storage service SDS@hd supported by the Ministry of Science, Research and the Arts Baden-W\"{u}rttemberg (MWK) and the German Research Foundation (DFG) through grant INST 35/1314-1 FUGG and INST 35/1503-1 FUGG. \section*{Software} \href{https://github.com/illustristng/illustris_python}{illustris\_python}, matplotlib \citep{Hunter:2007aa}, numpy \citep{Harris:2020aa}, pandas \citep{McKinney:2010aa,Reback:2022aa}, python \citep{python09}, scipy \citep{Virtanen:2020aa}. \section*{DATA AVAILABILITY} As of February 1st, 2021, data of the TNG50 simulation series are publicly available from the IllustrisTNG repository: \href{https://www.tng-project.org}{https://www.tng-project.org} and described by \citealt{Nelson:2019aa}. Data directly related to content and figures of this publication will be shared upon reasonable request to the corresponding author. \bibliographystyle{mnras} \section{Methods} \label{sec:emp_met} In the following sections, we describe the TNG50 simulation and how the MW/M31-like systems are identified. We also explain the post-processing procedure to partition each system into stellar morphological components. \subsection{The TNG50 simulation} \label{sec:tng50} TNG50 \citep{Nelson:2019ab,Pillepich:2019aa} is the highest resolution simulation in the IllustrisTNG\footnote{The simulations of the IllustrisTNG project are fully publicly available and described in \citet{Nelson:2019aa}.} simulation suite \citep{Marinacci:2018aa,Naiman:2018aa,Nelson:2019aa,Pillepich:2018ab,Springel:2018aa}. The simulation starts at $z=127$ and runs until $z=0$ following the \citet{Planck15XIII} cosmology ($h = 0.6774$, $\Omega_{\Lambda,0} = 0.6911$, $\Omega_{\rm m,0} = 0.3089$, $\Omega_{\rm b,0} = 0.0486$, $\sigma_8 = 0.8159$, and $n_{\rm s} = 0.9667$). It has a dark matter mass resolution of $3.1 \times 10^5\,h^{-1}{\rm M}_\odot$ and a baryonic mass resolution of $5.8 \times 10^4 \,h^{-1}{\rm M}_\odot$ with a comoving volume of $35^{3}\,h^{-3}$Mpc$^{3}$. Stellar particles therefore represent $\lesssim 10^5\,{\rm M}_\odot$ mono-age stellar populations. TNG50 is run with the moving mesh code \textsc{arepo} \citep{Springel:2010aa}.
It includes a large number of physical processes such as primordial and metal-line cooling, heating by the extragalactic UV background, stochastic, gas-density threshold-based star formation, evolution of stellar populations represented by star particles, chemical feedback from supernovae and AGB stars, and supermassive black hole (SMBH) formation and feedback. The details of the model are described in \citet{Weinberger:2017aa} and \citet{Pillepich:2018aa}. Importantly, TNG50 is a cosmological simulation, whereby the coupled equations of gravity, magnetohydrodynamics, and galaxy formation processes are solved in an expanding universe, so that the hierarchical growth of structure is also fully accounted for. Moreover, stellar particles are allowed to age, to lose mass, and to enrich the surrounding interstellar medium (ISM) with metals, so that subsequent episodes of star formation in the simulated galaxies produce a progressively more metal-enriched ISM and star particles with progressively higher initial metallicities. \subsection{MW/M31 analogues in TNG50} \label{sec:mwm31} To identify analogues of the MW and M31 in TNG50, we follow the criteria already proposed by \citet{Engler:2021aa,Pillepich:2021aa} and described in great detail by Pillepich et al. (in preparation): \begin{enumerate} \item the galaxy has a stellar mass, $M_* (< 30$~kpc), in the range of $10^{10.5-11.2}{\rm M}_\odot$, \item its 3D minor-to-major axial ratio $s$ of the stellar mass distribution is less than 0.45 or it appears disky by visual inspection, \item there are no other galaxies with $M_* > 10^{10.5}{\rm M}_\odot$ at a distance of less than 500\,kpc, \item the mass of the underlying host dark matter halo is $< 10^{13}{\rm M}_\odot$. \end{enumerate} This leads to the identification of 198 MW/M31-like galaxies in TNG50 at $z=0$, including a majority with stellar bars \citep[][Pillepich et al. in preparation]{Frankel:2022aa}, with more or less massive and extended bulges \citep{Gargiulo:2022aa}, and with a diversity of past merger histories \citep{Sotillo-Ramos:2022aa}. Each of these galaxies is surrounded by a more-or-less numerous population of satellite galaxies \citep{Engler:2021aa}: throughout this paper, we consider as MW/M31-like satellites those galaxies identified by the \textsc{subfind} algorithm \citep{Springel:2001aa,Dolag:2009aa} that fulfil the following criteria: \begin{enumerate} \item the galaxy is within 300\,kpc (3D) of one of the MW/M31 analogues, \item it has a stellar mass $\ge 5 \times 10^{6}{\rm M}_\odot$ within 2 times its stellar half-mass radius. \end{enumerate} We limit the distances of satellites to the host galaxy to the value typically adopted when quantifying the demographics of the MW's and M31's satellites \citep{McConnachie:2012aa, McConnachie:2018aa}. Moreover, 300\,kpc is slightly larger than, but comparable to, the typical virial radius of MW/M31-mass haloes. In TNG50, a minimum stellar mass of $5 \times 10^{6}{\rm M}_\odot$ is equivalent to resolving a galaxy with at least 63 star particles: we impose this minimum as it ensures completeness in the underlying halo mass \citep[see Fig.~A2 in ][]{Engler:2021aa}. The number of satellites that fulfil the above selection criteria varies among the 198 MW/M31 analogues from 0 to 20 \citep[see Fig.~3 in][]{Engler:2021aa}.
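Operationally, the host criteria (i)--(iv) above amount to a handful of array masks. The sketch below is a schematic reconstruction, assuming the per-galaxy quantities have already been extracted from the group catalogues (for instance with the illustris\_python package listed in the Software section); the visual-inspection override of the disky criterion and the periodic box boundaries are ignored for brevity.
\begin{verbatim}
import numpy as np

def mw_m31_mask(mstar_30kpc, axial_ratio, halo_mass, pos):
    """Boolean mask selecting MW/M31 analogues following criteria
    (i)-(iv). mstar_30kpc and halo_mass in Msun, pos in kpc; the
    visual-inspection step is not reproduced here."""
    mass_ok = (mstar_30kpc >= 10**10.5) & (mstar_30kpc <= 10**11.2)
    disky = axial_ratio < 0.45
    halo_ok = halo_mass < 1e13
    massive = mstar_30kpc > 10**10.5   # potential massive neighbours
    isolated = np.ones(len(pos), dtype=bool)
    for i in np.flatnonzero(mass_ok & disky & halo_ok):
        d = np.linalg.norm(pos[massive] - pos[i], axis=1)
        isolated[i] = np.sum((d > 0) & (d < 500.0)) == 0
    return mass_ok & disky & halo_ok & isolated
\end{verbatim}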
\subsection{Morphological decomposition with kinematics} \label{sec:morph} Many different methods have been used and proposed in the literature, both observationally and numerically, to subdivide the bodies of galaxies into different stellar components, such as disk, bulge, and stellar halo. Even in the context of the IllustrisTNG simulations, a number of complementary approaches have been used and catalogued \citep[e.g.][]{Du:2019aa, Gargiulo:2022aa, Zhu:2022aa}. Here we follow the recommendation of \citet{Du:2019aa} and go beyond photometry-based (i.e. geometric) morphological decompositions of galaxies. In particular, we embrace the method of \citet{Zhu:2022aa}, which accounts for the stars' kinematics and which we describe below. Firstly, for each simulated MW/M31-like system, we take all of its stars within the halo radius of $r_\mathrm{halo} = 300$\,kpc. In other words, we only take into account stars that are within 300\,kpc of the centre of a MW/M31 analogue, where the centre is taken to be the location of the resolution element with the minimum gravitational potential energy. From these, we exclude satellite stars: these are stellar particles that are gravitationally bound to other galaxies, i.e. satellites, according to the \textsc{subfind} algorithm. The remaining stars compose what we refer to as the ``main galaxy body'': their coordinates and velocities can be translated and hence expressed immediately in the coordinate system of the galaxy -- the bulk velocity of the galaxy is the resultant velocity of all its resolution elements in the reference system of the simulation box. We then classify stars in the main galaxy body into four components based on their circularity $\epsilon_\mathrm{z}$ (defined below) and their distance from the centre of the galaxy $r_*$: \begin{enumerate} \item Cold disk: $\epsilon_\mathrm{z} > 0.7$ and $r_* \leq r_\mathrm{disk}$; \\ \item Bulge: $\epsilon_\mathrm{z} \leq 0.7$ and $r_* < r_\mathrm{cut}$; \\ \item Warm disk: $0.5 < \epsilon_\mathrm{z} < 0.7$ and $r_\mathrm{cut} \leq r_* < r_\mathrm{disk}$; \\ \item Stellar halo: $\epsilon_\mathrm{z} \leq 0.5$ and $r_\mathrm{cut} \leq r_* \leq r_\mathrm{halo}$, plus $\epsilon_\mathrm{z} > 0.5$ and $r_\mathrm{disk} < r_* \leq r_\mathrm{halo}$. \end{enumerate} Here, $r_\mathrm{cut} = 3.5$\,kpc, $r_\mathrm{disk} = 6 \times r_\mathrm{disk\,scale\,length}$ is the disk radius, where $r_\mathrm{disk\,scale\,length}$ is the disk scale length computed by \citet{Sotillo-Ramos:2022aa}, and $r_\mathrm{halo} = 300$\,kpc is the maximum distance that we consider. The separation between bulge and stellar halo at the fixed cut of $r_\mathrm{cut} = 3.5\,$kpc is taken from \citet{Zhu:2022aa}, who showed that the distribution of stars with $\epsilon_\mathrm{z} < 0.5$ peaks at $< 1.5$\,kpc for most galaxies in TNG50 and that most of the stars with $\epsilon_\mathrm{z} < 0.5$ are at $r < 3.5\,$kpc (see their Fig.~3). The idea of this kinematically-defined morphological decomposition is shown with a cartoon plot in Fig.~\ref{fig:morph_decomp}, together with additional details on how much stellar mass is assigned to each component for all 198 MW/M31-like galaxies of TNG50 (see Appendix \ref{app:emp_app}).
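In practice, this decomposition is a set of vectorised masks over the circularity (whose computation is detailed in the following) and the galactocentric distance; a minimal sketch with the thresholds defined above:
\begin{verbatim}
import numpy as np

def classify_stars(eps_z, r, r_disk, r_cut=3.5, r_halo=300.0):
    """Assign each star to cold disk (0), bulge (1), warm disk (2), or
    stellar halo (3), following the circularity/radius cuts above;
    r and the radii are in kpc. Unassigned stars keep the flag -1."""
    comp = np.full(eps_z.shape, -1, dtype=int)
    comp[(eps_z > 0.7) & (r <= r_disk)] = 0                 # cold disk
    comp[(eps_z <= 0.7) & (r < r_cut)] = 1                  # bulge
    comp[(eps_z > 0.5) & (eps_z < 0.7)
         & (r >= r_cut) & (r < r_disk)] = 2                 # warm disk
    halo = ((eps_z <= 0.5) & (r >= r_cut) & (r <= r_halo)) \
         | ((eps_z > 0.5) & (r > r_disk) & (r <= r_halo))
    comp[halo] = 3                                          # stellar halo
    return comp
\end{verbatim}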
Next, we list the steps by which we obtain the circularity $\epsilon_\mathrm{z}$ for each stellar particle of a given galaxy: \begin{enumerate} \item Shift the origin to the centre of the main galaxy, such that $\textbf{r}_\mathrm{*} = \textbf{r}_\mathrm{*, ini} - \textbf{r}_\mathrm{main, ini}$, and remove the bulk velocity, so that $\textbf{v}_\mathrm{*} = \textbf{v}_\mathrm{*, ini} - \textbf{v}_\mathrm{main, ini}$, where $\textbf{r}_\mathrm{main, ini}$ and $\textbf{v}_\mathrm{main, ini}$ denote the position and velocity vectors of the main galaxy, respectively. \item Compute the un-rotated angular momentum of each star in the main galaxy, $\textbf{j}_\mathrm{*} = \textbf{r}_\mathrm{*} \times \textbf{v}_\mathrm{*}$. \item Sum up the orbital angular momenta of all stars in the main galaxy, $\textbf{J}_\mathrm{gal} = \sum_{i} \textbf{j}_{\mathrm{*},i}$. \item Rotate the coordinate system such that the new $z$-axis is parallel to the angular momentum of the main galaxy, i.e.\ $\hat{\mathrm{z}} \parallel \textbf{J}_\mathrm{gal}$. \item Take the $z$-component of the specific stellar angular momentum after coordinate rotation for each star in the system, ${j}_\mathrm{z, rot}$. \item Calculate the circular velocity for each star, $v_\mathrm{c} = \sqrt{G M(< r_\mathrm{*,rot})/r_\mathrm{*,rot}}$, where $r_\mathrm{*,rot}$ is the stellar distance from the centre of the galaxy after coordinate rotation. \item Compute the magnitude of the angular momentum that the star would have if it were on a circular orbit at its distance from the centre of the galaxy, namely $j_\mathrm{c} = r_\mathrm{*,rot} v_\mathrm{c}$. \item Finally, we obtain the circularity as $\epsilon_\mathrm{z} \equiv {j}_\mathrm{z, rot}/j_\mathrm{c}$. \end{enumerate} Stars on a circular orbit have $\epsilon_\mathrm{z}\sim1$, while those on hot and chaotic orbits have $\epsilon_\mathrm{z}\sim 0$: see Fig.~\ref{fig:epzr} and the Appendix of \citet{Zhu:2022aa} (these steps are also condensed into a short code sketch below). The fifth component of each MW/M31-like system is composed of its satellites, which we consider as a population: for each MW/M31 analogue, when referring to its satellites or satellite component, we sum up the stars in all of its satellites (see above). We discuss the scatter across the whole population of MW/M31-like satellites (about 1200 in total) and possible trends as a function of their stellar mass in Sec.~\ref{sec:emp_dis}. Finally, we characterise each MW/M31-like system based on five stellar components: cold disk, bulge, warm disk, stellar halo, and satellites. \subsection{EMP stars in TNG50, and across morphological components} As well as tracing the overall metallicity, the TNG50 simulation traces the individual abundances of nine species, in addition to a tracer of europium \citep{Pillepich:2018aa, Naiman:2018aa}: H, He, C, N, O, Ne, Mg, Si, and Fe. We label star particles as EMP stars if their $\mathrm{[Fe/H]}$ is $< -3$, measured from their total metal mass fraction at birth. We address different choices of this threshold in Sec.~\ref{sec:emp_dis}. It is important to keep in mind that star particles in TNG50 are of the order of $10^5\,{\rm M}_\odot$ and therefore do not represent individual stars but rather star clusters or populations that form at the same time in the same environment. In addition to the spatial selection of stars outlined above, here we consider all star particles that survive until $z=0$.
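As referenced above, the circularity steps (i)--(viii) condense into a short numpy sketch. Here the positions and velocities are assumed already centred on the galaxy and corrected for its bulk motion (steps i--ii), and the enclosed mass entering $v_\mathrm{c}$ is approximated from the stars alone, whereas the simulation potential includes all matter components:
\begin{verbatim}
import numpy as np

G_KPC = 4.30091e-6  # G in kpc (km/s)^2 / Msun

def circularity(pos, vel, mass):
    """Circularity eps_z per star. pos (kpc) and vel (km/s) are assumed
    already centred on the galaxy and corrected for its bulk velocity;
    M(<r) is approximated from the stars alone for brevity."""
    j = np.cross(pos, vel)                  # specific angular momenta
    j_gal = j.sum(axis=0)                   # total stellar angular momentum
    z_hat = j_gal / np.linalg.norm(j_gal)   # rotated z-axis
    j_z = j @ z_hat                         # z-component after rotation
    r = np.linalg.norm(pos, axis=1)         # galactocentric distances (> 0)
    order = np.argsort(r)
    m_enc = np.empty_like(r)
    m_enc[order] = np.cumsum(mass[order])   # enclosed mass M(<r)
    v_c = np.sqrt(G_KPC * m_enc / r)        # circular velocity
    return j_z / (r * v_c)                  # eps_z = j_z / j_c
\end{verbatim}
Projecting $\textbf{j}_\mathrm{*}$ onto the unit vector along $\textbf{J}_\mathrm{gal}$ is equivalent to rotating the frame and taking the $z$-component, so no explicit rotation matrix is required.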
We compute the EMP mass and the stellar mass in each galaxy stellar morphological component to study where we are most likely to find EMP stars. In this work, we use two different stellar mass fractions to quantify this, which we calculate for each of the 198 TNG50 MW/M31-like systems: \begin{itemize} \item frequency by component: the $M_{\mathrm{EMP, comp}}$-to-$M_{\mathrm{tot, comp}}$\, fraction, where we take the ratio between the mass in EMPs in each component and the total stellar mass of the same component;\\ \item contribution by component: the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction, which represents the ratio of the EMP mass in each component to the total EMP mass in the system, i.e. across all its components. \end{itemize} Here $\rm{comp}$ stands for cold disk, bulge, warm disk, stellar halo, and satellites. The first quantity can also be defined for the whole system. \section{Results for TNG50 MW/M31-like galaxies} \label{sec:emp_result} Equipped with the output of the TNG50 simulation, with a galaxy selection, and with the morphological decomposition described above, here we present the main results of our analysis. We quantify direct outputs, such as the spatial distribution and radial profiles of EMP stars, as well as derived results, such as the EMP fractions in different morphological components. However, before doing so, we briefly characterise and comment on the MDFs of the 198 MW/M31-like galaxies of TNG50 at $z=0$. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{figs/MDF_All.png} \includegraphics[width=0.85\textwidth]{figs/MDF_Y20B21B21.png} \includegraphics[width=0.85\textwidth]{figs/MDF_Hayden15.png} \caption[MDFs of TNG50 MW/M31 analogues]{{\bf Stellar metallicity distribution functions (MDFs) of MW/M31-like galaxies in TNG50.} {\it Top panels:} we show the MDFs of all 198 MW/M31 systems, across {\it all} their morphological components (disks, bulge, stellar halo, and satellites), colour coded by galaxy stellar mass. The redder the colour, the higher the galaxy stellar mass. On the left, we show the cumulative fraction; on the right, we show the stellar mass. {\it Middle three panels:} MDFs of the subsample of TNG50 galaxies with masses most similar to the Milky Way (grey curves), overlaid with results from observations and thus with stars selected by height and radial distance in an attempt to account for the surveys' selection functions: we report here the Milky Way's MDFs by \citet[][green]{Youakim:2020aa}, \citet[][magenta]{Bonifacio:2021aa}, and \citet[][orange]{Buder:2021aa}. For the middle left panel, we select stars that have heliocentric distances between 6 and 20 kpc and Galactic latitudes between 30 and 78 degrees ($|b| = 30-78$). The probability is then normalised to the total number of stars with $-4 < \mathrm{[Fe/H]} < -1.05$ to compare with \citet{Youakim:2020aa}. In the middle central panel, we select stars that are $\leq 6$\,kpc from the Galactic plane and $\leq 14$\,kpc from the Galactic centre. The probability is again normalised to the total number of stars with $-4 < \mathrm{[Fe/H]} < -1.05$ to compare with \citet{Bonifacio:2021aa}. For the middle right panel, we select stars that have heliocentric distances $<2$ kpc and that are positioned at Galactic latitudes larger than 10 degrees ($|b| > 10$) to compare with data from the GALAH survey Data Release 3 \citep{Buder:2021aa}.
In order to take into account the distance-selection criterion of the survey, the solar position in each MW-like galaxy is randomly sampled at 8.2 kpc from the galactic centre, and we compute the mean MDF over 100 such realisations. The probability is normalised to the total number of stars. {\it Bottom three panels:} MDF comparison to data from \citet[][shades of blue, for three different height selections]{Hayden:2015aa}. We group stars at different distances from the galactic plane to compare with \citet{Hayden:2015aa}, where the probabilities are normalised to the total number of stars in each subset. We do not impose any additional selection function on the TNG50 star particles beyond the aforementioned geometrical spatial cuts, which account for the effective footprints of the APOGEE survey Data Release 12. Note that the survey cannot report many metal-poor stars, as it hits the detection limit at $\mathrm{[Fe/H]} \sim -2$.} \label{fig:mdf} \end{figure*} \subsection{Metallicity distribution functions of TNG50 MW/M31-like systems} \label{sec:mdf} We show the TNG50 predictions for the metallicity distribution functions (MDFs) of all 198 MW/M31-like systems in the top panels of Fig.~\ref{fig:mdf}. Here we consider {\it all} the stars in each galaxy within 300 kpc of its centre, without distinguishing by stellar morphological component. On the left, we show the cumulative fraction. On the right, we show the stellar mass in bins of metallicity, without normalising; the colours of the curves denote the galaxy stellar mass. As can be seen, TNG50 predicts a non-negligible tail towards very low metallicity: $\sim$1 in every 10,000 stellar particles in any given galactic system has a metallicity as low as $\mathrm{[Fe/H]} \sim -4$. The galaxy-to-galaxy variation is mostly driven by the different total metal content across galaxies of different mass or assembly history: more massive galaxies have higher MDF peaks and more substantial high-metallicity contributions. A comparison to observational constraints is not straightforward, as it requires applying to the TNG50 data the same selection functions as in the observational surveys, both in terms of survey footprints and of possible implicit or explicit selection functions, such as in colour and magnitude. We partially go in this direction by implementing at least the spatial, i.e. geometrical, selection functions for the TNG50 stars and by comparing to four sets of observational results. This comparison is shown in the lower six smaller panels of Fig.~\ref{fig:mdf}. There we show normalised MDFs for the TNG50 MW analogues only (thin grey lines), selected among the 198 galaxies to have stellar mass in the $10^{10.5-10.9}{\rm M}_\odot$ range (127 galaxies in total). We select four observational data sets for comparison, for which we adapt the simulated data selection individually. \begin{itemize} \item \citet{Youakim:2020aa} aimed to study the metallicity distribution in the Galactic halo. They analysed $\sim 80000$ main sequence turnoff stars from the Pristine Survey that have heliocentric distances between 6 and 20 kpc and Galactic latitudes between 30 and 78 degrees ($|b| = 30-78$). Their results are shown as the green solid curve in the middle left panel. Their stellar metallicities fall in the range $\mathrm{[Fe/H]} = [-4, -1.05]$. \item \citet[][]{Bonifacio:2021aa} analysed $\sim 140000$ stars from SDSS Data Release 12.
These stars are located at $\leq 6$\,kpc from the Galactic plane and have distances $\leq 14$\,kpc from the Galactic centre. \item \citet{Buder:2021aa} presented stellar spectra from the GALAH survey Data Release 3 with more than 600,000 stars. The stars are within 2\,kpc from the Sun and at Galactic latitudes larger than 10 degrees ($|b| > 10$). We follow the same spatial selection criteria in order to compare the MDFs. \item \citet{Hayden:2015aa} used APOGEE Data Release 12 to derive MDFs in varying bins of heliocentric distance and height from the Galactic mid-plane ($|z|$/kpc) -- see Sec.~\ref{sec:emp_intro}. The survey did not report many metal-poor stars and hit the detection limit at about $\mathrm{[Fe/H]} \sim -2$. We show them as dashed curves in the three bottom panels of Fig.~\ref{fig:mdf} and apply similar spatial cuts to the stars of the TNG50 MW-like galaxies. \end{itemize} To make a somewhat fairer comparison with the MDFs in \citet{Youakim:2020aa}, \citet{Bonifacio:2021aa}, and the GALAH Survey \citep{Buder:2021aa}, we follow the same selection based on spatial information, as described in the caption. However, it should be stressed that there are other selections in the making of the observed MDFs (e.g. in colour or magnitude and hence, potentially, indirectly in metallicity) that we are not replicating for the TNG50 stars. Keeping this in mind, we see that the peak of the MDF from \citet{Bonifacio:2021aa} is lower than the one from the TNG50 MW analogues; both MDFs from \citet{Bonifacio:2021aa} and \citet{Youakim:2020aa} show convex curves, whereas the MDF from TNG50 appears to be concave; the MDFs from the GALAH survey and the TNG50 MW analogues cover the same metallicity range and agree well in general, although the peak from the GALAH survey is at slightly higher metallicity; and finally, the overall shapes of the TNG50 MDFs are wider than the ones in \citet{Hayden:2015aa} at $|z| \leq 1.0$\,kpc, whereas the widths of the MDFs of TNG50 and of \citet{Hayden:2015aa} are more similar at $1.0 < |z| / \rm{kpc} \leq 2$. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/grid_stellarmap_yz_-3.png} \caption[Spatial distribution of stars in 16 MW/M31 analogues]{\textbf{Stellar mass column density of 16 randomly-chosen MW/M31-like galaxies among the 198 of TNG50 at $z=0$.} The galaxies are shown in edge-on projections, based on the orientation of their stellar disks. The ID is the unique \textsc{subfind} identifier of the galaxy in TNG50 at snapshot 099. The colours denote the cumulative stellar mass density in spatial pixels of $0.6 \times 0.6$ kpc, for a total of 60 kpc per side. Here we include all stars, irrespective of metallicity.} \label{fig:smap} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/grid_stellarmap_yz_-3_lowz.png} \caption[Spatial distribution of EMP stars in 16 MW/M31 analogues]{As in Fig.~\ref{fig:smap} but only for the EMP stellar particles.
} \label{fig:lzmap} \end{figure*} Given the complexity of the observations-to-simulation comparison, the impact of the selection functions on the observationally-derived MDFs (which are indeed all different), and the fact that the MDFs at $z=0$ are the results of 14 billion years of star formation, supernova explosions and stellar winds, outflows triggered by star formation and SMBH feedback, and accretion of stars from lower-mass galaxies and mergers, we conclude that it is reassuring that TNG50 returns MDFs of simulated MW analogues that are in the ballpark of the constraints for the Galaxy. We can hence proceed with our analyses with some added confidence in the underlying model. To compare the MDF at lower metallicity (e.g. $\mathrm{[Fe/H]} \lesssim -3.5$), we would need more complete observational data samples: the distributions of Fig.~\ref{fig:mdf} in the EMP regime are thus predictions of the TNG50 simulation, to be confirmed or ruled out with future data. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figs/RDF.png} \caption[Radial profiles of $M_{\mathrm{EMP}}(r)$-to-$M_{\mathrm{tot}}(r)$\, fraction and $M_\mathrm{EMP}(r)$-to-$M_\mathrm{EMP, tot}$ fraction]{ \textbf{Radial spatial distribution of EMPs in 198 TNG50 MW/M31 analogues}. Here we consider all stars irrespective of morphological component, including stars in satellites, and we show the unweighted-mean profiles across many simulated galaxies, along with their galaxy-to-galaxy 1~$\sigma$ scatter. In the left panel, we show the radial profiles of the total stellar mass (stars) and of the stellar mass in EMP stellar populations (squares). In the right panel, we show radial profiles of the mean $M_{\mathrm{EMP}}(r)$-to-$M_{\mathrm{tot}}(r)$\, stellar fraction (circles) and of the $M_\mathrm{EMP}(r)$-to-$M_\mathrm{EMP}(<300 \rm{kpc})$ stellar fraction (triangles). The radial bins have a width of 10~kpc. The magenta and orange curves denote galaxies in different groups: MW-mass (galaxy stellar mass within 30 kpc of $M_* = 10^{10.5-10.9} {\rm M}_\odot$) and M31-mass ($M_* = 10^{10.9-11.2} {\rm M}_\odot$), respectively.} \label{fig:rdf} \end{figure*} \subsection{Spatial distribution of EMP stars} Most of the EMP stars are located in the central regions of the MW/M31-like systems and do not typically form a visually-identifiable disk. We choose at random 16 examples among the 198 MW/M31 analogues and show the stellar mass column density of all their stars (Fig.~\ref{fig:smap}) and of their EMP stars (Fig.~\ref{fig:lzmap}), both in edge-on projections. In the first set of images, structures like the bulge and the cold disk can be clearly seen upon visual inspection. Next we show the radial distributions of all stars and EMP stars in Fig.~\ref{fig:rdf}. We divide the 198 systems into 2 groups: MW-mass where $M_*(<30\,\mathrm{kpc})/{\rm M}_\odot = 10^{10.5-10.9}$ \citep{McMillan:2017aa} and M31-mass where $M_*(<30\,\mathrm{kpc})/{\rm M}_\odot = 10^{10.9-11.2}$ \citep{Tamm:2012aa}. These two groups are plotted in magenta and orange, respectively. In the left panel, we display the mass density of all stars with star symbols and of EMP stars with squares at different radii. The mean and 1 standard deviation among the 198 systems are shown. In the right panel, we show the radial profiles of the $M_{\mathrm{EMP}}(r)$-to-$M_{\mathrm{tot}}(r)$\, fraction and of the $M_\mathrm{EMP}(r)$-to-$M_\mathrm{EMP}(<300\,\mathrm{kpc})$ fraction at different radii.
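These profiles are, in essence, mass-weighted histograms in galactocentric distance. Purely as an illustration, the following minimal Python/NumPy sketch computes both fractions for a single system; the array names, the EMP threshold, and the 10 kpc binning are our assumptions, not the actual analysis code:

\begin{verbatim}
import numpy as np

def radial_emp_profiles(r, mass, feh, feh_th=-3.0,
                        r_max=300.0, dr=10.0):
    """Radial EMP-to-total and EMP-to-total-EMP mass fractions.

    r    : galactocentric distances of stellar particles [kpc]
    mass : stellar particle masses [Msun]
    feh  : particle metallicities [Fe/H]
    """
    edges = np.arange(0.0, r_max + dr, dr)
    emp = feh < feh_th                      # EMP selection
    m_tot, _ = np.histogram(r, bins=edges, weights=mass)
    m_emp, _ = np.histogram(r[emp], bins=edges, weights=mass[emp])
    with np.errstate(invalid="ignore", divide="ignore"):
        frac_local = np.where(m_tot > 0, m_emp / m_tot, np.nan)
    frac_cum = m_emp / m_emp.sum()          # vs. M_EMP(<300 kpc)
    return edges, frac_local, frac_cum
\end{verbatim}

The unweighted mean and the 1~$\sigma$ galaxy-to-galaxy scatter are then taken bin by bin across the 198 systems.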
As anticipated in the maps, EMPs are centrally concentrated, with declining mass density profiles similar to those of stars of any metallicity. On the other hand, the $M_{\mathrm{EMP}}(r)$-to-$M_{\mathrm{tot}}(r)$\, fraction increases as $r$ increases, and we observe a clear trend that MW-mass galaxies show a higher $M_{\mathrm{EMP}}(r)$-to-$M_{\mathrm{tot}}(r)$\, fraction than M31-mass galaxies at large galactocentric distances. Conversely, the fraction of EMP stars located at a certain radius to the total EMP mass ($M_\mathrm{EMP}(r)$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$) decreases as the galactocentric distance increases, and there is no significant difference among galaxies of different masses. In other words, EMPs are more frequent and easier to find at large galactocentric distances, with one out of every 10-100 stars being EMP beyond a few tens of kpc, in comparison to one EMP star in every $10^{3-4}$ stars in the inner regions. On the other hand, cumulatively, the inner regions of galaxies still contribute relatively more to the total mass in EMPs than the outskirts. The distribution of EMPs among different morphological components is discussed in more detail in Sec.~\ref{sec:emp_empfreqmorph}. \subsection{EMP frequency and contribution in and by different components} \label{sec:emp_empfreqmorph} In this section we quantify the frequency and contribution of EMPs from the different morphological components of TNG50 MW/M31-like systems. In the top panel of Fig.~\ref{fig:empfraction} we show the TNG50 predictions for the mass in EMPs in MW/M31-like systems throughout their bodies and haloes ($M_\mathrm{EMP} (<300\mathrm{kpc})$, black crosses) and within different morphological components (coloured circles). In the lower panels, we quantify, on the left, the $M_{\mathrm{EMP, comp}}$-to-$M_{\mathrm{tot, comp}}$\, fraction of each component versus the stellar mass of the main galaxy (i.e.\ the EMP frequency within each component) and, on the right, the fraction of EMP mass in each component to the total EMP mass of the system ($M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\,), namely the contribution of EMPs by each component to the total EMP mass. According to TNG50, the stellar halo of the main galaxy hosts the great majority of the EMP stars in all MW/M31-like systems (orange circles in all panels of Fig.~\ref{fig:empfraction}). In fact, the average EMP frequency is similar within the stellar halo and the satellites for most of the systems (bottom left panel, orange vs. grey circles), even though there is typically much less mass in EMPs in the satellite populations than in the stellar haloes. Note also that the frequency and contribution of EMPs of satellites can vary by up to two orders of magnitude depending on the system. More specifically, in all simulated MW/M31-like objects, the mass in EMPs is about 1/1000 of the total stellar mass (black crosses in the lower left panel), and the frequency of EMPs in the stellar halo and satellites is on average one in every 100-300 stars, with a mild dependence on galaxy stellar mass.
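For clarity, the two bottom-panel quantities amount to a simple per-component bookkeeping over the star particles of each system. A minimal Python sketch follows; it is illustrative only, and the component labels and array names are our assumptions:

\begin{verbatim}
import numpy as np

def component_emp_fractions(mass, feh, comp, feh_th=-3.0):
    """EMP frequency within, and contribution by, each component.

    mass : stellar particle masses [Msun]
    feh  : particle metallicities [Fe/H]
    comp : per-particle component label, e.g. 'bulge',
           'cold disk', 'warm disk', 'halo', 'satellites'
           (illustrative labels)
    """
    emp = feh < feh_th
    m_emp_tot = mass[emp].sum()         # M_EMP(<300 kpc)
    fractions = {}
    for c in np.unique(comp):
        sel = comp == c
        freq = mass[sel & emp].sum() / mass[sel].sum()
        contrib = mass[sel & emp].sum() / m_emp_tot
        fractions[c] = (freq, contrib)  # (frequency, contribution)
    return fractions
\end{verbatim}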
\begin{figure*} \centering \includegraphics[width=0.5\textwidth]{figs/Mstar_McompLowZ.png} \includegraphics[width=0.9\textwidth]{figs/fraction.png} \caption[Mass fraction of EMP stars in 198 TNG50 MW/M31 analogues in different morphological components]{\textbf{Stellar mass and mass fractions of EMPs in 198 TNG50 MW/M31 analogues across their different morphological components.} In the top panel, we give the amounts of stellar mass in EMP stellar populations predicted by TNG50 across and within the different galaxies. In the bottom left panel, we show the $M_{\mathrm{EMP, comp}}$-to-$M_{\mathrm{tot, comp}}$\, fraction, i.e. the frequency of EMPs on a component-by-component basis. In the bottom right, we show the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fractions, i.e. the contribution of each morphological component to the total mass in EMPs across all components. Satellites belonging to each galaxy are considered as one component in the system (the ``satellites'', see Sec.~\ref{sec:morph} for details). Bulge, cold disk, warm disk, stellar halo, and satellites in each MW/M31-like system are shown as red, blue, green, orange, and grey circles, respectively. The amount of EMPs across all the morphological components ($M_\mathrm{EMP} (<300\mathrm{kpc})$) is shown as black crosses. There are 6 MW/M31 analogues without any satellites and 1 MW/M31 analogue that has only one metal-enriched satellite. They are manually placed at the bottom of the top panel as filled grey circles. } \label{fig:empfraction} \end{figure*} On the other hand, EMPs are relatively rarer within the bulges and cold disks of MW/M31-like galaxies (red and blue circles in the lower left panel of Fig.~\ref{fig:empfraction}). Yet, given the large stellar mass of these components, their overall contribution of EMP mass to the total galaxy-wide EMP mass is not negligible. About ten per cent of the total EMP mass in MW-mass galaxies can reside in their bulge, even though this fraction decreases as the stellar mass of the main galaxy increases (red circles in the lower right panel). Even more interestingly, there are TNG50 MW/M31-like systems whose cold disks also host non-negligible amounts of EMPs (blue circles in the lower right panel): 33 MW/M31-like galaxies in TNG50 have cold disks that contribute more than 10 per cent to the total EMP mass, each with $\gtrsim 10^{6.5-7}\,{\rm M}_\odot$ of EMPs on cold circular orbits. These can provide theoretical counterparts for understanding the origin of observed EMPs with near-circular orbits in the Galaxy (see Sec.~\ref{sec:emp_intro}). \section{Discussion} \label{sec:emp_dis} \subsection{EMPs across TNG50 MW/M31-like satellites} Throughout this analysis, we have considered all satellite stars in each MW/M31-like system as one component: namely, we have stacked all satellites together in each system (see Sec.~\ref{sec:morph}). However, the number of EMPs in satellite galaxies is also expected to depend on the properties of each satellite. In Fig.~\ref{fig:empsat} we therefore show the $M_\mathrm{EMP}$-to-$M_{\rm tot, sat}$ fraction for individual satellites. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/MlowZMstarSAT.png} \caption[Mass fraction of EMPs in MW/M31-like satellites]{{\bf Stellar mass fraction of EMPs in TNG50 MW/M31-like satellites, across all the 198 TNG50 MW/M31-like hosts.} This is shown as the ratio of EMP mass to the total stellar mass of each satellite, as a function of the latter.
In this work, satellites are limited to those within 300 kpc from the host centre (3D distance) and to the massive end of the classical dwarfs of the Milky Way. Satellites with no EMPs, or with amounts too low in comparison to the minimum mass of the stellar particles in the TNG50 simulation (a few $10^4\, {\rm M}_\odot$), are placed by hand at the bottom of the plot. } \label{fig:empsat} \end{figure} According to TNG50, there is a clear trend whereby the EMP mass fraction decreases as the stellar mass of the satellite increases. However, based on the findings of Fig.~\ref{fig:empfraction} (bottom left panel, black crosses), such a trend is not observed at even higher galaxy masses, as across the MW/M31-like host sample the frequency of EMPs does not depend on galaxy mass. The scatter also seems to increase as we look at more massive satellites. However, this is an effect of numerical resolution, as there are satellites with too low an amount of EMPs in comparison to the minimum mass of the stellar particles in the TNG50 simulation (a few $10^4\, {\rm M}_\odot$) -- these are placed by hand at the bottom of the plot. \begin{figure} \includegraphics[width=\columnwidth]{figs/binnedfraction_comparison.png} \includegraphics[width=\columnwidth]{figs/binnedfraction_comparison_up.png} \caption[Location of EMPs for alternative definitions of EMPs.]{ \textbf{Location of EMPs for alternative definitions of EMPs in the 198 TNG50 MW/M31-like systems.} \textit{Top}: $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction in different morphological components for $\mathrm{[Fe/H]}_\mathrm{th} = [-3, -4, -5, -6]$ with solid, dotted, dashed, and dot-dashed lines, respectively. The components are coloured as in Fig.~\ref{fig:empfraction}. \textit{Bottom}: the same fractions in different morphological components for $\mathrm{[Fe/H]}_\mathrm{th} = [-1, -2, -3]$ with solid, dotted, and dashed lines, respectively.} \label{fig:zthcomp} \end{figure} \subsection{Dependence on the definition of EMPs} Next, we check whether the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction changes with different metallicity thresholds $\mathrm{[Fe/H]}_\mathrm{th}$ adopted to define extremely metal-poor stars. In addition to the fiducial value of $\mathrm{[Fe/H]}_\mathrm{th} = -3$, we plot three cases where $\mathrm{[Fe/H]}_\mathrm{th} = -4, -5, $ and $-6$ in Fig.~\ref{fig:zthcomp}, top panel. The stellar halo component still hosts the great majority of EMPs, and the difference among the four cases is negligible. We find that the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction decreases in the bulge, cold disk and warm disk components as we lower the threshold, whereas the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction in the satellites increases. For comparison, we also perform the same analysis with increased thresholds, i.e. towards less extreme metallicity cuts. In Fig.~\ref{fig:zthcomp}, bottom panel, we show the $M_{\mathrm{EMP, comp}}$-to-$M_\mathrm{EMP} (<300\mathrm{kpc})$\, fraction for $\mathrm{[Fe/H]}_\mathrm{th} = -1, -2, $ and $-3$. We notice that the contribution of stellar halo EMPs to the total amount of EMPs drops from $\sim 0.7$ to $\sim 0.6$ when $\mathrm{[Fe/H]}_\mathrm{th} = -1$, but the stellar haloes still remain the dominant contributors of metal-poor stars in MW/M31-like systems.
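The threshold sweep itself is straightforward. As a purely illustrative, self-contained Python example (with randomly generated toy data standing in for one simulated system; the values are not TNG50 data):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for one simulated system (NOT TNG50 values):
mass = rng.uniform(5e4, 1e5, size=100_000)   # particle masses [Msun]
feh = rng.normal(-0.5, 1.0, size=100_000)    # [Fe/H]
halo = rng.random(100_000) < 0.3             # toy halo membership

for feh_th in (-1, -2, -3, -4, -5, -6):
    mp = feh < feh_th                        # metal-poor selection
    if mass[mp].sum() == 0:
        continue                             # below particle resolution
    contrib = mass[halo & mp].sum() / mass[mp].sum()
    print(f"[Fe/H]_th = {feh_th}: halo contribution = {contrib:.2f}")
\end{verbatim}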
\subsection{Possible limitations of the TNG50 expectations} Despite being the highest-resolution simulation in the suite, TNG50 still cannot resolve mini-haloes ($M_\mathrm{halo} = 10^5-10^6{\rm M}_\odot$) properly. Therefore, some star formation in mini-haloes, especially at high redshift, may be unresolved and hence ignored or delayed in the simulation. This probably leads to an underestimate of EMP stars by some fraction, which may be more relevant for the satellites, especially for the ultra-faint dwarf galaxies (UFDs, $M_\mathrm{tot} < 10^5{\rm M}_\odot$). For the satellites considered in this analysis ($M_\mathrm{tot} > 5\times10^{6}{\rm M}_\odot$), this is not an issue.
\section{Introduction} \textbf{RMT architecture and the P4 language}: In recent years, reconfigurable match-action table (RMT)~\cite{bosshart2013forwarding} architecture-based programmable switches have become increasingly popular and have seen widespread deployment. The P4 language has emerged as the de facto standard language to program these switches. Since its introduction~\cite{bosshart2014p4}, the P4 programming language has undergone several architectural changes. Its latest version (version 16~\cite{p416}, also known as P4\textsubscript{16})\footnote{Throughout the rest of this work, by P4 we mean P4\textsubscript{16}~\cite{p416}, unless mentioned otherwise.} is a major redesign of the initial version (P4\textsubscript{14}~\cite{p414}) of the language. It is designed to support various \textit{target} switches with different architectures (e.g., software switch~\cite{shahbaz2016pisces}, SmartNIC~\cite{piasetzky2018switch}, eBPF~\cite{tu2018linux}, FPGA~\cite{wang2017p4fpga}, RMT~\cite{opentofino}, dRMT~\cite{chole2017drmt}) for packet processing. RMT architecture based switches~\cite{bosshart2013forwarding} are designed as multi-stage pipelines containing a reconfigurable parser, multiple match-action stages, a deparser, and a few other fixed blocks (e.g., the packet replication engine and traffic manager~\cite{bosshart2013forwarding,robin2020toward,opentofino}). P4 provides language constructs to describe these switches' architecture and runtime behavior. The \textit{architecture description} of a switch consists of the pipeline's high-level structure, capabilities, and interfaces. The functions supported by the hardware are provided as separate \textit{target-specific libraries}. Both of them are supplied by hardware vendors. Data plane program developers use the target-specific libraries and the P4 core libraries to describe the runtime behavior of an RMT switch as a P4 program. \begin{figure}[t] \centering \includegraphics[trim=0.0in 0in 0.0in 0, clip,scale=.35]{CompilerArchitecture.pdf} \caption{High level workflow of a P4 compiler for V1Model switches} \label{fig:CompilerArchitecture} \end{figure} \textbf{A P4 compiler for RMT switches}: The P4 language is target-hardware independent in nature and provides high-level imperative constructs to express packet-processing logic for various packet-processing architectures. Hence, there is no direct mapping between a P4 program and the RMT architecture's components. A P4 compiler is required to translate a given P4 program into a target-specific executable program (hardware configuration binary) to be executed by a \textit{target switch}. Usually, a P4 compiler (fig.~\ref{fig:CompilerArchitecture}) consists of three main components~\cite{budiu2016architecture}: a) a \textit{target-independent frontend} responsible for syntax analysis, verification of target-independent constraints (e.g., the loop-free control flow required by P4), and transformation of the P4 program into a target-independent intermediate representation (\textit{IR}) presenting the control flow between a series of \textit{logical match-action table}s; b) a midend for architecture-independent optimizations~\cite{dangeti2018p4llvm}; and c) a target-dependent backend responsible for generating executable programs to be executed by the target hardware. It requires a resource allocation mechanism (\textit{Compiler Backend-Mapping Phase} in fig.~\ref{fig:CompilerArchitecture}) to map the IR components onto the target hardware resources.
It computes three mappings: the P4 program's \textit{header field}s to the RMT hardware's packet header vector (PHV), the packet header parser state machine (represented as a \textit{Parse Graph} in the IR) to the RMT hardware's state table, and the P4 program's control flow (represented as a graph of logical match-action tables) to the RMT hardware's physical match-action tables. These mappings need to conform to the target-dependent constraints (e.g., header vector capacity, crossbar width, match-action table dimensions). If the P4 program can be successfully mapped onto the target hardware, the corresponding \textit{hardware configurations} are generated from the mapping (\textit{Compiler Backend-Configuration Generation Phase} in fig.~\ref{fig:CompilerArchitecture}) in the form of an executable hardware configuration binary. This executable configuration is loaded into the target hardware by the control plane and then executed. \textbf{Open source P4 compilers for RMT switch}: P4C~\cite{P4C} is the reference compiler implementation for the P4 language. It is developed by the \textit{P4 Language Consortium} and follows~\cite{budiu2016architecture,P4C} the workflow shown in fig.~\ref{fig:CompilerArchitecture}. It supports two different RMT architecture based switches: a) the simple\_switch model, widely known as the V1Model architecture~\cite{V1Model}, and b) the Portable Switch Architecture (PSA)~\cite{PSA} developed by the \textit{P4 Language Consortium} (not yet fully implemented). However, P4C does not provide any compiler backend for real-life target hardware of these two architectures. The P4C frontend+midend emits the intermediate representation as a hardware-independent JSON file, and the reference software switch implementation (BMV2~\cite{BMV2}) executes it over a CPU-simulated version of the respective hardware architecture. It does not consider the practical hardware resource limitations that exist in real target switches. Hence, P4C cannot decide on the realizability of a given P4 program over a specific instance of these RMT switches. Besides P4C, several other open-source compilers for RMT architecture-based switches are available in the literature. However, some of them~\cite{jose2015compiling} work with the older version (P4\textsubscript{14}~\cite{p414}) of the P4 language, which is architecturally different from the current version of P4 (P4\textsubscript{16}~\cite{p416}). Other works focus on different packet processing languages (e.g., Domino~\cite{sivaraman2016packet,gao2020switch}), a different architecture (dRMT~\cite{chole2017drmt}), or different hardware platforms (e.g., FPGA~\cite{wang2017p4fpga}). As a result, researchers need to use proprietary compiler backends~\cite{P4Studio} to decide whether a P4 program can be implemented using an RMT switch or not. However, these systems are closed source, expensive, and often come with additional non-disclosure agreements~\cite{opentofino}. \textbf{Why open-source compiler backend}: The compiler backend plays a crucial role in the P4 ecosystem by mapping a P4 program to the target hardware. It is responsible for measuring a P4 program's resource consumption in the RMT pipeline. Programmable switches contain a limited amount of hardware resources. Therefore, among P4 programs achieving the same task, the one requiring the fewest hardware resources is the most resource-efficient.
In recent times, a large number of research works have used the BMV2~\cite{BMV2} simulator with the P4C compiler as their target platform, which lacks a compiler backend that can consider the real-life resource constraints present in target hardware. Without such a compiler backend, researchers cannot measure the resource requirements of their schemes and cannot compare the resource usage efficiency of multiple schemes. In the worst case, their P4 program may not be realizable using P4 switches, which is not even identifiable without a compiler backend. Thus, it is questionable whether these P4 programs are directly executable over real-life RMT switches. A compiler backend needs to address several computationally intractable problems~\cite{jose2015compiling,vass2020compiling} to find the mapping of a P4 program to the target hardware. The optimal algorithms often require a long time to finish~\cite{jose2015compiling,vass2020compiling}. With the rise of the \textit{in-network computing}~\cite{sapio2017network} paradigm, various research works~\cite{macdavid2021p4,robin2022clb,dang2016paxos,robin2022p4te,robin2021p4kp} are also focusing on delegating different \textit{network function}s to the data plane. In these cases, researchers do not need to fit the large P4 programs required for full-fledged switches with various features. Optimal mapping algorithms are useful when data plane programmers need to fit such large P4 programs into target hardware. On the other hand, an open-source compiler backend that uses heuristic-based algorithms can provide researchers a quick decision about a smaller P4 program's realizability on target hardware. The mapping algorithms used in the compiler backend are sensitive~\cite{jose2015compiling} to the resource (TCAM/SRAM storage, number of ALUs, crossbar width, etc.) requirements of a P4 program and the available resources in a target switch. The resource requirements of a P4 program can change at run time (e.g., an increase in the size of an IPv4 forwarding table), which can invalidate a previously computed mapping. With the rapid proliferation of the \textit{network virtualization}~\cite{hancock2016hyper4} and \textit{Network-as-a-Service}~\cite{zhou2010services} paradigms, the requirement for on-demand network function deployment is also growing rapidly. It requires quick and automated deployment of customized data plane algorithms at short notice. Therefore, developing faster and more efficient heuristic/approximate mapping algorithms carries enormous significance here. With a closed-source compiler backend, researchers cannot experiment with different mapping algorithms. Besides this, there is a growing focus~\cite{da2018extern,seibulescu2020leveraging,karrakchou2020endn} on developing hardware units for supporting complex instructions (\textit{extern}~\cite{p416} in the P4 language) in the RMT architecture. Without an open-source compiler backend, researchers cannot integrate newly developed externs in a P4 program and test their effectiveness. Independently developing a compiler backend from scratch requires various common and repetitive tasks (e.g., IR parsing, representing the parsed IR using a graph data structure, modeling hardware resources) not directly related to the computation of the mappings. An open-source compiler backend can allow researchers to focus on developing efficient mapping algorithms rather than on these repetitive tasks.
Inspired by these factors, in this work, we present the design of an open-source P4 compiler backend (mapping phase only) for \textbf{V1Model}~\cite{V1Model} architecture-based RMT switches. To the best of our knowledge, it is the first open-source P4\textsubscript{16} compiler backend for RMT architecture based programmable switches. The compiler backend requires two inputs: a) a specification of the available resources in a V1Model switch and b) the intermediate representation (IR) of a P4 program generated by the P4C frontend. As P4C does not provide any interface to specify the hardware resources of a V1Model switch, we have developed a JSON-based hardware specification language (HSL) (sec.~\ref{HSLSection}) to express the \textit{hardware resource specification}s of a V1Model switch. After discussing the related works in sec.~\ref{RelatedWorks}, we briefly discuss the V1Model architecture in sec.~\ref{V1ModelArchitecture} along with the HSL (sec.~\ref{HSLSection}). Then we present the structure of the IR provided by the P4C compiler frontend (sec.~\ref{IR}). This backend uses various existing heuristic-based algorithms to allocate resources in the V1Model switch pipeline and computes the \textit{IR to hardware} resource mapping. To the best of our knowledge, this is the first scheme in the literature that considers the constraints arising from the use of stateful memory in a P4 program and its impact on the mapping decision. We discuss the details of the mapping process in sec.~\ref{MappingProblem}. Once the mapping is found, computing the hardware configuration binaries requires a straightforward translation of the mapping into hardware instruction codes. As this work does not focus on executing a P4 program on any specific instance of a V1Model switch, we leave the hardware configuration binary generation for future work. We discuss the implementation and evaluation of our compiler backend in sec.~\ref{ImplementationAndEvaluation} and conclude the paper in sec.~\ref{Conclusion}. \textbf{Why V1Model}: V1Model is the only RMT switch fully supported by the open-source P4C compiler frontend. Moreover, the V1Model architecture is supported by major programmable switch hardware vendors~\cite{harkous2020p8,opentofino}. In recent years, a large set of research works~\cite{hauser2021survey} have used the V1Model as their reference hardware architecture (either through the use of commercial hardware or the BMV2 simulator). Moreover, V1Model is similar to the abstract switch model used in P4 language version 14. Therefore, all the P4\textsubscript{14} based research works can also be mapped to this model. Finally, the latest programmable switch architecture being standardized by the P4 consortium is PSA~\cite{PSA}, and it is also similar to the V1Model architecture. The compiler backend presented in this work can be extended to the PSA architecture with a small number of modifications. For these reasons, V1Model is a representative hardware architecture for a large number of research works, and we have chosen to build the compiler backend for this architecture. \textbf{What the compiler backend is not}: The compiler backend presented in this work supports only the V1Model architecture and a subset of P4 (P4\textsubscript{16}) language constructs, which nevertheless cover a wide range of use cases. A full list of P4 constructs supported by the system can be found at~\cite{P4CB}.
Proprietary hardware can have special instructions (as \textit{extern}s~\cite{p416}) available for packet processing, and it can also have additional constraints. Our compiler backend is not a full replacement for any proprietary system. It uses heuristic algorithms for mapping a P4 program to V1Model switches, and it can sporadically reject a P4 program (like other programmable switch backends~\cite{gao2019autogenerating}) despite the existence of a valid mapping. Moreover, due to the use of heuristics, it does not guarantee the optimality of the computed mapping. Finally, the compiler backend only covers the mapping phase shown in fig.~\ref{fig:CompilerArchitecture} and does not cover the \textit{hardware configuration generation} phase. \section{Related Works} \label{RelatedWorks} In~\cite{bosshart2014p4}, the authors introduced an RMT architecture based \textit{abstract switch forwarding model} and presented the P4 programming language to program such switches in a protocol-independent manner. The authors also presented the high-level structure of a two-stage P4 language compiler. Though the work briefly discussed the parser and TDG mapping problems, a complete open-source system for the compiler backend was absent. In~\cite{gibb2013design}, the authors addressed the problem of mapping packet parsing logic to CAM-based hardware. However, its main focus was on synthesizing parser hardware circuitry. Hence it cannot be directly used in a P4\textsubscript{16} compiler backend. In~\cite{jose2015compiling}, the authors discussed the computational complexity of mapping the logical match-action tables to the physical match-action tables of an RMT switch. They presented an integer linear programming based method (for an optimal solution) and a few heuristic-based methods for computing the mapping. The system is available as an open-source project. However, it cannot support stateful memories in the P4 program, which is a crucial requirement for the in-network computation paradigm. All the works mentioned above were designed to support the initial version of the P4 language (a.k.a. P4\textsubscript{14}~\cite{p414}), and none of them provides a complete compiler backend. Moreover, the latest version of the P4 language (a.k.a. P4\textsubscript{16}) is architecturally different from P4\textsubscript{14}. Hence these works cannot be used directly for compiling P4\textsubscript{16} programs. The reference compiler for the P4\textsubscript{16} language developed by the P4 language consortium is P4C~\cite{P4C}. Its frontend can compile a P4\textsubscript{16} program for various target architectures (including the RMT architecture). It provides backend support through CPU-based simulation for two RMT architecture switches: V1Model~\cite{V1Model} and PSA~\cite{PSA}. These simulated backends execute the \textit{intermediate representation} of a P4 program over a CPU. P4C has no provision to model the hardware resources of an RMT switch and is therefore unable to consider hardware resource limits in deciding whether a P4\textsubscript{16} program can be mapped to real-life target hardware. In~\cite{wang2017p4fpga}, the authors presented an open-source P4\textsubscript{16} compiler backend for FPGA-based platforms. However, this system's basic blocks are different from the physical match-action tables used in the RMT architecture. Here, the basic blocks can execute both match and branching instructions, and based on their results, actions can be executed.
Hence, it provides a more flexible match-action capability in every node compared to the original RMT architecture. A few other open-source compilers~\cite{gao2019autogenerating,voros2018t4p4s} exist in the literature for the RMT architecture. However, they either do not support programs written in P4\textsubscript{16} as input~\cite{gao2019autogenerating} or are designed for non-RMT architecture based hardware platforms~\cite{voros2018t4p4s}. Besides these open-source systems, a few proprietary compiler backends~\cite{gao2020lyra,P4Studio} capable of supporting the P4\textsubscript{16} language for RMT switches also exist. However, they are closed source in nature and do not provide access to their internal mechanisms. \section{V1Model Architecture} \label{V1ModelArchitecture} The V1Model is an instance of a \textit{reconfigurable match-action} (RMT) architecture. Its packet processing pipeline (fig.~\ref{fig:V1ModelArchitecture}) consists of several components arranged in multiple stages. This section describes its components, the specification of different resource types, and how they process a packet. Finally, in sec.~\ref{HSLSection} we present a hardware specification language to represent a V1Model switch's resources. \begin{figure}[b] \centering \includegraphics[trim=0in 1in 0in 0, clip,scale=.345]{Pipeline.pdf} \caption{ V1Model pipeline architecture} \label{fig:V1ModelArchitecture} \end{figure} \subsection{Parser and Packet Header Vector} \label{ParserPacketHeaderVector} In the V1Model architecture, an incoming packet first goes through a TCAM-based~\cite{gibb2013design} \textit{programmable parser} (fig.~\ref{fig:RMTSingleStage}), which executes the parsing logic provided in the form of a state machine (converted to a \textit{state table} by a compiler backend). The parser contains two main building blocks: a) \textit{Header Identification Unit}: it contains a $P_B$ bit wide buffer to look ahead in the packet and identify at most $H$ headers every cycle. It also contains a TCAM capable of storing $P_L^{T}$ entries to implement the \textit{state table}. Every TCAM entry contains information on a current parsing state and values (as bit sequences) of header fields to be matched. At every cycle, at most $f_C^T$ lookup field values (each having a maximum lookup width of $f_W^T$ b) and the \textit{current state} can be looked up in the TCAM. The TCAM entries are $P_W^T$ b wide to store the lookup field values and the current state value. Every entry also contains a pointer to RAM cells storing the \textit{next parsing state} and the location of the header fields to be extracted by the \textit{extraction unit}. b) \textit{Extraction Unit}: after matching a packet in the TCAM, the information stored at the \textit{match index}-th cell of the SRAM is loaded into the \textit{extraction unit}. This unit can extract at most $P_W^{E}$ bits of data as header fields and store them in a \textit{field buffer}. At every cycle, a few header fields are extracted, and the \textit{next parsing state} is fed to the \textit{header identification} unit for matching in the TCAM in the next cycle. The header identification unit can move ahead by at most $P_{MA}$ bits in the packet to start identifying the next header fields. Every parser unit is designed for a maximum parsing rate ($P_{Rate}$). V1Model switches can deploy multiple parser units in parallel to achieve a higher packet parsing rate.
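To make the parser's per-cycle match-action semantics concrete, the following simplified Python sketch emulates one parser cycle; the data layout and field names are illustrative assumptions, not the hardware interface:

\begin{verbatim}
def parser_cycle(state, window, tcam, sram):
    """One (simplified) parser cycle: match the current state and the
    look-ahead window against the TCAM; on a hit, load the extraction
    info stored at the matching SRAM index.

    tcam : list of (state, pattern, mask) entries
    sram : per-entry dicts with 'extract' (a list of (offset, width)
           pairs), 'next_state', and 'advance' (bits to move ahead)
    """
    for idx, (st, pattern, mask) in enumerate(tcam):
        if st == state and (window & mask) == pattern:
            entry = sram[idx]          # loaded into the extraction unit
            return entry["next_state"], entry["extract"], entry["advance"]
    raise LookupError("no parser transition matched")
\end{verbatim}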
After completing the parsing, all the extracted header fields are sent from the \textit{field buffer} to a \textit{Packet Header Vector} (PHV). The PHV can store $F$ different types of fields; all ${f}_C^i$ header fields of type $i$ are ${f}_{W}^i$ bits wide ($i = 1$ to $F$). Multiple fields in the PHV can be merged to form a larger header field. Besides the parsed header fields, a PHV also stores hardware-specific metadata (e.g., ingress port, timestamp). The PHV is passed to the subsequent components ($N$ match-action stages, fig.~\ref{fig:V1ModelArchitecture}) in the pipeline through a wide \textit{header bus}. \subsection{Match-Action Stages} \label{MatchActionStagesSubSection} Next, the PHV goes through a series of $N$ match-action stages for \textit{ingress stage} processing. Each stage (fig.~\ref{fig:RMTSingleStage}) contains $T$ units of $T_{W}$ bit wide TCAM blocks, each capable of storing $T_{L}$ entries. It also contains $S$ units of $S_{W}$ bit wide SRAM blocks, each capable of storing $S_{L}$ entries. The TCAM blocks are used to implement \textit{physical match-action table}s (MATs) for ternary/range/prefix/exact matching. A fraction of the SRAM blocks ($S^M$ blocks) is used to implement hashtable based \textit{physical match-action table}s for exact matching (using a $HS_K$-way Cuckoo hashtable~\cite{pagh2004cuckoo,kirsch2010more}), and the rest are used for storing other information (e.g., action arguments, next MAT address). These smaller \textit{physical match-action tables} can be run independently or grouped together to match wider header fields within a stage. Physical MATs across the stages can be merged to implement a longer table. Header fields are supplied from the PHV to the TCAM and SRAM based physical MATs through two crossbars, TCB (${TCB}_W$ bit wide) and SCB (${SCB}_W$ bit wide), respectively. With every entry in the MATs, there is a pointer to the corresponding action information (action arguments, action instruction, address of the next MAT to be executed, etc.). On finding a match in the MATs, the corresponding action information is loaded from memory. Every match-action stage contains a separate arithmetic logic unit (ALU) for every field of the PHV for parallel computation. Two or more units can be grouped together to execute computations on larger fields. Besides the per-header-field ALUs, a fixed number of \textit{extern} units (hash, counter, register, meter, etc.) are also available in every match-action stage for special operations (e.g., hash computations, counting, storing/loading states). \begin{figure}[b] \centering \includegraphics[trim=0.1in 0in 0in 0, clip,scale=.35]{SingleStage.pdf} \caption{ A match-action stage in RMT pipeline} \label{fig:RMTSingleStage} \end{figure} Every stage can store ${A}_{C}$ VLIW instructions for all the physical MATs. Every VLIW instruction carries separate instructions for the per-header-field ALUs and the extern units. Data is provided to these processing units from the PHV through an ${ACB}_W$ bit wide crossbar (ACB). Similar to the match crossbars (TCB and SCB), every bit of this crossbar is driven by all the fields of the PHV. The action information (the action instructions themselves are stored in a dedicated memory) and the stateful memories used by the \textit{extern} units are allocated in separate chunks of the available SRAM blocks ($S^A$ and $S^S$ SRAM blocks, respectively).
Every stage contains $M_{P}$ memory ports (each $M_{BW}$ bit wide) capable of reading/writing an SRAM cell in one clock cycle. These ports are used to read/write data from/to the SRAM blocks for exact MATs, action memory, and stateful memories. Every TCAM-based MAT can store a fixed number of match entries (up to its full capacity). On the other hand, an SRAM-based MAT can store a variable number of entries because the same SRAM blocks are allocated to store match entries, action entries, and stateful memories. The number of SRAM blocks ($S^M$, $S^A$, and $S^S$ out of the total $S$ blocks available) used for exact-match MATs, action memory, and stateful memory depends on the \textit{logical to physical MAT mapping} algorithms of sec.~\ref{sec:TDGMapping}. To optimize SRAM usage, the RMT architecture allows \textit{word packing}, creating a \textit{packing unit} of multiple SRAM blocks. Multiple entries (match, action, or stateful memory entries) can be stored in one unit to reduce SRAM waste. This \textit{variable packing} format does not impact the match performance, and the match units can match a packet against multiple words stored in the same SRAM block. \textbf{Packet Replication Engine and Traffic Manager (PRE \& TM)}: After finishing the ingress stage processing, the packet is submitted to the egress port's queue. The \textit{PRE \& TM} is a non-programmable component responsible for handling the packet's life cycle in the port's queues, scheduling the packets, and replicating the packets if necessary. Besides these, there are two more fixed-function components for computing and verifying the checksum of a packet. As they are fixed-function blocks, we do not discuss their details. \textbf{Egress stage}: Once the packet is picked from the egress port's queue, it undergoes the egress stage processing. The egress stage is similar to the \textit{ingress stage} and shares the same physical components for processing. The compiler backend allocates the resources between the ingress and egress threads in such a way that they do not hamper each other's packet processing activities. \textbf{Deparser}: After the egress stage processing is finished, a packet goes through a deparser block. It recombines the data from the packet header vector fields with the payload. Then the packet is finally propagated through the outgoing channels. \subsection{V1Model Hardware Specification Language}\label{HSLSection} A compiler backend requires information about the available resources of a V1Model switch. However, the openly available P4C compiler does not provide any interface to model them. The packet header vector, the programmable parser, and the match-action stages are the major programmable components in the V1Model architecture. We developed a JSON-based hardware specification language (\textit{HSL}) to specify the available resources in the programmable components of a V1Model architecture based switch. The language allows specifying how many header fields can be accommodated in a PHV and the bitwidths of these fields. Similarly, it allows specifying the dimensions of the various hardware resources used in the programmable parser (sec.~\ref{ParserPacketHeaderVector}). It also allows specifying the number of match-action stages and the amount of resources in every stage, as described in sec.~\ref{MatchActionStagesSubSection}. Appendix~\ref{APP:ExampleV1ModelSpecs} shows an example hardware specification of a V1Model switch.
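For a flavour of the format, a minimal, hypothetical HSL fragment is sketched below as a Python dict mirroring the JSON structure. The key names and values are illustrative only (loosely inspired by the figures reported in~\cite{bosshart2013forwarding}) and are not the exact HSL schema; see the appendix for a real example:

\begin{verbatim}
# Hypothetical HSL fragment (illustrative key names and values):
v1model_spec = {
    "phv": {"fields": [{"bitwidth": 8,  "count": 64},
                       {"bitwidth": 16, "count": 96},
                       {"bitwidth": 32, "count": 64}]},
    "parser": {"tcam_entries": 256, "tcam_width": 40,
               "max_lookup_fields": 4, "extraction_width": 512},
    "pipeline": {"stages": 32,
                 "per_stage": {"tcam_blocks": 16, "tcam_width": 40,
                               "tcam_depth": 2048,
                               "sram_blocks": 106, "sram_width": 112,
                               "sram_depth": 1024,
                               "match_crossbar_width": 640,
                               "action_crossbar_width": 1280,
                               "vliw_slots": 32}}}
\end{verbatim}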
\section{P4C Intermediate Representation} \label{IR} The P4C frontend takes the \textit{architecture description} of the V1Model architecture and a P4 program as input. The intermediate representation generated by the P4C frontend (along with the midend) is a \textit{target-independent representation} (IR) of the P4 program. The JSON representation of the IR contains the following major components: \subsection{Header Information}\label{IR:Header} The P4 language provides language constructs to represent a packet header in an object-oriented style, where a header object can contain multiple fields. The IR contains a list of all the headers (including the packet metadata header) used in the P4 program along with their member fields and their bitwidths. \subsection{Parse Graph}\label{IR:ParseGraph} The \textit{parse graph} is a directed acyclic graph representation of the parser state machine (the parsing logic given in the P4 program). It defines the sequence of headers inside a packet. Every node in the \textit{parse graph} represents a header type, and the edges represent transitions of the parser state machine. An edge from node $a$ to node $b$ indicates that after parsing header $a$, based on a specific value of one of its member header fields, header $b$ will be parsed. \subsection{Table Dependency Graph}\label{IR:TDG} The processing logic for the \textit{ingress} and \textit{egress} stages is written using the imperative programming constructs given by the P4 programming language. The P4C frontend converts this logic into a flow of \textit{logical match-action tables} and their \textit{dependencies}. Thus each stage's control flow is converted into a \textit{Table Dependency Graph} (TDG). A TDG is a directed acyclic graph where every node represents a logical MAT, and every edge represents the dependency between two logical MATs. Each node describes the set of fields (header and/or metadata fields) to be matched against the table entries, the types of matching (exact, prefix, ternary, etc.), and the maximum number of entries to be stored in memory for this table. It also describes the set of actions to be executed based on the match result and the address of the next table to be loaded after executing the current table. \textbf{Non-stateful memory dependency}: Every path in the TDG represents a chain of logical MATs. The following four types of dependencies can arise between any two logical MATs (\textit{A} comes first and then \textit{B}) in the same path. These dependencies do not involve access to any stateful memory used in the P4 program. \hspace{2mm} \textbf{1. Match dependency}: A field is modified by the actions of \textit{A}, and it is a match field of \textit{B}. Hence execution of \textit{B}'s match part must start after table \textit{A}'s action part has finished execution. \hspace{2mm} \textbf{2. Action dependency}: A field modified by \textit{A} is not matched in \textit{B}. The same field is also modified by \textit{B}. The modification done by the action of the later table \textit{B} becomes the final result. Hence execution of \textit{B}'s action part must start after table \textit{A}'s action part has finished execution. \hspace{2mm} \textbf{3. Successor dependency}: Whether table \textit{B} will be executed or not is decided by the match result (hit or miss) of table \textit{A}. Hence execution of \textit{B}'s match part must start after table \textit{A}'s match part has finished execution. \hspace{2mm} \textbf{4.
Reverse match dependency}: A match field of table \textit{A} is modified by the action part of table \textit{B}. Hence execution of \textit{B}'s action part must start after table \textit{A}'s match part has finished execution. \textbf{Stateful memory dependency}: Another type of dependency arises between two logical MATs on the same or different paths if they access the same set of \textit{stateful memories} (counter, meter, register, etc.). The P4 language provides two different ways of accessing stateful memories: a) \textit{direct}: these stateful memories are attached to a single logical MAT and do not create any dependency on other logical MATs; and b) \textit{indirect}: these are not directly attached to any specific MAT, and multiple MATs can access them. Hence they create a dependency among the tables accessing them. The SRAM cells are attached to only one match-action stage in the RMT pipeline. Hence, if a set of SRAM blocks in stage X of the RMT pipeline is allocated for an indirect stateful memory, the two or more logical MATs accessing it must be mapped onto that same match-action stage X. We term this a \textbf{stateful memory dependency}. These dependencies influence how the logical MATs in a TDG will be mapped on the physical MATs of an RMT pipeline. The predecessor node in every type of dependency guides the starting clock cycle of its successor~\cite{jose2015compiling,bosshart2013forwarding}. Hence, they also determine the total processing delay of a packet undergoing the processing defined by a P4 program. However, the TDG in the IR itself has no mechanism to represent the various types of dependencies directly. A compiler backend needs to analyze the TDG to identify the dependency between two logical MATs before computing the mapping. \section{IR to Hardware Resource Mapping} \label{MappingProblem} The goal of the \textit{mapping phase} shown in fig.~\ref{fig:CompilerArchitecture} is to map a given P4 program to the resources of a V1Model switch. A compiler backend takes two types of information as input for this purpose: a) the hardware resource specification of the switch described using the HSL presented in sec.~\ref{HSLSection} and b) the hardware-independent intermediate representation (IR) (sec.~\ref{IR}) of the given P4 program generated by the P4C compiler frontend. To determine whether the given P4 program is realizable over a V1Model switch, the compiler backend needs to address three different mapping problems: a) header mapping: mapping the IR's header fields to the PHV fields; b) parse graph mapping: mapping the parse graph to the parser hardware resources; and c) TDG mapping: mapping the TDG to the resources of the match-action stages. While addressing these problems, the compiler backend needs to ensure that a computed mapping does not violate the constraints (both architectural and resource constraints) of the target hardware and the control flow of the P4 program. Besides computing a valid mapping, the backend also needs to maximize the concurrency and resource usage efficiency of the P4 program. Computing an \textit{optimal and valid mapping} of a P4 program to a V1Model switch is a computationally intractable problem~\cite{jose2015compiling,vass2020compiling}. Our compiler backend uses heuristic-based algorithms to find a mapping while maintaining the architectural and resource constraints of an RMT switch.
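As a first ingredient of the mapping, the backend must classify the strictest dependency between pairs of logical MATs on a TDG path (sec.~\ref{IR:TDG}). A simplified Python sketch of this classification is shown below; the attribute names are our assumptions, not the implementation's API:

\begin{verbatim}
def classify_dependency(a, b):
    """Strictest non-stateful dependency of a later table `b` on an
    earlier table `a` on the same TDG path. Each table is assumed to
    expose two sets of field names (match_fields, action_writes) and,
    for successor dependencies, the table whose hit/miss gates it."""
    if a.action_writes & b.match_fields:
        return "match"          # A's action writes a field B matches on
    if a.action_writes & b.action_writes:
        return "action"         # both actions write the same field
    if getattr(b, "gated_by", None) is a:
        return "successor"      # B's execution depends on A's hit/miss
    if b.action_writes & a.match_fields:
        return "reverse match"  # B's action writes a field A matches on
    return "none"
\end{verbatim}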
We describe how our compiler backend computes these three types of mappings through the rest of this section, using a simple \textit{QoS-modifier} program (see appendix~\ref{App:QoSModiferP4Program} for the P4 source code). In this program, the control plane supplies a separate QoS value for IPv4 and IPv6 packets forwarded through an individual port. The P4 program stores these values in two indirect stateful memories (register arrays) on receiving the control packet from the control plane (handled in the MAT named \textit{match\_control\_packet}). For every valid IPv4 packet, if it matches the entries configured in \textit{ipv4\_nexthop}, that packet's \textit{diffserv} value is replaced with a value read from the register array (\textit{ipv4\_port\_qos}), and its IPv6 destination is set to a server's IP address. Then the packet is matched with the \textit{ipv6\_nexthop} MAT to set its IPv6 \textit{trafficClass} after reading from another register array (\textit{ipv6\_port\_qos}). In the example, we have used the benchmark V1Model switch specified in appendix~\ref{APP:ExampleV1ModelSpecs}. \subsection{Header Mapping} \label{sec:HeaderMapping} V1Model switches contain a limited number of fixed-bitwidth fields in the PHV (provided as a part of the hardware specification). The header fields used in the program need to be accommodated using these PHV fields in order to execute a P4 program. For example, a 17b wide header field can be accommodated using three 8b PHV fields or one 32b PHV field. In the first case, the amount of resource \textit{waste} is 7b; in the latter case, 15b. Hence, the mapping algorithm also needs to minimize the amount of waste. Besides this, as V1Model switches contain two different processing threads (for the ingress and egress stages) sharing the same hardware pipeline, every PHV field needs to be explicitly allocated to one of them. The problem of optimally allocating the PHV fields (the set of items) to accommodate the P4 program's header fields (the set of bins) can be modeled as a \textit{multiple knapsack} problem~\cite{puchinger2010multidimensional}: we need to optimally fill up the set of bins using the set of items while minimizing the waste. As the problem is computationally intractable, we used a heuristic algorithm (fill the largest header field first, using the largest-bitwidth PHV field available) to compute the mapping. The algorithm first analyzes the header information in the IR (sec.~\ref{IR:Header}) and makes two disjoint sets of header fields used in the ingress and egress stages. The metadata fields need to be replicated for both stages. The header fields are sorted in descending order of their bitwidth. For each header field in sorted order, the algorithm picks the largest-bitwidth PHV field among the remaining PHV fields that leads to the least amount of waste. The process continues until the bitwidth of a header field is filled with the selected PHV fields. Table~\ref{tab:PhvUsageOfQosModifer} shows the bitwidths of the header fields used in the QoS-Modifier program (appendix~\ref{App:QoSModiferP4Program}). It also shows the bitwidths of the PHV fields, and how many of them are required to accommodate those header fields according to the mappings computed by the compiler backend.
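A minimal Python sketch of one such greedy variant is given below; the data structures are illustrative, and this is not the exact implementation:

\begin{verbatim}
def map_headers_to_phv(header_bits, phv_pool):
    """Greedy header-field-to-PHV allocation.

    header_bits : list of header-field bitwidths from the IR
    phv_pool    : dict {phv_bitwidth: available_count}
    Returns {header_index: [phv widths used]} or None if infeasible.
    """
    mapping = {}
    order = sorted(range(len(header_bits)),
                   key=lambda i: header_bits[i], reverse=True)
    for i in order:                      # largest header field first
        need, used = header_bits[i], []
        while need > 0:
            avail = [w for w, n in phv_pool.items() if n > 0]
            if not avail:
                return None              # program not mappable
            # largest field that does not overshoot; otherwise the
            # smallest available one (least waste on the last chunk)
            fit = [w for w in avail if w <= need]
            w = max(fit) if fit else min(avail)
            phv_pool[w] -= 1
            used.append(w)
            need -= w
        mapping[i] = used
    return mapping
\end{verbatim}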
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{4}{|c|}{\textbf{P4 Program Header Fields}} & \multicolumn{2}{c|}{\textbf{PHV Fields}} \\ \hline Bitwidth & \multicolumn{1}{l|}{Count} & \multicolumn{1}{l|}{Bitwidth} & Count & Bitwidth & Count \\ \hline 1 & 1 & 13 & 1 & \multirow{3}{*}{8} & \multirow{3}{*}{49} \\ \cline{1-4} 2 & 1 & 16 & 6 & & \\ \cline{1-4} 3 & 3 & 19 & 2 & & \\ \hline 4 & 3 & 20 & 1 & \multirow{2}{*}{16} & \multirow{2}{*}{17} \\ \cline{1-4} 7 & 4 & 32 & 7 & & \\ \hline 8 & 11 & 48 & 2 & \multirow{2}{*}{32} & \multirow{2}{*}{24} \\ \cline{1-4} 9 & 5 & 128 & 2 & & \\ \hline \end{tabular} \caption{Bitwidths of the \textit{QoS-Modifier} program's (appendix~\ref{App:QoSModiferP4Program}) header fields and the bitwidths of the PHV fields required to accommodate them} \label{tab:PhvUsageOfQosModifer} \end{table} \subsection{Parse Graph Mapping} \label{sec:ParseGraphMapping} The IR gives a graph (the \textit{parse graph} of sec.~\ref{IR:ParseGraph}) representation of the parser state machine. On the other hand, the parser in V1Model switches follows \textit{match-action} semantics. In every cycle, the parser hardware can look into the packet header, identify a limited number of header fields, and \textit{match} them with the header-identification unit's TCAM entries (which work as a state table). On finding a match in the TCAM, the corresponding \textit{action} (header field extraction by the \textit{extraction unit}) is executed. The compiler backend needs to convert the \textit{parse graph} into a state table, which must fit within the dimensions of the parser TCAM. Similarly, the field extraction operations must fit within the capacities of the \textit{extraction unit}. Generating an optimal state table and the relevant TCAM entries from a parse graph is a computationally intractable problem~\cite{gibb2013design}. Our compiler backend uses the algorithm presented in~\cite{gibb2013design}. This algorithm tries to find clusters in the \textit{parse graph} that can be accommodated within the per-cycle capacity of the parser hardware (look-ahead into the packet header and extraction of header fields). For every unique pair of a cluster and an edge (a transition in the parser state machine) in the cluster graph, one entry is required in the TCAM-based parse table. The parser hardware executes the match-action logic for every cluster in one cycle and moves to the \textit{next parsing state}. We converted the source code of~\cite{gibb2013design}\footnote{In~\cite{gibb2013design} the authors also proposed several optimizations to reduce the number of clusters. We leave integrating them as future work.} to work with the IR generated by P4C and used it in the compiler backend to generate the \textit{state table} entries. If the entries cannot be accommodated in the TCAM, the P4 program is rejected. As an example, the compiler backend was used to compute the \textit{parse graph mapping} for the P4 program shown in appendix~\ref{App:QoSModiferP4Program} over the V1Model hardware described in appendix~\ref{APP:ExampleV1ModelSpecs}. Fig.~\ref{fig:ParseGraphMappingExample} shows the corresponding \textit{parse graph}, the identified clusters in it, and the corresponding TCAM entries.
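For intuition, a grossly simplified Python sketch of the clustering idea follows. Unlike~\cite{gibb2013design}, it greedily packs a topologically ordered parse graph under only two of the per-cycle limits; the function and parameter names are our assumptions:

\begin{verbatim}
def cluster_parse_graph(order, header_bits, max_headers,
                        max_extract_bits):
    """Greedily pack parser states into per-cycle clusters.

    order            : parse-graph nodes in topological order
    header_bits      : dict node -> bits extracted for that header
    max_headers      : H, headers identifiable per cycle
    max_extract_bits : P_W^E, extraction capacity per cycle
    """
    clusters, current, bits = [], [], 0
    for node in order:
        w = header_bits[node]
        if current and (len(current) == max_headers
                        or bits + w > max_extract_bits):
            clusters.append(current)     # close the current cluster
            current, bits = [], 0
        current.append(node)
        bits += w
    if current:
        clusters.append(current)
    # One TCAM entry is then needed per (cluster, outgoing transition).
    return clusters
\end{verbatim}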
\begin{figure}[] \centering \includegraphics[trim=0.0in 0in 0in 0, clip,scale=.34]{ParseTableExample.pdf} \caption{The \textit{parse graph}, identified clusters in it, and the corresponding TCAM entries in the \textit{state table} for the P4 program of appendix~\ref{App:QoSModiferP4Program}} \label{fig:ParseGraphMappingExample} \end{figure} \subsection{TDG Mapping}\label{sec:TDGMapping} The TDG expresses the packet processing logic of a P4 program as a control flow among logical match-action tables guided by their dependencies. The compiler backend needs to map these logical MATs over the physical MATs of the V1Model switch while preserving the original control flow. The computed mapping must not violate the following constraints. \begin{itemize} \item The \textit{match width} of all the logical MATs mapped to any match-action stage must not exceed the crossbar width for TCAM-based MATs ($TCB_W$) and SRAM-based MATs ($SCB_W$). \item The bitwidth of the match fields of every logical MAT must not exceed the total match width of the mapped physical MATs. \item The total number of entries required for any logical MAT must not exceed the capacity of the mapped physical MATs. \item The total width of the fields used as action inputs must not exceed the width of the action crossbar of a match-action stage. \item The total number of actions (using ALU or extern units) executed on an individual PHV field $f$ in a match-action stage must not exceed the available number of actions for $f$ in that stage. \item The total amount of SRAM required by logical MATs for action memories and stateful memories (using a register, meter, counter, etc.) must not exceed the available SRAM volume in a match-action stage. \item If two or more logical MAT nodes access the same indirect stateful memory, they need to be mapped on the same physical match-action stage. \item The physical match-action tables in a single stage can be executed concurrently. However, due to the dependencies (sec.~\ref{IR:TDG}), the compiler backend needs to map the logical MATs over the physical MATs in such a way that the dependencies are not violated. \end{itemize} Besides ensuring the constraints, the compiler backend needs to minimize the number of match-action stages and the amount of resource usage in each of the used match-action stages. The first leads to a reduced processing delay of a packet~\cite{jose2015compiling}. The second leads to an overall reduction in resource usage and power consumption by a V1Model switch. Finding a mapping that has optimal resource usage and does not violate the above-mentioned constraints is a computationally intractable problem~\cite{jose2015compiling,vass2020compiling}. \subsubsection{\textbf{Our approach}}\label{OurApproach} The TDG mapping problem can be modeled as an \textit{integer linear programming}~\cite{jose2015compiling} problem, and an optimal mapping can be determined (if it exists). However, it may take a large amount of time~\cite{jose2015compiling,vass2020compiling} to compute such a mapping. Hence we applied heuristic-based algorithms to compute the mapping. The algorithm works as follows: first, it preprocesses (sec.~\ref{TDG:Preprocessing}) the TDG to transform the conditional nodes into MATs. Then it loads the TDG into a graph-based data structure with the dependencies among the nodes as edges. It checks whether the \textit{stateful memory dependencies} in the TDG can be met on the target hardware or not.
The algorithm also transforms the logical MATs to meet the target hardware constraints where necessary. Finally, it maps the logical MATs to the physical match-action stages. If a dependency chain in the TDG contains $n$ logical MATs, it requires at least as many match-action stages in the V1Model pipeline. Similar to~\cite{jose2015compiling}, we define the \textit{level} of a logical MAT as the number of \textit{dependencies} remaining in any \textit{dependency chain} starting from that node. Assigning the \textit{level}s requires a topological ordering of the logical MATs in the TDG. By construction, all logical MATs at the same level have no dependency on one another. The mapping algorithm then tries to map all the logical MATs of the same \textit{level}, and this process is repeated for all \textit{level}s.

\begin{figure}[h] \centering \includegraphics[trim=0.0in 0in 0in 0, clip,scale=.34]{TDGPreprocessing.pdf} \caption{Preprocessing conditional branching and stateful memory access in P4; \textit{Sd\_mem. Dep. = Stateful Memory Dependency}} \label{fig:TDGPreprocessing} \end{figure}

\paragraph{TDG preprocessing}\label{TDG:Preprocessing}

The TDG generated by the P4C frontend transforms every conditional branching instruction (if/if-else pair) in the P4 program into three logical MATs. Fig.~\ref{fig:TDGPreprocessing}a shows the TDG of the P4 code presented in appendix~\ref{App:QoSModiferP4Program}. The topmost node evaluates the conditional expression, and the next two nodes represent its two branches (one for true, one for false). However, the conditional expression is evaluated in the action portion of the first MAT, and the result of the evaluation is not passed to the next nodes. Hence a packet could successfully match both of the next nodes, which is incorrect. To avoid this, we replace the conditional evaluation step with a conditional assignment available~\cite{bosshart2013forwarding,opentofino} in V1Model switches. It assigns 1 to an auxiliary header field if the expression evaluates to true and 0 otherwise (fig.~\ref{fig:TDGPreprocessing}b). This field is added to the match fields of the next nodes; both branch MATs are executed for a packet, but the packet successfully matches only one of them.

\textit{Stateful memory dependency} can exist between logical MATs in the same or different paths in the TDG. If an indirect stateful memory is accessed by two (or more) logical MATs in the same path, then a non-stateful memory dependency (sec.~\ref{IR:TDG}) exists among those logical MATs, so they need to be mapped to different physical match-action stages. However, they need to access the same stateful memory, and the RMT architecture does not allow access to the same SRAM block from different match-action stages. Therefore, such P4 programs cannot be implemented using V1Model switches (though the P4 language syntax allows them), and our algorithm rejects them. On the other hand, logical MATs on different paths in the TDG with a stateful memory dependency need to be mapped to the same physical match-action stage, and hence must be assigned the same \textit{level}. Besides this, a logical MAT may need to be divided into two separate logical MATs for a valid mapping. For example, consider the three logical MATs ipv4\_nexthop, ipv6\_nexthop, and match\_control\_packet in the TDG of fig.~\ref{fig:TDGPreprocessing}a.
Here match\_control\_packet writes two stateful memories, ipv4\_port\_qos and ipv6\_port\_qos; ipv4\_nexthop reads only ipv4\_port\_qos, and ipv6\_nexthop reads only ipv6\_port\_qos. Now, there exists a \textit{match dependency} between ipv4\_nexthop and ipv6\_nexthop; hence they cannot be mapped to the same stage. Therefore, the stateful memories accessed by these logical MATs also cannot be mapped to the same stage. Assume ipv4\_port\_qos and ipv6\_port\_qos are mapped to stages X and Y in the pipeline; then ipv4\_nexthop and ipv6\_nexthop must be mapped to stages X and Y, respectively. However, this would require match\_control\_packet to be mapped to both stage X and stage Y, which is clearly impossible. To accommodate such scenarios, we bifurcate match\_control\_packet's action into two halves based on the stateful memory accesses (fig.~\ref{fig:TDGPreprocessing}c). The first half contains the original match and the action part up to the access to ipv4\_port\_qos; another logical MAT (match\_control\_packet--Part-2) is created without any match field and with only the remaining part of match\_control\_packet's actions (accessing ipv6\_port\_qos). This new logical MAT is added between match\_control\_packet and its successor in the TDG.

\paragraph{Level Generation} \label{Mapping:LevelGeneration}

In the TDG, there can be multiple dependencies between logical MATs in the same path; however, the strictest dependency (among the four types of non-stateful memory dependencies) is the main factor affecting the mapping decision~\cite{jose2015compiling}. The mapping algorithm keeps the \textit{strictest} dependency and removes the others. Besides this, all the logical MAT nodes having a stateful memory dependency must be assigned the same \textit{level} to ensure that they are mapped to the same physical match-action stage. After the preprocessing step, the TDG nodes are topologically ordered to label them with their appropriate \textit{level}s. This ordering ensures that every node with any non-stateful memory dependency is assigned a higher \textit{level} than its successors. Finally, if two neighboring nodes in the TDG have only successor or reverse-match dependencies, or no dependency at all, they can be mapped to the same physical match-action stages (as they can be executed concurrently or speculatively) and are assigned the same \textit{level}. Fig.~\ref{fig:LevelAndMapping} shows the preprocessed version of the TDG shown in fig.~\ref{fig:TDGPreprocessing}, along with the \textit{level}s assigned to every node.

\begin{figure}[h] \centering \includegraphics[trim=0.0in 2in 0in 0, clip,scale=.34]{LevelAndMapping.pdf} \caption{The TDG of fig.~\ref{fig:TDGPreprocessing}a after preprocessing, the assigned \textit{level} of every node, and the physical match-action stages after mapping} \label{fig:LevelAndMapping} \end{figure}

\paragraph{Mapping Logical Tables}

After \textit{level} generation, all the logical MATs with the same \textit{level} can be mapped to the same physical match-action stages. They can be divided into two subsets. The first subset consists of logical MATs with stateful memory dependencies among them; these must be mapped to the very stage where the corresponding indirect stateful memories are mapped. Similar to existing commodity RMT switches~\cite{opentofino}, our compiler backend does not support spreading indirect stateful memories over multiple stages. The resulting level-then-place loop is sketched below.
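The following condensed sketch shows the shape of this loop. The names are illustrative only; \texttt{fits} and \texttt{place} stand for the per-stage resource bookkeeping sketched earlier (here closing over the stage and MAT tables), and the real backend additionally performs table splitting, memory packing, and the heuristic ordering described next.

\begin{verbatim}
def assign_levels(tdg):
    """`tdg` maps each logical MAT name to the names of the MATs that
    depend on it (only the strictest dependency per pair is kept; every
    MAT is assumed to appear as a key).  The level of a node is the
    length of the longest dependency chain starting from it, so nodes
    of equal level are mutually independent."""
    memo = {}
    def level(node):
        if node not in memo:
            succs = tdg.get(node, ())
            memo[node] = 1 + max(map(level, succs)) if succs else 0
        return memo[node]
    return {node: level(node) for node in tdg}

def map_tdg(tdg, stateful_groups, n_stages, fits, place):
    """Place MATs level by level.  Each group of MATs tied together by
    a stateful memory dependency must land in a single stage; the
    remaining MATs of the level are placed first-fit over consecutive
    stages."""
    levels = assign_levels(tdg)
    first = 0                            # first stage open to this level
    for lvl in sorted(set(levels.values()), reverse=True):
        mats = {m for m, l in levels.items() if l == lvl}
        used = set()
        for group in (set(g) & mats for g in stateful_groups):
            if not group:
                continue
            s = next((s for s in range(first, n_stages)
                      if all(fits(s, m) for m in group)), None)
            if s is None:
                raise ValueError("P4 program rejected: no feasible stage")
            for m in group:
                place(s, m)
            used.add(s)
            mats -= group
        for m in sorted(mats):           # deterministic ordering
            s = next((s for s in range(first, n_stages)
                      if fits(s, m)), None)
            if s is None:
                raise ValueError("P4 program rejected: no feasible stage")
            place(s, m)
            used.add(s)
        if used:                         # later levels must start after
            first = max(used) + 1        # the stages used by this one
    return levels
\end{verbatim}

The TCAM-first preference and the earlier-in-the-TDG tie-breaking described next plug into the ordering of \texttt{sorted(mats)}.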
In the placement step, the mapping algorithm first maps each set of logical MATs tied together by stateful memory dependencies to a single one of the available physical match-action stages. The second subset of logical MATs contains no match, action, or stateful dependencies, so these logical MATs can be executed concurrently in the same match-action stage. However, a single match-action stage may not contain enough hardware resources to accommodate all of them; in that case, they are mapped to one or more consecutive match-action stages. Logical MATs from these two sets can be ordered and selected for mapping to the physical match-action stages using various heuristics~\cite{jose2015compiling}. In this work, we prioritized the logical MATs with non-exact (ternary, lpm, or prefix) match fields and mapped them to TCAM-based physical match-action blocks. Next, the logical MATs with exact match fields are mapped to SRAM-based physical match-action units; these logical MATs can spill into TCAM if the stage runs out of SRAM. Ties are broken by first mapping the logical MAT that appears earlier in the TDG. Fig.~\ref{fig:LevelAndMapping} shows the logical MAT nodes of the TDG shown in fig.~\ref{fig:TDGPreprocessing} and the physical match-action stages to which they are mapped.

In allocating SRAM blocks for the match, action, and stateful memory entries, the mapping algorithm utilizes the \textit{memory packing} feature of the RMT architecture. The mapping algorithm tries to store multiple entries in a \textit{packing unit} of up to $p_f$ SRAM blocks to reduce SRAM waste and fragmentation. The value of $p_f$ is configurable at compile time. Currently, our compiler backend allocates SRAM at block-level granularity. As a result, an SRAM block is allocated exclusively to the match or action entries of a single MAT, or to an indirect stateful memory. This can waste SRAM when only a small number of entries is required. We leave improving the SRAM utilization as future work. The number of action entries required for a logical MAT can either be determined at compile time~\cite{robin2022clb} or be unknown beforehand~\cite{jose2015compiling}. Our compiler backend can reserve either a fixed number of action entries or one action entry for every MAT entry of a logical MAT.

\section{Implementation and Evaluation} \label{ImplementationAndEvaluation}

\subsection{Implementation} \label{Implementation}

Our P4 compiler backend is implemented entirely in the Python 3 programming language. For the frontend, it relies on the P4 consortium's reference compiler implementation (P4C~\cite{P4C}), which parses the P4 source code and generates an intermediate representation in JSON format. Our backend parses this JSON data and stores it in a graph-based data structure. For computing the \textit{parse graph} to \textit{state table} representation for the parser TCAM, we relied on the algorithms proposed in~\cite{gibb2013design}. That project's source code was written in Python 2 and was not designed for P4C's intermediate representation of P4\textsubscript{16}. We ported the source code to Python 3 and integrated it into our project with a moderate level of modifications. All source code of the project is publicly available~\cite{P4CB} under an open-source license.

\subsection{Evaluation} \label{Evaluation}

In this section, we analyze the performance of the P4 compiler backend presented in this work.
Evaluating and comparing its performance with other compiler backends is challenging for several reasons: a) There is no openly available complete compiler backend for RMT switches (i.e., one that can compute all three types of mapping for a P4 program), so we cannot compare the overall performance of our compiler backend. b) To the best of our knowledge, no benchmark P4 programs with complete information about their resource consumption in the RMT hardware pipeline are available. A few research works report the resource consumption of P4 programs compiled with proprietary compilers. However, these target hardware platforms often contain various externs capable of executing complex actions, and such externs can significantly impact the mapping of a P4 program. Hence, it is not appropriate to use them for comparison without knowing the details of the target hardware and the relevant mapping algorithm. Moreover, a large number of research works are based on the older version of the P4 language (P4\textsubscript{14}). P4C can compile both P4\textsubscript{14} and P4\textsubscript{16} programs targeted at V1Model switches. However, we have found several instances~\cite{dang2016paxos,sivaraman2015dc} of P4\textsubscript{14} programs used in the literature that are not compilable with P4C due to a lack of proper backward compatibility. Besides this, we have found that several P4\textsubscript{16} programs~\cite{dang2016paxos,ding2019estimating} are also not compilable with P4C due to changes in some of the APIs of the BMV2~\cite{BMV2} implementation. To this end, we have selected the following four P4 programs to evaluate our compiler backend's performance.

\textit{a) IPv4/IPv6 QoS modifier}: This program is discussed in sec.~\ref{MappingProblem}, and its P4 source code is presented in appendix~\ref{App:QoSModiferP4Program}.

\textit{b) Simple layer-2/3 forwarding}: This P4 program is designed for simple layer-2/3 packet forwarding and written in P4 version 14. It is compatible with P4\textsubscript{16} and was used in the TDG mapper proposed in~\cite{jose2015compiling}.

\textit{c) Complex layer-2/3 forwarding}: This P4 program is a more complex version of the previous one and requires more resources. It is also written in P4 version 14, is compatible with P4\textsubscript{16}, and was likewise used in the TDG mapper proposed in~\cite{jose2015compiling}. Both versions of the layer-2/3 forwarding program were adopted after minor modifications to make them compatible with P4C. The TDG mapping of this P4 program is mainly influenced by the availability of memory (SRAM and TCAM) in the pipeline stages. Hence, it can be considered a memory-intensive P4 program.

\textit{d) Traffic Anonymizer}: This program is an implementation of the scheme proposed in~\cite{kim2019ontas} to anonymize a packet's content. The program is fully realizable using Tofino~\cite{tofino2} switches, and its BMV2-based source code is available as an open-source project. This program has a small stateful memory (SRAM and TCAM) requirement; however, it has a complex TDG and mainly requires computational power in various stages. Hence, it can be considered a compute-intensive P4 program.

The first program is selected to showcase some important features of our compiler backend; the next two are selected because their resource consumption is available in the literature~\cite{jose2015compiling}. The last one is selected because it is reported to be realizable using Tofino~\cite{tofino2,opentofino} switches.
For each P4 program, we generated the intermediate representation (IR) using the P4C frontend. We used the V1Model switch described in appendix~\ref{APP:ExampleV1ModelSpecs} as the benchmark hardware. We provided its hardware specification and the IR to the compiler backend as input, and then used the backend to map the IR of the P4 programs to this benchmark hardware. As there is no open-source compiler available for computing the header and parser mapping of a P4 program, we present only the results of our compiler backend for these two mappings. For the TDG mapping, however, we computed the mapping using the TDG mapper presented in~\cite{jose2015compiling} (with the \textit{First-Fit-Decreasing} heuristic) and compared it with the mapping generated by our backend. The mapping algorithm of the TDG mapper of~\cite{jose2015compiling} reserves a fixed number of SRAM blocks (16 blocks per stage~\cite{jose2015compiling}) for accommodating action memories and cannot dynamically adjust the number of action entries (for one or more logical MATs) beyond this limit of 16 SRAM blocks. As it reserves a fixed number of SRAM blocks, it does not need to compute the mapping of the action entries over the physical MAT stages. Besides the heuristic-based mapping, this TDG mapper~\cite{jose2015compiling} can produce an optimal mapping of the logical MATs to physical MATs; however, computing the optimal mapping using the integer linear programming (ILP) method requires a large amount of time~\cite{jose2015compiling,vass2020compiling}. As our goal is a compiler backend that can quickly decide the realizability of a P4 program, we have not included the ILP variant of the TDG mapper proposed in~\cite{jose2015compiling}. In contrast to the TDG mapper of~\cite{jose2015compiling}, we configured our compiler backend to allocate up to 16K action entries for every logical MAT in every stage. All of our experiments were run on an HP laptop with an Intel Core-i7 processor and 24 GB RAM, running Ubuntu 20.04.

\begin{table*}[t] \centering{ \small{ \begin{tabular}{|c|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Program Name\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Header Fields\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\sum$ Bitwidth of Header Fields\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\sum$ Bitwidth of Req. PHV Fields\end{tabular} & \begin{tabular}[c]{@{}c@{}}Waste (\%)\end{tabular} & Ex. Time (in ms) \\ \hline QoS-Modifier & 66 & 1288 & 1432 & 10.05 & 2 \\ \hline L2L3-Simple & 58 & 1064 & 1208 & 11.92 & 2.03\\ \hline L2L3-Complex & 126 & 2912 & 3088 & 5.69 & 2.12 \\ \hline Traffic-Anony & 94 & 1976 & 2112 & 6.43 & 2.92\\ \hline \end{tabular} } } \caption{Total number of header fields used in the P4 programs (both ingress and egress stages), total bitwidth of the header fields, total bitwidth of required PHV fields, percentage of waste in PHV fields, and the total execution time required for computing the \textit{\textbf{header mapping}}.} \label{tab::headercomparison} \end{table*}

\subsubsection{Result Analysis}\label{ResultAnalysis}

\textbf{Header Mapping}: Table~\ref{tab::headercomparison} shows the results of our compiler backend's \textit{header mapping} for the benchmark P4 programs. The PHV contains 64, 96, and 64 fields of 8, 16, and 32-bit width, respectively (a 4096-bit wide PHV in total). Many PHV fields remain unused for both small and large programs; for example, the small programs (\textit{QoS-Modifier} and \textit{L2L3-Simple}) both consume around 30-35\% of the PHV's capacity.
On the other hand, the large programs (\textit{Traffic-Anony} and \textit{L2L3-Complex}) consume around 51\% of the PHV's capacity. While accommodating the P4 programs' header fields in the PHV fields, some space in the PHV fields is wasted. For the small programs, the \textit{waste} in PHV bitwidth is higher (approx. 10-12\%) than for the larger programs (approx. 5.7-6.4\%). The rightmost column of table~\ref{tab::headercomparison} shows the total time (in milliseconds) required to compute the \textit{header mapping} for the benchmark P4 programs. For all the P4 programs, the required time is short, ranging between approximately 2 and 3 milliseconds.

\begin{table*}[] \centering { \small{ \begin{tabular}{|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Program Name\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# States in Parse Graph\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Edges in Parse Graph\end{tabular} & \begin{tabular}[c]{@{}c@{}}Required TCAM Entries\end{tabular} & Ex. time (in ms) \\ \hline QoS-Modifier & 5 & 8 & 5 & 31 \\ \hline L2L3-Simple & 4 & 10 & 5 & 21 \\ \hline L2L3-Complex & 11 & 31 & 22 & 132 \\ \hline Traffic-Anony & 7 & 14 & 13 & 65 \\ \hline \end{tabular} } } \caption{Total number of states and edges in the parse graph of the P4 programs, number of TCAM entries required for the \textit{state table}, and the total execution time required for computing the \textit{\textbf{parse graph mapping}}.} \label{tab::parserComparison} \end{table*}

\textbf{Parse Graph Mapping}: Table~\ref{tab::parserComparison} shows the results of our compiler backend's \textit{parse graph mapping} for the benchmark P4 programs. The benchmark hardware used in the experiments contains a 256$\times$40b TCAM for implementing the parser \textit{state table}; to parse incoming packets at 40 Gbps, it can look 48 bytes into the packet, identify a maximum of four headers, and extract 48 bytes of header field data in every cycle. For the complex P4 program (\textit{L2L3-Complex}), with 11 states and 31 edges in the parse graph, only 22 TCAM entries are required, which is less than 9\% of the total capacity of the TCAM. The \textit{parse graph} is simpler for the remaining programs, which consume only 2\% of the TCAM's capacity. The rightmost column of table~\ref{tab::parserComparison} shows the total time (in milliseconds) required to compute the \textit{parse graph mapping} for the benchmark P4 programs. The results show that for P4 programs with a larger number of nodes and edges in the \textit{parse graph}, the \textit{parse graph mapping} algorithm requires more time to compute the mapping. For example, the total number of nodes and edges in the \textit{parse graph} of \textit{L2L3-Complex} is approximately twice that of the \textit{Traffic-Anony} program, and computing the \textit{parse graph mapping} for the \textit{L2L3-Complex} program accordingly requires approximately twice as much time. For the other two programs, the number of nodes and edges in the \textit{parse graph} is small; hence the execution time of the \textit{parse graph mapping} is also relatively short (less than 35 milliseconds).
\begin{table*}[!t] \centering{ \begin{tabular}{|c|c|c|cc|cc|cccc|cc|} \hline \multirow{3}{*}{\textbf{Program Name}} & \multicolumn{1}{l|}{\multirow{3}{*}{\# Nodes in TDG}} & \multicolumn{1}{l|}{\multirow{3}{*}{\# Edges in TDG}} & \multicolumn{2}{c|}{\multirow{2}{*}{Stages}} & \multicolumn{2}{c|}{\multirow{2}{*}{Latency (in cycle)}} & \multicolumn{4}{c|}{Resource Usage (in Blocks)} & \multicolumn{2}{c|}{\multirow{2}{*}{Ex. Time (in ms)}} \\ \cline{8-11} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{TCAM} & \multicolumn{2}{c|}{SRAM} & \multicolumn{2}{c|}{} \\ \cline{4-13} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{~\cite{jose2015compiling}} & * & \multicolumn{1}{c|}{~\cite{jose2015compiling}} & * & \multicolumn{1}{c|}{~\cite{jose2015compiling}} & \multicolumn{1}{c|}{*} & \multicolumn{1}{c|}{~\cite{jose2015compiling}} & * & \multicolumn{1}{c|}{~\cite{jose2015compiling}} & * \\ \hline QoS-Modifier & 16 & 20 & \multicolumn{1}{c|}{Inv.} & 3 & \multicolumn{1}{c|}{Inv.} & 38 & \multicolumn{1}{c|}{Inv.} & \multicolumn{1}{c|}{6} & \multicolumn{1}{c|}{Inv.} & 4 & \multicolumn{1}{c|}{Inv.} & 31 \\ \hline Traffic-Anony & 84 & 194 & \multicolumn{1}{c|}{Inv.} & 18 & \multicolumn{1}{c|}{Inv.} & 156 & \multicolumn{1}{c|}{Inv.} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{Inv.} & 35 & \multicolumn{1}{c|}{Inv.} & 1700 \\ \hline L2L3-Simple & 24 & 38 & \multicolumn{1}{c|}{4} & 5 & \multicolumn{1}{c|}{53} & 42 & \multicolumn{1}{c|}{56} & \multicolumn{1}{c|}{57} & \multicolumn{1}{c|}{191} & 207 & \multicolumn{1}{c|}{243} & 41 \\ \hline L2L3-Complex & 60 & 138 & \multicolumn{1}{c|}{30} & 31 & \multicolumn{1}{c|}{110} & 108 & \multicolumn{1}{c|}{272} & \multicolumn{1}{c|}{260} & \multicolumn{1}{c|}{762} & 995 & \multicolumn{1}{c|}{148} & 925 \\ \hline \end{tabular} } \caption{Comparison of the \textit{\textbf{TDG mapping}} computed by the work proposed in~\cite{jose2015compiling} and by the compiler backend presented in this work (\textit{marked by *}); Inv. = invalid mapping.} \label{tab::TDGmapperComparison} \end{table*}

\textbf{TDG Mapping}: Table~\ref{tab::TDGmapperComparison} shows the comparison between the TDG mappings computed by the TDG mapper of~\cite{jose2015compiling} and by our compiler backend. For the \textit{QoS-Modifier} program, the TDG mapper proposed in~\cite{jose2015compiling} computes an incorrect mapping. It maps \textit{match\_control\_packet} and \textit{ipv4\_nexthop} to one stage and \textit{ipv6\_nexthop} to another stage. This is an invalid mapping, as explained in sec.~\ref{OurApproach} and fig.~\ref{fig:TDGPreprocessing}. The reason behind the invalid mapping (row 1 in table~\ref{tab::TDGmapperComparison}) is that the TDG mapper proposed in~\cite{jose2015compiling} considers only the four types of non-stateful memory dependencies (sec.~\ref{IR:TDG}) in computing the mapping. However, the \textit{QoS-Modifier} program contains a sequence of logical MATs where both match and stateful memory dependencies guide the control flow. Hence, the mapper fails to compute a valid mapping for the \textit{QoS-Modifier} program. On the other hand, our compiler backend considers the stateful memory dependency as well as the four types of non-stateful memory dependencies in computing the TDG mapping. It generates a valid mapping for this P4 program. It maps the logical MATs over three physical match-action stages, and the processing latency of every packet under this mapping is 38 cycles.
It requires 6 TCAM blocks and 4 SRAM blocks to accommodate the logical match-action tables. The \textit{Traffic-Anony} program is written in P4\textsubscript{16}, whereas the work described in~\cite{jose2015compiling} does not support various P4\textsubscript{16} language constructs and only works with P4\textsubscript{14} programs. Hence it cannot generate the mapping for the \textit{Traffic-Anony} P4 program (row 2 in table~\ref{tab::TDGmapperComparison}). Our compiler backend, however, supports the P4\textsubscript{16} language and generates a valid mapping for the program. It maps the P4 program over 18 physical match-action stages, and the processing latency of every packet under this mapping is 156 cycles. \textit{Traffic-Anony} is a computation-intensive P4 program; it requires 2 TCAM blocks and 35 SRAM blocks to accommodate the logical match-action tables in the RMT pipeline. The P4 programs for simple and complex L2L3 forwarding (\textit{L2L3-Simple} and \textit{L2L3-Complex}) are both written in P4\textsubscript{14}. Neither program contains any \textit{stateful memory dependency} (sec.~\ref{TDG:Preprocessing}); hence their TDG mappings can be computed using the work proposed in~\cite{jose2015compiling}. In the case of \textit{L2L3-Simple} (row 3 in table~\ref{tab::TDGmapperComparison}), our compiler backend uses five stages but achieves a reduced packet processing latency of 42 cycles. The TDG mapper of~\cite{jose2015compiling} uses one less stage, but the packets face a higher processing latency (53 cycles). Our compiler backend requires one extra TCAM block and 16 extra SRAM blocks to accommodate the logical MATs. In the case of \textit{L2L3-Complex}, our compiler backend uses 31 stages, one stage more than~\cite{jose2015compiling}; however, it achieves a lower packet processing latency of 108 cycles. Our compiler backend uses 260 TCAM blocks compared to the 272 TCAM blocks used by the TDG mapper of~\cite{jose2015compiling}. On the other hand, our compiler backend uses a higher number of SRAM blocks (995 blocks compared to 762 blocks). Overall, our compiler backend requires more SRAM blocks to accommodate the simple and complex L2L3 forwarding programs. There are two reasons behind this. Firstly, our compiler backend allocates SRAM for action memories and indirect stateful memories at block-level granularity. As a result, even when the SRAM requirements of two or more logical MATs could be fulfilled using a single SRAM block, our compiler backend allocates at least one SRAM block to every logical MAT. This memory packing is less efficient than that of~\cite{jose2015compiling}; we leave improving the SRAM allocation mechanism to reduce such waste as future work. Secondly, our compiler backend does not reserve a fixed number of SRAM blocks for action memories in every stage. Instead, it can store up to 16K (adjustable at compile time) action entries for every logical MAT. It trades additional SRAM usage for more flexibility in action memory allocation. As a result, it tends to use more SRAM blocks than the TDG mapper of~\cite{jose2015compiling}.

\textit{Execution time}: The TDG mapper of~\cite{jose2015compiling} cannot compute a valid TDG mapping for the \textit{QoS-Modifier} and \textit{Traffic-Anony} programs; hence the execution times for these two benchmark P4 programs cannot be compared.
However, in the case of \textit{L2L3-Complex}, our compiler backend requires approximately 6x the time of the TDG mapper of~\cite{jose2015compiling} (columns 12-13, last row in table~\ref{tab::TDGmapperComparison}) to compute the \textit{TDG mapping}. There are two major reasons behind this. Firstly, our compiler backend preprocesses (sec.~\ref{TDG:Preprocessing}) the TDG to handle the \textit{stateful memory dependency}, which increases the execution time. Secondly, the policy of the TDG mapper of~\cite{jose2015compiling} of allocating a fixed number of SRAM blocks per stage for action memories requires no computation while the mapping is being computed. Conversely, our compiler backend dynamically allocates action memories for every logical table and also tries to minimize the usage of SRAM blocks. Hence it needs to search over a larger search space, which results in a longer execution time. \textit{L2L3-Complex} is a memory-intensive P4 program; it contains 60 nodes and 138 edges in the TDG, and its logical tables need a large number of match and action entries. As a result, our compiler backend needs to do more computation due to the two factors mentioned above and requires more execution time (925 ms compared to 148 ms). On the other hand, for the \textit{L2L3-Simple} program (row 3 in table~\ref{tab::TDGmapperComparison}), the TDG is less complex (24 nodes and 38 edges), and its logical tables also require a smaller number of match and action entries. Hence, our compiler backend computes the TDG mapping quickly, requiring approximately one-sixth of the time (41 ms vs. 243 ms) of the TDG mapper of~\cite{jose2015compiling}. Our compiler backend's longer execution time can also be observed for the \textit{Traffic-Anony} program. It is a computation-intensive P4 program (row 2 in table~\ref{tab::TDGmapperComparison}) containing a large number of nodes and edges in the TDG (84 nodes and 194 edges) due to its nested branching instructions. However, it requires very little memory (only 2 TCAM and 35 SRAM blocks). Hence, the \textit{TDG mapping} of this program is mainly constrained by the available bit width in the match ($TCB$ and $SCB$ in fig.~\ref{fig:RMTSingleStage}) and action ($ACB$ in fig.~\ref{fig:RMTSingleStage}) crossbars. The crossbar width consumption is determined by the PHV fields used in the match and action portions of a physical MAT. The benchmark hardware used in the evaluation (app.~\ref{APP:ExampleV1ModelSpecs}) contains 226 PHV fields of 4 different bitwidths. As a result, our compiler backend needs to select a good mapping from a large number of variants, and the execution time increases (1700 ms). Although our compiler backend needs more execution time than the heuristic-based algorithms of the TDG mapper of~\cite{jose2015compiling}, its execution time is very short compared to integer linear programming (ILP) based optimal mapping algorithms. For example, the ILP-based optimal mapping algorithms require 12K-135K ms of execution time (table 4 in~\cite{jose2015compiling}) for the \textit{L2L3-Complex} program, compared to the sub-1K ms execution time of our compiler backend. Moreover, our compiler backend is also capable of dynamically allocating action memories for every logical MAT. Besides the statistics shown in table~\ref{tab::TDGmapperComparison}, our compiler backend also provides detailed information about which logical MATs are mapped to which physical match-action stages.
It also reports the required number of TCAM and/or SRAM blocks, the match and action crossbar widths, the types of ALU/extern instructions, etc., for every logical MAT. This information is required in the \textit{Configuration Generation Phase} of a compiler backend. As this phase is not the focus of this work, we do not discuss it here in detail; more details are provided in our GitHub repository~\cite{P4CB}. Overall, our compiler backend provides a feature-rich open-source alternative to the commercial closed-source compilers and can quickly decide the realizability of a P4\textsubscript{16} program.

\section{Discussion} \label{Discussion}

\textbf{Limitations}: Our compiler backend supports most of the P4 language constructs, covering a wide range of use cases. However, it does not yet support variable-length header parsing or direct stateful memory access in actions; both can be avoided through careful design of the P4 program. Besides this, it does not support the atomic transaction mechanism available in the P4 language. We are working on supporting these P4 language features.

\textbf{Extending the V1Model Architecture}: PSA~\cite{PSA} and Tofino~\cite{tofino2} are extensions of the V1Model architecture that support different externs. These architectures can combine multiple simpler instructions into one atomic instruction to achieve complex functionality. For example, the \textit{register extern} available in Tofino switches~\cite{opentofino} can execute four-way branching instructions: it can execute two if-else pairs and a read-modify-write operation on a pair of registers (indirect stateful memory) using only one extern. However, to use such externs (or any new extern in general) in a P4 program, the P4C compiler frontend needs to support them. Once it does, these externs can be supported in our compiler backend with minor modifications to compute the mapping for such a P4 program.

\textbf{Writing New Mapping Algorithms}: Our compiler backend is designed in a modular way. After parsing the intermediate representation of a P4 program, it stores the preprocessed information (header information, parse graph, TDG) in convenient data structures (hash table, graph, etc.). Besides this, it also stores the resources of a V1Model switch in convenient data structures (hash table, array, etc.). As an open-source project, researchers can reuse this processed information to write new algorithms for header mapping, parse graph mapping, and TDG mapping. A detailed discussion of the source code organization is available in~\cite{P4CB}.

\section{Conclusion} \label{Conclusion}

We have presented an open-source compiler backend that can map a P4 (version 16) program to the hardware resources of a V1Model switch. It uses heuristic-based algorithms to compute this mapping and gives a quick decision on the realizability of a P4 program. We believe this open-source compiler backend can serve as a cost-effective platform for analyzing the realizability and resource consumption of a P4 (version 16) program on real-world V1Model switches. As an open-source platform, it allows researchers to experiment with different mapping algorithms, and it can be extended to support other derivatives of the V1Model architecture by supporting various \textit{extern} units. This provides programmable-switch researchers with an open platform for experimenting with different mapping algorithms and different variants of the V1Model switch.

\bibliographystyle{unsrt}
\section{Introduction}\label{intro}

Let $\Gamma$ be a distance-regular graph with valency $k$ and let $\theta_{\min}=\theta_{\min}(\Gamma)$ be its smallest eigenvalue. Any clique $C$ in $\Gamma$ satisfies \begin{equation}\label{hoffman-bd} |C|\leq 1-\frac{k}{\theta_{\min}} \end{equation} (see \cite[Proposition 4.4.6 (i)]{bcn}). This bound (\ref{hoffman-bd}) is due to Delsarte, and a clique $C$ in $\Gamma$ is called a {\em Delsarte clique} if $C$ contains exactly $1-\frac{k}{\theta_{\min}}$ vertices. Godsil \cite{godsil-93-paper} introduced the following notion of a geometric distance-regular graph. A non-complete distance-regular graph $\Gamma$ is called {\em geometric} if there exists a set $\mathcal{C}$ of Delsarte cliques such that each edge of $\Gamma$ lies in a unique Delsarte clique in $\mathcal{C}$. In this case, we say that $\Gamma$ is geometric with respect to $\mathcal{C}$.\\ There are many examples of geometric distance-regular graphs, such as bipartite distance-regular graphs, the Hamming graphs, the Johnson graphs, the Grassmann graphs and regular near $2D$-gons.\\ In particular, the local structure of geometric distance-regular graphs plays an important role in the study of the spectral characterization of some distance-regular graphs. In \cite{H1}, we show that for a given integer $D\geq 2$, any graph cospectral with the Hamming graph $H(D,q)$ is locally the disjoint union of $D$ copies of the complete graph of size $q-1$, for $q$ large enough. Using this result and \cite{H0}, we show in \cite{H1} that the Hamming graph $H(3,q)$ with $q\geq 36$ is uniquely determined by its spectrum. \\ Neumaier \cite{neumaier-m} showed that, except for a finite number of graphs, any geometric strongly regular graph with a given smallest eigenvalue $-m$, $m>1$ integral, is either a Latin square graph or a Steiner graph (see \cite{neumaier-m} and Remark \ref{geo-rmk} for the definitions). \\ An {\em $n$-claw} is an induced subgraph on $n+1$ vertices which consists of one vertex of valency $n$ and $n$ vertices of valency $1$. Each distance-regular graph without $2$-claws is a complete graph. Note that for any distance-regular graph $\Gamma$ that is geometric with respect to a set $\mathcal{C}$ of Delsarte cliques, the number of Delsarte cliques in $\mathcal{C}$ containing a fixed vertex is $-\theta_{\min}(\Gamma)$. Hence any geometric distance-regular graph with smallest eigenvalue $-2$ contains no $3$-claws. Blokhuis and Brouwer \cite{3-claw} determined the distance-regular graphs without $3$-claws. \\ Yamazaki \cite{yamazaki} considered distance-regular graphs which are locally a disjoint union of three cliques of size $a_1+1$; for $a_1\geq 1$, these graphs are geometric distance-regular graphs with smallest eigenvalue $-3$.\\ In Theorem \ref{gdrg}, we determine the geometric distance-regular graphs with smallest eigenvalue $-3$. We now state the main result of this paper. \begin{theorem}\label{main-cor} Let $\Gamma$ be a non-complete distance-regular graph. If $\Gamma$ satisfies \[ \max \{3, \frac{8}{3}(a_1+1)\}<k<4a_1+10-6c_2\] then $\Gamma$ is one of the following. \begin{enumerate} \item[(i)] A Steiner graph $S_3(\alpha -3)$, i.e., a geometric strongly regular graph with parameters $\left(\frac{(2\alpha-3)(\alpha-2)}{3},3\alpha-9,\alpha,9 \right)$, where $\alpha\geq 36$ and $\alpha \equiv 0,2~~(\mbox{mod}~3)$. \item[(ii)] A Latin square graph $LS_3(\alpha)$, i.e., a geometric strongly regular graph with parameters $(\alpha ^2,3(\alpha-1),\alpha,6)$, where $\alpha \geq 24$.
\item[(iii)] The generalized hexagon of order $(8,2)$ with $\iota(\Gamma)=\{24,16,16;1,1,3\}$. \item[(iv)] One of the two generalized hexagons of order $(2,2)$ with $\iota(\Gamma)=\{6,4,4;1,1,3\}$. \item[(v)] A generalized octagon of order $(4,2)$ with $\iota(\Gamma)=\{12,8,8,8;1,1,1,3\}$. \item[(vi)] The Johnson graph $J(\alpha,3)$, where $\alpha \geq 20$. \item[(vii)] $D=3$ and $\iota(\Gamma)=\{3\alpha +3,2\alpha +2, \alpha +2-\beta ;1,2,3\beta\}$, where $\alpha \geq 6$ and $\alpha\geq \beta \geq 1$. \item[(viii)] The halved Foster graph with $\iota(\Gamma)=\{6,4,2,1;1,1,4,6\}$. \item[(ix)] $D=\mbox{\em {\texttt h}}+2\geq 4$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}}\\ (2,2\alpha+\beta-1,\alpha-\beta+2) & \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=\mbox{\em {\texttt h}} +2 \end{array} \right., \mbox{~where~} \alpha \geq \beta \geq 2.$$ \item[(x)] $D=\mbox{\em {\texttt h}}+2\geq 3$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}} \\ (1,\alpha+2\beta-2,2\alpha-2\beta+4) & \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=\mbox{\em {\texttt h}} +2 \end{array} \right., \mbox{~where~} \alpha\geq \beta \geq 2.$$ \item[(xi)] A distance-$2$ graph of a distance-biregular graph with vertices of valency $3$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}}\\ (1,\alpha+2,2\alpha)& \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (4,2\alpha-1,\alpha)& \mbox{ for }\mbox{\em {\texttt h}} +2\leq i\leq D-2\\ (4,2\alpha+\beta-3,\alpha-\beta+2) & \mbox{ for }i=D-1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=D \end{array} \right., \mbox{~where~} \alpha\geq \beta \mbox{~and~} \beta\in \{2,3\}.$$ \end{enumerate} \end{theorem} Examples of non-complete distance-regular graphs with valency $k>\max \{3, \frac{8}{3}(a_1+1)\} $ include the Johnson graphs $J(n,e)$ $\Big( (n\geq 20 \mbox{~and~}e=3)$, $(n\geq 11 \mbox{~and~}e=4)$ or $(n\geq 2e \mbox{~and~}e\geq 5) \Big)$, the Hamming graphs $H(d,q)$ \Big($(d=3 \mbox{~and~}q\geq 3)$~or~$(d\geq 4\mbox{~and~}q\geq 2 )$\Big) and the Grassmann graphs $\Bigl[{{V}\atop e}\Bigr]$ $\Big( (e=2\mbox{~and~}q\geq 4 )$~or~$(e\geq 3 \mbox{~and~}q\geq 2) \Big)$, where $n\geq 2e$ and $V$ is an $n$-dimensional vector space over $\mathbb{F}_q$, the finite field of $q~(\geq 2)$ elements (see \cite[Chapter 9]{bcn} for more information on these examples). Except for $J(n,3)~(n\geq 20)$ and $H(3,q)~(q\geq 3)$, all the above examples contain $4$-claws, whereas $J(n,3)~(n\geq 20)$ and $H(3,q)~(q\geq 3)$ are geometric distance-regular graphs with smallest eigenvalue $-3$.\\ In Section \ref{no-4-claws}, we prove Theorem \ref{main-thm}, which gives the sufficient condition $\max \{3, \frac{8}{3}(a_1+1)\}<k<4a_1+10-6c_2$ for a distance-regular graph to be geometric with smallest eigenvalue $-3$. We first show in Theorem \ref{geo} that for any distance-regular graph $\Gamma$ satisfying $k>\max \{3, \frac{8}{3}(a_1+1)\}$, the statement that $\Gamma$ has no $4$-claws is equivalent to the statement that $\Gamma$ is geometric with smallest eigenvalue $-3$. Using Theorem \ref{geo}, we then prove Theorem \ref{main-thm}. As an application of Theorem \ref{geo}, we can show the non-existence of a family of distance-regular graphs with feasible intersection arrays.
For example, in the list of \cite[Chapter 14]{bcn}, the $7$ feasible intersection arrays in Theorem \ref{non-exist-7} are ruled out.\\ In Section \ref{geosection}, we determine the geometric distance-regular graphs with smallest eigenvalue $-3$ in Theorem \ref{gdrg}. Using Theorem \ref{main-thm} and Theorem \ref{gdrg}, we then prove Theorem \ref{main-cor}.

\section{Preliminaries}\label{pre}

All graphs considered in this paper are finite, undirected and simple (for unexplained terminology and more details, see \cite{bcn}).\\ For a connected graph $\Gamma$, the distance $d_{\Gamma}(x,y)$ between any two vertices $x,y$ in the vertex set $V(\Gamma)$ of $\Gamma$ is the length of a shortest path between $x$ and $y$ in $\Gamma$, and we denote by $D(\Gamma)$ the diameter of $\Gamma$ (i.e., the maximum distance between any two vertices of $\Gamma$). For any vertex $x\in V(\Gamma)$, let $\Gamma_i(x)$ be the set of vertices in $\Gamma$ at distance precisely $i$ from $x$, where $i$ is a non-negative integer not exceeding $D(\Gamma)$. In addition, define $\Gamma_{-1}(x)=\Gamma_{D(\Gamma)+1}(x):=\emptyset$ and $\Gamma_0(x):=\{x\}$. For any distinct vertices $x_1,x_2,\ldots, x_j\in V(\Gamma)$, define $$\Gamma_1(x_1,\ldots,x_j):=\Gamma_1(x_1)\cap \Gamma_1(x_2) \cap \cdots \cap \Gamma_1(x_j).$$ A {\em clique} is a set of pairwise adjacent vertices. For a graph $G$, a graph $\Gamma$ is called {\em locally} $G$ if every local graph of $\Gamma$ (the local graph of a vertex $x$ being the subgraph induced on $\Gamma_1(x)$) is isomorphic to $G$. The {\em adjacency matrix} $A(\Gamma)$ of a graph $\Gamma$ is the $|V(\Gamma)|\times |V(\Gamma)|$-matrix whose rows and columns are indexed by $V(\Gamma)$ and whose $(x,y)$-entry equals $1$ whenever $d_{\Gamma}(x,y)=1$ and $0$ otherwise. The eigenvalues of $\Gamma$ are the eigenvalues of $A(\Gamma)$.\\ A connected graph $\Gamma$ is called a {\em distance-regular graph} if there exist integers $b_i(\Gamma)$, $c_i(\Gamma)$, $i=0,1,\ldots,D(\Gamma)$, such that for any two vertices $x,y$ at distance $i=d_{\Gamma}(x,y)$, there are precisely $c_i(\Gamma)$ neighbors of $y$ in $\Gamma_{i-1}(x)$ and $b_i(\Gamma)$ neighbors of $y$ in $\Gamma_{i+1}(x)$. In particular, $\Gamma$ is regular with valency $k(\Gamma):=b_0(\Gamma)$. The numbers $c_i(\Gamma)$, $b_i(\Gamma)$ and $a_i(\Gamma):=k(\Gamma)-b_i(\Gamma)-c_i(\Gamma)~(0\leq i\leq D(\Gamma))$ (i.e., the number of neighbors of $y$ in $\Gamma_i(x)$ for $d_{\Gamma}(x,y)=i$) are called the {\em intersection numbers} of $\Gamma$. Note that $b_{D(\Gamma)}(\Gamma)=c_0(\Gamma)=a_0(\Gamma):=0$ and $c_1(\Gamma)=1$. In addition, we define $k_i(\Gamma):=|\Gamma_i(x)|$ for any vertex $x$ and $i=0,1,\ldots,D(\Gamma)$. The array $\iota(\Gamma)=\{b_0(\Gamma),b_1(\Gamma),\ldots,b_{D(\Gamma)-1}(\Gamma);c_1(\Gamma),c_2(\Gamma),\ldots,c_{D(\Gamma)}(\Gamma)\}$ is called the {\em intersection array} of $\Gamma$. In addition, we define the number \begin{equation}\label{head} {\texttt h}(\Gamma) := |\{j \,\mid \, (c_j,a_j,b_j) =(c_1,a_1,b_1),\,1\le j \le D(\Gamma)-1\}| \end{equation} which is called the {\em head} of $\Gamma$.
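For example, the halved Foster graph (which appears in Theorem \ref{gdrg} (ix) below) has intersection array $\{6,4,2,1;1,1,4,6\}$, so $(c_1,a_1,b_1)=(1,1,4)$ while $(c_2,a_2,b_2)=(1,3,2)$; hence ${\texttt h}=1$.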
\\ A regular graph $\Gamma$ on $v$ vertices with valency $k(\Gamma)$ is called a {\em strongly regular graph} with parameters $(v,k(\Gamma),\lambda(\Gamma),\mu(\Gamma))$ if there are two constants $\lambda(\Gamma)\geq 0$ and $\mu(\Gamma)>0$ such that for any two distinct vertices $x$ and $y$, $|\Gamma_1(x,y)|$ equals $\lambda(\Gamma)$ if $d_{\Gamma}(x,y)=1$ and $\mu(\Gamma)$ otherwise.\\ When no confusion can arise, we drop the subscript $\Gamma$ and the argument $(\Gamma)$ from each notation, writing $d(~,~)$, $D$, $A$, ${\texttt h}$, $k$, $c_i$, $b_i$, $a_i$, $k_i$, $\lambda$ and $\mu$.\\ Suppose that $\Gamma$ is a distance-regular graph with valency $k\geq 2$ and diameter $D\geq 2$. It is well known that $\Gamma$ has exactly $D+1$ distinct eigenvalues, which are the eigenvalues of the following tridiagonal matrix \begin{equation}\label{mtx-L} L_1(\Gamma):= \begin{pmatrix} 0 & b_0 & & & & \\ c_1 & a_1 & b_1 & & & \\ & c_2 & a_2 & b_2 & & \\ & & \ddots & \ddots & \ddots & \\ & & & c_{D-1} & a_{D-1} & b_{D-1}\\ & & & & c_{D} & a_{D} \end{pmatrix} \end{equation} (cf. \cite[p.128]{bcn}). In particular, we denote by $\theta_{\min}=\theta_{\min}(\Gamma)$ the smallest eigenvalue of $\Gamma$.

\section{Distance-regular graphs without $4$-claws}\label{no-4-claws}

In this section, we prove the following theorem, which gives a sufficient condition for a distance-regular graph to be geometric with smallest eigenvalue $-3$. \begin{theorem} \label{main-thm} Let $\Gamma$ be a non-complete distance-regular graph. If $\Gamma$ satisfies \begin{equation}\label{k-condi} \max \{3, \frac{8}{3}(a_1+1)\}<k<4a_1+10-6c_2 \end{equation} then $\Gamma$ is a geometric distance-regular graph with smallest eigenvalue $-3$. \end{theorem} We first show in Theorem \ref{geo} that for any distance-regular graph satisfying $k>\max \{3, \frac{8}{3}(a_1+1)\}$, the statement that $\Gamma$ has no $4$-claws is equivalent to the statement that $\Gamma$ is geometric with smallest eigenvalue $-3$. Using Theorem \ref{geo}, we then prove Theorem \ref{main-thm}. As an application, by the restriction on $c_2$ in Lemma \ref{mu-bd}, we can rule out a family of feasible intersection arrays. In particular, we prove that there are no distance-regular graphs with the intersection arrays in Theorem \ref{non-exist-7}. \begin{theorem}\label{geo} Let $\Gamma$ be a distance-regular graph satisfying $k > \max \{3, \frac{8}{3}(a_1+1)\}$. Then the following are equivalent.\\ (i) $\Gamma$ has no $4$-claws.\\ (ii) $\Gamma$ is a geometric distance-regular graph with smallest eigenvalue $-3$. \end{theorem} \noindent{\em Proof: } Let $\Gamma$ be a distance-regular graph satisfying $k>\max \{3, \frac{8}{3}(a_1+1)\}$. Let $\theta_{\min}=\theta_{\min}(\Gamma)$.\\ \noindent (ii)$\Rightarrow $(i): Suppose that $\Gamma$ is geometric with respect to a set $\mathcal{C}$ of Delsarte cliques and $\theta_{\min}=-3$. Since the number of Delsarte cliques in $\mathcal{C}$ containing a given vertex is $-\theta_{\min}$, statement (i) follows immediately.\\ \noindent (i)$\Rightarrow $(ii): Suppose that $\Gamma$ has no $4$-claws. Define a {\em line} to be a maximal clique $C$ in $\Gamma$ such that $C$ has at least $k-2(a_1+1)+1$ vertices. Note here that $a_1\geq 1$: otherwise $\Gamma$ would have a $4$-claw, since $k>\max \{3, \frac{8}{3}(a_1+1)\}$.
Hence, $|C|\geq 3$ for any line $C$ in $\Gamma$. If there exists a line $C$ satisfying $|C|=3$, then $a_1=1$ and $k=6$ both hold by $3\geq k-2(a_1+1)+1$ and $k > \frac{8}{3}(a_1+1)$. By \cite[Theorem 1.1]{a=1k=6}, the graph $\Gamma$ is one of the following.\\ (a) The generalized quadrangle of order $(2,2)$.\\ (b) One of the two generalized hexagons of order $(2,2)$.\\ (c) The Hamming graph $H(3,3)$.\\ (d) The halved Foster graph.\\ All the graphs in (a)-(d) are geometric with smallest eigenvalue $-3$.\\ In the rest of the proof, we assume that each line contains more than $3$ vertices. First, we prove the following claim. \begin{claim}\label{geo-claim1} Every edge of $\Gamma$ lies in a unique line. \end{claim} \noindent{\em Proof of Claim \ref{geo-claim1}: } Let $(x,y_1)$ be an arbitrary edge in $\Gamma$. As $k\geq 2(a_1+1)+1$, there exists a $3$-claw containing $x$ and $y_1$, say $\{x,y_1,y_2,y_3\}$ induces a $3$-claw, where $y_i\in \Gamma_1(x)$ ($i=1,2,3$). Put $Y_i:=\{y_i\}\cup \Gamma_1(x,y_i)$ ($i=1,2,3$). If there exists a vertex $z$ in $\Gamma_1(x)\setminus \cup_{i=1}^{3}Y_i$, then $\{x,z,y_1,y_2,y_3\}$ induces a $4$-claw, which is impossible, and therefore $\Gamma_1(x)=\cup_{i=1}^{3}Y_i$ follows. If there exist two non-adjacent vertices $v,w$ in $Y_1\setminus (Y_2\cup Y_3)$, then the set $\{x,y_2,y_3,v,w\}$ induces a $4$-claw, which is a contradiction. Hence $\{x\}\cup \left(Y_1\setminus (Y_2\cup Y_3)\right) $ induces a clique containing the edge $(x,y_1)$, and it satisfies \[\left| \{x\}\cup \left(Y_1\setminus \left(Y_2\cup Y_3\right)\right) \right| =|\{x\}|+|\Gamma_1(x)|-|Y_2\cup Y_3|\geq 1+k-2(a_1+1). \] Thus every edge lies in a line.\\ Assume that there exist two lines $C_z$ and $C_w$ containing the edge $(x,y_1)$, where $z\in C_z$ and $w\in C_w$ are two non-adjacent vertices. Then $a_1=|\Gamma_1(x,y_1)|\geq 2(k-2(a_1+1)-1)-(|C_z\cap C_w|-2)$ implies \begin{equation}\label{two lines t} |C_z\cap C_w|\geq 2k-5a_1-4. \end{equation} In addition, by (\ref{two lines t}), \begin{eqnarray}\label{k-bd(geo)} \left| \Gamma_1(x)\setminus \left(\Gamma_1(x,z)\cup \Gamma_1(x,w)\cup\{z,w\}\right)\right|&\geq & k-(\left|\Gamma_1(x,z)\right| +\left|\Gamma_1(x,w) \right|+\left| \{z,w\}\right|-(|C_z\cap C_w|-1))\nonumber\\ &\geq & k-(2(a_1+1)-(2k-5a_1-5)) \nonumber \\&=& 3k-7a_1-7. \end{eqnarray} Since $\Gamma$ has no $4$-claws, $(\{x\}\cup \Gamma_1(x))\setminus \left(\Gamma_1(x,z)\cup \Gamma_1(x,w)\cup\{z,w\}\right)$ induces a clique of size at least $3k-7a_1-6$ by (\ref{k-bd(geo)}). Since any clique in $\Gamma$ has size at most $a_1+2$, we have $k\leq \frac{8}{3}(a_1+1)$, which is impossible. Hence, the edge $(x,y_1)$ lies in a unique line. This proves Claim \ref{geo-claim1}. \hfill\hbox{\rule{3pt}{6pt}}\\ For each vertex $x\in V(\Gamma)$, we define $M_x$ to be the number of lines containing $x$. Then for any vertex $x$, we have $M_x\geq 3$ as $k>\frac{8}{3}(a_1+1)>2(a_1+1)$, and hence \begin{equation}\label{mx=3} M_x=3 \mbox{ for each vertex } x\in V(\Gamma) \end{equation} as $k\geq M_x(k-2(a_1+1))$ holds by Claim \ref{geo-claim1}. Let $B$ be the vertex-line incidence matrix (i.e., the $(0,1)$-matrix with rows and columns indexed by the vertex set and the set of lines of $\Gamma$, respectively, where the $(x,C)$-entry of $B$ is $1$ if the vertex $x$ is contained in the line $C$ and $0$ otherwise). By Claim \ref{geo-claim1} and (\ref{mx=3}), $BB^T=A+3I$ holds, where $B^T$ is the transpose of $B$, $A=A(\Gamma)$ and $I$ is the $|V(\Gamma)|\times |V(\Gamma)|$ identity matrix.
Since each line contains more than $3$ vertices, it follows by double-counting the number of ones in $B$ that the number of lines is strictly less than the number of vertices in $\Gamma$. Hence, the matrix $BB^T$ is singular, so that $0$ is an eigenvalue of $BB^T$ and thus $-3$ is an eigenvalue of $A$. As $BB^T$ is positive semidefinite, we find $\theta_{\min}=-3$. Hence it follows by (\ref{hoffman-bd}), Claim \ref{geo-claim1}, (\ref{mx=3}) and $\theta_{\min}=-3$ that every line has exactly $1+\frac{k}{3}$ vertices. This proves that $\Gamma$ is geometric with $\theta_{\min}=-3$. \hfill\hbox{\rule{3pt}{6pt}}\\ In \cite[Lemma 2]{shilla}, Koolen and Park have shown the following lemma. \begin{lemma} \label{mu-bd} Let $\Gamma$ be a distance-regular graph with a $4$-claw. Then $\Gamma$ satisfies \[c_2\geq \frac{4a_1+10-k}{6}.\] \end{lemma} \noindent{\em Proof: } Suppose that $\{x,y_i\mid 1\leq i\leq 4\}$ induces a $4$-claw in $\Gamma$, where $y_i\in \Gamma_1(x)$ ($i=1,2,3,4$). It follows by the principle of inclusion and exclusion that \begin{eqnarray*} k &\geq & \left|\{y_i\mid 1\leq i\leq 4\} \right|+\left| \cup_{i=1}^{4} \Gamma_1(x,y_i)\right| \\ &\geq & \left|\{y_i\mid 1\leq i\leq 4\} \right|+\sum_{i=1}^{4} \left|\Gamma_1(x,y_i)\right|-\sum_{1\leq i< j\leq 4}\left|\Gamma_1(x,y_i,y_j)\right| \\ &\geq & 4+4a_1- {4\choose 2} (c_2-1), \end{eqnarray*} from which Lemma \ref{mu-bd} follows. \hfill\hbox{\rule{3pt}{6pt}}\\ We now prove the main result of Section \ref{no-4-claws}, Theorem \ref{main-thm}.\\ \noindent{\em Proof of Theorem \ref{main-thm}:} Suppose that $\Gamma$ is a non-complete distance-regular graph satisfying (\ref{k-condi}). Then there are no $4$-claws in $\Gamma$ by Lemma \ref{mu-bd}, so that $\Gamma$ is geometric with $\theta_{\min}(\Gamma)=-3$ by Theorem \ref{geo}. This completes the proof. \hfill\hbox{\rule{3pt}{6pt}}\\ \begin{theorem}\label{non-exist-7} There are no distance-regular graphs with the following intersection arrays:\\ (i) $\{55,36,11;1,4,45\}$,\\ (ii) $\{56,36,9;1,3,48\}$,\\ (iii) $\{65,44,11;1,4,55\}$,\\ (iv) $\{81,56,24,1;1,3,56,81\}$,\\ (v) $\{117,80,32,1;1,4,80,117\}$,\\ (vi) $\{117,80,30,1;1,6,80,117\}$,\\ (vii) $\{189,128,45,1;1,9,128,189\}$. \end{theorem} \noindent{\em Proof: } Assume that $\Gamma$ is a distance-regular graph whose intersection array is one of the $7$ intersection arrays (i)-(vii). Since $\Gamma$ satisfies $k> \frac{8}{3}(a_1+1)$, $a_1\neq 0$ and $\theta_{\min}(\Gamma)\neq -3$, $\Gamma$ has a $4$-claw by Theorem \ref{geo}. It then follows by Lemma \ref{mu-bd} that $c_2\geq \frac{4a_1+10-k}{6}$, which is impossible. This shows Theorem \ref{non-exist-7}.\hfill\hbox{\rule{3pt}{6pt}}\\ \begin{remark} \begin{enumerate} \item[(a)] Koolen and Park \cite{shilla} showed the non-existence of distance-regular graphs with the intersection array (iii) in Theorem \ref{non-exist-7}, and so did Juri\v{s}i\'{c} and Koolen \cite{jurisic-koolen} for the intersection arrays (iv)-(vii). \item[(b)] Suppose that $\Gamma$ is a distance-regular graph with an intersection array (i), (ii) or (iii) in Theorem \ref{non-exist-7}. By \cite[Proposition 4.2.17]{bcn}, $\Gamma_3$ (the graph on $V(\Gamma)$ in which two vertices are adjacent whenever they are at distance $3$ in $\Gamma$) is a strongly regular graph with parameters $(672,121,20,22)$, $(855,126,21,18)$ or $(924,143,22,22)$, respectively. No strongly regular graphs with these parameters are known.
\end{enumerate} \end{remark}

\section{Geometric distance-regular graphs with smallest eigenvalue $-3$}\label{geosection}

In this section, we prove Theorem \ref{gdrg}, in which we determine the geometric distance-regular graphs with smallest eigenvalue $-3$.\\ Let $\Gamma$ be a distance-regular graph with diameter $D=D(\Gamma)$. For any non-empty subset $X$ of $V(\Gamma)$ and for each $i=0,1,\ldots,D$, we put \[ X_i:=\{x\in V(\Gamma) \mid d(x,X)=i\}, \] where $d(x,X)=\min\{d(x,y) \mid y\in X \}$. Suppose that $C\subseteq V(\Gamma)$ is a Delsarte clique in $\Gamma$. For each $i=0,1,\ldots, D-1$ and for a vertex $x \in C_i$, define \[\psi_i(x,C) := \left| \{ z \in C \mid d(x,z) = i\}\right|.\] The number $\psi_i(x,C)~(i=0,1,\ldots, D-1)$ does not depend on the pair $(x,C)$ but only on the distance $i = d(x,C)$ (cf. \cite[Section 4]{DCG1} and \cite[Section 11.7]{godsil-93}). Hence we denote \[\psi_i:=\psi_{i}(x,C)~~(i=0,1,\ldots,D-1).\] Now, let $\Gamma$ be geometric with respect to a set $\mathcal{C}$ of Delsarte cliques. For $x, y\in V(\Gamma)$ with $d(x,y)=i~~(i=1,2,\ldots, D)$, define $\tau_i(x,y;\mathcal{C}) $ as the number of cliques $C$ in $\mathcal{C}$ satisfying $x\in C$ and $d(y,C) = i-1$. By \cite[Lemma 4.1]{DCG1}, the number $\tau_i(x,y;\mathcal{C})$ ($i=1,2,\ldots,D$) does not depend on the pair $(x,y)$ or on $\mathcal{C}$, but only on the distance $i=d(x,y)$. Thus we may put \[\tau_i:=\tau_i(x,y;\mathcal{C})~(i=1,2,\ldots,D).\] Note that any geometric distance-regular graph $\Gamma$ satisfies \begin{equation}\label{tau-D} \tau_D=-\theta_{\min}, \end{equation} where $D=D(\Gamma)$ and $\theta_{\min}=\theta_{\min}(\Gamma)$.\\ The next lemma is a direct consequence of \cite[Proposition 4.2 (i)]{DCG1}. \begin{lemma} \label{geo-para} Let $\Gamma$ be a geometric distance-regular graph. Then the following hold.\\ (i) $b_i=-(\theta_{\min}+\tau_i)\left(1-\frac{k}{\theta_{\min}}-\psi_i\right)$ $(1\leq i\leq D-1)$.\\ (ii) $c_i=\tau_i\psi_{i-1}$ $(1\leq i\leq D)$. \end{lemma} Note that by (\ref{tau-D}) and Lemma \ref{geo-para} (ii), any geometric distance-regular graph with diameter $D$ satisfies \begin{equation}\label{cD} c_D=(-\theta_{\min})\psi_{D-1}\geq -\theta_{\min}. \end{equation} \begin{lemma}\label{psi-bd} Let $\Gamma$ be a geometric distance-regular graph. Then \begin{equation}\label{basic-ineq-psi1} \psi_1\leq \tau_2\leq -\theta_{\min}. \end{equation} In particular, $\psi_1^2\leq c_2\leq \theta_{\min}^2$ holds. \end{lemma} \noindent{\em Proof: } Let $x$ be a vertex and let $C$ be a Delsarte clique satisfying $x \not \in C$. If $y$ and $z$ are two neighbors of $x$ in $C$, then the edges $(x,y)$ and $(x,z)$ lie in different Delsarte cliques, as $\Gamma$ is geometric. This shows $\psi_1\leq \tau_2$. Note that the number of Delsarte cliques containing any fixed vertex is $-\theta_{\min}$, so that $\tau_i\leq -\theta_{\min}$ for all $i=1,\ldots, D$. Hence, we find $\psi_1\leq \tau_2\leq -\theta_{\min}$. In particular, it follows by Lemma \ref{geo-para} (ii) and (\ref{basic-ineq-psi1}) that $\psi_1^2\leq \tau_2 \psi_1=c_2\leq \theta_{\min}^2$ holds. \hfill\hbox{\rule{3pt}{6pt}}\\ \begin{theorem}\label{gdrg} Let $\Gamma$ be a geometric distance-regular graph with smallest eigenvalue $-3$. Then $\Gamma$ satisfies one of the following. \begin{enumerate} \item[(i)] $k=3$ and $\Gamma$ is one of the following graphs: the Heawood graph, the Pappus graph, Tutte's $8$-cage, the Desargues graph, Tutte's $12$-cage, the Foster graph, $K_{3,3}$, $H(3,2)$.
\item[(ii)] A Steiner graph $S_3(\alpha-3)$, i.e., a geometric strongly regular graph with parameters $\left(\frac{(2\alpha-3)(\alpha-2)}{3},3\alpha-9,\alpha,9 \right)$, where $\alpha\geq 6$ and $\alpha \equiv 0,2~~(\mbox{mod}~3)$. \item[(iii)] A Latin square graph $LS_3(\alpha)$, i.e., a geometric strongly regular graph with parameters $(\alpha ^2,3(\alpha-1),\alpha,6)$, where $\alpha \geq 4$. \item[(iv)] The generalized $2D$-gon of order $(s,2)$, where $(D,s)=(2,2),(2,4),(3,8)$. \item[(v)] One of the two generalized hexagons of order $(2,2)$ with $\iota(\Gamma)=\{6,4,4;1,1,3\}$. \item[(vi)] A generalized octagon of order $(4,2)$ with $\iota(\Gamma)=\{12,8,8,8;1,1,1,3\}$. \item[(vii)] The Johnson graph $J(\alpha,3)$, where $\alpha \geq 6$. \item[(viii)] $D=3$ and $\iota(\Gamma)=\{3\alpha +3,2\alpha +2, \alpha +2-\beta ;1,2,3\beta\}$, where $\alpha \geq \beta \geq 1$. \item[(ix)] The halved Foster graph with $\iota(\Gamma)=\{6,4,2,1;1,1,4,6\}$. \item[(x)] $D=\mbox{\em {\texttt h}}+2\geq 4$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}}\\ (2,2\alpha+\beta-1,\alpha-\beta+2) & \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=\mbox{\em {\texttt h}} +2 \end{array} \right., \mbox{~where~} \alpha \geq \beta \geq 2.$$ \item[(xi)] $D=\mbox{\em {\texttt h}}+2\geq 3$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}} \\ (1,\alpha+2\beta-2,2\alpha-2\beta+4) & \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=\mbox{\em {\texttt h}} +2 \end{array} \right., \mbox{~where~} \alpha\geq \beta \geq 2.$$ \item[(xii)] A distance-$2$ graph of a distance-biregular graph with vertices of valency $3$ and $$(c_i,a_i,b_i)=\left\{ \begin{array}{ll} (1,\alpha,2\alpha+2)& \mbox{ for }1\leq i\leq \mbox{\em {\texttt h}}\\ (1,\alpha+2,2\alpha)& \mbox{ for }i=\mbox{\em {\texttt h}} +1\\ (4,2\alpha-1,\alpha)& \mbox{ for }\mbox{\em {\texttt h}} +2\leq i\leq D-2\\ (4,2\alpha+\beta-3,\alpha-\beta+2) & \mbox{ for }i=D-1\\ (3\beta,3\alpha-3\beta+3,0) & \mbox{ for }i=D \end{array} \right., \mbox{~where~} \alpha\geq \beta \mbox{~and~} \beta\in \{2,3\}.$$ \end{enumerate} \end{theorem} \noindent{\em Proof: } Let $\Gamma$ be geometric with respect to $\mathcal{C}$. As $\theta_{\min}=-3$, we have $k\equiv 0$ (mod $3$). If $k=3$ then $\Gamma$ satisfies (i) by \cite{k=3} (cf.\cite[Theorem 7.5.1]{bcn}). In the rest of the proof, we assume $k\geq 6$ and let $D=D(\Gamma)$. We divide the proof into two cases, ({\bf Case 1: $c_2\geq 2$}) and ({\bf Case 2: $c_2 = 1$}).\\ \noindent {\bf Case 1: $c_2 \geq 2$}\\ By (\ref{basic-ineq-psi1}) with $\theta_{\min}=-3$, we find $\psi_1\in \{1,2,3\}$.\\ First suppose $\psi_1=1$, so that $\Gamma$ is locally a disjoint union of three cliques of size $a_1+1$ and $k=3(a_1+1)$. By \cite[Theorem 3.1]{yamazaki}, $\Gamma$ satisfies either ($c_2=2$ and $2\leq D \leq 3$) or ($c_2=3$ and $D=2$). If $c_2=2$ and $D=2$ then $-3$ is not the smallest eigenvalue of the matrix $L_1(\Gamma)$ in (\ref{mtx-L}), which contradicts $\theta_{\min}=-3$. If $c_2=2$ and $D =3$ then $\tau_2=2$ and $\tau_3=3$ by Lemma \ref{geo-para} (ii) and (\ref{tau-D}), respectively, and thus $(c_1,a_1,b_1)=(1,a_1,2a_1+2)$, $(c_2,a_2,b_2)=(2,2a_1-1+\psi_2,a_1+2-\psi_2)$ and $(c_3,a_3,b_3)=(3\psi_2,3a_1+3-3\psi_2,0)$ all hold by Lemma \ref{geo-para}. Now, $\Gamma$ satisfies (viii).
If $c_2=3$ and $D =2$, then $\Gamma$ is the generalized quadrangle of order $(s,2)$, where $s=2,4$ (cf. \cite[Theorem 6.5.1]{bcn} and \cite[Theorem 1]{rnp(t=2)}).\\ Next suppose $\psi_1=2$, so that $\tau_2\in \{2,3\}$, $b_1=\frac{2(k-3)}{3}$ and $c_2=2\tau_2$ all follow by (\ref{basic-ineq-psi1}) and Lemma \ref{geo-para}. If $D \geq 3$ then $\Gamma$ is the Johnson graph $J(\alpha,3)~(\alpha \geq 6)$ of diameter $3$ by \cite[Theorem 7.1]{-m} and \cite[Remark 2 (ii)]{DCG2}. Now, we consider $D =2$. Then, $\tau_2=3$ by (\ref{tau-D}), and $\Gamma$ is a strongly regular graph with parameters $(a_1^2,3(a_1-1),a_1,6)$, where $a_1\geq 4$ as $k\geq 6$ and $\Gamma$ is geometric. Hence, (iii) follows as $\Gamma$ is the line graph of a $2-(3\alpha,3,1)$-transversal design, where $\mathcal{C}$ and $V(\Gamma)$ are the set of points and lines respectively (see Remark \ref{geo-rmk} (b)).\\ Finally, we consider $\psi_1=3$. Then $c_2=\tau_2 \psi_1=9$ holds by Lemma \ref{psi-bd}. From Lemma \ref{geo-para} (i) with $\theta_{\min}+\tau_2=0$, $D=2$ follows, and thus $(c_1,a_1,b_1)=(1,a_1,2a_1-10)$ and $(c_2,a_2,b_2)=(9,3a_1-18,0)$. Since $\Gamma$ is geometric, $\Gamma$ is a Steiner graph $S_3(\alpha-3)$ and $\Gamma$ satisfies (ii), where the restriction on $a_1$ is obtained from $k\geq 6$ and the fact that $|V(\Gamma)|$ is a positive integer (see \cite[p.396]{neumaier-m} and Remark \ref{geo-rmk}). This completes the proof of {\bf Case 1}.\\ \noindent {\bf Case 2: $c_2 = 1$}\\ From the conditions $c_2=\tau_2 \psi_1=1$ and $\theta_{\min}=-3$, $\Gamma$ is locally a disjoint union of three cliques of size $a_1+1$. If $a_1\leq 1$ then $k\in \{3,6\}$ follows from $|C|\in \{2,3\}$ for any Delsarte clique $C$ in $\Gamma$; as $k\geq 6$, this forces $k=6$. By \cite{a=1k=6}, $\Gamma$ satisfies (v) or (ix). \\ From now on, we assume $a_1\geq 2$. First suppose $c_{{\texttt h}+1}\geq 2$, where ${\texttt h}={\texttt h}(\Gamma)$ is the head of $\Gamma$ in (\ref{head}). Then by (\ref{cD}) and \cite[Theorem 3.1]{yamazaki}, $\Gamma$ satisfies either ($c_{{\texttt h}+1}=3$ and $D={\texttt h}+1$) or ($c_{{\texttt h}+1}=2$ and $D={\texttt h}+2$). For the case $c_{{\texttt h} +1}=3$, $\Gamma$ is a generalized $2D$-gon of order $(s,2)$, where $(D,s)=(3,8),(4,4)$ (cf. \cite[Section 6.5]{bcn} and \cite[Theorem 1]{rnp(t=2)}). If $c_{{\texttt h}+1}=2$, then we find $\psi_{{\texttt h}}=1$ and $\tau_{{\texttt h}+1}=2$ by $c_{{\texttt h}}=\psi_{{\texttt h}-1}\tau_{{\texttt h}}=1$ and \[a_1=a_{{\texttt h}}=\tau_{{\texttt h}}(a_1+1-\psi_{{\texttt h}-1})+(3-\tau_{{\texttt h}})(\psi_{{\texttt h}}-1),\] from which (x) holds by (\ref{tau-D}), Lemma \ref{geo-para} and \cite[Proposition 2]{rnp(t=2)}. Next suppose $c_{{\texttt h}+1}=1$. By (\ref{cD}) and \cite[Theorem 4.1]{yamazaki}, $\Gamma$ satisfies either $D={\texttt h}+2$ or (xii). For the case $D={\texttt h}+2$ with $c_{{\texttt h}+1}=1$, (xi) follows by (\ref{tau-D}) and Lemma \ref{geo-para}. This completes the proof of Theorem \ref{gdrg}. \hfill\hbox{\rule{3pt}{6pt}}\\ We remark on the distance-regular graphs in Theorem \ref{gdrg}. \begin{remark}\label{geo-rmk} \begin{enumerate} \item[(a)] The line graph of a Steiner triple system on $2\alpha-3$ points for any integer $\alpha \geq 6$ satisfying $\alpha \equiv 0,2$ (mod~$3$), which is called a {\em Steiner graph} $S_3(\alpha-3)$, is a strongly regular graph~given in (ii).
Using the fact that a Steiner triple system on $v$ points exists for each integer $v$ satisfying $v\equiv 1$ or $3$ (mod $6$), Wilson showed in \cite{wilson74} and \cite{wilson75} that there are super-exponentially many Steiner triple systems on an admissible number of points; hence there are super-exponentially many strongly regular graphs in (ii) (cf. \cite[p.~209]{cameronsrg}, \cite[Lemma 4.1]{neumaier-m}). \item[(b)] The line graph of a $2-(mn,m,1)$-transversal design ($n\geq m+1$) is called a Latin square graph $LS_m(n)$ (see \cite[p.396]{neumaier-m}). In particular, a Latin square graph $LS_3(\alpha)$ is a geometric strongly regular graph in (iii). Since there are more than exponentially many Latin squares of order $\alpha$, there are likewise more than exponentially many strongly regular graphs in (iii) (cf. \cite[p.~210]{cameronsrg}, \cite[Lemma 4.2]{neumaier-m}). \item[(c)] In the list of \cite[Chapter 14]{bcn}, only the Hamming graph $H(3,\alpha+2)$, the Doob graph of diameter $3$, and the graphs with intersection array $\{45,30,7;1,2,27\}$ satisfy (viii). No distance-regular graph~with the last array, $\{45,30,7;1,2,27\}$, is known. We can also check that if $\Gamma$ satisfies (viii) then the eigenvalues of $\Gamma$ are integers. \end{enumerate} \end{remark} \noindent {\em Proof of Theorem \ref{main-cor}: } It is straightforward from Theorem \ref{main-thm} and Theorem \ref{gdrg}. \hfill\hbox{\rule{3pt}{6pt}}\\ \begin{center} {\bf Acknowledgements} \end{center} The author was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD, Basic Research Promotion Fund) KRF-2008-359-C00002. The author would like to thank Jack Koolen for his valuable comments, and Jongyook Park for his careful reading. \vspace{5mm}
\section{INTRODUCTION} {\em Ab initio} calculations have shed light on many questions in physics, chemistry, and materials science, including chemical reactions in solution \cite{SprikBlumberger05,SprikBlumberger06} and at surfaces.\cite{Radeke97,Norskov-Greeley02,Gross02} However, first principles calculations have offered less insight into the complex and multi-faceted field of electrochemistry, despite the potential scientific and technological impact of advances in this field. Because the fundamental microscopic mechanisms involved in oxidation and reduction at electrode surfaces are often unknown and are difficult to determine experimentally, \cite{Shi06} rich scientific opportunities are available for theoretical study. From a technological perspective, practicable first principles calculations could become a vital tool to direct the experimental search for better catalysts, with significant potential societal impact: as just one example, economically viable replacement of gasoline powered engines with fuel cells in personal transport systems requires systems operating at a cost of \$35/kW, whereas the current cost is \$294/kW, \cite{FuelCellAnalysis} due mostly to the expense of platinum-based catalyst materials. The primary challenge which distinguishes theoretical study of electrochemical systems is that including the liquid electrolyte, which critically influences the functioning of the electrochemical cell, requires detailed thermodynamic sampling of all possible internal molecular configurations of the fluid. Such critical influences include (a) screening of charged systems, (b) establishment of an absolute potential reference for oxidation and reduction potentials, and (c) voltage-dependence of fundamental microscopic processes, including the nature of reaction pathways and transition states. While there have been attempts at the full {\em ab initio} molecular dynamics approach to this challenge,\cite{Gross09,Gross10,SprikBlumberger05} such calculations are necessarily of the heroic type, require tremendous computational resources, and do not lend themselves to systematic studies of multiple reactions within a series of many candidate systems. Such studies require development of an alternate approach to first-principles study of electrochemistry. \subsection{Previous approaches} One response to the aforementioned challenges is to avoid the issue and lessen the computational cost either by forgoing electronic structure calculation entirely or by neglecting the thermodynamic sampling of the environment. Some studies have employed classical molecular dynamics with interatomic potentials; \cite{Chandler09,Pounds} however, such semi-empirical techniques often perform poorly when describing chemical reactions involving electron-transfer, which are central to oxidation and reduction reactions. The latter approach -- single configuration {\em ab initio} calculations -- neglects key phenomena associated with the presence of an electrolyte liquid in equilibrium. The most direct single configuration {\em ab initio} approach pursued to date is to study the relevant reactions on a surface in vacuum and to study trends and correlations with the behavior in electrochemical systems.\cite{Norskov04, Norskov2009NatChem} Some of these studies are done in a constant charge or constant potential ensemble\cite{Lozovoi03} to allow variation of the applied electrode potential.
This approach, however, does not include critical physical effects of the electrolyte such as the dielectric response of the liquid environment and the presence of high concentrations of ions in the supporting electrolyte. In response, an intermediate approach is to include a layer or a few layers of explicit water molecules in the calculation.\cite{Neurock06,Neurock07,KarlsbergNorskov07,Rossmeisl11} Such an approach is problematic for a number of reasons. First, actual electrochemical systems can have rather long ionic screening lengths (30 \AA~ for an ionic concentration of 0.01 M), which would require large amounts of explicit water. Second, simulation of the actual effects of dipolar and ionic screening in the fluid requires extensive sampling of phase space, corresponding to very long run times. Indeed, in some references, only one layer of frozen water without thermal or time sampling is included. \cite{Norskov07} Moreover, as most reactions of interest occur at potentials away from the potential of zero charge, such calculations must include a net charge, which can be problematic in typical solid-state periodic supercell calculations. One may compensate for this charge with a uniform charged background extending throughout the unit cell, in both the liquid and the solid regions,\cite{Neurock07} but this distribution does not reflect the electrochemical reality. Other methods include an explicit reference electrode with a corresponding negative surface charge to keep the unit cell neutral,\cite{Lozovoi03} but this requires a somewhat arbitrary choice of where to place the compensating electrode and may not lead to realistic potential profiles. More recently, modeling the electrolyte by a layer of explicit hydrogen atoms was shown to provide a source of electrons for charged surface calculations while keeping the unit cell neutral.\cite{Norskov10} Again, however, this approach requires either judicious choice of the locations of the protons which make up the corresponding reference electrode or computationally intensive thermodynamic sampling. Another broad approach constructs an approximate {\em a posteriori} continuum model\cite{GygiFattebert} for both the dielectric response of the water molecules and the Debye screening effects of the ions and performs {\em ab initio} calculations where the electrostatic potential is determined by solving Poisson-Boltzmann-like equations.\cite{Otani06,AndersonPRB08,MarzariDabo} Explicit inclusion of a few layers of explicit water molecules and ionic species within the {\em ab initio} calculations can further enhance the reliability of this approach without dramatic additional computational cost. While including explicitly the most recognized physical effects of the electrolyte, such Poisson-Boltzmann-like approaches do not arise from an exact underlying theory. Thus, they may disregard physically relevant effects, such as the non-locality and non-linearity of the dielectric response of liquid water and the surface tension associated with formation of the liquid-solid interface.
We note, for instance, that a typical electrochemical field strength would be a 0.1 V drop over a double layer width of 3~\AA, or 300 MV/m, a field at which the bulk dielectric constant of water is reduced by about one-third, strongly indicating that non-linear dielectric saturation effects are present in actual electrochemical systems, particularly near the liquid-solid interface, and ultimately should be captured naturally for an {\em ab initio} theory to be truly predictive and reliable. \subsection{Joint density-functional theory approach} This work begins by placing the aforementioned modified Poisson-Boltzmann approaches on a firm theoretical footing within an {\em in principle} exact density-functional theory formalism, and then describes the path to including all of the aforementioned effects in a fully rigorous {\em ab initio} density functional. The work then goes on to elucidate the fundamental physics underlying electrochemistry and provide techniques for computation of fundamental electrochemical quantities from a formal perspective. The work then shifts focus and introduces an extremely simplified functional for initial exploration of the potential of our overall approach for practical calculations. The equations which result at this high level of simplification resemble those introduced by others\cite{AndersonPRB08,MarzariDabo} from an {\em a posteriori} perspective, thus putting those works on a firmer theoretical footing and showing them in context as approximate versions of a rigorous underlying approach. We then work within this simplified framework to explore -- in more depth than previously in the literature -- fundamental physical effects in electrochemistry, including the microscopic behavior of the electrostatic potential near an electrode surface, the structure of the electrochemical double layer, differential capacitances, and potentials of zero charge across a series of metals. The encouraging results which we obtain even with this highly simplified functional indicate that the overall framework is sound for the exploration of physical electrochemical phenomena and strongly suggest that the more accurate functionals under present development \cite{Lischner10} will yield accurate, fully {\em ab initio} results. Section~II begins by laying out our theoretical framework, and Section~III describes connections between experimental electrochemical observables and microscopic {\em ab initio} computables. Section~IV introduces a simple approximate functional which offers a computationally efficient means of bridging connections to experimental electrochemistry. Section~V provides specific details about electronic structure calculations of transition metal surfaces. Finally, Section~VI presents electrochemical results for those metallic surfaces obtained with our simplified functional and Section~VII concludes the paper. The appendices include technical information regarding implementation of our functional within a pseudopotential framework. \section{THEORETICAL FRAMEWORK} As described in the Introduction, much of the challenge in performing realistic {\em ab initio} electrochemistry calculations comes not only from the need to include explicitly the atoms composing the environment but also from the need to perform thermodynamic averaging over the locations of those atoms. Recently, however, it was proved rigorously that one can compute exact free-energies by including the environment in a joint density-functional theory framework.
\cite{Petrosyan05,Petrosyan07} Specifically, this previous work shows that the free energy $A$ of an explicit quantum mechanical system with its nuclei at fixed locations while in thermodynamic equilibrium with a liquid environment (including full quantum mechanical treatment of the environment electrons and nuclei), can be obtained by the following variational principle, \cite{Petrosyan07} \begin{flalign} A&=\min_{n(r),\{N_{\alpha}(r)\}}\{G[n(r),\{N_{\alpha}(r)\},V(r) ]\nonumber\\&-\int d^3r V(r)n(r)\} \label{FullFunc} \end{flalign} where $G[n(r),\{N_\alpha(r)\},V(r)]$ is a universal functional of the electron density of the explicit system $n(r)$, the densities of the nuclei of the various atomic species in the environment $\{N_\alpha(r)\}$, and the electrostatic potential from the nuclei of the explicit system $V(r)$. The functional $G[n(r),\{N_\alpha(r)\},V(r)]$ is universal in the sense that it depends only on the nature of the environment and that its dependence on the explicit system is only through the electrostatic potential of the nuclei included in $V(r)$ and the electron density of the explicit system $n(r)$. With this functional dependence established, one can then separate the functional into large, known portions and a smaller coupling term ultimately to be approximated,\cite{Petrosyan07} \begin{flalign} G[n(r),\{N_\alpha(r)\},V(r)]&\equiv A_{KS}[n(r)]\nonumber +\Omega_{lq}[\{N_\alpha(r)\}]\\& +\Delta A[n(r),\{N_\alpha(r)\},V(r)] \label{eq:fJDFT} \end{flalign} where $A_{KS}[n(r)]$ and $\Omega_{lq}[\{N_\alpha(r)\}]$ are, respectively, the standard universal Kohn-Sham electron-density functional of the explicit solute system in isolation (including its nuclei and their interaction with its electrons) and the ``classical'' density-functional for the liquid solvent environment in isolation. The remainder, $\Delta A[n(r),\{N_\alpha(r)\},V(r)]$, is then the coupling term between the solute and solvent. For $A_{KS}[n(r)]$, one can employ any of the popular approximations to electronic density functional theory such as the local-density approximation (LDA), or more sophisticated functionals such as the generalized-gradient approximation (GGA). \cite{PW91} On the other hand, functionals $\Omega_{lq}[\{N_\alpha(r)\}]$ for liquid solvents such as water are generally less well developed, though the field has progressed significantly over the past few years. For example, one recent, numerically efficient functional for liquid water reproduces many of the important factors determining the interaction between the liquid and a solute, including the linear {\em and nonlinear} non-local dielectric response, the experimental site-site correlation functions, the surface tension, the bulk modulus of the liquid and the variation of this modulus with pressure, the density of the liquid and the vapor phase, and liquid-vapor coexistence \cite{Lischner10}. A framework employing such a functional would be more reliable than the modified Poisson-Boltzmann approaches available to date, which do not incorporate any of these effects except for the linear local dielectric response appropriate to macroscopic fields. Inclusion of the densities of any ions in the electrolyte environment among the $\{N_\alpha(r)\}$ is a natural way to include their effects into $\Omega_{lq}[\{N_\alpha(r)\}]$ and provide ionic screening within the overall framework. Finally, developing approximate forms for the coupling $\Delta A[n(r),\{N_\alpha(r)\},V(r)]$ in (\ref{eq:fJDFT}) remains an open area of research.
In an early attempt, Petrosyan and co-workers\cite{Petrosyan07} employed a simplified $\Omega_{lq}[\{N_\alpha(r)\}]$ using a single density field $N(r)$ to describe the fluid. In that preliminary work, because such an $N(r)$ gives no explicit sense of the orientation of the liquid molecules, the tendency of these molecules to orient and screen long-range electric fields was included {\em a posteriori} through a simplified linear (but nonlocal) response function. In a more complete framework with explicit distributions for the oxygen and hydrogen sites among the $\{N_\alpha(r)\}$, the full non-local and non-linear dielectric response can be handled completely {\em a priori}.\cite{Lischner10} Beyond long-range screening effects, the coupling $\Delta A[n(r),\{N_\alpha(r)\},V(r)]$ must also include effects from direct contact between the solvent molecules and the solute electrons. Because the overlap between the molecular and electron densities is small, the lowest-order coupling, very similar to the ``molecular'' pseudopotentials of the type introduced by Kim {\em et al.},\cite{Cho96} would be a reasonable starting point. Using such a pseudopotential approach (with only the densities of the oxygen atoms of the water molecules), Petrosyan and coworkers \cite{Petrosyan07} obtained good agreement (2 kcal/mole) with experimental solvation energies, without any fitting of parameters to solvation data. Combining a coupling functional $\Delta A$ similar to that of Petrosyan and coworkers with more explicit functionals $\Omega_{lq}[\{N_\alpha(r)\}]$ for the liquid\cite{Lischner10} and standard electron density functionals $A_{KS}[n(r)]$ for the electrons is thus a quite promising pathway to highly accurate {\em ab initio} description of systems in equilibrium with an electrolyte environment. \section{CONNECTIONS TO ELECTROCHEMISTRY} Turning now to the topic of electrochemistry, we present a general theoretical framework to relate the results of {\em ab initio} calculations to experimentally measurable quantities, beginning with a brief review of the electrochemical concepts. \subsection{Electrochemical potential} In the electrochemical literature, the {\em electrochemical potential} $\bar{\mu}$ of the electrons in a given electrode is defined as the energy required to move electrons from a reference reservoir to the working electrode. This potential is often conceptualized as a sum of two terms, $\bar{\mu}=\mu_{int}-F\Phi$, where $\mu_{int}$ is the purely ``chemical'' potential (due to concentration gradients, temperature, chemical bonding, etc.), $\Phi$ is the external, macroscopic electrostatic potential, and $F$ is Faraday's constant. (Note that $F=N_A e$ has the numerical value of unity in atomic units, where chemical potentials are measured {\em per particle} rather than {\em per mole}.) In the physics literature, this definition for $\bar{\mu}$ (when measured {\em per particle}) corresponds precisely to the ``chemical potential for electrons,'' which appears for instance in the Fermi occupancy function $f=[e^{(\epsilon-\bar{\mu})/k_B T}+1]^{-1}$. \subsection{Electrode potential} In a simple, two-electrode electrochemical cell, the driving force for chemical reactions occurring at the electrode surface is a voltage applied between the reference electrode and working electrode. In the electrochemical literature, this voltage is known as the {\em electrode potential} ${\cal E}$, defined as the electromotive force applied to a cell consisting of a working electrode and a reference electrode.
In atomic units (where the charge of an electron is unity), the electrode potential is thus equivalent to the energy (per fundamental charge $e$) supplied to transfer charge (generally in the form of electrons) from the reference to the working electrode, assuming no dissipative losses. Under conditions where diffusion of molecules and reactions occurring in the solution are minimal, this energy is completely transferred to the electrons in the system, causing a corresponding change in the electrochemical potential of the electrons in the working electrode. An idealized two-terminal electrochemical cell controls the chemical potential of a working electrode $\bar{\mu}^{(W)}$ through application of an electrode potential ${\cal E}$ (voltage) between it and a reference electrode of known chemical potential $\bar{\mu}^{(R)}$ (See Figure~\ref{Figure1}(a)). With the application of the electrode potential ${\cal E}$, the energy {\em cost} to the electrochemical cell, under reversible (lossless) conditions, to move a single electron from the reference electrode to the working electrode is $dU=-\bar{\mu}^{(R)}+ \bar{\mu}^{(W)}+{\cal E}$. Here, the electrode potential appears with a positive sign, because to move a negative charge from the negative to positive terminal requires a net investment of energy, and thus {\em cost} to the electrochemical cell, against the source of the potential ${\cal E}$. Under equilibrium conditions, we must have $dU=0$, so that ${\cal E}=\bar{\mu}^{(R)}-\bar{\mu}^{(W)}$. As Section~IV shows, the electrostatic model which we employ for ionic screening in this work establishes a fixed reference such that the microscopic {\em electron} potential $\phi$ (the Coulomb potential energy of an electron at a given point) is zero deep in the liquid environment far from the electrode (See Figure~\ref{Figure1}(b)) -- implying that the macroscopic {\em electrostatic} potential $\Phi$ (which differs in overall sign from $\phi$) there is also zero. A convenient reference electrode thus corresponds to electrons solvated deep in the fluid, which will have electrochemical potential $\bar{\mu}^{(R)}=\mu_{int}^{(s)}-F \Phi = \mu_{int}^{(s)}$, where $\mu_{int}^{(s)}$ corresponds to the solvation energy of an electron in the liquid. Referring the scale of the electrochemical potential to such solvated electrons (so that $\mu_{int}^{(s)}\equiv 0$), we then have $\bar{\mu}^{(R)}=0$, so that ${\cal E}=-\bar{\mu}^{(W)}$. In sum, the opposite of the electronic chemical potential in our {\em ab initio} calculations corresponds precisely to the electrode potential relative to solvated electrons. In practice, the choice of approximate density functionals $\Omega_{lq}[\{N_\alpha(r)\}]$ and $\Delta A[n(r),\{N_\alpha(r)\},V(r)]$ sets the value of the electron solvation energy; each model fluid corresponds to a different reference electrode of solvated electrons. Section~VI demonstrates the establishment of the electrochemical potential of such a model reference electrode relative to the standard hydrogen electrode (SHE). \begin{figure} \centering \subfloat[]{\label{fig:1a}\includegraphics[width=5cm]{figure1a.eps}} \subfloat[]{\label{fig:1b}\includegraphics[width=7.5cm]{figure1b.eps}} \caption{(a) Schematic of an electrochemical cell. The working electrode is explicitly modeled while the reference electrode is fixed at zero.
(b) Relationship between the microscopic electron potential $\langle\phi(z)\rangle$ (averaged over the directions parallel to the surface), electrochemical potential, and applied potential for a Pt (111) surface. The large variations in potential to the left of $z_{Pt}$ correspond to the electrons and ionic cores comprising the metal while the decay into the fluid region is visible to the right of $z_{Pt}$.} \label{Figure1} \end{figure} \subsection{Potential of zero charge (PZC) and differential capacitance.} For any given working electrode, a specific number of electrons, and thus electronic chemical potential $\bar{\mu}$, is required to keep the system electrically neutral. The corresponding electrode potential (${\cal E}=-\bar{\mu}$) is known as the potential of zero charge. Adsorbed ions from the electrolyte or other contaminants on the electrode surface create uncertainty in the experimental determination of the potential of zero charge. One advantage of {\em ab initio} calculation is the ability to separate the contribution due to adsorbed species from the contribution of the electrochemical double layer, the latter being defined as the potential of zero free charge (PZFC). Experimentally, only the potential of zero total charge (PZTC), which includes the effects of surface coverage, may be measured directly, and the potential of zero free charge can only be inferred.\cite{Cuesta} {\em Ab initio} approaches such as ours allow for the possibility of controlled addition of adsorbed species and direct study of these issues. At other values of the electrode potential ${\cal E}$, the system develops a charge per unit surface area $\sigma \equiv Q/A$. From the relationship between these two quantities $\sigma({\cal E})$, one can then determine the differential capacitance per unit area ${\cal C} \equiv \frac{d\sigma}{d{\cal E}}$. The total differential capacitance of a metal is determined by both the density of states of the metal surface ${\cal C}_{\mbox{DOS}}$, also known as the quantum capacitance,\cite{QCap} and the capacitance associated with the fluid ${\cal C}_{\mbox{fl}}$. These capacitances act in series, so that the full differential capacitance is given by \begin{flalign} {\cal C}^{-1}={\cal C}_{\mbox{fl}}^{-1}+{\cal C}_{\mbox{DOS}}^{-1}. \label{Capseries} \end{flalign} In typical systems, ${\cal C}_{\mbox{DOS}} \sim 100$--$1000~\mu F/cm^2$ is larger than the fluid capacitance (typically ${\cal C}_{\mbox{fl}} \sim 15$--$100~\mu F/cm^2$), so when the two are placed in series, the fluid capacitance dominates. The fluid capacitance may be further decomposed into two capacitors acting in series, \begin{flalign} {\cal C}_{\mbox{fl}}^{-1}={\cal C}_{\Delta}^{-1}+{\cal C}_{\kappa}^{-1}, \label{CapFl} \end{flalign} as in the Gouy-Chapman-Stern model for the electrochemical double layer.\cite{Gouy,Chapman,Stern} The surface charge on the electrode and the first layer of oppositely charged ions behave like a parallel plate capacitor with distance $\Delta$ between the plates. $\Delta$ indicates the distance from the electrode surface to the first layer of ions -- called the outer Helmholtz layer for non-adsorbing electrolytes. The capacitance per unit area for this simple model is ${\cal C}_{\Delta}=\frac{\epsilon_0}{\Delta}$, analogous to the Helmholtz capacitance. For a gap size $\Delta \sim 0.5~$\AA, this model leads to a ``gap'' capacitance of about 20 $\mu F/cm^2$. Additional capacitance arises from the diffuse ions in the liquid,
where the model for this capacitance ${\cal C}_{\kappa}=\epsilon_b\epsilon_0\kappa \mbox{cosh}(\frac{e\phi(\Delta)}{2k_BT})$ is also well-known from the electrochemistry literature.\cite{BardFaulkner} In the limit where most of the voltage drop is found in the outer Helmholtz layer ($\phi(\Delta)\sim k_BT$), this expression reduces to a constant value which depends only on the concentration of ions in the electrolyte and the bulk dielectric constant of the fluid $\epsilon_b$: ${\cal C}_{\kappa}=\frac{\epsilon_b\epsilon_0}{\kappa^{-1}}$. For water with a 1.0 M ionic concentration, the ``ion'' capacitance is ${\cal C}_{\kappa}=$240 $\mu F/cm^2$, an order of magnitude larger than the ``gap'' capacitance. At this high ionic concentration, the ``gap'' (Helmholtz) capacitance dominates not only the fluid capacitance, but also the total capacitance. For lower concentrations of ions, the magnitude of the ``ion'' capacitance becomes more comparable to the ``gap'' capacitance and voltage-dependent nonlinear effects in the fluid could become important. \subsection{Cyclic voltammetry} A powerful technique for electrochemical analysis is the cyclic voltammogram, in which current is measured as a function of voltage swept cyclically at a constant rate. Such data yield detailed information about electron transfer in complicated electrode reactions, with sharp peaks corresponding to oxidation or reduction potentials for chemical reactions taking place at the electrode surface. Because current is a time-varying quantity and density-functional theory does not include information about time dependence and reaction rates, careful reasoning must be employed to compare {\em ab initio} calculations to experimental current-potential curves. Previous work has correlated surface coverage of adsorbed hydrogen with current in order to predict cyclic voltammograms for hydrogen evolution on platinum electrodes.\cite{KarlsbergNorskov07} This simple model for a cyclic voltammogram is intrinsically limited by saturation at a full monolayer of hydrogen adsorption, rather than by the more realistic effects of mass transport and diffusion, but nonetheless provides useful comparisons to experimental data. Using a similar approach, our framework gives the predicted current density $J$ directly through the chain rule as $$J=\frac{d\sigma}{dt}=\frac{d{\cal E}}{dt} \frac{d\sigma}{d{\cal E}} \equiv K {\cal C}({\cal E}),$$ where $K=\frac{d\cal E}{dt}$ is the voltage sweep rate, and ${\cal C}({\cal E})$ is the differential capacitance per unit area at electrode potential ${\cal E}$, as defined above. For the bare metal surfaces with no adsorbates studied in Section~VI of this work, only the double layer region structure is visible, but the technique may be generalized to study chemical reactions at the electrode surface. The current density curve is simply proportional to the differential capacitance per unit area ${\cal C}$ as long as the state of the system varies adiabatically and the voltage sweep rate is significantly slower than the reaction rate. In the adiabatic limit, features in the charge-potential curves calculated for reaction intermediates and transition states can be compared directly with peaks in cyclic voltammograms to predict oxidation and reduction potentials from first principles.
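Before turning to the approximate functional, it is useful to check these magnitude estimates numerically. The following minimal Python sketch evaluates the series model of Eqs.~(\ref{Capseries}) and (\ref{CapFl}) together with the adiabatic current estimate above; the temperature, bulk dielectric constant, gap width, and sweep rate are assumed illustrative values, not outputs of any calculation in this paper.
\begin{verbatim}
import math

# Series capacitance model with assumed textbook constants.
eps0  = 8.854e-12          # vacuum permittivity (F/m)
kB_T  = 1.381e-23 * 298.0  # thermal energy at 298 K (J)
e     = 1.602e-19          # elementary charge (C)
eps_b = 78.4               # assumed bulk dielectric constant of water

def uF_cm2(C):             # convert F/m^2 -> micro-F/cm^2
    return C * 1e2

Delta = 0.5e-10            # assumed Stern gap width (m)
C_gap = eps0 / Delta       # "gap" (Helmholtz) capacitance

c_ion = 1.0e3 * 6.022e23   # 1.0 M monovalent ions (per m^3, per species)
kappa = math.sqrt(2.0 * c_ion * e**2 / (eps_b * eps0 * kB_T))
C_ion = eps_b * eps0 * kappa   # "ion" capacitance, small-potential limit

C_fl = 1.0 / (1.0/C_gap + 1.0/C_ion)   # series combination of the two
K = 0.05                               # assumed sweep rate (V/s)

print(f"1/kappa = {1e10/kappa:.2f} Angstrom")
print(f"C_gap = {uF_cm2(C_gap):.0f}, C_ion = {uF_cm2(C_ion):.0f}, "
      f"C_fl = {uF_cm2(C_fl):.0f} micro-F/cm^2")
print(f"J = K*C_fl = {K*uF_cm2(C_fl):.2f} micro-A/cm^2")
\end{verbatim}
The printed values reproduce the estimates quoted in Section III(C): a ``gap'' capacitance near $20~\mu F/cm^2$, an ``ion'' capacitance above $200~\mu F/cm^2$, and a series combination dominated by the gap term. The final line illustrates the adiabatic current estimate $J=K{\cal C}$.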
\section{IMPLICIT SOLVENT MODELS} For computational expediency and to explore the performance of the overall framework for quantities of electrochemical interest, we now introduce a highly approximate functional. Despite its simplicity, we find that the model below leads to very promising results for a number of physical quantities of direct interest in electrochemical systems. The first step in this approximation is to minimize with respect to the liquid nuclear density fields in the fully rigorous functional \cite{Petrosyan05} so that Eq. (\ref{FullFunc}) becomes \begin{flalign} \tilde{A}&=\min_{n(r)}(A_{KS}[n(r),\{Z_I,R_I\}]\nonumber\\&+\Delta \tilde{ A}[n(r),\{Z_I,R_I\}]), \end{flalign} with the effects of the liquid environment all appearing in the new term \begin{flalign} \Delta \tilde{A}[n(r),\{Z_I,R_I\}] &\equiv \min_{N_{\alpha}(r)}(\Omega_{lq}[N_\alpha(r)]\nonumber\\& +\Delta A[n(r),N_\alpha(r),\{Z_I,R_I\}]), \end{flalign} where $Z_I$ and $R_I$ are the charges and positions of the surface nuclei (and those of any explicitly included adsorbed species). This minimization process leaves a functional in terms of {\em only} the properties of the explicit system and incorporates all of the solvent effects implicitly. Up to this point, this theory is in principle exact, although the exact form of $\Delta \tilde{A}[n(r),\{Z_I,R_I\}]$ is unknown. For practical calculations this functional must be approximated in a way which captures the underlying physics with sufficient accuracy. \subsection{Approximate functional} In this initial work, we assume that the important interactions between the solvent environment and the explicit solute electronic system are all electrostatic in nature. Our rationale for this choice is the fact that most electrochemical processes are driven by (a) the surface charge on the electrode and the screening due to the dielectric response of the liquid solvent and (b) the rearrangement of ions in the supporting electrolyte. To incorporate these effects, we calculate the {\em electron} potential $\phi(r)$ (the Coulomb potential energy of an electron at a given point, which equals $-e$ times the {\em electrostatic} potential) due to the electronic and atomic core charges of the electrode and couple this potential to a spatially local and linear description of the liquid electrolyte environment, yielding \begin{flalign} \tilde{A}[n(r),\phi(r)]&= A_{TXC}[n(r)]\nonumber\\ & +\int d^3r\{\phi(r)\left(n(r)-N(r,\{Z_I,R_I\})\right)\nonumber\\ & -\frac{\epsilon(r)}{8\pi}|\nabla\phi(r)|^2-\frac{\epsilon_b\kappa^2(r)}{8\pi}(\phi(r))^2\}, \label{ApproxFunc} \end{flalign} where $A_{TXC}[n(r)]$ is the Kohn-Sham single-particle kinetic plus exchange correlation energy, $n(r)$ is the full electron density of the explicit system (including both core and valence electrons), $N(r,\{R_I,Z_I\})$ is the nuclear particle density of the explicit solute system with nuclei of atomic number $Z_I$ at positions $R_I$, $\epsilon_b$ is the bulk dielectric constant of the solvent, and $\epsilon(r)$ and $\kappa(r)$ are local values, respectively, of the dielectric constant and the inverse Debye screening length due to the presence of ions in the fluid. We emphasize that, despite the compact notation in (\ref{ApproxFunc}), in practice we employ standard Kohn-Sham orbitals to capture the kinetic energy and, as the appendices detail, we employ atomic pseudopotentials rather than direct nuclear potentials, so that $N(r,\{R_I,Z_I\})$ does not consist in practice of a set of Dirac $\delta$-functions.
To determine local values of the quantities $\epsilon(r)$ and $\kappa(r)$ above, we relate them directly to the local average density of the solvent $N_{lq}(r)$ as \begin{flalign} \epsilon(r)&\equiv 1+\frac{N_{lq}(r)}{N_b}(\epsilon_b-1)\nonumber\\ \kappa^2(r) &\equiv \kappa_b^2 \frac{N_{lq}(r)}{N_b}, \label{eps-kapp} \end{flalign} where $N_b$ and $\epsilon_b$ are, respectively, the bulk liquid number density (molecules per unit volume) and the bulk dielectric constant, and $\kappa_b^2=\frac{e^2}{\epsilon_b\,k_BT}\sum_i N_i Z_i^2$ is the square of the inverse Debye screening length in the bulk fluid, where $Z_i$ and $N_i=c_i N_A$ are the valences and number densities of the various ionic species. Finally, our model for the local liquid density depends on the full solute electron density $n(r)$ at each point through the relation \begin{equation} N_{lq}(n)\equiv\frac{N_b}{2}\mbox{erfc}\left(\frac{\ln{(n/n_0)}}{\sqrt{2}\gamma}\right), \label{Nl} \end{equation} a form which varies smoothly (with transition width $\gamma$) from the bulk liquid density $N_b$ in the bulk solvent, where the electron density from the explicit system is less than a transition value $n_0$, to zero inside the cavity region associated with the solute, defined as those points where $n(r) > n_0$. This form for $N_{lq}(n)$ reproduces solvation energies of small molecules in water without ionic screening to within 2 kcal/mol,\cite{Petrosyan05} when the parameters in Eq.~(\ref{Nl}) have values $\gamma=0.6$ and $n_0=4.73\times 10^{-3}\ $\AA$^{-3}$. The stationary point of the functional in Eq.~(\ref{ApproxFunc}) determines the physical state of the system and is actually a saddle point which is a minimum with respect to changes in $n(r)$ (or, equivalently, the Kohn-Sham orbitals) and a maximum with respect to changes in $\phi(r)$. Setting to zero the variation of Eq.~(\ref{ApproxFunc}) with respect to the single-particle orbitals generates the usual Kohn-Sham, Schr\"{o}dinger-like, single-particle equations with $\phi(r)$ replacing the Hartree and nuclear potentials, while setting to zero the variation with respect to $\phi(r)$ results in the modified Poisson-Boltzmann equation, \begin{flalign} \nabla \cdot \left( \epsilon(r) \nabla \phi(r) \right) - \epsilon_b\kappa^2(r)\phi(r) \nonumber\\=-4\pi\left(n(r)-N(r,\{R_I,Z_I\})\right). \label{mPB} \end{flalign} Self-consistent solution of this modified Poisson-Boltzmann equation for $\phi(r)$ along with solution of the corresponding traditional Kohn-Sham equations defines the final equilibrium state of the system.
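As a concrete illustration of Eqs.~(\ref{eps-kapp}) and (\ref{Nl}), the following minimal Python sketch evaluates the cavity function and the resulting local dielectric constant at three representative electron densities; here $N_b$ is normalized to unity and $\epsilon_b=78.4$ is an assumed bulk value.
\begin{verbatim}
import math

N_b, eps_b = 1.0, 78.4     # normalized bulk density; assumed dielectric
gamma, n0 = 0.6, 4.73e-3   # cavity parameters quoted in the text (A^-3)

def N_lq(n):               # erfc switch from N_b (fluid) to 0 (cavity)
    return 0.5 * N_b * math.erfc(math.log(n / n0) / (math.sqrt(2.0) * gamma))

def eps_local(n):          # dielectric interpolates with the fluid density
    return 1.0 + (eps_b - 1.0) * N_lq(n) / N_b

for n in (1e-4, n0, 1e-1): # deep in fluid, at the transition, in the cavity
    print(f"n = {n:7.1e}:  N_lq/N_b = {N_lq(n)/N_b:5.3f},  "
          f"eps = {eps_local(n):5.1f}")
\end{verbatim}
The output confirms the limits built into the model: full fluid response for $n\ll n_0$, half-strength response at the transition $n=n_0$, and vacuum-like response ($\epsilon\rightarrow 1$) inside the cavity.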
\begin{figure} \centering \subfloat[]{\label{fig:2a}\includegraphics[width=8.6cm]{figure2a.eps}} \subfloat[]{\label{fig:2b}\includegraphics[width=8.6cm]{figure2b.eps}} \caption{Microscopic and model quantities for Pt(111) surface in equilibrium with electrolyte: (a) Pt atoms (white), electron density $n(r)$ (green), and fluid density $N_{lq}(r)$ (blue) in a slice passing from surface (left) into the fluid (right) with $z_{Pt}=5.95~$\AA~indicating the end of the metal, (b) dielectric constant $\langle\epsilon(z)\rangle$ and screening length $\langle\kappa^{-1}(z)\rangle$ (averaged over the planes parallel to the surface) for ionic concentrations of 1.0 M and 0.1 M along a line passing from surface into the fluid. Position $z-z_{Pt}$ measures distance from the end of the metal slab. (See Sections~V~and~VI.)} \label{Figure2} \end{figure} Figure~\ref{Figure2} illustrates the various concepts in this model using actual results from a calculation of the Pt(111) surface, described in Sections~V~and~VI. Figure~\ref{Figure2}(a) shows the electron $n(r)$ and liquid $N_{lq}(r)$ densities in a slice through the system which passes through the metal (left, $z < z_{Pt}$ ) and the fluid (right, $z > z_{Pt}$). We define the end of the metal surface $z_{\mbox{metal}}$ by the covalent radius of the last row of metal atoms ($z_{Pt}=5.95~$\AA). The ionic cores and the itinerant valence electrons in the metal are visible, as well as the gap between the surface and the bulk of the fluid. As shown in Figure~\ref{Figure2}(b), the local functions for the dielectric constant $\epsilon(r)$ and the {\em inverse} Debye screening length $\kappa(r)$ respect the correct physical limiting values: $\epsilon_b$ and $\kappa_b$ in the bulk solvent and $\epsilon=1$ and $\kappa=0$ within the surface. The rapid increase in dielectric constant for $0~$\AA$<z-z_{Pt}<1~$\AA$~$ corresponds to the appearance of fluid on the right side and results in the localization of significant charge from the fluid at this location. The inverse screening length $\kappa$ depends on the concentration of ions in the electrolyte through the bulk liquid value $\kappa_b$. Figure~\ref{Figure2}(b) shows screening length as a function of distance from the metal surface for both 0.1 and 1.0 molar bulk ionic concentrations. The large screening lengths at positions less than $z_{Pt}$ ensure proper vacuum-like behavior within the metal surface, where all electrons are explicit and thus no implicit screening should appear. \subsection{Asymptotic behavior of electrostatic potential} Unlike the standard Poisson equation, which has no unique solution for periodic systems because the zero of potential is an arbitrary constant, the modified Poisson-Boltzmann equation (\ref{mPB}) has a unique solution in periodic systems. To establish this, we integrate the differential equation (\ref{mPB}) over the unit cell. The first term, which is the integral of an exact derivative, vanishes. The remaining terms then give the condition, \begin{equation} \int\kappa^2(r)\phi(r)dV=\frac{4\pi}{\epsilon_b}\left(Q_n-Q_N\right), \end{equation} where $Q_n$ and $Q_N$ are the total number of electronic and nuclear charges in the cell, respectively. Any two solutions $\phi(r)$ which differ by a constant $C$ can both be valid only if $C \int \kappa^2(r) \ dV =0$, so that we must have $C=0$ as long as $\kappa(r)$ is non-zero at any location in the unit cell. Thus, any amount of screening at any location in space in the calculation eliminates the usual indeterminacy of $\phi(r)$ by an additive constant, thereby establishing an absolute reference for the zero of the potential. To establish the nature of this reference potential, we first note that deep in the fluid, far from the electronic system, the electron density approaches $n(r)=0$ and the dielectric constant and screening lengths attain their constant bulk values $\epsilon(r) \rightarrow \epsilon_b$ and $\kappa(r) \rightarrow \kappa_b$. Under these conditions, the Green's function impulse response of (\ref{mPB}) to a unit point charge is \begin{equation} \phi(r) = \frac{\exp\left(-\kappa_b\,r\right)}{\epsilon_b\ r}, \label{greensfunction} \end{equation} a Coulomb potential screened by the dielectric response of the solvent and exponentially screened by the presence of ions.
Next, we rearrange (\ref{mPB}) so that the left-hand side has the same impulse response as the bulk of the fluid but with a modified source term, \begin{flalign} \epsilon_b \nabla^2 \phi(r) - \epsilon_b \kappa_b^2 \phi(r)=\nonumber\\-4\pi\left(\rho_{\mbox{sol}}(r)+\rho_{\mbox{ext}}(r) \right) \label{mPB-rearrange} \end{flalign} where we have defined \begin{flalign} \rho_{\mbox{sol}}(r)&\equiv n(r)-N(r)\nonumber\\ \rho_{\mbox{ext}}(r)&\equiv-\frac{1}{4\pi}(\left(\epsilon_b-\epsilon(r)\right)\nabla^2\phi(r)\nonumber\\& -(\nabla \epsilon(r))\cdot (\nabla \phi(r)) + \epsilon_b \left(\kappa^2(r)-\kappa_b^2\right) \phi(r)). \end{flalign} The key step now is to note that all source terms clearly vanish in the bulk of the fluid where $\rho_{\mbox{sol}}(r) \rightarrow 0$, $\epsilon(r) \rightarrow \epsilon_b = \mbox{constant}$, and $\kappa(r) \rightarrow \kappa_b$. From the exponential decay of the Green's function (\ref{greensfunction}) and the vanishing of $\rho_{\mbox{sol}}+\rho_{\mbox{ext}}$ in the bulk region of the fluid, we immediately conclude that $\phi(r) \rightarrow 0$ deep in the fluid region, thereby establishing that the absolute reference of zero potential corresponds to the energy of an electron solvated deep in the fluid region.
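This pinning of the potential is straightforward to demonstrate numerically. The following self-contained sketch solves a one-dimensional periodic analogue of Eq.~(\ref{mPB}) by finite differences; the charge, cavity, and screening profiles are invented purely for illustration and correspond to no actual calculation in this paper.
\begin{verbatim}
import numpy as np

# 1D periodic model: a net charge inside a vacuum "cavity" (|z-20|<8),
# surrounded by "fluid" with dielectric eps_b and nonzero screening.
L, N = 40.0, 400
h = L / N
z = (np.arange(N) + 0.5) * h

rho = np.exp(-(z - 20.0)**2)                            # net charge (arb.)
S = 0.5*(1.0 + np.tanh(2.0*(np.abs(z - 20.0) - 8.0)))   # 0=cavity, 1=fluid
eps_b = 78.4
eps = 1.0 + (eps_b - 1.0)*S              # local dielectric profile
kappa2 = 0.5*S                           # screening only in the fluid

# assemble d/dz(eps dphi/dz) - eps_b*kappa^2*phi = -4*pi*rho (periodic)
eps_half = 0.5*(eps + np.roll(eps, -1))  # eps at midpoints i+1/2
A = np.zeros((N, N))
for i in range(N):
    ip, im = (i + 1) % N, (i - 1) % N
    A[i, ip] += eps_half[i]/h**2
    A[i, im] += eps_half[im]/h**2
    A[i, i] -= (eps_half[i] + eps_half[im])/h**2 + eps_b*kappa2[i]

phi = np.linalg.solve(A, -4.0*np.pi*rho)   # unique: no constant freedom
print(f"max|phi|: {abs(phi).max():.3f}; "
      f"deep in fluid: {abs(phi[z < 2.0]).max():.1e}")
\end{verbatim}
Because $\kappa^2(z)$ is nonzero over part of the cell, the discretized operator is nonsingular even though the cell carries a net charge, and the printed potential deep in the fluid lies orders of magnitude below its peak value, realizing the absolute zero of potential discussed above.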
\subsection{Future Improvements} While offering a computationally efficient and simple way to study electrochemistry, the approximate functional (\ref{ApproxFunc}) is highly simplified and possesses several limitations which the more rigorous approach of Section~II overcomes by coupling an explicit solvent model for $\Omega_{lq}[N_\alpha(r)]$\cite{Lischner10} to the electronic system through an approach similar to the molecular pseudopotentials proposed by Kim {\em et al.}\cite{Cho96} Such limitations include the fact that because we employ a linearized Poisson-Boltzmann equation, we do not include the nonlinear dielectric response of the fluid (which other approaches in the literature to date also ignore\cite{AndersonPRB08,MarzariDabo}) or nonlinear saturation effects in the ionic concentrations, both of which become important for potentials greater than a few hundred mV. Despite these limitations, we remain encouraged by the promising results we obtain below for this simple functional and optimistic about the improvements that working within a more rigorous framework would provide. \section{ELECTRONIC STRUCTURE METHODOLOGY} All calculations undertaken in this work and presented in Section~VI were performed within the DFT++ framework \cite{Ismail-Beigi} as implemented in the open-source code JDFTx. \cite{JDFTx} They employed the local-density or generalized-gradient \cite{PW91} approximations using a plane-wave basis within periodic boundary conditions. The specific materials under study in this paper were platinum, silver, copper, and gold. The (111), (110), and (100) surfaces of each of these metals were computed within a supercell representation with a distance of 10 times the lattice constant of each metal (in all cases around 30 \AA) between surface slabs five atomic layers thick. For these initial calculations, we were very conservative in employing such large regions between slabs to absolutely eliminate electrostatic supercell image effects between slabs. We strongly suspect that smaller supercells can be used in the future. All calculations presented employ optimized\cite{Opium} norm-conserving Kleinman-Bylander pseudopotentials\cite{KB} with single non-local projectors in the {\it s}, {\it p}, and {\it d} channels, a plane-wave cutoff energy of 30~H, and an $8\times 8\times 1$ Monkhorst-Pack\cite{MP} $k$-point grid to sample the Brillouin zone. The JDFTx-calculated lattice constants of the bulk metals within both exchange-correlation approximations when using $8 \times 8 \times 8$ $k$-point grids are shown in Table \ref{lattconsts}. Clearly, the LDA and GGA lattice constants both agree well with experiment. Except where comparisons are specifically made with LDA results, all calculations in this work employ GGA for exchange and correlation. \begin{table}[ht] \caption{ Cubic lattice constant (\AA) in conventional face-centered cubic unit cell} \centering \begin{tabular}{c c c c} \hline\hline Metal & LDA & GGA & Experiment\cite{CRCHandbook} \\ [0.5ex] \hline Pt & 3.93 & 3.94 & 3.92 \\ Cu & 3.55 & 3.67 & 3.61 \\ Ag & 4.07 & 4.13 & 4.09 \\ Au & 4.05 & 4.14 & 4.08 \\ [1ex] \hline \end{tabular} \label{lattconsts} \end{table} \section{RESULTS} To evaluate the promise of our approach, we begin by studying the fundamental behaviors of transition metal surfaces in equilibrium with an electrolyte environment as a function of applied potential. We find that even our initial highly simplified form of joint density-functional theory reproduces with surprising accuracy a wide range of fundamental physical phenomena related to electrochemistry. Such transition metal systems, especially platinum, are of electrochemical interest as potential catalysts for both the oxygen reduction reaction (ORR) and the hydrogen evolution reaction (HER). Molecular dynamics studies of the platinum system in solution, both at the classical \cite{Chandler09} and {\em ab initio} \cite{Gross09,Norskov08} levels, to date have not fully accounted for ionic screening in the electrolyte, which is essential to capturing the complex structure of the electrochemical double layer and the establishment of a consistent reference potential. For the initial exploratory studies presented in this manuscript, we focus on pristine surfaces without adsorbates in order to establish clearly the relationship between theoretical and experimental quantities and to lay groundwork for future systematic comparison of potential catalyst materials. Unless otherwise specified, we carry out our calculations with screening lengths of 3~\AA, corresponding to monovalent ionic concentrations of 1.0~M. We employ these high concentrations because most electrochemical cells include a supporting electrolyte with high ionic concentration chosen to provide strong screening while avoiding (to the extent possible) interaction with and adsorption on the electrode. Note that, because our present model includes only ionic concentrations and no other species-specific details about the ions in the electrolyte, our results correspond to neutral pH. Future work will readily explore pH and adsorption effects by including protons and other explicit ions in the electronic-structure portions of the calculation. One great advantage of the present theoretical approach is the ability to separate the role of the non-adsorbing ions in the supporting electrolyte from the role of the adsorbing ions that interact directly with the surface.
\subsection{Treatment of charged surfaces in periodic boundary conditions} The application of voltage essential to the {\em ab initio} study of electrochemistry requires a precise treatment of charged surfaces that is not accessible to common electronic structure approaches due to singularities associated with the Coulomb interaction. In the case of a vacuum environment, the electrostatic potential $\phi(r)$ of even a neutral electrode approaches a physically indeterminate constant which varies with the choice of supercell. As is well-known, this difficulty compounds radically when a net charge is placed on the surface, resulting in a formally infinite average electrostatic potential in a periodically repeated system. By default, most electronic structure packages designed for use with periodic systems treat this singularity by setting the $G=0$ Fourier component of $\phi(r)$ to zero, equivalent to incorporating a uniform, neutralizing charge background throughout the region of the computation. This solution to the Coulomb infinity is not realistic in electrochemical applications where the actual compensating charge appears in the fluid and should not be present in the interior of the electrode. Another option which has been employed in the electrochemical context\cite{Otani06} is to include an oppositely charged counter-electrode located away from the working electrode in the ``vacuum'' region of the calculation. However, including an explicit density-functional electrode is often computationally prohibitive as it requires doubling the number of electrons and atoms and requires a large supercell to prevent image interaction. Implicit inclusion of a counter-electrode through either Coulomb truncation or an external charge distribution \cite{Otani06} requires an arbitrary choice of the distribution of external charges representing the counter-electrode, and such arbitrary choices may result in unphysical electrostatic potentials, even in the presence of a few explicit layers of neutral liquid molecules. One realistic choice is to employ Debye screening as in Eq. \ref{mPB}. This approach ensures that the long-range decay of $\phi(r)$ into the fluid corresponds to the behavior of the actual physical system, that the fluid response contains precisely the correct amount of compensating charge, and that the potential approaches an absolute reference, even in a periodic system. Another more explicit, and hence computationally expensive, option employed in the electrochemical literature is to add a few layers of explicit water molecules to the surface and then include explicit counter-ions (protons) located in the first water layer.\cite{Rossmeisl11} This approach models some of the most important effects of the actual physical distribution of counter-ions, which really should contain both localized and diffuse components, by considering only the first layer of localized ions. Figure~\ref{Figure3}(a) contrasts the potential profiles resulting from the aforementioned approaches in actual calculations of a Pt(111) electrode surface. Figure~\ref{Figure3}(a) displays the microscopic local electron potential energy function $\langle \phi(z)\rangle$ for a surface at applied voltage ${\cal E}=-1.09\ \mathrm{V}$ vs. PZC, which corresponds to a charge of $\sigma=-18\ \mathrm{\mu C/cm^2}$. The screened electron potentials generated by solution of Eq.~\ref{mPB}
at two different ionic strengths ($c=1.0$~M and $c=0.1$~M) are compared to potential profiles for a similarly charged surface in vacuum, with the net charge in the system neutralized either by imposing a uniform background charge or by placing an oppositely charged counter electrode at one Debye screening length from the metal surface. The two charge-compensated vacuum calculations clearly do not correspond to the electrochemical behavior, with far wider potential variations than expected. Figure~\ref{Figure3}(b) shows a detailed view of the macroscopic {\em electrostatic} potential $\langle \Phi(z)\rangle$ (obtained by subtracting the microscopic {\em electron} potential of the neutral surface and switching the sign to reflect electrochemical convention) for the same JDFT calculated charged surface at the two ionic strengths. The charge-compensated vacuum calculations would be off the scale of this figure, while the macroscopic electrostatic potential for the JDFT calculations attains the value of the applied potential within the electrode and then approaches a well-established reference value of zero with the correct asymptotic behavior in the fluid region. \begin{figure} \centering \subfloat[]{\label{fig:3a}\includegraphics[width=7.5cm]{figure3a.eps}} \subfloat[]{\label{fig:3b}\includegraphics[width=7.5cm]{figure3b.eps}} \subfloat[]{\label{fig:3c}\includegraphics[width=7cm]{figure3c.eps}} \caption{Microscopic electron potential energies $\langle \phi(z)\rangle$ and macroscopic electrostatic potentials $\langle\Phi(z)\rangle$ averaged in planes for the Pt (111) surface as a function of distance $z-z_{Pt}$ from the end of the metal surface: (a) $\langle \phi(z)\rangle$ for surface with applied voltage ${\cal E}=-1.09\ \mathrm{V}$ vs. PZC in vacuum (green dashed) and in monovalent electrolytes of $c=1.0\ \mathrm{M}$ (red) and $c=0.1\ \mathrm{M}$ (blue), where the dotted lines represent calculations with an explicit counter-electrode and the solid lines are JDFT calculations; (b) close-up view of $\langle\Phi(z)\rangle$ for JDFT calculations with $c=1.0\ \mathrm{M}$ (red) and $c=0.1\ \mathrm{M}$ (blue) and applied voltage ${\cal E}=-1.09\ \mathrm{V}$ vs. PZC (almost indistinguishable in the previous plot); (c) Variation of $\langle\Phi(z)\rangle$ in JDFT monovalent electrolyte of $c=1.0\ \mathrm{M}$ with ${\cal E}=\{-1.09,-0.55,0.0,0.55,1.09\}\ \mathrm{V}$ vs. PZC.} \label{Figure3} \end{figure} \subsection{Electrochemical double layer structure} The Gouy-Chapman-Stern model, described in Section III(C), offers a well-known prediction of the structure of the electrochemical double layer, to which the potentials from our model correspond precisely. The electrostatic potential profiles in the standard electrochemical picture include an initial, capacitor-like linear drop in $\langle\Phi(z)\rangle$ due to the outer Helmholtz layer (the Stern region), followed by a characteristic exponential decay to zero deep in the fluid (the diffuse Gouy-Chapman region). Our model naturally captures this behavior as a result of (a) the localization of the dielectric response and screening to the liquid region as described by $N_{lq}(r)$ through Eq. (\ref{eps-kapp}) and (b) the separation between the fluid and regions of high explicit electron density $n(r)$ through the definition of $N_{lq}(r)\equiv N_{lq}\left(n(r)\right)$ via Eq. (\ref{Nl}). Both the Stern and Gouy-Chapman regions are clearly evident in Figures~\ref{Figure3}(b,c).
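For orientation, the following toy script generates the idealized two-region profile just described. The gap width, Debye length, applied potential, and the fraction of the potential surviving at the outer Helmholtz plane are assumed illustrative numbers, not outputs of the JDFT calculations shown in Figure~\ref{Figure3}.
\begin{verbatim}
import numpy as np

# Idealized Gouy-Chapman-Stern profile: linear drop across the Stern gap,
# then (linearized) exponential Gouy-Chapman decay. All values assumed.
Delta, kappa_inv = 0.6, 3.0   # gap width and Debye length (Angstrom)
Phi_0 = -1.09                 # applied potential vs. PZC (V)
Phi_d = 0.2 * Phi_0           # assumed potential at outer Helmholtz plane

z = np.linspace(0.0, 15.0, 151)
stern = Phi_0 + (Phi_d - Phi_0) * z / Delta          # 0 < z < Delta
diffuse = Phi_d * np.exp(-(z - Delta) / kappa_inv)   # z > Delta
Phi = np.where(z < Delta, stern, diffuse)
print(Phi[::30].round(3))   # rapid linear drop, then exponential tail to 0
\end{verbatim}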
We find the dielectric constant transition region appearing in Figure~\ref{Figure2}(b), approximately the width of a water molecule, to be essential to the accurate reproduction of the double layer structure. The potentials for charged surfaces in Figures~\ref{Figure3}(b,c) first show a linear decay in the region $0<z-z_{Pt}<\Delta$, corresponding to the ``gap'' between the end of the surface electron distribution ($z_{Pt}$) and the beginning of the fluid region, precisely the behavior we should expect in the Stern region. For a Pt(111) surface at applied voltage $-1.09$~V vs. PZC, $\Delta=0.6$~\AA, but the width of this gap is voltage-dependent (as shown in Figure~\ref{Figure4}(b)) and also varies with metal and crystal face. After the gap region, for $\Delta<z-z_{Pt}<\Delta+\gamma$ (where $\gamma=0.6$~\AA\ as in Eq.~\ref{Nl}), the dielectric constant in Figure~\ref{Figure2}(b) changes rapidly from about 10 to the bulk value $\epsilon_b\sim 80$, defining a transition region which ensures that no significant diffuse decay in the potential occurs until beyond the outer Helmholtz layer, thereby allowing proper formation of the diffuse Gouy-Chapman region for $z-z_{Pt}>\Delta+\gamma$. We emphasize that we have not added these phenomena into our calculations {\em a posteriori}; rather, they occur naturally as a consequence of our microscopic, albeit approximate, {\em ab initio} approach. \subsection{Charging of surfaces with electrode potential} To explore the effects of electrode potential on the surface charge and electronic structure, Figure~\ref{Figure4}(a) shows the surface charge $\sigma$ as a function of potential ${\cal E}$ for a series of transition metal surfaces in an electrolyte of monovalent ionic strength $c=1.0\ \mathrm{M}$, without adsorption of ions to the surface. We find the average double layer capacitance of the Pt(111) surface -- the slope of the corresponding $\sigma$--${\cal E}$ curve in Figure~\ref{Figure4}(a) -- to be ${\cal C}=19\ \mathrm{\mu F/cm^2}$, in excellent agreement with the experimental value of 20 $\mathrm{\mu F/cm^2}$.\cite{expCap} Indeed, we find that a significant fraction of our total capacitance is due to dielectric and screening effects in the fluid; this agreement again supports our model for the electrolyte. The remainder is associated with the ``quantum capacitance'' or density of states ${\cal C}_{\mbox{DOS}}$ of the surface slab in our supercell calculations. Closer inspection of the charge versus potential data reveals that the slope is not quite constant as a function of voltage. Indeed, taking the numerical derivatives of the curves in Figure~\ref{Figure4}(a) yields values for the differential capacitance that exhibit an approximately linear dependence on voltage. This voltage dependence contrasts with a study performed using a different technique\cite{Norskov08}, which not only was limited to producing a voltage-independent, constant value for the capacitance, but also required computationally demanding thermodynamic sampling to model the fluid.
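The numerical differentiation just described is straightforward to reproduce; a minimal Python sketch follows, in which the synthetic $\sigma({\cal E})$ data merely mimic the qualitative shape of Figure~\ref{Figure4}(a) and are not our calculated results.

\begin{verbatim}
import numpy as np

# Synthetic sigma(E) curve standing in for Figure 4(a);
# the coefficients are invented for illustration only.
E     = np.linspace(-1.0, 1.0, 9)      # applied voltage, V vs. PZC
sigma = 19.0 * E + 1.5 * E**2          # surface charge, uC/cm^2

C_avg  = np.polyfit(E, sigma, 1)[0]            # average capacitance
C_diff = np.gradient(sigma, E, edge_order=2)   # differential capacitance
trend  = np.polyfit(E, C_diff, 1)[0]           # ~linear in voltage

print(f"average C = {C_avg:.1f} uF/cm^2")      # -> 19.0
print(f"dC/dE     = {trend:.1f} uF/cm^2/V")    # -> 3.0
\end{verbatim}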
To understand the origin of the above voltage-dependence of the capacitance, we employ the series model for differential capacitance in Eqs.~(\ref{Capseries}) and (\ref{CapFl}), in which the total capacitance per unit area ${\cal C}$ is modeled as a series combination of the capacitance associated with the density of states of the metal, a Stern capacitance (${\cal C}_{\Delta}$) across a gap of width $\Delta$, and the (constant) Gouy-Chapman capacitance associated with the inverse screening length $\kappa$. We can then extract the ``gap'' capacitance as \begin{flalign} \frac{\Delta}{\epsilon_0}\sim{\cal C}_{\Delta}^{-1} \equiv {\cal C}^{-1}-{\cal C}_{\mbox{DOS}}^{-1}-\frac{\kappa^{-1}}{\epsilon_b\epsilon_0}. \label{Cdelta} \end{flalign} To verify that the voltage-dependence of this contribution indeed correlates with changes in the gap associated with the Stern layer, we make an independent definition of the width of the gap as $\Delta\equiv z_c-z_{Metal}$, where $z_c$ represents the location where the presence of our model fluid becomes significant and $z_{Metal}$ represents the location of the surface of the metal. Specifically, we define $z_c$ as the point where the planar average of the inverse dielectric constant has fallen by half from its value in the electrode (as in Figure~\ref{Figure4}(b)), since the polarization of the fluid becomes significant when $\langle\epsilon^{-1}(z_c)\rangle<0.5$. We determine $z_{Metal}$ from the covalent radii of the metal surface atoms, but note that the specific choice of $z_{Metal}$ is unimportant in the analysis to follow. Figure~\ref{Figure4}(c) correlates the inverse gap capacitance ${\cal C}_{\Delta}^{-1}$ from the right-hand side of Eq.~\ref{Cdelta} with the values of $\Delta$ defined above. There is a striking linear trend with a slope within about ten percent of $\epsilon_0^{-1}$, confirming that the primary contribution to the voltage-dependence of the differential capacitance within this model comes from changes in the gap between the metal surface and the point where the dielectric screening begins. The ultimate origin of this effect within the present approximation (in which the dielectric constant is determined by the electron density through Eq.~\ref{eps-kapp}) can be traced to the increase in surface electrons with decreasing applied potential, which moves the location of the fluid transition further away from the metal surface. In fact, the experimentally determined capacitance of Pt(111) due to {\em only} the double layer\cite{expCap} (after subtracting the effects of counter-ion adsorption) has a voltage-dependence quite similar to our prediction. Since the distance of closest approach of the fluid to the metal surface is determined by van der Waals interactions, and the addition of more electrons could indeed strengthen the repulsion, the qualitative voltage-dependence of the ``double-layer'' capacitance even at this simple level of approximation may be capturing some aspects of the underlying physics. In physical systems, however, the total capacitance is dominated by the effects of adsorption of counter-ions, and so the qualitative voltage-dependence of the capacitance at this simple level of approximation has limited practical relevance. Nonetheless, it is an important feature of the electrochemical interface for those modified Poisson-Boltzmann approaches in which the cavities are determined by contours of the electron density.
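In convenient units, $\epsilon_0\approx 8.854\ \mathrm{\mu F\,\AA/cm^2}$, so the extraction in Eq.~\ref{Cdelta} reduces to a simple subtraction of inverse capacitances. The short sketch below illustrates this; the value assumed for ${\cal C}_{\mbox{DOS}}$ is a placeholder, while the total capacitance and Debye length follow the text.

\begin{verbatim}
# Series extraction of the "gap" capacitance, Eq. (Cdelta).
# C_dos is an assumed placeholder value.
eps0      = 8.854   # vacuum permittivity, uF*Angstrom/cm^2
eps_b     = 80.0    # bulk dielectric constant of the electrolyte
kappa_inv = 3.0     # Debye length, Angstrom (1.0 M monovalent)

C_tot = 19.0        # total capacitance of Pt(111), uF/cm^2
C_dos = 100.0       # assumed density-of-states capacitance

inv_C_gap = 1.0 / C_tot - 1.0 / C_dos - kappa_inv / (eps_b * eps0)
print(f"C_gap^-1 ~ {inv_C_gap:.4f} cm^2/uF, "
      f"Delta ~ {eps0 * inv_C_gap:.2f} Angstrom")
\end{verbatim}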
Future work in this area could capture the ``ion-adsorption'' portion of the capacitance either by including explicit counter-ions within the electronic structure portion of the calculation or by choosing a classical fluid functional that includes a microscopic description of the counter-ions. \begin{figure} \centering \subfloat[]{\label{fig:4a}\includegraphics[width=8.6cm]{figure4a.eps}} \subfloat[]{\label{fig:4b}\includegraphics[width=8.6cm]{figure4b.eps}} \subfloat[]{\label{fig:4c}\includegraphics[width=8.6cm]{figure4c.eps}} \caption{(a) Surface charge $\sigma$ as a function of applied voltage ${\cal E}$ for a series of transition metal surfaces in an electrolyte of monovalent ionic strength $c=1.0\ \mathrm{M}$; (b) inverse dielectric constant $\epsilon^{-1}$ as a function of distance from a Pt(111) surface for multiple values of applied voltage; (c) inverse gap capacitance ${\cal C}_{\Delta}^{-1}$ as a function of the distance $\Delta$ from the metal surface at which the fluid begins. The solid line indicates the best fit to the data with slope constrained to $\epsilon_0^{-1}$.} \label{Figure4} \end{figure} \subsection{Potentials of zero charge and reference to the standard hydrogen electrode} To connect our potential scale (relative to an electron solvated in our model fluid) to a standard potential scale employed in the literature and to confirm the reliability of our model, Figures~\ref{Figure5}(a,b) show our {\em ab initio} predictions for potentials of zero charge versus experimental values relative to the standard hydrogen electrode (SHE).\cite{Trasatti} Within both the local density (LDA)\cite{Kohn-Sham} and generalized gradient (GGA)\cite{PW91} approximations to the electronic exchange-correlation energy, we have calculated the potentials of zero charge for various crystalline surfaces of Ag, Au, and Cu, three commonly studied metals. We performed a least-squares linear fit for the intercept of our data, with the slope fixed at unity. (Note that the experimental data for Cu in $\mathrm{NaF}$ electrolyte were not included in the fit, due to concerns discussed below.) The excellent agreement between our results (with a constant offset) and the experimental data indicates that joint density-functional theory accurately predicts trends in potentials of zero charge, and gives us confidence that it can establish oxidation and reduction potentials in the future. The improved agreement of GGA (rms error: 0.058 V) over LDA (rms error: 0.108 V) underscores the importance of gradient corrections for this type of surface calculation. The strong linear correlation with unit slope between the theoretical and experimental data in Figures~\ref{Figure5}(a,b) indicates that the simplified Poisson-Boltzmann approach reproduces potentials of zero charge well relative to some absolute reference. The single parameter in the fit for each of the two panels (namely, the vertical intercept of each fit line) establishes the absolute relationship between our zero of potential (implicit in each set of theoretical results) and the zero of potential on the standard hydrogen-electrode scale (implicit in the experimental data). Specifically, we find that our zero of potential sits at $-4.91$~V relative to the SHE for LDA and $-4.52$~V relative to the SHE for GGA. Intriguingly, these values are close to the experimentally determined location of {\em vacuum} relative to the standard hydrogen electrode reference ($-4.44$~V)\cite{Trasatti}; in fact, the GGA reference value is within a tenth of a volt.
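Because the slope is held at unity, this fit is a one-parameter least-squares problem whose solution is simply the mean offset between theory and experiment. A minimal sketch follows; the four data pairs below are placeholders, not our calculated Ag/Au/Cu values.

\begin{verbatim}
import numpy as np

# Unit-slope least-squares fit: minimizing
# sum_i [theory_i - (expt_i + b)]^2 over b gives
# b = mean(theory - expt).  Placeholder data only.
theory = np.array([-4.12, -3.93, -4.31, -4.43])  # vs. our reference, V
expt   = np.array([ 0.45,  0.60,  0.25,  0.10])  # vs. SHE, V

b   = np.mean(theory - expt)          # offset of our zero vs. SHE
rms = np.sqrt(np.mean((theory - expt - b) ** 2))
print(f"reference offset = {b:.2f} V, rms error = {rms:.3f} V")
\end{verbatim}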
This apparent alignment of our reference potential with the vacuum scale is not altogether surprising, due to the following argument: (1) our method measures the difference in energy between an electron in the electrode and an electron solvated deep in our model electrolyte, so that our potentials of zero charge are measured relative to a ``solvated'' electron reference; (2) the energy of a solvated electron relative to vacuum {\em within the presently considered linearized Poisson-Boltzmann model} is zero because this approximation includes only electrostatic effects; and (3) because the calculated potentials of zero charge in the figures are thus {\em relative to vacuum}, the difference between our calculated results and the experimental results should represent the constant difference between the vacuum and SHE references. Consideration of the breakdown of the potential of zero charge into physically meaningful quantities explains the difference between the LDA and GGA results and elucidates the apparent success of the rather simple modified Poisson-Boltzmann approach in predicting PZCs. Transferring an electron from a metal surface to a reference electrode requires, first, removal of the electron from the surface and, then, transport of the electron through the relevant interfacial layers of the liquid. The energy associated with the former process is the work function, and the energy associated with the latter relates to the intrinsic dipole of the liquid-metal interface. As is well known, there is an approximately constant shift between the predictions of the LDA and GGA exchange-correlation functionals for work functions of metals. In fact, Fall {\em et al.} report that GGA metal work functions are approximately 0.4 V lower than the LDA work functions,\cite{Fall2000} corresponding well to the differences we find between the vertical intercepts of Figures~\ref{Figure5}(a,b). Next, to aid consideration of the intrinsic dipole of the interface, Figure~\ref{Figure5}(c) explicitly compares our predictions for the work function with our predictions for the potential of zero charge, including also the corresponding experimental data for both quantities. (To place all values on a consistent scale of potential, which we choose to be vacuum, we have added the experimentally determined 4.44~V difference between SHE and vacuum to the experimental PZCs.) The data in Figure~\ref{Figure5}(c) suggest that the vacuum work functions are harder to predict than potentials of zero charge, possibly due to the difficulty of determining the value of the reference potential in the vacuum region, an issue not present in our fluid calculations due to the screening in Eq.~\ref{mPB}. The figure also indicates an approximately constant shift from vacuum work function to potential of zero charge, suggestive of a roughly constant interfacial dipole for each of the metal surfaces. However, the shift is not exactly constant: both the experimental and theoretical data exhibit significant fluctuations (on the order of 0.1 V) in the shift between work function and PZC from one metal surface to another. Because the PZCs are determined to within a significantly smaller level of fluctuation (0.06 V), these data indicate that the Poisson-Boltzmann model captures not merely a constant interfacial dipole, but also a significant fraction of the fluctuation in this dipole from surface to surface.
We note that Tripkovic {\em et al.} have also calculated the potentials of zero charge for transition metal surfaces.\cite{Rossmeisl11} However, that approach requires calculation of several layers of explicit water within the electronic structure portion of the calculation, and those authors find the resulting potentials of zero charge to be dependent on the exact structure chosen for the water layers. While differing orientations of water molecules at the interface may result in significant local fluctuations in the instantaneous PZC, the experimentally measured potential of zero charge is a temporal and spatial thermodynamic average over all liquid electrolyte configurations rather than the value from any single configuration. Direct comparison to experimental potentials of zero charge therefore should involve calculation of a thermodynamic average. As a matter of principle, derivatives of the free energy (which the JDFT framework provides directly) yield thermodynamic averages. Therefore, an exact free-energy functional would predict the exact, thermodynamically averaged potential of zero charge, and classical liquid functionals, which capture more microscopic details of the equilibrium liquid configuration\cite{Lischner10} than the present model, would be an ideal choice for future in-depth studies. Indeed, such functionals are capable of capturing the relevant electrostatic effects even when a single configuration of water molecules dominates the thermodynamic average. (In such cases, minimization of the free-energy functional results in localized site densities $N_\alpha(r)$ representing the dominant liquid configuration.) Of course, in cases of actual charge-transfer reactions between the surface and the liquid, the (relatively few) molecules involved in the actual transfer must be included within the explicit electronic density-functional theory, whereas the other electrolyte molecules may still be handled accurately within the more computationally efficient liquid density-functional theory. There is also reason to be sanguine regarding the ability of the modified Poisson-Boltzmann approximation pursued in this work to capture interfacial dipole effects. The macroscopic dielectric constant contained within the present model describes primarily the orientational polarizability of water, so that the liquid bound charges resulting from the minimization of the free energy should reflect the most dominant configurations of water molecules in the thermodynamic average, even if only a single configuration dominates. On the electrode side, the image charges corresponding to the bound charge also naturally appear, as a consequence of both the electrostatic coupling in our model and the metallic nature of the surface described within electronic density-functional theory. From an optimistic perspective, it is quite possible that a significant portion of the electrostatics of the surface dipole would be captured even at the simplified level of a Poisson-Boltzmann description. Ultimately, how much of the effect is captured can only be determined by comparison to experiment. For the systems so far considered, the excellent {\em a priori} agreement between experimental measurements and our theoretical predictions indicates that the relevant effects are indeed captured quite well.
It appears that even a simple continuum model (which accounts only for the effects of bound charge at the interface and the corresponding image charges within the metal) can accurately predict key electrochemical observables such as the potential of zero charge. Certainly, for more detailed future studies, we would recommend exploring the performance of more explicit functionals. However, the apparent accuracy and computational simplicity of the current Poisson-Boltzmann approach render it well-suited for high-throughput studies of electrochemical behavior as a function of electrode potential. \begin{figure} \begin{center} \subfloat[LDA]{\label{fig:5a}\includegraphics[width=6.3cm]{figure5a.eps}} \subfloat[GGA]{\label{fig:5b}\includegraphics[width=6.5cm]{figure5b.eps}} \subfloat[Comparison to Work Functions]{\label{fig:5c}\includegraphics[width=8.6cm]{figure5c.eps}} \end{center} \caption{Comparisons of {\em ab initio} predictions and experimental data\cite{Trasatti} for potentials of zero free charge (PZCs) and vacuum work functions: (a) {\em ab initio} LDA predictions versus experimental PZCs relative to SHE; (b) {\em ab initio} GGA predictions versus experimental PZCs relative to SHE; (c) {\em ab initio} GGA vacuum work functions (solid line with squares) and PZCs (solid line with circles), experimental work functions (dotted line) and PZCs (dashed line) versus vacuum for the same series of surfaces. Best linear fits with unit slope appear as dark diagonal solid lines in (a) and (b).} \label{Figure5} \end{figure} As a further example of the utility of the Poisson-Boltzmann approach, the potential of zero charge calculation for copper illustrates how this theory can be used as a highly controlled {\em in-situ} probe of electrochemical systems, with the ability to isolate physical effects which cannot be separated in experiment. Specifically, in Figures~\ref{Figure5}(a) and (b), for copper there are experimental values for two different electrolytes, NaF and $\mathrm{KClO_{4}}$, both of which are claimed not to interact with the metal surface.\cite{Trasatti} Clearly, our theoretical values, which correspond to potentials of zero {\em free} charge without adsorption of or chemical reaction with ions from the electrolyte, agree more favorably with the experimental data for the Cu surface in $\mathrm{KClO_{4}}$ than with those for the NaF electrolyte. Our results suggest that future experimental exploration is warranted to investigate possible interactions between the NaF electrolyte and the copper surfaces, or other possible causes of the discrepancy in potentials of zero charge. Perhaps polycrystalline impurities caused the experimental potentials of zero charge of the supposedly single-crystalline faces to become much more similar than our calculations and the $\mathrm{KClO_{4}}$ data indicate they should be. {\em Ab initio} calculations offer an avenue to study each of these potential causes independently and to elucidate the mechanisms underlying the apparent experimental disagreement. Finally, although potentials of zero charge are quite readily observed in experiments for less reactive metals such as silver and gold, measurement of the potential of zero charge for platinum can be difficult because platinum is easily contaminated by adsorbates. For this reason, more indirect methods are employed to determine an experimental value for the potential of zero charge for platinum.
For instance, one may turn to ultra-high vacuum methods, where, by definition, no molecules are adsorbed on the surface, and one may then attempt to estimate the effect of the solution on the potential of zero free charge.\cite{MichaelWeaver} Alternatively, one may employ cyclic voltammograms to estimate the charge due to adsorbates and then extrapolate the potential of zero free charge.\cite{Cuesta} Our {\em ab initio} method, however, gives the values for uncontaminated potentials of zero free charge directly, provided we establish the relation of our zero reference potential to that of the standard hydrogen electrode, which we have done above. For uncontaminated platinum, our method yields the potentials of zero free charge shown in Table~\ref{PZCPt}. Compared to other references in the literature, the best agreement with our results comes from an experiment which extrapolates the potential of zero charge from ultra-high vacuum, eliminating the effects of unknown adsorbates on the clean surface.\cite{MichaelWeaver} As with the results for Cu, the significantly better agreement of our calculations with this latter experimental approach suggests that future experiments which measure the potential of zero charge should reconsider the effect of possible contaminants when extrapolating values for the potential of zero free charge. \begin{table}[ht] \caption{Platinum potentials of zero free charge (V~vs.~SHE)} \centering \begin{tabular}{c c c c} \hline\hline & (110) & (100) & (111) \\ [0.5ex] \hline LDA & 0.31 & 0.70 & 0.71 \\ GGA & 0.40 & 0.79 & 0.82 \\ \hline \end{tabular} \label{PZCPt} \end{table} \section{CONCLUSION} In this work, we extend joint density-functional theory (JDFT) -- which combines liquid and electronic free-energy functionals into a single variational principle for a solvated quantum system -- to include ionic liquids. We describe the theoretical innovations and technical details required to implement this framework for the study of the voltage-dependence of surface systems within standard electronic structure software. We establish a connection to the fundamental electrochemistry of metallic surfaces, accurately predicting not only potentials of zero charge for a number of crystalline surfaces of various metals but also an independent value for the standard hydrogen electrode relative to vacuum. Furthermore, we show how future innovations in free-energy functionals could lead to even more accurate predictions, demonstrating the promise of the joint density-functional approach to predict experimental observables and capture subtle electrochemical behavior without the computational complexity required by molecular dynamics simulations. These advantages render joint density-functional theory an ideal choice for high-throughput screening calculations and other applications in materials design. We have built extensively upon the framework of joint density-functional theory in the implicit solvent approximation,\cite{Petrosyan05} extending it to include charged ions in a liquid electrolyte. Beginning with an implicit model for the fluid density $N_{lq}(r)$ in terms of the electronic density of the surface, $N_{lq}(r)=N_{lq}(n(r))$, we include an ionic screening length tied to the fluid density $N_{lq}(r)$ in the same way as done in previously successful models for the dielectric constant.
We also solve a previously unrecognized difficulty by including model core electron densities within the surface to prevent artificial penetration of liquid density into the ionic cores, which lack electrons in typical pseudopotential treatments of the solid. Inclusion of this ionic screening allows us to provide a consistent zero reference of potential and to resolve many difficulties associated with net charges in periodic supercell calculations, thereby enabling study of electrochemical behavior as a function of applied voltage. With the framework to include electrode potential within joint density-functional theory calculations thus in place, we then establish clear connections between microscopic computables and experimental observables. We identify the electronic chemical potential of density-functional theory calculations with the applied voltage in electrochemical cells, and thereby extract a value of 4.52~V (within the GGA exchange-correlation functional) for the standard hydrogen electrode relative to vacuum, which compares quite favorably to the best-accepted experimental value of 4.44~V.\cite{Trasatti} We also show that joint density-functional theory reproduces, {\em a priori}, the subtle voltage-dependent behaviors expected for a microscopic electrostatic potential within the Gouy-Chapman-Stern model, and we extract potentials of zero free charge for a series of metals commonly studied in electrochemical contexts, often finding agreement with experimental values to within hundredths of volts. This qualitatively correct prediction of electrochemical behavior and encouraging agreement with experiment demonstrate the capabilities of even a simple approximation within the joint density-functional theory framework, and we expect future improvements to the free-energy functional to be able to describe more complex electrochemical phenomena. Future work should also generalize the approximate functional to include nonlinear saturation effects in ionic screening within the current modified Poisson-Boltzmann approach, along the lines of other works.\cite{MarzariDabo} In electrochemical experiments, the differential capacitance of charged metal surfaces often exhibits a minimum at the potential of zero charge\cite{BardFaulkner} (not seen in the linear continuum theory), and more advanced theories including such nonlinear effects should be able to capture this more subtle behavior. Additionally, recent developments in classical density functionals for liquid water\cite{Lischner10} can now be implemented to study electrochemical systems. Such classical density functionals can be extended to include realistic descriptions of ions and are capable of capturing other essential behaviors of electrolyte fluids, including features in the ion-ion and ion-water correlation functions due to differences in the structure of the anion and the cation.\cite{Bazant} Finally, in systems where electrochemical charge-transfer reactions are important or where chemical bonds of the fluid molecules are expected to break, the relatively few reactant molecules should be treated within the explicit electronic structure portion of the calculation, with the remaining vast majority of non-reacting molecules handled within the more computationally efficient liquid density-functional theory.
With advances such as those described above, joint density-functional theory holds promise to become a useful and versatile complement to the toolbox of currently available techniques for the first-principles study of electrochemistry. Unlike {\em ab initio} molecular dynamics (or any other approach involving explicit water molecules), this computationally efficient theory remains tractable for larger system sizes. In fact, as the system size grows, the fraction of calculation time spent solving the modified Poisson-Boltzmann equation actually decreases, meaning that for larger systems the calculation is only marginally more expensive than calculations of the corresponding systems carried out in a vacuum environment. Also, because thermodynamic integration is not required, the joint density-functional theory approach yields equilibrium properties directly and has a clear advantage over molecular dynamics simulations for the calculation of free energies. Immediate applications include the study of molecules on metallic electrode surfaces as a function of applied potential and prediction of the basic properties of novel catalyst and catalyst support materials. These calculations could inform future materials design by offering an opportunity to screen novel complex oxides and intermetallic materials in the presence of the true electrochemical environment, thereby elucidating the fundamental physical processes underlying fuel cells and liquid-phase Graetzel solar cells. \begin{acknowledgments} The authors would like to acknowledge Ravishankar Sundararaman for modifying the software to streamline calculations at fixed voltage and Juan Feliu for providing the most up-to-date information regarding the electrochemistry of single-crystalline metallic surfaces. \vspace{5mm} This material is based on work supported by: \vspace{5mm} The Energy Materials Center at Cornell, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0001086. \vspace{5mm} The Cornell Integrative Graduate Education and Research Traineeship (IGERT) Program in the Nanoscale Control of Surfaces and Interfaces, supported by the National Science Foundation under NSF Award DGE-0654193, the Cornell Center for Materials Research, and Cornell University. \vspace{5mm} K. Letchworth-Weaver also acknowledges support from a National Science Foundation Graduate Research Fellowship. \end{acknowledgments}
\section{Introduction} Quantum computation offers a new and exciting perspective on information processing, as it has been found that certain problems can be solved more efficiently on a quantum computer than on a classical device. Despite considerable effort, however, it is not fully understood which features of quantum mechanics are responsible for the apparent speedup. Basic questions regarding the nature and power of quantum computation remain largely unanswered to date. The existence of various models for quantum computation, among them the quantum Turing machine \cite{Deu85,quantumTM}, the circuit model \cite{BBC+95,CircuitDeutsch,Yao93}, adiabatic quantum computation \cite{FGG+01,latorre} and measurement-based quantum computation \cite{1-way1,1way1long,RBB03,GC99,Leu04,Ni03,PJ04,gross,BM08}, seems to indicate that a straightforward answer to these fundamental issues might be difficult to obtain. On the other hand, the different nature of the models allows one to study these fundamental issues from different perspectives, and it turns out that some models are better suited than others to study a certain aspect. For instance, the model of {\em measurement-based quantum computation}, with the one-way model \cite{1-way1} as its most prominent representative, seems to be particularly well suited to investigate the role of entanglement in quantum computation. Such an investigation has been initiated in \cite{maarten} and further developed in \cite{universalityI}. In one-way or measurement-based quantum computation (MQC) --which we use synonymously throughout this article-- a highly entangled resource state, e.g. the 2D cluster state \cite{cluster1}, is processed by sequences of single-qubit local measurements. As has been shown in \cite{1way1long}, a proper choice of measurement directions allows one to generate --up to irrelevant local unitary correction operations-- {\em any} quantum state deterministically and exactly on the unmeasured qubits. In this paper we aim at investigating the generalization of these previous results to the case in which stochastic and/or approximate quantum computation is allowed. The 2D cluster state is called a {\em universal resource} for MQC. In MQC, the role of entanglement is particularly highlighted, as all entanglement required in the computation already needs to be present in the initial resource state. This derives from the fact that no entanglement measure increases under local operations and classical communication (LOCC). This insight was recently used in \cite{maarten,universalityI} to investigate which other quantum states are universal resources for MQC. Entanglement-based criteria for universality have been established, and many --otherwise highly entangled-- resource states, including GHZ states \cite{GHZstates}, W states \cite{DurVC00-wstate} and 1D cluster states \cite{cluster1}, have been shown to be not universal for MQC. One should, however, emphasize that this does not mean that such non-universal resource states are useless for quantum information processing, as they might still serve to perform some specific quantum computation or as a resource for some other task. On the positive side, several other states have been identified as universal resources for MQC \cite{gross,universalityI}. Notice that we use the term ``universality'' in its strongest form, i.e. we consider the generation of quantum states (universal state preparator). This has been termed CQ-universality (where CQ stands for classical input, quantum output) in Ref.
\cite{universalityI} and we refer the interested reader to that work for an extended discussion of the different notions of universality. \subsection{Approximate and stochastic universality} In this article we will extend the results on universality obtained in \cite{maarten,universalityI} to a more general setting, motivated by experimental reality. More precisely, we will consider the {\em approximate} and {\em probabilistic} generation of quantum states from a given resource state, in contrast to the exact and deterministic generation discussed in \cite{maarten,universalityI}. In this work we therefore focus on the case in which the desired output states are required to be generated only with finite accuracy (that is, the output of the computation is required to be within some distance $\epsilon$ of the desired state), and with probability $1-\delta$. Such an extension needs to be considered naturally whenever the resource states are noisy, e.g. due to an imperfect generation process or due to decoherence, but also if the local operations used to process the state are imperfect. The latter may again be reflected in noisy single-qubit operations, but may also result from a restriction to a finite number of measurement settings or local unitary operations. In all these cases, the resulting states can only be an approximation of the desired state. In addition, one might be interested in the generation of states with a probability of success (arbitrarily) close to one --which we will call quasi-deterministic--, or even only with some (arbitrarily) small success probability. In fact, similar issues are implicitly considered when one refers to universal gate sets in the circuit model for quantum computation: any finite universal gate set allows one to approximate any state with arbitrary accuracy. Notice that the issue of probabilistic computation has been studied in depth both in classical computation theory \cite{papadim} and in the quantum setting \cite{nielsenchuang}. On the one hand, if it is known when the computation succeeded, which happens, say, with probability $p$, then $O(1/p)$ repetitions allow one to obtain a valid, confirmed outcome. On the other hand, even if it is not known whether the computation succeeded or not, but only that the correct outcome is obtained with some probability $p>1/2$, this is still sufficient to extract the correct (classical) result of the computation with arbitrarily high probability by considering many repetitions. The first scenario also applies without changes to the case where quantum states should be generated (CQ universality). The second scenario is restricted to the extraction of classical outputs (CC universality), while the resulting quantum states are in fact mixed. \subsection{Summary of results} We find that --analogously to the exact, deterministic case-- entanglement-based criteria for approximate and stochastic universality can be obtained. To formulate these criteria, we need to consider $\epsilon$-measures of entanglement \cite{epsilon} and compute their extremal values over all states. Given any distance $D$ on the set of states, and any entanglement measure $E$, the $\epsilon$-measure of a state, $E_{\epsilon}(\rho)$, is defined as the minimal amount of entanglement over all states which are $\epsilon$-close (with respect to $D$) to $\rho$, i.e. have a distance of at most $\epsilon$ to $\rho$.
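Already the underlying ($\epsilon=0$) quantities involve nontrivial optimizations. As a purely illustrative numerical aside (not taken from the cited works), the following minimal Python sketch estimates the geometric measure of entanglement, treated formally in Section \ref{subsec:exmeasures}, for the 3-qubit GHZ state by maximizing the overlap with product states:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Numerical sketch: E_G = 1 - max |<psi|a,b,c>|^2 for the 3-qubit
# GHZ state (known exact value: 1/2).
ghz = np.zeros(8); ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)

def qubit(theta, phi):
    """Single-qubit pure state parametrized by Bloch angles."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def neg_overlap(x):
    prod = np.kron(np.kron(qubit(x[0], x[1]), qubit(x[2], x[3])),
                   qubit(x[4], x[5]))
    return -abs(np.vdot(prod, ghz)) ** 2

rng = np.random.default_rng(0)
best = min(minimize(neg_overlap, rng.uniform(0, np.pi, 6)).fun
           for _ in range(20))
print(f"E_G(GHZ_3) ~ {1.0 + best:.3f}")   # -> 0.500
\end{verbatim}

The $\epsilon$-version of such a measure additionally minimizes over all states within distance $\epsilon$ of the target, which is the optimization underlying the criteria below.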
We find the following necessary criteria for efficient, approximate stochastic universality: \begin{itemize} \item For any entanglement measure $E$ which is a strong extendable entanglement monotone (see below for the exact definition), we have that an approximate, stochastic universal resource $\Sigma$ which allows one to obtain an $\epsilon$-approximation of any state with probability larger than $1-\delta$ must have an amount of entanglement greater than or equal to $(1-\delta)$ times the maximum of the corresponding $\epsilon$-measure $E_{\epsilon}$ over all states of arbitrary size, $E(\Sigma) \geq (1-\delta) {\rm max}_{\rho}E_\epsilon(\rho)$. Roughly speaking, this means that any approximate, stochastic resource needs to be maximally entangled with respect to all such entanglement measures. \item If one takes the efficiency of computation into account, we find that for any strong extendable entanglement monotone, the entanglement of the resource states not only needs to reach the maximum value of the corresponding $\epsilon$-measure over all states, but also needs to grow sufficiently fast with the system size. \end{itemize} These two criteria allow one to rule out a large number of states as being not universal in an approximate and stochastic sense, e.g. GHZ states, W states and 1D cluster states. On the positive side, we present a number of approximate, quasi-deterministic resources. We find: \begin{itemize} \item There exist efficient, approximate quasi-deterministic universal resources that are not believed to be exact, deterministic universal. For example, a 2D cluster state where particles are missing with a certain probability is an exact, quasi-deterministic universal resource, while an approximate 2D cluster state is an approximate, deterministic universal resource. \item Any state that is sufficiently close to an approximate stochastic universal resource is still an approximate stochastic universal resource, and the parameters quantifying approximation and stochasticity are quadratically related to the original ones. \end{itemize} In particular, this last observation has implications in realistic (experimental) scenarios, where the preparation of the initial entangled states is imperfect. States affected by such preparation errors can still be used for MQC in the approximate and stochastic scenario. While this might be considered intuitive, and results of this type were already known for the 1-way model (where the initial state is a 2D cluster state) \cite{raussendorfPhD,nielsendawson2005,aliferisleung2006}, in this paper we extend the observation to all approximate stochastic universal resources, deriving an explicit expression for the interplay between the different parameters. \subsection{Guideline through the paper} The paper is organized as follows. In Section \ref{sec:entanglement} we review some of the basic concepts, related to distance and entanglement measures respectively, which we use in the remainder of the paper. In Section \ref{sec:universality} we recall the definition of universal resources for measurement-based quantum computation, and show how the definition can be generalized to the approximate and stochastic case. In Section \ref{sec:nogo} we first review some of the results found in \cite{universalityI} and then show how they can be generalized in a very natural way, obtaining necessary criteria for universal resources in the approximate and stochastic case.
In this Section we also show how the issue of efficiency can be included in the analysis, obtaining in this way stronger versions of the above-mentioned criteria. Finally, in Section \ref{sec:examples}, we give some experimentally relevant examples of resources that are approximate deterministic, exact stochastic and approximate stochastic universal, but not exact deterministic universal. In particular, we show that any family of states that is close to a universal family is still approximate stochastic universal. Section \ref{sec:conclusions} summarizes and concludes our results. \section{Entanglement monotones} \label{sec:entanglement} In this section we review some essential features of entanglement monotones which are relevant in the study of universality in MQC. In Section \ref{subsec:axioms} we review the basic conditions which a function must satisfy in order to be considered an ``entanglement monotone''. Furthermore, we show how these conditions lead to the definitions of different ``types'' of entanglement measures. The distinction between different types of entanglement measures will be necessary to allow for a proper formulation of entanglement-based criteria for approximate and stochastic universality, which we carry out in Section \ref{sec:nogo}. In Section \ref{subsec:epsilonmeasures} we consider a general class of monotones called ``epsilon-measures''. This class of measures was introduced in \cite{epsilon} in order to study the entanglement in states which are only known up to some approximation. For this reason they are suitable quantities to consider in the study of \emph{approximate} universality. In Section \ref{subsec:exmeasures}, we focus on two examples of existing entanglement measures, namely the geometric measure and the Schmidt-rank width. We discuss in which sense these quantities are monotones, and we discuss their associated $\epsilon$-measures. \subsection{Properties of entanglement monotones} \label{subsec:axioms} The first examples of entanglement measures were constructed by considering a particular application of entanglement (e.g., distillation) and then deriving a quantifier based on such an operation. This approach led to measures that, while naturally having a clear physical interpretation, are often very hard to compute. To evaluate, for example, the entanglement of distillation \cite{BDSW1996}, it is necessary to optimize over all purification protocols. A different approach to the problem, which one might call ``axiomatic'', has been proposed in \cite{VPRK1997}. The starting point of this work was the idea that an entanglement measure is a mathematical quantity that should capture the essential features that we associate with entanglement. With this idea in mind, it is possible to identify a set of conditions that must be satisfied by any such measure $E$. The most fundamental of these conditions are: \begin{itemize} \item[P1.]{{\it Vanishing on separable states}}: separable states do not contain entanglement; we therefore require that $E(\sigma_{sep})=0$. \item[P2.]{{\it Monotonicity under LOCC}}: entanglement cannot increase under LOCC, $E(\Lambda_{\textrm{LOCC}}[\rho])\leq E(\rho)$. \end{itemize} Here $\Lambda_{\textrm{LOCC}}$ denotes an LOCC transformation. Note that property P2 also implies that $E$ is invariant under local unitaries. Aside from these two postulates, other additional requirements for entanglement measures have been formulated. In particular, the following are among the most commonly found in the literature.
\begin{itemize} \item[P3.] \label{item1} {{\it Convexity}}: $E(p\rho_1+(1-p)\rho_2)\leq pE(\rho_1)+(1-p)E(\rho_2)$. \item[P4.] \label{item2}{\it Monotonicity on average under LOCC}: this condition is stronger than the monotonicity condition seen above, and is sometimes referred to as \emph{strong monotonicity}. It requires that \begin{equation} \label{eq:strongmonotonicity} E(\rho)\geq\sum_i p_i E(\rho_i), \end{equation} where the $\rho_i$ are the possible outputs of some LOCC protocol acting on $\rho$, occurring with probabilities $p_i$. \item[P5.] {\it Trivial extendability:} in this case, one aims at comparing entanglement in states of different system size. The condition of trivial extendability states the following: let $|\psi\rangle$ be an $N$-qubit state; then one requires that $E(|\psi\rangle|0\rangle)= E(|\psi\rangle)$. Here $|\psi\rangle|0\rangle$ is considered as an $(N+1)$-party state (and \emph{not} as an ancilla appended to one of the initial $N$ parties), where the $(N+1)$-th party is disentangled from the rest of the system. \end{itemize} Conditions P3 and P4 are often found in the literature as necessary requirements for entanglement measures. Condition P5 has been introduced more recently \cite{universalityI}, in the context of the study of universality in MQC. Various other requirements have been formulated, and for a more detailed analysis of them we refer to \cite{pleniovirmani}. Depending on the set of conditions that are satisfied by the quantity $E$, we can define different types of measures. In particular, we can distinguish the following types, which we will use in the following sections. \begin{defi} \ \begin{description} \item[Weak entanglement monotone.] A real function $E$ is called a weak entanglement monotone if it satisfies conditions P1 to P3. \item[Strong entanglement monotone.] A real function $E$ is called a strong entanglement monotone if it satisfies conditions P1 to P4. \item[Extendable weak/strong monotone.] An extendable weak (strong) monotone is a weak (strong) entanglement monotone which additionally satisfies condition P5. \end{description} \end{defi} Note that, in all these definitions, we are imposing the convexity of the function. This condition is not always deemed necessary, but the measures we consider in the following satisfy it. We also remark that every strong entanglement monotone is also a weak monotone. The notion of an extendable monotone was introduced in \cite{universalityI} under the name ``type II monotone''. We now define another property, related to monotonicity under LOCC operations, that will be relevant in the analysis of resources for approximate measurement-based quantum computation. \begin{itemize} \item[P6.]{\it Weak non-increasing under LOCC}: a function $E$ is weakly non-increasing under LOCC if, for any state $\rho$ and for any LOCC protocol $\Lambda_\textrm{LOCC}:\rho\to\{p_i,\rho_i\}$, we have $E(\rho)\geq\min_i E(\rho_i)$. \end{itemize} In other words, monotones satisfying P6 are such that at least one of the outputs of an LOCC protocol acting on an initial state $\rho$ has entanglement not exceeding that of $\rho$. Such a condition is trivially satisfied by any strong entanglement monotone. We conjectured \cite{cat_thesis} that P6 is implied by weak monotonicity, but this has not yet been proved.
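As a computational aside, the elementary bipartite quantity entering the width measures of Section \ref{subsec:exmeasures}, namely the Schmidt rank across a fixed bipartition, is easily evaluated numerically; a minimal Python sketch (example states chosen purely for illustration) reads:

\begin{verbatim}
import numpy as np

# Schmidt rank of |psi> across a bipartition (A|B) via the SVD.
def schmidt_rank(psi, dim_A, dim_B, tol=1e-10):
    """Number of nonzero Schmidt coefficients for the bipartition
    with Hilbert-space dimensions dim_A and dim_B."""
    s = np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)
    return int(np.sum(s > tol))

# Illustrative 4-qubit examples, bipartition (2 qubits | 2 qubits)
ghz  = np.zeros(16); ghz[0] = ghz[15] = 1.0 / np.sqrt(2.0)
prod = np.zeros(16); prod[0] = 1.0
print(schmidt_rank(ghz, 4, 4), schmidt_rank(prod, 4, 4))   # -> 2 1
\end{verbatim}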
To end this section, we will introduce two quantities associated with any entanglement measure $E$, which play a fundamental role both in \cite{universalityI} and in the results contained in Section \ref{sec:nogo}. The first notion is the \emph{asymptotic entanglement of a family of states}. Let $\Sigma=\{\sigma_i\}_i$ be an (infinitely large) family of many-qubit states, and let $E$ be an entanglement monotone defined on $N$-qubit states, for all $N$. We define the asymptotic entanglement $E(\Sigma)$ of the family as \begin{equation} E(\Sigma)=\sup_{\sigma\in\Sigma} E(\sigma). \end{equation} The case $E(\Sigma)=\infty$ is allowed. Second, the \emph{asymptotic entanglement $E^*$ of $E$} is defined as \begin{equation} E^*=\sup_{\rho\in{\mathcal{S}}} E(\rho), \end{equation} where the supremum is taken over all $N$-qubit states, for all $N\in{\mathbb{N}}$. The case $E^*=\infty$ is allowed. Note that, if $E$ is convex, one can restrict the set over which the supremum is taken to only the set of pure states (thus recovering the definition found in \cite{universalityI}). \subsection{$\epsilon$-measures of entanglement} \label{subsec:epsilonmeasures} The $\epsilon$-monotones \cite{epsilon} are a class of entanglement monotones which can be associated with any existing monotone, and which depend on a precision parameter $\epsilon$. They have been introduced to address the issue of quantifying the entanglement contained in a state which is only partially known, as in the case of, for example, a state prepared using an imperfect apparatus. Given any entanglement measure $E$, its $\epsilon$-version is defined as \begin{equation} \label{epsilon} E^{(D)}_\epsilon(\rho)=\min\{E(\sigma)~|~D(\sigma,\rho)\leq\epsilon\}, \end{equation} where $D$ is a distance on the set ${\mathcal{S}}$ of states which is convex and contractive under completely positive trace preserving maps \cite{foot1}, and $\sigma,\rho\in{\mathcal{S}}$. To lighten notation, we will often omit the superscript in ``$E^{(D)}_{\epsilon}$'' referring to the distance measure $D$, and simply write $E_{\epsilon}$. The quantity $E_\epsilon$ quantifies the ``guaranteed'' entanglement contained in a state since, by definition, any state $\sigma$ within an $\epsilon$-distance of the desired state $\rho$ has entanglement $E(\sigma)\geq E_\epsilon(\rho)$. In the following we will see that the $\epsilon$-measure of a state is the crucial quantity to consider when studying approximate preparation of such a state. Indeed, if we aim at preparing a state which is $\epsilon$-close to $\rho$, then $E_\epsilon(\rho)$ is the minimum entanglement that we must be able to obtain from the initial resource state. In the remainder of this section, we highlight some relevant properties of $\epsilon$-monotones. First, it has been shown \cite{epsilon} that $E_\epsilon$ is always a weak entanglement monotone if $E$ is. Moreover, property P5 illustrated above is likewise inherited by the $\epsilon$-version of a monotone satisfying it. Therefore, the $\epsilon$-version of an extendable weak monotone is again an extendable weak monotone. On the other hand, the $\epsilon$-version of an entanglement measure is never a strong monotone. We refer to \cite{epsilon} for details. Computing the asymptotic entanglement $E_{\epsilon}^*$ for arbitrary $\epsilon$ may be a difficult task. Nevertheless, it is often tractable to compute $E_\epsilon^*$ when we are interested in the limit $\epsilon\to0$.
This is particularly true in the case of continuous measures, for which the following observation holds. \begin{prop} \label{prop:Estar} If $E$ is bounded (for any fixed dimension), convex, and continuous, then $\lim_{\epsilon\to 0^+}E_\epsilon^*=E^*$. \end{prop} \begin{proof} Let $E^*\in(0,\infty]$. To prove the statement we have to show that, for any $\mu>0$, there exists $\bar{\epsilon}(\mu)>0$ such that $\epsilon\leq\bar{\epsilon}(\mu)\Rightarrow E_\epsilon^*\geq E^*-\mu$. Consider that, for any state $\rho$, we have that $\epsilon'\leq\epsilon\Rightarrow E_{\epsilon'}(\rho)\geq E_\epsilon(\rho)$, which implies that $\epsilon'\leq\epsilon\Rightarrow E_{\epsilon'}^*\geq E_\epsilon^*$. Moreover, from the definition of $E_\epsilon^*$ it follows that, for any state $\rho$ and for any choice of $\epsilon$, $E^*_\epsilon\geq E_\epsilon(\rho)$. This implies that, $\forall \epsilon \leq \bar{\epsilon}(\mu)$ and $\forall \rho$, we have $E_\epsilon^*\geq E_{\bar{\epsilon}(\mu)}^*\geq E_{\bar{\epsilon}(\mu)}(\rho)$. Therefore, it is sufficient to prove that \begin{equation*} \forall \mu>0~,~\exists \bar{\epsilon}(\mu),~\rho(\mu)\textrm{ such that }E_{\bar{\epsilon}(\mu)}(\rho(\mu))\geq E^*-\mu. \end{equation*} In order to do so, we first recall that, since the family $\Psi_C=\{\ket{C_{N_i}}\}_i$ of two-dimensional cluster states (on $N_i=i\times i$ qubits) is exact and deterministic universal, we have that $E(\Psi_C)=E^*$, for any entanglement measure $E$ \cite{universalityI}. This implies that, for any $\mu>0$, there exists $N(\mu):=N_{i(\mu)}$ such that $E(\ket{C_{N(\mu)}})\geq E^*-\mu/2$. In \cite{epsilon}, it has been shown that, if $E$ satisfies the hypotheses above, then $E_\epsilon$ is continuous in $\epsilon$ and $\rho$. Hence, it is always possible to find an $\bar{\epsilon}(\mu,N(\mu))>0$ such that $E_{\bar{\epsilon}(\mu,N(\mu))}(\ket{C_{N(\mu)}})\geq E(\ket{C_{N(\mu)}})-\mu/2$. We have thus that, for any $\mu>0$, there exists a state $\ket{C_{N(\mu)}}$ and an $\bar{\epsilon}(\mu,N(\mu))>0$ such that \begin{equation*} \begin{split} E_{\bar{\epsilon}(\mu,N(\mu))}^*&\geq E_{\bar{\epsilon}(\mu,N(\mu))}(\ket{C_{N(\mu)}})\\ &\geq E(\ket{C_{N(\mu)}})-\mu/2\geq E^*-\mu. \end{split} \end{equation*} \end{proof} In the case of discontinuous measures, such as the $\chi$-width \cite{maarten1} or the Schmidt measure \cite{hans}, one has to compute $E_\epsilon^*$ on a case-by-case basis. We will elaborate on the case of the $\chi$-width in Section \ref{subsec:exmeasures}. \subsection{Two entanglement measures} \label{subsec:exmeasures} In this Section we consider two explicit examples of entanglement measures that we use in Section \ref{sec:nogo} to construct criteria for approximate, non-deterministic universality. These are the geometric measure of entanglement and the Schmidt-rank width. We discuss in which sense these quantities are entanglement measures, what their asymptotic entanglement is, and how the $\epsilon$-versions of these measures behave. \subsubsection{Geometric measure of entanglement} The \emph{geometric measure} of entanglement was first introduced as a bipartite entanglement measure in \cite{shimony} and then generalized in \cite{barnumlinden, weigoldbart} to the multipartite setting. The intuition behind this measure is that the more entangled a state is, the more distinguishable it is from a separable state. The monotone can be defined as follows.
Let $|\psi\rangle$ be a state of $N$ qubits, and let $\pi(|\psi\rangle)$ denote the maximum fidelity between $|\psi\rangle$ and a factorized state on $N$ qubits, \begin{equation} \pi(|\psi\rangle)=\max_{|\varphi\rangle=\ket{\phi_1}\otimes\cdots\otimes\ket{\phi_N}} |\langle\psi|\varphi\rangle|^2. \end{equation} The geometric measure $E_G$ is defined by \begin{equation} E_G(|\psi\rangle) = 1- \pi(|\psi\rangle). \end{equation} This measure, defined for pure states, can be generalized to the case of mixed states by the convex roof construction, that is: \begin{equation} E_G(\rho)=\min_{\{p_i,\ket{\psi_i}\}_i}\sum_i p_i E_G(\ket{\psi_i}), \end{equation} where the minimum is taken over all $\{p_i,\ket{\psi_i}\}_i$ such that $\rho=\sum_i p_i\pro{\psi_i}$. One can verify that such a measure satisfies conditions P1 to P5 and is thus an extendable strong entanglement monotone (and therefore also an extendable weak monotone). Next we consider the $\epsilon$-version of the geometric measure, and we focus on $\epsilon$-measures based on distances that are ``strictly related to the fidelity''. \begin{defi} \label{strictlyrelated} A distance $D$ on the set of states is said to be \emph{strictly related to the fidelity} if, for any two states $\rho$ and $\sigma$, $D(\rho,\sigma)\leq\epsilon \Rightarrow F(\rho,\sigma)\geq 1-\eta(\epsilon)$, with $0\leq\eta(\epsilon)\leq1$ a strictly monotonically increasing function of $\epsilon$ (for $\epsilon\geq0$ and $\epsilon$ less than the maximum value that $D$ can assume) such that $\eta(0)=0$. \end{defi} \noindent An example of such a distance is the trace distance. The following is a technical result, which provides a lower bound for $(E_G)_\epsilon(|\psi\rangle)$ in terms of $E_G(|\psi\rangle)$. \begin{prop} \label{cor:geom} Let $D$ be a distance measure that is strictly related to the fidelity. Further, let $(E_G)_\epsilon$ denote the corresponding $\epsilon$-geometric measure. Then, for any pure state $|\psi\rangle$ and for any choice of $\epsilon>0$ such that $\eta=\eta(\epsilon)\lesssim 0.44$, the quantity $(E_G)_\epsilon(|\psi\rangle)$ is not smaller than \begin{equation} \begin{split} \left[1-\left(\frac{3\sqrt{\eta}}{2 E_G(|\psi\rangle)}\right)^{2/3}\right]\left[E_G(|\psi\rangle)-(18E_G(|\psi\rangle)\eta)^{1/3}\right]. \end{split} \end{equation} \end{prop} The proof of Proposition \ref{cor:geom} is rather involved and will be given in Appendix \ref{sec:appendix_geo}. The above result can be used to bound the asymptotic $\epsilon$-geometric entanglement $(E_G)_\epsilon^*$. We have: \begin{prop} \label{lemma:geomepsstar} Let $D$ be a distance measure that is strictly related to the fidelity, and let $\epsilon>0$ be such that $\eta(\epsilon)\leq 0.44$, where $\eta(\epsilon)$ is such that $D(\rho,\sigma)\leq\epsilon \Rightarrow F(\rho,\sigma)\geq1-\eta(\epsilon)$. If $(E_G)_\epsilon$ denotes the $\epsilon$-geometric measure with respect to the distance $D$, then \begin{equation} (E_G)_\epsilon^*\geq 1- 4\eta^{1/3}+3.4\eta^{2/3}. \end{equation} \end{prop} \begin{proof} Since $(E_G)_\epsilon^*$ is defined as the supremum over all possible states, we have \begin{equation*} (E_G)_\epsilon^*\geq (E_G)_\epsilon(\Psi_C), \end{equation*} where $\Psi_C=\{\ket{C_{N_i}}\}_i$ is the family of two-dimensional cluster states on $N_i=i\times i$ qubits. The geometric measure for this class of states has been computed \cite{MMV07}, and we have $E_G(\ket{C_{N_i}})=1-2^{-N_i/2}$.
In order to prove the statement, we apply Proposition \ref{cor:geom} to obtain \begin{equation} \begin{split} (E_G)_\epsilon^* &\geq (E_G)_\epsilon(\Psi_C)\\ &\geq \sup_N\left\{ \left[1-\left(\frac{3\sqrt{\eta}}{2 (1-2^{-N/2})}\right)^{2/3}\right]\right.\\ &~~~~~~~~~~~~~\left.\left[1-2^{-N/2}-(18(1-2^{-N/2})\eta)^{1/3}\right]\right\}\\ &= \left[1-\left(\frac{3\sqrt{\eta}}{2}\right)^{2/3}\right]\left[1-(18\eta)^{1/3}\right]\\ &= 1-(\frac{9\eta}{4})^{1/3}-(18\eta)^{1/3}+(\frac{81}{2}\eta^2)^{1/3}\\ &\geq 1 - 4\eta^{1/3}+3.4\eta^{2/3}. \end{split} \end{equation} \end{proof} Note that this result implies that \begin{equation}\lim_{\epsilon\to 0} (E_G)_\epsilon^* = 1.\end{equation} The latter also follows immediately from Proposition \ref{prop:Estar}. \subsubsection{Schmidt-rank width} The \emph{Schmidt-rank width} is an entanglement monotone which has been introduced and investigated in \cite{maarten, maarten1, universalityI}. It has been proved that this measure is an extendable strong entanglement monotone, and it can be used to assess whether resources for MQC admit an efficient classical simulation \cite{maarten1}. The \emph{Schmidt-rank width} $\chi_\textrm{wd}$ of a pure state $|\psi\rangle$ computes the minimum Schmidt rank $\chi$ of $|\psi\rangle$, where the minimum is taken over a specific class of bipartitions of the system. More precisely, $\chi_\textrm{wd}(|\psi\rangle)$ is defined as follows. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{subcubic.eps} \caption[Example of subcubic tree]{\footnotesize{(a) Example of a subcubic tree $T$ with six leaves. (b) Tree $T\backslash e$ obtained from $T$ by removing edge $e$, and induced bipartition.}} \end{center} \label{fig:subcubic} \end{figure} Let $|\psi\rangle$ be an $N$-partite state. We consider a subcubic tree $T$, i.e. a graph with no cycles, where each vertex has exactly 1 or 3 incident edges, with $N$ leaves ($N$ vertices with only 1 incident edge), which we identify with the $N$ parties of the system (see Figure \ref{fig:subcubic}). If $e=\{i,j\}$ is an arbitrary edge of $T$, we denote by $T\backslash e$ the graph obtained by deleting the edge $e$ from $T$. The graph then consists of two connected components, which naturally induce a bipartition $(A_T^e, B_T^e)$ of the system. If $\chi_{A_T^e,B_T^e}(|\psi\rangle)$ is the Schmidt rank of $|\psi\rangle$, with respect to the bipartition $(A_T^e, B_T^e)$, the \emph{Schmidt-rank width} of $|\psi\rangle$ is given by \begin{equation} \chi_\textrm{wd}(|\psi\rangle)=\min_T\max_{e\in T} \chi_{A_T^e,B_T^e}(|\psi\rangle), \end{equation} where the minimum is taken over all subcubic trees $T$ with $N$ leaves (identified with the $N$ parties of the system), and $\chi_{A_T^e,B_T^e}(|\psi\rangle)$ is the Schmidt rank of $|\psi\rangle$ with respect to the bipartition $(A_T^e, B_T^e)$. The Schmidt rank width may be generalized to mixed states by a convex roof construction. Note that the Schmidt-rank width is not continuous, such that Proposition \ref{prop:Estar} cannot be used to compute the asymptotic behavior of its $\epsilon$-version in the limit $\epsilon\to0$. However, it is still relatively easy to gain insight in this matter, in the following way. First, note that \begin{equation} \label{E_wd} \chi_\textrm{wd}(|\psi\rangle)\geq E_\textrm{wd}(|\psi\rangle) \end{equation} for every state $|\psi\rangle$. Here $E_\textrm{wd}(|\psi\rangle)$ denotes the entropic entanglement width, as defined in \cite{universalityI}. 
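To make the min-max structure of this definition concrete, the following small Python sketch (our own illustration, not from the cited works) computes $\chi_\textrm{wd}$ for a four-qubit state. For $N=4$ leaves every subcubic tree has a single internal edge, so the trees are labeled by the three possible pairings of the leaves, and the four leaf edges give the same $1|3$ cuts for every tree.
\begin{verbatim}
import numpy as np

def schmidt_rank(psi, part_A, n):
    # Schmidt rank = rank of the state matrix for bipartition (A, complement).
    perm = list(part_A) + [q for q in range(n) if q not in part_A]
    m = psi.reshape([2] * n).transpose(perm).reshape(2 ** len(part_A), -1)
    return np.linalg.matrix_rank(m, tol=1e-10)

def chi_wd_four_qubits(psi):
    # min over the three subcubic trees of the max cut rank along the tree.
    leaf = max(schmidt_rank(psi, (q,), 4) for q in range(4))
    return min(max(leaf, schmidt_rank(psi, pair, 4))
               for pair in [(0, 1), (0, 2), (0, 3)])

ghz = np.zeros(16)
ghz[0] = ghz[15] = 1 / np.sqrt(2)     # (|0000> + |1111>)/sqrt(2)
print(chi_wd_four_qubits(ghz))        # -> 2: every cut has Schmidt rank 2
\end{verbatim}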
Note that the Schmidt-rank width is not continuous, so that Proposition \ref{prop:Estar} cannot be used to compute the asymptotic behavior of its $\epsilon$-version in the limit $\epsilon\to0$. However, it is still relatively easy to gain insight into this matter, in the following way. First, note that \begin{equation} \label{E_wd} \chi_\textrm{wd}(|\psi\rangle)\geq E_\textrm{wd}(|\psi\rangle) \end{equation} for every state $|\psi\rangle$. Here $E_\textrm{wd}(|\psi\rangle)$ denotes the entropic entanglement width, as defined in \cite{universalityI}; it is defined via the same optimization procedure as the Schmidt-rank width, now with the entanglement entropy as the ``basic measure''. Note that (\ref{E_wd}) implies that \begin{equation} (\chi_\textrm{wd})_\epsilon(|\psi\rangle)\geq (E_\textrm{wd})_\epsilon(|\psi\rangle), \end{equation} and thus $(\chi_\textrm{wd})_\epsilon^*\geq (E_\textrm{wd})_\epsilon^*$. Furthermore, as the entropic entanglement width is a weak monotone which is moreover continuous, and since $E_\textrm{wd}^*=\infty$, one has \begin{equation} (E_\textrm{wd})_\epsilon^* \xrightarrow{\epsilon\to0} E_\textrm{wd}^*=\infty\end{equation} due to Proposition \ref{prop:Estar}. We can therefore conclude that also \begin{equation} \lim_{\epsilon\to0} (\chi_\textrm{wd})_\epsilon^* =\infty. \end{equation} \section{Universality in MQC} \label{sec:universality} In the one-way model of computation, information is processed by means of single-qubit measurements on an initial highly entangled state. In the original proposal \cite{1-way1}, this state was chosen to be a cluster state, but there is no reason to assume that this is the only possible choice. Indeed, recent works have shown that other states can also be used as resources for measurement-based quantum computation \cite{universalityI,gross}. Following \cite{universalityI}, in this work we consider the case in which any LOCC operation can be performed on the initial state. This corresponds to allowing two-way classical communication, whereas the original scheme only requires one-way communication. We recall here the definition of universal (CQ) resources used in \cite{universalityI}, on which the following discussion is based. \begin{defi}[Exact universal resources] A family $\Sigma=\{\sigma_i\}_i$ of states is called a universal resource for measurement-based quantum computation if, for every $N$ and for every $N$-qubit quantum state $\ket{\phi_\textrm{out}}$, there exist an $M$-qubit resource state $\sigma\in\Sigma$ and an LOCC protocol $\Lambda_\textrm{LOCC}$ that acts in the following way: \begin{equation} \sigma\xrightarrow{\Lambda_\textrm{LOCC}}P_\textrm{out} \otimes P_0^{\otimes (M-N)}, \end{equation} where $P_\textrm{out}=\pro{\phi_\textrm{out}}$ and $P_0=\pro{0}$. \end{defi} \subsection{$\epsilon$-approximate $\delta$-stochastic universality} While previous works have considered the characterization of exact universal resources for MQC, we are here more interested in weaker forms of universality, where the output state can be generated stochastically (with some finite success probability) or with some finite accuracy. Note that the nature of the resource might not be the only reason why exact universality cannot be achieved. Indeed, just as in the circuit model with a finite gate basis, one can consider one-way quantum computation with, e.g., only finitely many possible measurement directions \cite{foot2}. Moreover, any experimental implementation will introduce some source of error in the computation. In order to take these factors into account, in the following we define the concepts of $\delta$-stochastic and $\epsilon$-approximate universality. In a realistic scenario, one is expected to be interested mainly in approximate stochastic (or quasi-deterministic) universality.
\begin{defi}[$\epsilon$-approximate $\delta$-stochastic universal resources] A family of states $\Sigma=\{\sigma_i\}_i$ is called $\epsilon$-approximate (relative to a distance measure $D$) $\delta$-stochastic universal if, for every $N$ and for every $N$-qubit quantum state $|\phi_{\rm out}\rangle$, there exist an $M$-qubit state $\sigma \in \Sigma$ and an LOCC protocol with output branches $\{p_i,\rho_i\}$ such that the sum of the probabilities $p_i$ over the branches for which $D_i=D(\rho_i,P_\textrm{out}\otimes P_0^{\otimes (M-N)})\leq\epsilon$ (where $P_\textrm{out}=\pro{\phi_\textrm{out}}$ and $P_0=\pro{0}$) is at least $1-\delta$. \end{defi} First, as regards $\delta$-stochastic universality, we do not require that the output state be generated deterministically; it is sufficient that this happens under stochastic LOCC (SLOCC) with sufficiently high probability, that is, \begin{equation*} p_{\textrm{success}} = \sum_{i:\rho_i=P_\textrm{out}\otimes P_0^{\otimes (M-N)}}p_i\geq 1- \delta . \end{equation*} In particular, when $\delta$ can be made arbitrarily small, we may call this quasi-deterministic universality, which is stronger than $\delta$-stochastic universality for a fixed $\delta$. Second, as regards $\epsilon$-approximate universality, we require that the output of the computation be generated approximately, with accuracy $\epsilon$, as is the case for quantum circuits built from a finite universal set of elementary gates. More precisely, $D$ can be any distance measure on the set of states that is contractive under LOCC and convex. The choice of the appropriate measure might depend on the task for which the output state is required (see, for example, the related discussion in \cite{nielsendistance}). \subsection{Efficient universality} We now consider how to generalize the concept of efficient universality to the approximate and stochastic cases. In order to do so, let us first recall the definition of exact efficient universality \cite{universalityI}. \begin{defi}[Exact efficient universal resources] A family of states $\Sigma=\{\sigma_i\}_i$ is called an efficient exact universal resource for measurement-based quantum computation if, for every $N$ and for every $N$-qubit quantum state $\ket{\phi_\textrm{out}}$ which can be obtained by a poly-sized quantum circuit, there exists an $M$-qubit state $\sigma\in\Sigma$, with $M\leq{\mathcal{O}}(\textrm{poly}(N))$, such that the transformation $\sigma\to\ket{\phi_\textrm{out}}\ket{0}^{\otimes (M-N)}$ is possible by means of LOCC in time at most $\textrm{poly}(N)$ and using classical processing that is polynomially bounded in space and time. \end{defi} This definition can easily be extended to the approximate and stochastic case when the desired accuracy $\epsilon$ and success probability $\delta$ are fixed. In this case one has the following. \begin{defi}[Efficient $\epsilon$-approximate $\delta$-stochastic universal resources] Let $|\phi_\textrm{out}\rangle$ be any $N$--qubit quantum state that can be generated efficiently, i.e., with a poly--sized quantum circuit, from a product state in the network model, and let $P_\textrm{out}=\pro{\phi_\textrm{out}}$.
A family of states $\Sigma=\{\sigma_i\}_i$ is efficient $\epsilon$-approximate (with respect to some distance $D$) $\delta$-stochastic universal if there exist an $M$-qubit state $\sigma\in\Sigma$, with $M\leq{\mathcal{O}}(\textrm{poly}(N))$, and an LOCC protocol with output branches $\{p_i,\rho_i\}_i$ satisfying \begin{equation*} \sum_{i:D(\rho_i,P_\textrm{out}\otimes P_0^{\otimes(M-N)})\leq\epsilon}p_i\geq 1-\delta \end{equation*} (with $P_0=\pro{0}$) that can be implemented in ${\mathcal{O}}(\textrm{poly}(N))$ time, using classical side processing that is bounded in space and time by $\textrm{poly}(N)$. \end{defi} For approximate and stochastic computation, it is in many cases meaningful and interesting to also take into account the scaling of the overhead with the desired accuracy $\epsilon$ and success probability $\delta$. In the circuit model, the scaling with the accuracy is determined by the Solovay-Kitaev theorem \cite{kitaev}. Similarly, in the one-way model we require that the overhead in spatial, temporal and computational resources scales as $O({\rm poly}(m,\log(1/\epsilon)))$ for states that can be produced with $m$ gates in the network model. Notice that we allow for a polynomial increase of resources with respect to the number of elementary gates $m$, as is also done in the definition of {\em exact} efficient universality. It follows that any state that can be generated efficiently in the network model, i.e., with ${\rm poly}(m)$ elementary gates, should be approximated with accuracy $\epsilon$ in the measurement-based model with an overhead that scales as $O({\rm poly}(m,\log(1/\epsilon)))$. As regards the scaling with the probability parameter $\delta$, we claim that it should be treated in a way analogous to the accuracy, based on the following observation, in which we see that the two parameters $\delta$ and $\epsilon$ indeed play the same role when we determine the fidelity between the desired output of a computation on an $\epsilon$-approximate $\delta$-stochastic resource and the actual output of the protocol. \begin{obs} Let us consider a universal $\epsilon$-approximate $\delta$-stochastic resource $\Sigma=\{\sigma_i\}_i$, and let $\Lambda_\textrm{LOCC}$, with $\sigma\to\{p_i,\rho_i\}_i$, be the LOCC protocol for some output $\ket{\phi_\textrm{out}}$. We can, almost equivalently, consider $\Lambda_\textrm{LOCC}$ to be performing the transformation $\sigma\to\rho=\sum_i p_i\rho_i$. Computing the fidelity between the desired output state $\ket{\phi_\textrm{out}}$ and $\rho$, one finds the bound $F(\rho,\ket{\phi_\textrm{out}})\geq(1-\epsilon)(1-\delta)$. \end{obs}
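To fill in the step behind this bound (our own sketch, under the simplifying assumption that $D$ is such that $D(\rho_i,P_\textrm{out})\leq\epsilon$ implies $F(\rho_i,\ket{\phi_\textrm{out}})\geq 1-\epsilon$, as holds, e.g., for the trace distance with a pure target): since the target state is pure, the fidelity is linear in its first argument, so that
\begin{equation*}
F(\rho,\ket{\phi_\textrm{out}})=\sum_i p_i F(\rho_i,\ket{\phi_\textrm{out}})\geq\sum_{\epsilon\textrm{-close}} p_i\,(1-\epsilon)\geq(1-\delta)(1-\epsilon),
\end{equation*}
where the first inequality discards the branches that are not $\epsilon$-close and the second uses the defining property $\sum_{\epsilon\textrm{-close}}p_i\geq1-\delta$.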
However, since a counterpart of the Solovay-Kitaev theorem for the success probability $\delta$ has not been found, it is not clear how an efficient scaling of the form ${\rm poly}(\log(1/\delta))$ could be attained in practice. Thus, we provide here a natural definition for efficient approximate and stochastic universal resources \cite{foot3}. \begin{defi}[Efficient approximate stochastic universal resources] Let $|\phi_\textrm{out}\rangle$ be any $N$--qubit quantum state that can be generated efficiently, i.e., with a poly--sized quantum circuit, from a product state in the network model, and let $P_\textrm{out}=\pro{\phi_\textrm{out}}$. A family of states $\Sigma=\{\sigma_i\}_i$ is efficient approximate (with respect to some distance $D$) stochastic universal if, for all $\epsilon, \delta>0$, there exist an $M$-qubit state $\sigma\in\Sigma$, with $M\leq{\mathcal{O}}(\textrm{poly}(N,\frac{1}{\delta},\frac{1}{\epsilon}))$, and an LOCC protocol with output branches $\{p_i,\rho_i\}_i$ satisfying \begin{equation*} \sum_{i:D(\rho_i,P_\textrm{out}\otimes P_0^{\otimes(M-N)})\leq\epsilon}p_i\geq 1-\delta \end{equation*} (with $P_0=\pro{0}$) that can be implemented in ${\mathcal{O}}(\textrm{poly}(N,\frac{1}{\delta},\frac{1}{\epsilon}))$ time, using classical side processing that is bounded in space and time by $\textrm{poly}(N,\frac{1}{\delta},\frac{1}{\epsilon})$. \end{defi} \section{Criteria for universality and no-go results} \label{sec:nogo} In this Section we prove some necessary conditions for $\epsilon$-approximate $\delta$-stochastic universality, based on entanglement properties of the resource. These results can be interpreted as a generalization of those obtained in \cite{universalityI}, even though in some cases they require stronger assumptions on the entanglement monotone used to quantify the entanglement of the resource. In \cite{universalityI} it was noticed that any deterministic exact universal resource $\Sigma$ must be such that, for any extendable entanglement measure $E$, $E(\Sigma)= E^*$. By evaluating $E^*$ for different entanglement measures it was possible to show that some families of states (e.g., W states, one-dimensional systems, etc.) cannot be exact deterministic universal. In the case of $\epsilon$-approximate and $\delta$-stochastic universality, we show that a similar (but, naturally, weaker) result still holds true, where $E^*$ is replaced by $E_\epsilon^*$. As we shall see in the following, though, in these more general cases it is necessary to consider entanglement measures $E$ satisfying some properties in addition to those required of the measures considered in \cite{universalityI}. While we are interested in the most general case of $\epsilon$-approximate and $\delta$-stochastic resources, we shall first treat the case of $\epsilon$-approximate deterministic (i.e., $\epsilon$-approximate and $\delta$-stochastic with $\delta=0$) resources separately. \subsection{$\epsilon$-approximate deterministic universality} \begin{teo}[Criterion for $\epsilon$-approximate deterministic universality] \label{teo:approxdet} Let $E$ be an extendable monotone that is weakly non-increasing under LOCC (as defined in Section \ref{subsec:axioms}), and let $\Sigma=\{\sigma_i\}_i$ be an $\epsilon$-approximate universal resource with respect to some distance $D$. Then $E(\Sigma)\geq E_\epsilon^{*}$. Furthermore, if $\Sigma$ is an approximate universal resource, then \begin{equation} E(\Sigma)\geq\lim_{\epsilon\to0^+}E_\epsilon^{*}. \end{equation} \end{teo} \begin{proof} Let us fix the distance measure $D$, let $\ket{\phi_\textrm{out}}$ be any $N$-qubit state and $P_\textrm{out}=\pro{\phi_\textrm{out}}$. Since $\Sigma$ is $\epsilon$-approximate deterministic universal, there exist an $M$-qubit state $\sigma\in\Sigma$ and an LOCC protocol $\sigma\to\{p_i,\rho_i\}$ such that $D(\rho_i,P_\textrm{out}\otimes P_0^{\otimes(M-N)})\leq\epsilon$ for all $i$, where $P_0=\pro{0}$.
Thus, for all $i$, \begin{equation} \label{eqapprdet1} \begin{split} E(\rho_i)&\geq \min_\rho\{E(\rho)\,|\, D(\rho,P_\textrm{out}\otimes P_0^{\otimes(M-N)})\leq\epsilon\}\\ &=E_\epsilon(\ket{\phi_\textrm{out}}\otimes\ket{0}^{\otimes(M-N)})\geq E_\epsilon(\ket{\phi_\textrm{out}}), \end{split} \end{equation} where in the last inequality we have used that, since $E$ is an extendable monotone, so is $E_\epsilon$ \cite{epsilon}. Since (\ref{eqapprdet1}) holds for all $\rho_i$, and we have assumed that $E$ is weakly non-increasing under LOCC, we have \begin{equation} E(\sigma)\geq\min_i E(\rho_i)\geq E_\epsilon(\ket{\phi_\textrm{out}}). \end{equation} The first part of the theorem then follows from the fact that $\ket{\phi_\textrm{out}}$ is allowed to be any state. The second part follows from the fact that $E_\epsilon$, and thus $E_\epsilon^*$, is monotonically non-increasing with $\epsilon$: if $\Sigma$ is an approximate deterministic universal resource, the previous result must hold true for any value of $\epsilon >0$. \end{proof} As we have mentioned above, computing $E_\epsilon^*$ can in general be a hard task. Nevertheless, we have seen how this is possible at least in some particular cases. Whenever this happens, we can use Theorem \ref{teo:approxdet} to generalize the results obtained in the exact deterministic case to the approximate (or even $\epsilon$-approximate) deterministic one, and show that some classes of states are not universal even in the approximate sense. These include, for example, all graph states whose underlying graph has bounded rank-width, such as tree graphs or cycle graphs (which have bounded $\chi$-width) \cite{maarten,maarten1}. Moreover, Proposition \ref{lemma:geomepsstar} also allows us to show that the family of $W$ states \cite{DurVC00-wstate} is not $\epsilon$-approximate universal for values of $\epsilon$ smaller than some finite $\bar\epsilon$. \begin{es} Let us consider the family $\Psi_W=\{\ket{W_N}\}_N$, where $\ket{W_N}$ is the $N$-qubit $W$ state \begin{equation*} \ket{W_N}=\frac{1}{\sqrt{N}}\sum_{i=1}^N\ket{e_{N,i}}, \end{equation*} and where $\ket{e_{N,i}}$ is defined to be the $N$-qubit computational basis state with a $\ket{1}$ in the $i$-th position and $\ket{0}$ elsewhere. If $D$ is a distance measure that is strictly related to the fidelity (see Definition \ref{strictlyrelated}), then $\Psi_W$ is not an $\epsilon$-approximate universal resource for any $\epsilon<\bar\epsilon$, where $\bar\epsilon$ depends on the choice of distance and is such that $\eta(\bar\epsilon)\simeq 0.1$\%, with $\eta$ defined as in Definition \ref{strictlyrelated}. \end{es} \begin{proof} If $\pi$ is defined as in Section \ref{subsec:exmeasures}, we can consider $\pi(\Psi_W)=\inf_{N}\pi(\ket{W_N})$. Since it can be shown (cf.\ Ref.~\cite{universalityI}) that $\pi(\Psi_W)=1/{\mathrm{e}}$, it follows that \begin{equation*} E_G(\Psi_W)=1-\frac{1}{\mathrm{e}}. \end{equation*} The statement follows immediately from Proposition \ref{lemma:geomepsstar}, since we have that \begin{equation} (E_G)_\epsilon^*\geq 1- 4\eta^{1/3}+3.4\eta^{2/3}>1-1/{\mathrm{e}}=E_G(\Psi_W), \end{equation} for any choice of $\epsilon$ such that $\eta(\epsilon)\lesssim0.1\%$, where $\eta=\eta(\epsilon)$ is such that $D(\rho,\sigma)\leq\epsilon \Rightarrow F(\rho,\sigma)\geq1-\eta(\epsilon)$. \end{proof}
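The quoted figure $\eta(\bar\epsilon)\simeq 0.1\%$ is easy to check numerically; the following one-line Python sketch (ours) finds the value of $\eta$ at which the lower bound of Proposition \ref{lemma:geomepsstar} drops to $E_G(\Psi_W)=1-1/\mathrm{e}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Root of 1 - 4*eta**(1/3) + 3.4*eta**(2/3) = 1 - 1/e, i.e. the largest eta
# for which the bound of the proposition still exceeds E_G of the W family.
f = lambda eta: 4 * eta ** (1 / 3) - 3.4 * eta ** (2 / 3) - 1 / np.e
print(brentq(f, 1e-6, 1e-2))   # ~1.0e-3, i.e. eta ~ 0.1%
\end{verbatim}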
\subsection{$\epsilon$-approximate $\delta$-stochastic universality} Let us now consider the case of $\epsilon$-approximate and $\delta$-stochastic universality. Also in this case we can formulate a criterion which generalizes the results obtained in \cite{universalityI} for exact deterministic universal resources, even though it is necessary to impose further requirements on the entanglement measure from which the criterion is derived. \begin{teo}[Criterion for $\epsilon$-approximate $\delta$-stochastic universality] \label{teo:apprstoc} Let $E$ be an extendable strong monotone, and let $\Sigma=\{\sigma_i\}_i$ be an $\epsilon$-approximate (with respect to a distance $D$) $\delta$-stochastic universal resource. Then \begin{equation} E(\Sigma)\geq(1-\delta)E_\epsilon^*, \end{equation} where $E_\epsilon$ is the $\epsilon$-generalization of $E$ with respect to $D$. \end{teo} \begin{proof} Let us fix the distance measure $D$, let $\ket{\phi_\textrm{out}}$ be an $N$-qubit quantum state and $P_\textrm{out}=\pro{\phi_\textrm{out}}$. Since $\Sigma$ is $\epsilon$-approximate $\delta$-stochastic universal, there exist an $M$-qubit state $\sigma\in\Sigma$ and an LOCC protocol $\sigma\to\{p_i,\rho_i\}$ such that \begin{equation*} \sum_{\epsilon-{\rm close}}p_i\geq(1-\delta), \end{equation*} where $P_0=\pro{0}$ and where the sum is taken over all indices $i$ such that $D(\rho_i,P_\textrm{out}\otimes P_0^{\otimes(M-N)})\leq \epsilon$. We have then \begin{equation*} \begin{split} E(\sigma)&\geq\sum_i p_i E(\rho_i)\geq\sum_{\epsilon-{\rm close}}p_i E(\rho_i)\\ &\geq \sum_{\epsilon-{\rm close}} p_i \min\{E(\rho)\,|\,D(P_\textrm{out}\otimes P_0^{\otimes(M-N)},\rho)\leq\epsilon\}\\ &= \sum_{\epsilon-{\rm close}} p_i E_\epsilon(P_\textrm{out}\otimes P_0^{\otimes(M-N)}) \\ & \geq (1-\delta)E_\epsilon(P_\textrm{out}\otimes P_0^{\otimes(M-N)})\\ &\geq(1-\delta)E_\epsilon(\ket{\phi_\textrm{out}}), \end{split} \end{equation*} where in the first inequality we have used the fact that $E$ is a strong monotone, and the last inequality follows from the fact that $E$ and, consequently, $E_\epsilon$ are extendable monotones. \end{proof} An immediate consequence of this theorem is the following. \begin{cor} \label{cor:apprstoc} Let us consider an extendable strong monotone $E$, and let $E_\epsilon$ be its $\epsilon$-generalization (with respect to some distance $D$) such that $E_\epsilon^*=\infty$. Then any $\epsilon$-approximate $\delta$-stochastic universal family of resources $\Sigma$ is such that \begin{equation*} E(\Sigma)=\infty, \end{equation*} for all fixed values of $\delta<1$. \end{cor} Note that this implies that the families that were shown not to be approximate deterministic universal in the previous subsection are also not approximate $\delta$-stochastic universal, for all values of $\delta<1$. \subsection{Efficiency in the approximate and stochastic case} In the previous paragraphs we have only considered criteria for universality, without taking efficiency into account. We will now see how these criteria can be strengthened to become necessary conditions for efficient $\epsilon$-approximate and $\delta$-stochastic universality. In order to do so, the strategy is analogous to the one followed in \cite{universalityI} in the exact and deterministic case, and is based on the following observation: {\bf Observation.
}{\it A set of states $\Sigma=\{\sigma_i\}_i$ is an efficient $\epsilon$-approximate $\delta$-stochastic resource if and only if all 2-dimensional cluster states $\ket{C_{d\times d}}$ (for all $d$) can be prepared efficiently from the set $\Sigma$ by LOCC with success probability $p\geq(1-\delta)$ and with accuracy $\epsilon$. } \begin{proof} The necessity of the condition is immediate. The sufficiency follows from the fact that a family composed of states each of which is close to a cluster state is $\epsilon$-approximate $\delta$-stochastic universal for any choice of $\epsilon$ and $\delta$ (see Section \ref{subsec:approxresuniv}). \end{proof} As in the exact deterministic case, we see that the \emph{scaling} of entanglement plays a major role when one considers efficiency-related issues. \begin{teo}[Criterion for efficient $\epsilon$-approximate $\delta$-stochastic universality] \label{teo:efficientfam} Let $\Sigma=\{\sigma_i\}_i$ be an $\epsilon$-approximate (with respect to some distance $D$) $\delta$-stochastic universal family, where $\sigma_i$ is a state on $N_i$ qubits. Let us consider an extendable strong entanglement monotone $E$, and let $f_\epsilon$ be a function such that, for every 2-dimensional cluster state $\ket{C_{d\times d}}$ on $N=d^2$ qubits, one has \begin{equation*} E_\epsilon(\ket{C_{d\times d}})\geq f_\epsilon(N), \end{equation*} where $E_\epsilon$ is the $\epsilon$-generalization of $E$ with respect to $D$.\\ If $E(\sigma_i)$ scales as $\log f_\epsilon(N_i)$, then $\Sigma$ cannot be an efficient $\epsilon$-approximate $\delta$-stochastic universal resource. \end{teo} \begin{proof} Since we have assumed that $\Sigma$ is an $\epsilon$-approximate $\delta$-stochastic universal resource, for any $N=d^2$ there must exist a $g(N)$-qubit state $\sigma_{g(N)}\in\Sigma$ and an LOCC protocol $\sigma_{g(N)}\to\{p_i,\rho_i\}_i$ such that \begin{equation*} \sum_{\epsilon-{\rm close}}p_i\geq(1-\delta), \end{equation*} where the sum is taken over the indices $i$ such that $D(\rho_i,\pro{C_{d\times d}}\otimes P_0^{\otimes(g(N)-N)})\leq\epsilon$. From what we have already seen (see the proof of Theorem \ref{teo:apprstoc}), it follows that, necessarily, \begin{equation*} E(\sigma_{g(N)})\geq(1-\delta)E_\epsilon(\ket{C_{d\times d}})\geq(1-\delta)f_\epsilon(N). \end{equation*} In order for $\Sigma$ to be an efficient resource, though, it is necessary that $g(N)$ be at most polynomial in $N$; thus, following an argument parallel to that of Theorem 9 in \cite{universalityI}, we can conclude that $E(\Sigma)$ cannot scale logarithmically with $f_\epsilon(N)$. \end{proof} We emphasize that the family of two-dimensional cluster states in principle does not play a distinguished role in Theorem \ref{teo:efficientfam}, in the sense that it can be replaced (without weakening or strengthening the result) by any arbitrary efficient universal family or, in fact, by any family of states which can themselves be prepared efficiently. \begin{es} Based on the criterion of Theorem~\ref{teo:efficientfam}, states whose Schmidt-rank width has a polylogarithmic scaling in $N$ are neither efficient exact deterministic universal resources, as shown in Ref.~\cite{universalityI}, nor efficient $\epsilon$-approximate $\delta$-stochastic universal resources. These include the cluster state on the 2D stripe $ d \times \log d$, and the cluster state on the faulty 2D lattice with a site occupation probability $p \leq p_{c}$, as discussed later in Sec.~\ref{sec:examples}.
\end{es} \section{Examples of $\epsilon$-approximate and/or $\delta$-stochastic universal resources} \label{sec:examples} In this section we provide examples of families of states that are universal resource states once we relax our requirements for universal MQC to $\epsilon$-approximate and/or $\delta$-stochastic universality. \subsection{2D cluster state with holes as an exact quasi-deterministic resource} Our model is a faulty 2D cluster state, in which qubits are prepared with partial losses (called holes here) on a background 2D square lattice of total size $M = N^2$, where $N$ is the side length, and the remaining qubits are subsequently entangled. The lattice-site occupation probability is denoted by $p_{\rm site}$, and thus the hole probability is given by $1-p_{\rm site}$. We assume here that every hole occurs independently according to this probability, and that the locations of the holes are heralded. It is conceivable, for example in implementations based on optical lattices, that one may be able to check whether an atom is stored at each site before creating the 2D cluster state, and thus without destroying entanglement. For this reason, our faulty 2D cluster state with holes is considered to be {\it a pure graph state} corresponding to a specific configuration of holes, in contrast with the statistical ensemble (classical mixture of several configurations) characterized by $p_{\rm site}$. All statistical statements, such as the percolation phenomenon, are meant to hold almost with certainty (more precisely, with probability approaching unity in the thermodynamic limit) over the possible realizations of the hole configuration for a given $p_{\rm site}$. \begin{es}[\cite{BEF+07}] A family of 2D cluster states with holes (characterized by increasing total size $M$) is an efficient exact quasi-deterministic universal resource if and only if the site occupation probability $p_{\rm site}$ is greater than the percolation threshold $p_c = 0.5927 \ldots$ of the 2D square lattice. \end{es} \begin{proof} The detailed proof is available in Ref.~\cite{BEF+07}, in which the phase transition of the computational power of the 2D cluster state with holes was proved to occur at the above-mentioned threshold $p_c$. See also the preceding work \cite{KRE07} on the use of percolation theory to prepare cluster states by non-deterministic gates. In the supercritical phase ($p_{\rm site} > p_c$), it has been shown that, given a preprocessing by polynomial-time classical computation, one can construct an LOCC conversion which concentrates a perfect 2D cluster state from the faulty cluster state {\it with a constant overhead} (depending only on $p_{\rm site}$). Such an LOCC conversion works almost with certainty (namely, with a success probability approaching unity exponentially in the side length $N$), and when it succeeds it produces the 2D cluster state with fidelity exactly one. Hence the resource is efficient exact quasi-deterministic universal. \end{proof}
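The percolation threshold invoked here is easy to observe numerically. The following small Monte Carlo sketch (our own illustration, not the LOCC construction of Ref.~\cite{BEF+07}) estimates the probability that the occupied sites of a faulty lattice contain a spanning cluster, a probability which rises sharply near $p_c\approx 0.5927$:
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def spans(p_site, n, rng):
    # One sample: do the occupied sites contain a top-to-bottom spanning cluster?
    occupied = rng.random((n, n)) < p_site
    labels, _ = label(occupied)          # 4-connected components
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return bool(top & bottom)

rng = np.random.default_rng(0)
for p in (0.55, 0.59, 0.63):
    rate = np.mean([spans(p, 64, rng) for _ in range(200)])
    print(p, rate)   # spanning probability jumps around p_c ~ 0.5927
\end{verbatim}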
\subsection{Deformed 2D cluster state as an exact quasi-deterministic resource} \label{subsec:deformed} We now give an example of a universal resource which is not a graph state. Let us consider a local deformation of the 2D $N\times N$ cluster state $|C_{N\times N}\rangle$, \begin{equation} |dC_{N\times N}\rangle = \left(\frac{2}{1+\lambda^2}\right)^{N^2/2} \Lambda^{\otimes N^2} |C_{N\times N}\rangle, \end{equation} where $\Lambda={\rm diag}(1,\lambda)$ is the local deformation, parametrized by $\lambda$ such that, without loss of generality, $ 0 \leq \lambda \leq 1$. We call this a deformed 2D cluster state, where the perfect 2D cluster state corresponds to $\lambda = 1$. The deformed 2D cluster state can be seen as a ``noisy'' 2D cluster state resulting probabilistically from the local filtering operation $\Lambda$. Note, however, that the fidelity with the perfect 2D cluster state is $\left(\frac{(1+\lambda)^2}{2(1+\lambda^2)}\right)^M$, i.e., exponentially small in the total number $M = N^2$ of qubits, so that the inverse transformation to the perfect 2D cluster state (of the same size) succeeds only with an exponentially small probability. Nevertheless, we show that {\it one single copy} of such a system can be an efficient resource, regardless of its size $M$, when $\lambda$ lies above a certain threshold. \begin{es} A family of 2D deformed cluster states (with increasing total size $M$) is an efficient exact quasi-deterministic universal resource if the deformation parameter $\lambda$ is larger than $0.6490 \ldots$ . \end{es} \begin{proof} We show that one can convert the deformed cluster state $|dC_{N\times N}\rangle $ by means of LOCC {\it deterministically} into a graph state corresponding to a 2D $N\times N$ square lattice with holes. We apply a local two-outcome measurement, described by the POVM $\{\Lambda^{-1}={\rm diag}(\lambda,1),\ \overline{\Lambda^{-1}}={\rm diag}(\sqrt{1-\lambda^2},0)\}$, at each qubit. If the outcome $\Lambda^{-1}$ occurs, we successfully ``undo'' the effect of the deformation, while if the outcome $\overline{\Lambda^{-1}}$ occurs, the qubit is projected onto $|0\rangle$, which corresponds to the deletion of the vertex with its attached edges (i.e., a hole) in the 2D cluster state. The probability of the successful event, which is independent of the position of the qubit, determines the site occupation probability, \begin{equation} p_{\rm site}=\frac{2\lambda^2}{1+\lambda^2}. \end{equation} It should be noted that this expression is independent of the system size $M$. Setting $p_{\rm site}$ equal to the threshold $p_c$ of the 2D cluster state with holes and solving for $\lambda$ gives $\lambda_c=\sqrt{p_c/(2-p_c)}\approx 0.6490$; it is thus clear that if $\lambda > \lambda_c$ the resulting resource is efficient exact quasi-deterministic universal, and the same is then true for the original deformed 2D cluster state. We remark that $\lambda > 0.6490 \ldots$ is merely a sufficient condition for being efficiently universal. \end{proof} \subsection{A noisy cluster state as an $\epsilon$-approximate deterministic resource} \label{subsec:approxresuniv} \begin{es} Let $\Sigma=\{\sigma_i\}_i$ be a family of mixed states such that, for all $i$, $\sigma_i=(1-p)\pro{C_{N_i}}+p\pro{\tilde C_{N_i}}$, where $\ket{C_{N_i}}$ is the 2-dimensional cluster state on $N_i=i\times i$ qubits, and $\ket{\tilde C_{N_i}}$ is obtained from $\ket{C_{N_i}}$ by applying a phase flip $\sigma_z$ to a single qubit, so that $\ket{\tilde C_{N_i}}$ has eigenvalue $-1$ only for the corresponding stabilizer operator. Note that $p$ is independent of the total system size $N_i$, because of the (unrealistic) assumption that only one phase flip can happen.
Let $D$ be a convex distance measure on the set of states such that $D(\rho,\sigma)\leq 1$ for all $\rho$ and $\sigma$ \cite{foot4}. Then $\Sigma$ is an $\epsilon$-approximate deterministic universal resource, relative to $D$, for $\epsilon\geq p$. \end{es} \begin{proof} Let us consider any output state $\ket{\phi_\textrm{out}}$ and let $P_\textrm{out}$ be the projector onto this state. Since the family of cluster states is exact deterministic universal, there exist a state $\ket{C_{N_i}}$ and an LOCC protocol that, acting on $\ket{C_{N_i}}$, generates the state $\ket{\phi_\textrm{out}}$. This means that there exists an LOCC protocol $\Lambda_\textrm{LOCC}$ such that $\Lambda_\textrm{LOCC}[\pro{C_{N_i}}]=\sum_k p_k P_\textrm{out}^{(A)} \otimes P_k^{(R)}$, where the $P_k=\pro{k}$ are projectors onto orthogonal states of some register $R$. We have thus \begin{equation*} \begin{split} & \Lambda_\textrm{LOCC}[\sigma_i] \\ &=(1-p)\Lambda_\textrm{LOCC}[\pro{C_{N_i}}]+p\Lambda_\textrm{LOCC}[\pro{\tilde C_{N_i}}]\\ &=(1-p)\sum_k p_k P_\textrm{out}^{(A)}\otimes P_k^{(R)}+p\sum_k \tilde p_k \tau^{(A)}_k\otimes P_k^{(R)}, \end{split} \end{equation*} where $\Lambda_\textrm{LOCC}[\pro{\tilde C_{N_i}}]=\sum_k \tilde p_k \tau^{(A)}_k\otimes P_k^{(R)}$. Since both $\ket{C_{N_i}}$ and $\ket{\tilde C_{N_i}}$ are 2-dimensional cluster states on $N_i$ qubits, the probability of each output branch is the same and is given by \cite{1way1long} \begin{equation*} p_k=\tilde p_k=\frac{1}{2^{N_i-m}}, \end{equation*} where $N_i-m$ is the number of qubits that are measured. We can thus write the final state of the system plus the register as \begin{equation*} \Lambda_\textrm{LOCC}[\sigma_i]=\sum_k \frac{1}{2^{N_i-m}} [(1-p)P_\textrm{out}^{(A)}+ p \tau^{(A)}_k]\otimes P_k^{(R)}. \end{equation*} The $k$-th output branch thus yields a state $\rho_k=(1-p)P_\textrm{out}+p\tau_k$ such that \begin{equation*} \begin{split} D(\rho_k,P_\textrm{out})&=D((1-p)P_\textrm{out}+p\tau_k,P_\textrm{out})\\ &\leq(1-p)D(P_\textrm{out},P_\textrm{out})+pD(\tau_k,P_\textrm{out})\\ &\leq p, \end{split} \end{equation*} where the first inequality follows from the convexity of the distance $D$, and the second from the fact that $D(\rho,\sigma)\leq 1$. Since this holds for all output branches, the state $\ket{\phi_\textrm{out}}$ has been produced $\epsilon$-approximately (for any $\epsilon\geq p$) and deterministically. The proof is completed by noticing that the argument holds for any desired output state $\ket{\phi_\textrm{out}}$. \end{proof} We remark that a similar result holds not only for mixtures of two cluster states, but also for states of the form \begin{equation} \sigma_i = (1-p)\pro{C_{N_i}}+p \sum_{\bm k} \lambda_{\bm k} \pro{C_{N_i}^{\bm k}}, \end{equation} where $\sum_{\bm k} \lambda_{\bm k}=1$, ${\bm k}$ is a binary vector of length $N_i$ with $k_j \in \{0,1\}$ corresponding to qubit $j$, and $C_{N_i}^{\bm k}$ is a 2D cluster state obtained from $\ket{C_{N_i}}$ by applying $(\sigma_z^{j})^{k_j}$ to qubit $j$, i.e., $|C_{N_i}^{\bm k}\rangle = \prod_j (\sigma_z^{j})^{k_j} \ket{C_{N_i}}$. Notice that the $\ket{C_{N_i}^{\bm k}}$ form a basis, and hence the noise term can also be the identity. Also the action of local Pauli noise channels on the individual qubits leads to states of this form \cite{graphstatereview}. The key insight is again that the success probability for each branch is the same for all noise terms, leading to a distance $D(\rho_k,P_\textrm{out}) \leq p$ for the output states, independent of the measurement outcomes.
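The convexity bound used in the proof above is easy to check numerically. A minimal sketch (ours, with the trace distance standing in for $D$ and random pure states standing in for $P_\textrm{out}$ and $\tau_k$):
\begin{verbatim}
import numpy as np

def trace_distance(a, b):
    # D(a, b) = (1/2) * sum of absolute eigenvalues of (a - b).
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

def random_pure(rng, d=4):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rng = np.random.default_rng(1)
P, tau = random_pure(rng), random_pure(rng)   # stand-ins for P_out and tau_k
for p in (0.01, 0.05, 0.1):
    rho = (1 - p) * P + p * tau
    print(p, trace_distance(rho, P))          # always <= p, as the bound predicts
\end{verbatim}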
We also mention that a resource similar to those of subsections~\ref{subsec:deformed} and \ref{subsec:approxresuniv} has recently been considered in Ref.~\cite{BBD+08}, through the analysis of the thermal state of the cluster-state Hamiltonian with a local $\sigma_{z}$ field. \subsection{Stability of universal resources} Let us consider a scenario in which one wants to experimentally implement some measurement-based computation. In this case, it is natural to assume that the initial resource cannot be prepared exactly. In the following Theorem~\ref{teo:resources} we analyze this case, giving a proof of the stability of universal resources under initial perturbations, and determining an expression for the worsening of the probability and accuracy parameters as a function of the error in the initial preparation. Furthermore, this also formally proves (taking into account the effect on both parameters $\epsilon$ and $\delta$) the intuitive idea that the computation on the approximate states can take place by means of the same LOCC protocol, so that exact knowledge of the state is not necessary. This also implies that if computation on the original states was efficient, then it remains so on the new states. Notice, however, that we do not consider here the case in which the LOCC protocol itself is faulty. \begin{teo} \label{teo:resources} Let $D$ be a convex, bounded distance measure strictly related to the fidelity, such that the maximum distance between any two states is unity \cite{foot5}. Let us consider an (efficient) $\epsilon$-approximate (with respect to $D$) $\delta$-stochastic universal resource $\Gamma=\{\gamma_i\}_i$, with $\delta+\epsilon<1$. Moreover, let $\Sigma=\{\sigma_j\}_j$ be a family of states such that, for any $\gamma\in\Gamma$, there exists a state $\sigma\in\Sigma$ with $D(\sigma,\gamma)\leq\mu$ (for some $\mu\leq 1-\delta-\epsilon$). Then $\Sigma$ is an (efficient) $\epsilon'$-approximate $\delta'$-stochastic universal resource for any choice of $\epsilon'$ and $\delta'$ such that \begin{equation*} \delta' \eta(\epsilon')\geq\eta(\epsilon+\delta+\mu), \end{equation*} where $\eta(\epsilon)$ is such that $D(\rho,\sigma)\leq\epsilon\Rightarrow F(\rho,\sigma)\geq 1-\eta(\epsilon)$ \cite{foot6}. \end{teo} Its proof is given in Appendix \ref{appendixB}. Note that, in general, $\delta'$ and $\epsilon'$ will have to be (polynomially) larger than $\delta$ and $\epsilon$. If $D$ is the trace distance, then we have $\eta(\epsilon)\geq \epsilon$, thus obtaining that one can always find $\epsilon'$ and $\delta'$ satisfying the condition \begin{equation*} \delta'\epsilon'\geq \epsilon+\delta+\mu. \end{equation*} Note that the condition we have found implies that $\delta'$ and $\epsilon'$ must be larger than, respectively, $\delta$ and $\epsilon$. More importantly, though, Theorem \ref{teo:resources} implies that, whenever $\Gamma$ is an (efficient) deterministic exact universal resource, one can choose any $\delta '$ and $\epsilon '$ such that $\delta '\epsilon '\geq\mu$. We have thus the following \begin{cor} Let $\Sigma=\{\sigma_i\}_i$ be an (efficient) exact deterministic universal resource and $D$ be any distance measure strictly related to the fidelity.
Then, for every $\delta, \epsilon>0$ there exists a $\mu>0$ such that any family $\tilde\Sigma=\{\tilde\sigma_i\}_i$ with $D(\sigma_i,\tilde\sigma_i)\leq \mu$ for all $i$ is an (efficient) $\epsilon$-approximate (with respect to $D$) $\delta$-stochastic universal resource. Furthermore, if the output $\ket{\phi_\textrm{out}}$ is obtained by applying an LOCC protocol $\Lambda_\textrm{LOCC}$ to a state $\sigma_i\in\Sigma$, then the same protocol can be used on the corresponding state $\tilde\sigma_i\in\tilde\Sigma$ to produce an output that, with probability $p\geq(1-\delta)$, is within distance $\epsilon$ of $\ket{\phi_\textrm{out}}$. \end{cor} This implies, in particular, that any family composed of states that are close enough to, e.g., a cluster state is $\epsilon$-approximate $\delta$-stochastic universal for some non-trivial choice of $\epsilon$ and $\delta$. \section{Conclusions and outlook} \label{sec:conclusions} In this paper we have studied the issues of approximate and stochastic universality in measurement-based quantum computation. We have defined the concepts of approximate and stochastic universality, and shown that these concepts are not equivalent to each other by providing examples of resources that are approximate deterministic universal, or exact stochastic universal. Generalizing the results obtained in \cite{universalityI}, we have presented entanglement-based criteria that must be satisfied by any approximate (stochastic) universal resource. Moreover, we have shown that such criteria are strong enough to allow us to discard some well-known families of states as non-universal, including, e.g., GHZ states, W states and 1D cluster states. The issue of efficiency has also been discussed, and we have shown how the previous results can be strengthened to include the requirement that a universal family of resources also allows for efficient computation. We found that the entanglement needs to grow sufficiently fast for any approximate stochastic universal resource. On the other hand, we have provided examples of resources that are approximate and/or stochastic universal. In particular, we have studied the case of a family of states that is only an approximation of some ($\epsilon$-approximate and/or $\delta$-stochastic) universal family. We have given a formal proof of the fact that such a family is always $\epsilon'$-approximate and $\delta'$-stochastic universal, and found an explicit bound for the scaling of the parameters $\epsilon'$ and $\delta'$ as functions of the original parameters $\epsilon$ and $\delta$ and of the degree of approximation of the family itself. The proof also formalizes the intuitive idea that the computation on the approximate family can be performed by means of the same protocol that was devised for the exact family. In particular, this means that if the initial resource was efficient universal, then so is the approximate one. While we have found that basically any well-behaved entanglement monotone can be used to obtain criteria for approximate and stochastic universality, one of the quantities considered in \cite{universalityI}, the entropic entanglement width, does not fall under this category, as it is not an entanglement monotone (in the terminology of \cite{universalityI}, more precisely, not a type-I monotone). For this measure it is not clear whether the results obtained for the exact, deterministic case can be lifted to the approximate, stochastic case.
This affects in particular results about the non-universality of states with a bounded or logarithmically growing block-wise entanglement, such as ground states of strongly correlated 1D quantum systems. We have also not touched upon the issue of encoded universality \cite{universalityI}, where the desired quantum states need only be generated in an encoded form. Also in this case it should be possible to obtain entanglement-based criteria for approximate stochastic encoded universality, using the methods and techniques developed in this paper. Finally, we would like to comment on the relation to the results presented in \cite{GFE08,BMW08}, where it is shown that a randomly chosen generic pure state (in other words, the majority of all states) is no more useful as a resource for measurement-based quantum computation than a string of random {\it classical} bits, despite the fact that the former is colloquially often said to be almost maximally entangled. Particularly relevant to the results presented in this paper is the fact (proved in Ref.~\cite{GFE08}) that a family of states $|\psi_{M}\rangle$ on $M$ qubits whose geometric measure scales as $E_G(|\psi_{M}\rangle) \geq 1 - 2^{-M + {\mathcal O}(\log_2 M)}$ cannot provide a super-polynomial speed-up over classical computation with the aid of randomness, and is thus conceivably not a universal resource (unless the class BPP of decision problems solvable by a probabilistic Turing machine in polynomial time with bounded error coincides with the class BQP of decision problems solvable by a quantum computer in polynomial time with bounded error). Note that the scaling of the geometric measure here is required to be even faster (by a constant factor in front of $M$ in the exponent) than that of the cluster state in any spatial dimension, $E_G(|C_M\rangle) = 1 - 2^{-\lfloor M/2 \rfloor}$ \cite{MMV07}; thus these states $|\psi_{M}\rangle$ can be considered highly entangled with respect to this measure (in the sense that such a family would not fail the criterion for universality based on the geometric measure). There are two kinds of examples in Ref.~\cite{GFE08} which are shown to have such a scaling of the geometric measure. The first example is given by generic Haar-random pure states. It is not clear to us whether they also pass the necessary conditions illustrated in the previous sections when one considers other entanglement measures, although it is possible. However, it should be noted that these states already inherit an ``unphysical'' complexity as resource states, since it might not be possible to prepare them in a time polynomial in $M$. The second, efficiently preparable, example is given by a tree tensor network state. While in \cite{GFE08} it is shown that these states indeed have a high geometric measure, it should be noted that their Schmidt-rank width is bounded, without reaching the maximum (because of the constant tree width \cite{foot7}). We could therefore interpret their uselessness (as a universal resource for MQC) as originating from being {\it too little} entangled in terms of the Schmidt-rank width: the family would in fact fail the criteria illustrated in the previous sections when one bases them on this entanglement measure. It would be interesting to see whether it is possible to find necessary criteria, such as the ones shown in this work, that allow us to discard random pure states (and pseudo-random pure states which are efficiently preparable, in case they are not universal either, cf.~\cite{Low09}) as non-universal.
It is possible that such states already fail the criteria for some existing entanglement measure (other than the geometric measure), but it might prove necessary to identify a new one in order to obtain this result. We note that randomness in the description of a resource does not necessarily taint its usefulness immediately, as can be seen, for instance, from our Example~3. \section*{Acknowledgements} C.M. and M.P. thank M. Bremner and B. Kraus for discussions. A.M. acknowledges helpful discussions about Refs.~\cite{GFE08,BMW08} with D. Gross, J. Eisert, S. Flammia, and Z. Ji. We acknowledge support by the Austrian Science Fund (FWF), in particular through the Lise Meitner Program (M.P.), and the EU (OLAQUI, SCALA, QICS). The research at the Perimeter Institute is supported by the Government of Canada through Industry Canada and by Ontario-MRI.
\section{Introduction} Coverage is a widely studied research area and one of the most important problems in wireless sensor networks (WSNs) for monitoring, surveillance, etc. Based on the subject to be covered by a set of sensors, it is classified into three categories: point coverage, area coverage and barrier coverage. In point coverage \cite{Gu09,LU05}, a set of points is covered, whereas in area coverage \cite{CardeiM05,Saravi09,WangCP03}, all points inside a bounded area are covered. In barrier coverage, barriers of sensors provide an appropriate coverage model for applications such as detecting intruders when they cross borders, or detecting the spread of pollutants or chemicals when sensors are deployed around critical regions. In most of the barrier coverage literature \cite{Chen07,Kumar05,Liu08,Saipulla09,YangQ09}, static sensors are used for continuously monitoring borders or boundaries. But there are applications \cite{Du2010} where time-variant coverage of every point on a boundary is sufficient instead of continuous monitoring. For these kinds of applications, deployment of static sensors on the boundary is not cost effective in terms of resource utilization when it suffices to monitor every point on the boundary periodically. Time-variant coverage can be achieved efficiently by utilizing a smaller number of resources, i.e., mobile sensors with an appropriate movement strategy. The cost of this solution lies in the mobility and storage capacity of the mobile sensors. This type of coverage problem, where time-variant coverage is sufficient for periodic patrol inspections, is termed {\it sweep coverage}. In the point sweep coverage problem \cite{Hung10,Du2010,Gorain2014,Li11,Xi09}, a given set of discrete points is monitored by a set of mobile sensors at least once within a given period of time. This time period is termed the {\it sweep period}. The primary motivation of these sweep coverage problems is to utilize the minimum number of mobile sensors for achieving the goal. But finding the minimum number of mobile sensors for sweep coverage of a given set of discrete points on a plane is NP-hard \cite{Li11}. The area sweep coverage problem was introduced in \cite{Gorain2014}, where it is proved that the problem is NP-complete. In this article, we formulate different variations of barrier sweep coverage problems for covering finite length curves on a plane. \subsection{Contribution} In this paper, our contributions on sweep coverage problems are as follows: \begin{itemize} \item We introduce barrier sweep coverage problems for covering finite length curves on a plane. We solve the problem optimally with the minimum number of mobile sensors for a finite length curve. \item We define an energy restricted barrier sweep coverage problem and propose a $\frac{13}{3}$-approximation algorithm for a finite length curve. \item A $2$-approximation algorithm is proposed for solving the sweep coverage problem for multiple curves in a special case where each mobile sensor visits all points of each curve. For the general version of the problem, a 5-approximation algorithm is proposed. \item We formulate a data gathering problem for a set of data mules and propose a $3$-approximation algorithm to solve this NP-hard problem. \item The performance of the proposed algorithms is investigated through simulation for multiple finite length curves. \end{itemize} \subsection{Related Work} The concept of sweep coverage initially came from the context of robotics \cite{Batalin02}.
In \cite{Batalin02}, the authors considered a dynamic sensor coverage problem using mobile robots in the absence of global localization information. The sensors are mobilized by mounting them on the mobile robots. The robots explore an unexplored area by deploying small communication beacons, and decide their directions of movement during the exploration using local markers associated with the beacons. Recently, several works \cite{Hung10,Du2010,Li11,ShuCZZ14,Chao11,Xi09,Yang13} on sweep coverage have been proposed in WSNs. Most of these works focus on designing suitable heuristics. In \cite{Xi09}, the authors considered a network consisting of static and mobile sensors, and studied two different problems. In the first problem, the objective is to minimize the number of mobile sensors that can guarantee data download from every static sensor in a given time period with high probability. In the second problem, the objective is to guide the mobile sensors to move towards the static sensors without any centralized control. In the first heuristic ({\it MinExpand}), mobile sensors move along the same path in every time period. In the second heuristic ({\it OSweep}), the mobile sensors move along different paths in different time periods. Hung et al. \cite{Hung10} considered a sweep coverage problem where a set of points of interest (PoIs) is deployed nonuniformly over an area of interest. The area is divided into smaller sub-areas. Mobile sensors are then deployed over the sub-areas, one for each, to guarantee sweep coverage of all the PoIs in the respective sub-areas. Due to the unequal numbers of PoIs in different sub-areas, the sweep periods (patrolling times) of the mobile sensors may not be the same. The objective of the proposed heuristic is to make the patrolling time approximately the same for all mobile sensors. In \cite{ShuCZZ14}, the authors considered a problem where the sweep periods of the PoIs are different. A scheme based on the periodic vehicle routing problem is proposed to minimize the number of unnecessary visits to a PoI by a mobile sensor. To extend the lifetime of sweep coverage, Yang et al. \cite{Yang13} utilized base stations as power sources for periodically refueling or replacing the batteries of the mobile sensors, and proposed two heuristics, for one base station and for multiple base stations, respectively. The hardness of the sweep coverage problem was studied theoretically by Li et al. \cite{Li11}. The authors proved that finding the minimum number of mobile sensors to sweep cover a set of PoIs is NP-hard, and that the problem cannot be approximated within a factor of 2, unless P=NP. A $(2+\epsilon)$-approximation and a 3-approximation algorithm are proposed to solve the problem. The authors also remarked on the impossibility of designing a distributed local algorithm that guarantees sweep coverage of all PoIs, i.e., a mobile sensor cannot locally determine whether all PoIs are sweep covered without global information. But there is a flaw in the approximation algorithms: we, in \cite{Gorainsss14}, remarked on the flaw in the 3-approximation algorithm of \cite{Li11} and proposed a corrected algorithm with the same approximation factor that guarantees sweep coverage of the given set of PoIs. For the case where the sweep periods of the PoIs are different, we proposed an $O(\log \rho)$-approximation algorithm, where $\rho$ is the ratio of the maximum and minimum sweep periods. An inapproximability result is also established for the case where the speeds of the mobile sensors are not necessarily the same.
In \cite{Gorain2014}, we proposed a 2-approximation algorithm for a special case of the point sweep coverage problem, which is the best possible approximation factor according to the result in \cite{Li11}. We also proposed a distributed 2-approximation algorithm for the point sweep coverage problem, where static sensors are deployed at the PoIs and the locations of the static sensors are considered as the PoIs. The static sensors communicate among themselves through messages to find the initial deployment locations of the mobile sensors. The area sweep coverage problem was formulated and proved to be NP-complete. A $2\sqrt{2}$-approximation algorithm was proposed to solve the problem for a square region, and the area sweep coverage problem for an arbitrary bounded region was also investigated in that paper. An energy efficient sweep coverage problem is proposed in \cite{Gorain015}, where a set of static and mobile sensors is used for sweep coverage of a set of discrete points. The objective is to minimize the total energy consumption per unit time by the set of sensors while guaranteeing the required sweep coverage. An 8-approximation algorithm is proposed to solve the problem. There are patrolling problems \cite{Czyzowicz2011,Dumitrescu2014,Kawamura2014} similar to sweep coverage but with different objectives. The objective of these problems is to minimize the time between two consecutive visits to any point while monitoring a given road network or the boundary of a region by a set of mobile agents having different speeds. In \cite{Czyzowicz2011}, the authors proposed two strategies, a partition-based strategy and a cycle-based strategy, to obtain movement schedules for the mobile agents. The authors proved that the partition strategy obtains an optimal solution when the number of agents is at most 2, and the cycle strategy when the number of agents is at most 4. Kawamura et al. \cite{Kawamura2014} proved that the partition strategy proposed in \cite{Czyzowicz2011} achieves an optimal solution when the number of agents is at most 3. The rest of the paper is organized as follows. The problem definitions are given in Section \ref{sec:probDefination}. Barrier sweep coverage for a single curve and for multiple curves is discussed in Section \ref{sec:single} and Section \ref{sec:multiple}, respectively. A data gathering problem with data mules is formulated and discussed in Section \ref{sec:MDMDG}. Finally, conclusions and future work are given in Section \ref{sec:concl}. \section{Problem Definitions}\label{sec:probDefination} Let ${\mathcal C}$ be a finite length curve on a 2D plane. ${\mathcal C}$ is said to be {\it covered} by a set of sensors if and only if each point on ${\mathcal C}$ is covered by at least one sensor. Based on the above coverage metric, the definitions of the barrier sweep coverage problems are given below. \begin{definition} \rm({\bf $t$-barrier sweep coverage}\rm) Let ${\mathcal C}$ be any finite length curve on a 2D plane and ${\mathcal S}=\{s_1,s_2,\cdots,s_m\}$ be a set of mobile sensors. ${\mathcal C}$ is said to be $t$-barrier sweep covered if and only if each point of ${\mathcal C}$ is visited by at least one mobile sensor in every time period $t$. \end{definition} \begin{definition} \rm({\bf Barrier sweep coverage problem for a single finite length curve}\rm) Let ${\mathcal C}$ be a finite length curve and ${\mathcal S}=\{s_1,s_2,\cdots,s_m\}$ be a set of mobile sensors.
For given $t>0$ and $v>0$, find the minimum number of mobile sensors with uniform speed $v$ such that all points of ${\mathcal C}$ are visited by all mobile sensors and ${\mathcal C}$ is $t$-barrier sweep covered. \end{definition} \begin{definition} \rm({\bf Barrier sweep coverage problem for multiple finite length curves \rm(BSCMC\rm)}\rm) Let ${\mathcal X}=\{{\mathcal C}_1,{\mathcal C}_2,\cdots,{\mathcal C}_n\}$ be a set of $n$ finite length curves and ${\mathcal S}=\{s_1,s_2,\cdots,s_m\}$ be a set of mobile sensors. For given $t>0$ and $v>0$, find the minimum number of mobile sensors with uniform speed $v$ such that each ${\mathcal C}_i$, for $i=1,2,\cdots, n$, is $t$-barrier sweep covered. \end{definition} In general, sensors are equipped with limited battery power. In order to continue sweep coverage for a long time, each mobile sensor must visit an energy source to recharge or replace its battery. Let $T$ be the maximum travel time of a mobile sensor that starts with a fully charged battery and maintains uniform speed $v$ until its battery power is exhausted. Let $e$ be an energy source on the plane; every mobile sensor can recharge or replace its battery by visiting $e$. We define the energy restricted barrier sweep coverage problem for a finite length curve ${\mathcal C}$ as follows. \begin{definition} \rm({\bf Energy restricted barrier sweep coverage problem}\rm) Given a finite length curve ${\mathcal C}$, an energy source $e$ and $v,t,T>0$, find the minimum number of mobile sensors with uniform speed $v$ such that each point of ${\mathcal C}$ is visited by at least one mobile sensor in every time period $t$ and each mobile sensor visits $e$ at least once in every time period $T$. \end{definition} \section{Barrier sweep coverage for a finite length curve}\label{sec:single} In this section, we first propose a solution for finding the optimal number of mobile sensors to sweep cover a finite length curve. Later, we give an approximate solution for the energy restricted barrier sweep coverage problem for a finite length curve. \subsection{Optimal solution for a finite length curve} The barrier sweep coverage problem for a finite length curve can be solved optimally using the following strategy. Let $|{\mathcal C}|$ be the length of the curve ${\mathcal C}$. If ${\mathcal C}$ is a closed curve, then partition ${\mathcal C}$ into $\left\lceil \frac{|{\mathcal C}|}{vt}\right\rceil$ equal parts, each of length at most $vt$, and deploy $\left\lceil \frac{|{\mathcal C}|}{vt}\right\rceil$ mobile sensors, one at each of the partitioning points. Each mobile sensor then starts moving at the same time along ${\mathcal C}$ in the same direction, which ensures $t$-sweep coverage of ${\mathcal C}$. If ${\mathcal C}$ is an open curve, then join the end points of ${\mathcal C}$ to make it closed and apply the same strategy as above.
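This optimal strategy reduces to a one-line computation. The following minimal sketch (an illustration with hypothetical parameter values, not the implementation used in our simulations) computes the number of sensors and their spacing for a closed curve:
\begin{verbatim}
// Minimal sketch of the optimal single-curve solution
// (illustrative values, not the simulation code).
#include <cmath>
#include <cstdio>

int main() {
    double lenC = 400.0;  // |C|: length of the closed curve (hypothetical)
    double v = 1.0;       // uniform sensor speed
    double t = 50.0;      // required sweep period
    int m = (int)std::ceil(lenC / (v * t));  // optimal number of sensors
    // Sensors are spaced at most v*t apart and all move in one direction.
    std::printf("deploy %d sensors, spaced at most %.1f apart along C\n",
                m, v * t);
    return 0;
}
\end{verbatim}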
\subsection{Energy restricted barrier sweep coverage for a finite length curve} In this section we propose an approximate solution for the energy restricted barrier sweep coverage problem. The approximation factor of the proposed algorithm is $\frac{13}{3}$, though it is not known whether the problem is NP-hard. Let $e$ be the energy source on the plane. To make the problem feasible, we assume that the distance of any point on ${\mathcal C}$ from $e$ is less than $\frac{vT}{2}$. We define an {\it $e$-tour}, denoted $\{e,p,q,e\}$, as a tour that starts from $e$, visits $arc(pq)$ of ${\mathcal C}$ continuously, and then ends at $e$, such that the total length of the tour is at most $vT$, where $p$ and $q$ are two points on ${\mathcal C}$. The objective of our technique is to find a tour through $e$ and ${\mathcal C}$ that is a concatenation of multiple $e$-tours. Let $d(a,b)$ be the Euclidean distance between two points $a$ and $b$, and let $d_c(p,q)$ be the distance between two points $p$ and $q$ of ${\mathcal C}$ along ${\mathcal C}$ in the clockwise direction; thus $d_c(p,q)$ is equal to the length of $arc(pq)$. \begin{figure}[h] \psfrag{x}{$i_1$} \psfrag{y}{$i_2$} \psfrag{z}{$i_3$} \psfrag{w}{$i_4$} \psfrag{e}{$e$} \centering \includegraphics[width=0.4\textwidth]{e-tour.eps} \caption{Selection of the $e$-tours $\{e,i_1,i_2,e\}$, $\{e,i_2,i_3,e\}$, $\{e,i_3,i_4,e\},\cdots$} \label{fig:E-tour} \end{figure} Let us choose any point $i_1$ on a finite length closed curve ${\mathcal C}$. Find a point $i_2$ on ${\mathcal C}$ in the clockwise direction (Ref. Fig. \ref{fig:E-tour}) such that $d_c(i_1,i_2)=\frac{vT}{2}-d(e,i_1)$. Then $T_1=\{e,i_1,i_2,e\}$ is an $e$-tour, since the length of $T_1$ is $d(e,i_1)+d_c(i_1,i_2)+d(i_2,e) \le d(e,i_1)+\frac{vT}{2}-d(e,i_1)+\frac{vT}{2} = vT$, as shown in Fig. \ref{fig:E-tour}. The next $e$-tour is selected as $\{e,i_2,i_3,e\}$, where $i_3$ is a point on ${\mathcal C}$ in the clockwise direction from $i_2$ such that $d_c(i_2,i_3)=\frac{vT}{2}-d(e,i_2)$. Once the second tour is selected, we check whether the combination of the previous tour and the current tour, i.e., $\{e,i_1,i_3,e\}$, forms an $e$-tour or not. If the combined tour $\{e,i_1,i_3,e\}$ is a valid $e$-tour, then the previous tour $\{e,i_1,i_2,e\}$ is updated to $\{e,i_1,i_3,e\}$ and we proceed to select the next $e$-tour. This updating of the previous $e$-tour with the current one continues until the combined tour violates the length constraint of an $e$-tour, i.e., the length of the combined tour exceeds $vT$. In general, after computing an $e$-tour $T_j=\{e,i_j,i_{j+1},e\}$, we select the next point $i_{j+2}$ on ${\mathcal C}$ such that $d_c(i_{j+1},i_{j+2})=\frac{vT}{2}-d(e,i_{j+1})$. If $\{e,i_{j},i_{j+2},e\}$ is a valid $e$-tour, we update the tour $T_j$ as $T_j=\{e,i_{j},i_{j+2},e\}$; otherwise the tour $T_{j+1}$ is selected as $T_{j+1}=\{e,i_{j+1},i_{j+2},e\}$. Continuing in this way, we decompose ${\mathcal C}$ into multiple $e$-tours such that every point on ${\mathcal C}$ is included in some $e$-tour. If ${\mathcal C}$ is an open curve, we use the above technique to decompose ${\mathcal C}$ into multiple $e$-tours, considering one end point of ${\mathcal C}$ as $i_1$ and continuing until the other end point. Note that, by the above construction of the $e$-tours, the length of the combined tour of two consecutive $e$-tours is always greater than $vT$. Let $APPRX$ be the total tour obtained by concatenating the $e$-tours $T_1,T_2,T_3,\cdots$ one after another in the order obtained by the above technique. For example, $T_1=\{e,i_1,i_2,e\}$, $T_2=\{e,i_2,i_3,e\}$, $T_3=\{e,i_3,i_4,e\},\cdots$ are $e$-tours as shown in Fig.
\ref{fig:E-tour}, and the concatenation of these tours is $T_1 \cdot T_2 \cdot T_3 \cdot \cdots$ = $\{e,i_1,i_2,e\} \cdot \{e,i_2,i_3,e\} \cdot \{e,i_3,i_4,e\} \cdot \cdots$ = $\{e,i_1,i_2,e, i_2,i_3,e,i_3,i_4,e,\cdots \}$, where `$\cdot$' denotes the concatenation operation on $e$-tours. Let $|APPRX|$ be the length of $APPRX$. Divide $APPRX$ into equal parts of length $vt$ and deploy one mobile sensor at each of the partitioning points. The mobile sensors then start moving along $APPRX$ in the same direction, which ensures sweep coverage of ${\mathcal C}$ for an arbitrarily long time. Algorithm \ref{alg:Energy} ({\textsc{EnergyRestrictedBSC}}), given below for energy restricted barrier sweep coverage, is stated for a closed curve; the same algorithm is applicable to an open curve as explained in the previous paragraph. \begin{algorithm}[] \caption{\textsc{EnergyRestrictedBSC}} \begin{algorithmic}[1] \STATE{Choose any point $i_1$ on ${\mathcal C}$.} \STATE{${\mathcal C}'={\mathcal C}$, $n'=1$.} \WHILE{${\mathcal C}' \ne \phi$} \IF{$d(e,i_{n'})+ | {\mathcal C}'| \le \frac{vT}{2}$} \STATE{$h=i_1$} \ELSE \STATE{Select a point $h$ on ${\mathcal C}$ in the clockwise direction from $i_{n'}$ such that $d_c(i_{n'},h)= \frac{vT}{2}-d(e,i_{n'})$} \ENDIF \IF{$n' \ne 1$ and $d(e,i_{{n'}-1})+d_c(i_{{n'}-1},h)+d(e,h) \le vT$} \STATE{${\mathcal C}'={\mathcal C}'\setminus arc(i_{n'}h)$} \STATE{$i_{n'}=h$, $T_{{n'}-1}=\{e,i_{{n'}-1},i_{n'},e\}$} \ELSE \label{step:violation} \STATE{ $i_{{n'}+1}=h$, $T_{n'}$ = $\{e,i_{n'},i_{{n'}+1},e\}$} \STATE{${\mathcal C}'={\mathcal C}'\setminus arc(i_{n'}i_{{n'}+1})$} \STATE{$n'=n'+1$} \ENDIF \ENDWHILE \STATE{$APPRX=T_1\cdot T_2 \cdot T_3 \cdot \cdots \cdot T_{n'}$, \\ where `$\cdot$' denotes the concatenation operation.} \STATE{Divide $APPRX$ into equal parts of length $vt$ and deploy one mobile sensor at each of the partitioning points.} \STATE{All mobile sensors start moving along $APPRX$ at the same time in the same direction.} \end{algorithmic}\label{alg:Energy} \end{algorithm} \begin{theorem} According to Algorithm \ref{alg:Energy}, each mobile sensor visits $e$ at least once in every time period $T$, and each point on ${\mathcal C}$ is visited by at least one mobile sensor in every time period $t$. \end{theorem} \begin{proof} According to our proposed Algorithm \ref{alg:Energy}, the mobile sensors move along the tour $APPRX$. As the length of each $e$-tour is at most $vT$, a mobile sensor travels at most a distance $vT$ between two consecutive visits of $e$. Therefore, each mobile sensor visits $e$ at least once in every time period $T$. Again, the mobile sensors are deployed at the partitioning points of $APPRX$, and two consecutive partitioning points are a distance $vt$ apart. The relative distance between any two consecutive mobile sensors remains $vt$ at all times, as they move with the same speed $v$ in the same direction. Therefore, any point on ${\mathcal C}$ that is visited at time $t_0$ (say) by a mobile sensor is visited again by time $t_0+t$ by the next mobile sensor following it. \qed\end{proof} To analyze the approximation factor with respect to the optimal solution, we consider some special points on ${\mathcal C}$ as follows.
Let $i_p^1$ and $i_p^2$ be two special points on $arc(i_pi_{p+1})$ of ${\mathcal C}$ that partition $arc(i_pi_{p+1})$ into three equal parts, i.e., the lengths of $arc(i_pi_p^1)$, $arc(i_p^1i_p^2)$, and $arc(i_p^2i_{p+1})$ are equal. We define the set of points ${\mathcal I}=\{i_j, i_j^1, i_j^2 \,|\, j=1~\text{to}~n' \}$. The following two lemmas give an upper bound on the length of the tour $APPRX$ and a lower bound on the length of the optimal tour, respectively. \begin{lemma}\label{lem:upperb} $$|APPRX| \le \frac{1}{3} \left(2\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right)+5|{\mathcal C}|\right).$$ \end{lemma} \begin{proof} According to Algorithm \ref{alg:Energy}, the total length of the tour $APPRX$ is \begin{eqnarray}\label{Eq:eq1} |APPRX| & = & d(e,i_1)+d_c(i_1,i_2)+d(i_2,e)+d(e,i_2) \nonumber \\ & & +d_c(i_2,i_3)+d(i_3,e)+ \cdots + d(e,i_1)\nonumber \\ & = & |{\mathcal C}| +2\sum_j d(e,i_j). \end{eqnarray} By the triangle inequality, \begin{eqnarray} d(e,i_j) & \le & d(e,i_j^1)+d_c(i_j,i_j^1)~ {\rm and} \nonumber \\ d(e,i_j) & \le & d(e,i_j^2)+d_c(i_j,i_j^2). \nonumber \end{eqnarray} Therefore, \begin{eqnarray}\label{Eq:neq1} |APPRX| & \le & |{\mathcal C}| + 2\sum_j\left(d(e,i_j^1)+d_c(i_j,i_j^1)\right) \end{eqnarray} \begin{eqnarray}\label{Eq:neq2} |APPRX| & \le & |{\mathcal C}| + 2\sum_j\left(d(e,i_j^2)+d_c(i_j,i_j^2)\right) \end{eqnarray} From (\ref{Eq:eq1}), (\ref{Eq:neq1}) and (\ref{Eq:neq2}), we can write \begin{eqnarray}\label{Eq:neq3} 3|APPRX| & \le & 3|{\mathcal C}|+ 2\sum_j d(e,i_j) + 2\sum_j\left(d(e,i_j^1)+d_c(i_j,i_j^1)\right) \nonumber \\ & & + 2\sum_j\left(d(e,i_j^2)+d_c(i_j,i_j^2)\right) \end{eqnarray} As the points $i_j^1$ and $i_j^2$ divide $arc(i_ji_{j+1})$ into three equal parts, $\displaystyle \sum_j d_c(i_j,i_j^1)=\frac{|{\mathcal C}|}{3}$ and $\displaystyle \sum_j d_c(i_j,i_j^2)=\frac{2|{\mathcal C}|}{3}$. Using these two results, inequality (\ref{Eq:neq3}) can be written as \begin{eqnarray} 3|APPRX| & \le & 3|{\mathcal C}|+ 2\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right) + \frac{2|{\mathcal C}|}{3}+\frac{4|{\mathcal C}|}{3} \nonumber \\ |APPRX| & \le & \frac{2}{3}\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right) + \frac{5}{3}|{\mathcal C}|. \nonumber \end{eqnarray} \qed\end{proof} \begin{figure}[h] \psfrag{x}{\hspace{-.2cm}$p$} \psfrag{y}{$i_j$} \psfrag{z}{$i_{j+1}$} \psfrag{w}{\hspace{-.5cm}$i_{j-1}$} \psfrag{p}{$q$} \psfrag{q}{$i_{j+2}$} \psfrag{e}{\vspace{1cm}$e$} \centering \includegraphics[width=0.4\textwidth]{opt_tour.eps} \caption{One $e$-tour $OPT_l=\{e,p,q,e\}$ of the optimal tour $OPT$} \label{fig:opt-tour} \end{figure} \begin{lemma} \label{lem:lowerb} Let $OPT$ be the optimal tour of the mobile sensors and $|OPT|$ be the length of $OPT$. Then $$|OPT| \ge \max\left\{\frac{1}{4}\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right), |{\mathcal C}| \right\}.$$ \end{lemma} \begin{proof} Let $OPT_l=\{e,p,q,e\}$ be an $e$-tour of the optimal tour $OPT$ (Ref. Fig. \ref{fig:opt-tour}). We claim that at most one arc $arc(i_ji_{j+1})$, for some $j$, that is part of an $e$-tour computed by Algorithm \ref{alg:Energy} can be completely contained in $arc(pq)$, where $arc(pq)$ is a part of $OPT_l$. To prove this, assume on the contrary that two such arcs $arc(i_{j-1}i_j)$ and $arc(i_ji_{j+1})$ are completely contained in $arc(pq)$.
Since the length of the combined tour of two consecutive $e$-tours is always greater than $vT$, by step \ref{step:violation} of Algorithm \ref{alg:Energy}, we have \begin{equation}\label{eq:vt} d(e,i_{j-1})+d_c(i_{j-1},i_{j+1})+d(e,i_{j+1}) > vT. \end{equation} Now, \begin{eqnarray} |OPT_l| &=& d(e,p)+d_c(p,i_{j-1})+d_c(i_{j-1},i_j)\nonumber \\ & & +d_c(i_j,i_{j+1})+d_c(i_{j+1},q)+d(e,q)\nonumber \\ & \ge & d(e,i_{j-1})+d_c(i_{j-1},i_{j+1})+d(e,i_{j+1}) \nonumber\\ & > & vT \quad \mbox{(from inequality (\ref{eq:vt}))}. \nonumber \end{eqnarray} This contradicts the fact that $OPT_l$ is an $e$-tour. Therefore, at most one $arc(i_ji_{j+1})$, for some $j$, can be completely contained in $arc(pq)$. Hence, $arc(pq)$ contains at most one complete arc $arc(i_ji_{j+1})$ and parts of the two arcs $arc(i_{j-1}i_{j})$ and $arc(i_{j+1}i_{j+2})$, as shown in Fig. \ref{fig:opt-tour}. Therefore, at most eight points of the set ${\mathcal I}$ may belong to $arc(pq)$: the four points $i_j$, $i_j^1$, $i_{j}^2$, $i_{j+1}$ of $arc(i_ji_{j+1})$, and at most four special points $i_{j-1}^1$, $i_{j-1}^2$ and $i_{j+1}^1$, $i_{j+1}^2$ of $arc(i_{j-1}i_{j})$ and $arc(i_{j+1}i_{j+2})$, respectively. Now, for any point $x$ on $arc(pq)$ of $OPT_l$, $|OPT_l| \ge 2d(e,x)$. As there are at most eight points of ${\mathcal I}$ in $arc(pq)$, \begin{eqnarray} |OPT_l| & \ge & \frac{2\sum_{x \,\in\, {\mathcal I} \cap arc(pq)} d(e,x)}{8}. \end{eqnarray} Since every point of ${\mathcal I}$ lies on $arc(pq)$ of some $OPT_l$, where $OPT_l$ is a part of $OPT$, we have $|OPT| \ge \sum _l |OPT_l| \ge \frac{2\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right)}{8}$. Also, $|OPT| \ge |{\mathcal C}|$. Therefore, $|OPT| \ge \max\left\{\frac{1}{4}\sum_j\left(d(e,i_j)+d(e,i_j^1)+d(e,i_j^2)\right), |{\mathcal C}| \right\}$. \qed\end{proof} \begin{theorem} The approximation factor of Algorithm \ref{alg:Energy} for the energy restricted barrier sweep coverage problem is $\displaystyle \frac{13}{3}$. \end{theorem} \begin{proof} Let $N$ be the number of mobile sensors used in our solution and $N_{opt}$ be the number of mobile sensors in the optimal solution.\\ Then $\displaystyle N =\left\lceil\frac{ | APPRX | }{vt}\right\rceil$ and $\displaystyle N_{opt}\ge \frac{ | OPT | }{vt}$.\\ From Lemma \ref{lem:upperb} and Lemma \ref{lem:lowerb}, we have $\displaystyle\frac{N}{N_{opt}} \le \frac{ | APPRX | }{ | OPT | } \le \frac{8}{3}+\frac{5}{3} =\frac{13}{3}$.\\ Hence the approximation factor of our proposed Algorithm \ref{alg:Energy} is $\displaystyle \frac{13}{3}$. \qed\end{proof} \section{Barrier sweep coverage problem for multiple finite length curves}\label{sec:multiple} Finding the minimum number of mobile sensors with uniform speed to guarantee sweep coverage of a set of points on a 2D plane is NP-hard and cannot be approximated within a factor of 2 unless P=NP, as proved by Li et al. \cite{Li11}. The point sweep coverage problem of \cite{Li11} is the special case of BSCMC in which all curves are points. Therefore, BSCMC is NP-hard and cannot be approximated within a factor of 2 unless P=NP. \subsection{2-approximation Solution for a Special Case}\label{sec:FLSC} In this section we propose a solution for BSCMC for the special case where each mobile sensor must visit all the points of each curve. We describe the algorithm for the case where each curve is a line segment.
The same idea works for finite length curves, as explained in the last paragraph of section \ref{sec:multiple}. \begin{figure}[h] \psfrag{a}{$a$} \psfrag{b}{$b$} \psfrag{l}{$l$} \centering \includegraphics[width=0.4\textwidth]{fig_1.eps} \caption{Set of line segments ${\mathcal L}$} \label{fig:setOfLine} \end{figure} Let ${\mathcal L}=\{l_1,l_2,\cdots,l_n\}$ be a set of line segments on a 2D plane. Let $S$ be the set of shortest distance lines $s_{ij}$ between every pair of line segments ($l_i,l_j$), $i\ne j$. We define a complete weighted graph $G=(V,E)$, where $V=\{v_1,v_2,\cdots,v_n\}$ is the set of vertices and the vertex $v_i$ represents the line segment $l_i$ for $i=1$ to $n$. $E=V\times V$ is the set of edges, where the edge $(v_i,v_j)$ represents $s_{ij} \in S$ and the edge weight $w(v_i,v_j)$ is the length of $s_{ij}$. Let $T$ be a minimum spanning tree (MST) of $G$. $T$ can be represented as $T_{\mathcal L}$, where $T_{\mathcal L}={\mathcal L}\cup\{s_{ij} : (v_i,v_j) \in T \}$. An illustration is shown in Fig. \ref{fig:setOfLine} to Fig. \ref{fig:TL}: a set of line segments is shown in Fig. \ref{fig:setOfLine}, the corresponding complete graph $G$ is shown in Fig. \ref{fig:CompleteG}, an MST $T$ of $G$ is shown in Fig. \ref{fig:MST}, and the representation $T_{\mathcal L}$ of $T$ is shown in Fig. \ref{fig:TL}. We construct a graph $G_{\mathcal L}$ from $T_{\mathcal L}$ by introducing vertices at the end points of each line segment in $T_{\mathcal L}$, which may split the $l_i$'s into several smaller line segments. In Fig. \ref{fig:GL}, the vertices of $G_{\mathcal L}$ are \{$a_1, p, b_1$, $a_2,q,b_2$, $a_3,b_3$, $a_4,r,b_4$\}. The vertex $p$ splits the line segment ($a_1,b_1$) into two smaller line segments ($a_1,p$) and ($p,b_1$). Similarly, the vertices $q$ and $r$ split ($a_2,b_2$) and ($a_4,b_4$) into ($a_2,q$), ($q,b_2$) and ($a_4,r$), ($r,b_4$), respectively, whereas the line segment ($a_3,b_3$) remains unchanged. Each of these line segments, together with the lines corresponding to the edges of $T$, forms the edge set of the graph $G_{\mathcal L}$. In Fig. \ref{fig:GL}, the edges of $G_{\mathcal L}$ are \{($a_1,p$), ($p,b_1$), ($a_2,q$), ($q,b_2$), ($a_3,b_3$), ($a_4,r$), ($r,b_4$), ($p,a_3$), ($a_3, r$), ($b_3,q$)\}. \begin{minipage}{0.45\linewidth} \begin{figure}[H] \psfrag{v}{$v$} \psfrag{1}{$_1$} \psfrag{2}{$_2$} \psfrag{3}{$_3$} \psfrag{4}{$_4$} \centering \includegraphics[width=.73\linewidth]{fig_2.eps} \caption{Complete graph $G$}\label{fig:CompleteG} \end{figure} \end{minipage} \hspace*{.85cm} \begin{minipage}{0.45\linewidth} \begin{figure}[H] \psfrag{v}{$v$} \psfrag{1}{$_1$} \psfrag{2}{$_2$} \psfrag{3}{$_3$} \psfrag{4}{$_4$} \centering \includegraphics[width=.7\linewidth]{fig_3.eps} \caption{MST $T$ of $G$}\label{fig:MST} \end{figure} \end{minipage} \begin{minipage}{0.45\linewidth} \begin{figure}[H] \psfrag{a}{$a$} \psfrag{b}{$b$} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{r}{$r$} \centering \includegraphics[width=\linewidth]{fig_4.eps} \caption{$T_{{\mathcal L}}$}\label{fig:TL} \end{figure} \end{minipage} \hspace*{.85cm} \begin{minipage}{0.45\linewidth} \begin{figure}[H] \psfrag{a}{$a$} \psfrag{b}{$b$} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{r}{$r$} \centering \includegraphics[width=\linewidth]{fig_5.eps} \caption{$G_{{\mathcal L}}$}\label{fig:GL} \end{figure} \end{minipage} The graph $G_{{\mathcal L}}$ is a tree, and the sum of its edge weights is $|G_{{\mathcal L}}| = |T| +\sum_{i=1}^{n}{l_i}$, where $|T|$ is the sum of the edge weights of $T$.
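As an illustration of this construction, the following sketch (a minimal illustration with hypothetical segments, not the implementation used in our simulations) builds the complete graph $G$, extracts an MST with Prim's algorithm, and derives the sensor count $\lceil 2(|T|+\sum_i l_i)/(vt)\rceil$ used by Algorithm \ref{alg:pint1} below; it assumes the segments are pairwise disjoint, so the segment-to-segment distance is attained at an endpoint.
\begin{verbatim}
// Minimal sketch (not the authors' simulation code): complete graph over
// pairwise-disjoint segments, MST via Prim, and the resulting sensor count.
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };
struct Seg { Pt a, b; };

double dist(Pt p, Pt q) { return std::hypot(p.x - q.x, p.y - q.y); }

// Distance from point p to segment s.
double pointSegDist(Pt p, Seg s) {
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double u = len2 > 0 ? ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2 : 0.0;
    u = std::fmax(0.0, std::fmin(1.0, u));
    return dist(p, Pt{s.a.x + u * dx, s.a.y + u * dy});
}

// Shortest distance between two disjoint segments (attained at an endpoint).
double segSegDist(Seg s, Seg r) {
    return std::fmin(std::fmin(pointSegDist(s.a, r), pointSegDist(s.b, r)),
                     std::fmin(pointSegDist(r.a, s), pointSegDist(r.b, s)));
}

int main() {
    // Hypothetical disjoint segments l_1..l_3 and parameters v, t.
    std::vector<Seg> L = {{{0, 0}, {5, 0}}, {{10, 2}, {12, 8}}, {{3, 15}, {7, 15}}};
    double v = 1.0, t = 10.0;
    int n = (int)L.size();

    // Prim's algorithm on the complete graph G, w(v_i,v_j) = segSegDist.
    std::vector<bool> inT(n, false);
    std::vector<double> key(n, 1e18);
    key[0] = 0.0;
    double mstW = 0.0;
    for (int it = 0; it < n; ++it) {
        int u = -1;
        for (int i = 0; i < n; ++i)
            if (!inT[i] && (u < 0 || key[i] < key[u])) u = i;
        inT[u] = true;
        mstW += key[u];
        for (int i = 0; i < n; ++i)
            if (!inT[i]) key[i] = std::fmin(key[i], segSegDist(L[u], L[i]));
    }

    double segLen = 0.0;
    for (const Seg& s : L) segLen += dist(s.a, s.b);
    double tourLen = 2.0 * (mstW + segLen);  // Eulerian tour: doubled |G_L|
    int N = (int)std::ceil(tourLen / (v * t));
    std::printf("|T| = %.2f, |E| = %.2f, sensors = %d\n", mstW, tourLen, N);
    return 0;
}
\end{verbatim}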
Algorithm \ref{alg:pint1} ({\textsc BarrierSweepCoverage}), given below, computes a tour on $G_{\mathcal L}$ and finds the number of mobile sensors and their movement paths. \begin{lemma} According to Algorithm \ref{alg:pint1}, each point on $l_i$ is visited by at least one mobile sensor in every time period $t$, for $i=1,2,\cdots, n$. \end{lemma} \begin{proof} Since the mobile sensors move along the Eulerian tour ${\mathcal E}$, each edge of $G_{\mathcal L}$ is visited. Consider any point $p$ on a line segment $l_i$, and let $t'$ be the time when a mobile sensor last visited $p$. We have to prove that $p$ is visited by at least one mobile sensor by time $t'+t$. According to the deployment strategy of the mobile sensors, any two consecutive mobile sensors are within distance $vt$ of each other at any time. So, when a mobile sensor visits $p$ at time $t'$, another mobile sensor is on its way to $p$, within distance $vt$ along ${\mathcal E}$. Hence $p$ is visited again by another mobile sensor within the next $t$ time. \qed\end{proof} \begin{algorithm}[] \caption{\textsc{BarrierSweepCoverage}} \begin{algorithmic}[1] \STATE{Construct the complete weighted graph $G$ from the given set of line segments ${\mathcal L}$.} \STATE{Find an MST $T$ of $G$.} \STATE{Construct $G_{{\mathcal L}}$.} \STATE{Find an Eulerian graph by doubling each edge of $G_{\mathcal L}$.} \STATE{Find an Eulerian tour ${\mathcal E}$ of the Eulerian graph. Let $|{\mathcal E}|$ be the length of ${\mathcal E}$.} \STATE{Partition ${\mathcal E}$ into $\left\lceil\frac{|{\mathcal E}|}{vt}\right\rceil$ parts and deploy $\left\lceil{\frac{|{\mathcal E}|}{vt}}\right\rceil$ mobile sensors, one at each of the partitioning points.} \STATE{Each mobile sensor then starts moving at the same time along ${\mathcal E}$ in the same direction.} \end{algorithmic}\label{alg:pint1} \end{algorithm} \begin{lemma}\label{lem:opt} If $L_{opt}$ is the length of the optimal TSP tour that visits all points of every line segment in ${\mathcal L}$, then $|T|+\sum_{i=1}^{n}{l_i}\le L_{opt}$. \end{lemma} \begin{proof} The optimal TSP tour contains two types of movement paths: paths along the line segments of ${\mathcal L}$ and paths between the line segments. Let the total length of the movement paths along the line segments be $L_{along}$. Since the optimal tour visits all points of each line segment $l_i \in {\mathcal L}$, \begin{equation}\label{Eq:Lalong} L_{along}\ge \sum_{i=1}^{n}{l_i} \end{equation} Let $L_G$ be the length of the optimal TSP tour on $G$. Then $|T| \le L_G$. Let the total length of the movement paths between the line segments be $L_{between}$. Since $L_G$ is the optimal TSP tour on $G$ and the weight of each edge of $G$ is the shortest distance between the respective line segments, \begin{equation}\label{Eq:Lbetween} L_{between}\ge L_G \end{equation} From inequalities (\ref{Eq:Lalong}) and (\ref{Eq:Lbetween}), $L_{G} +\sum_{i=1}^{n}{l_i} \le L_{opt}$. Hence $|T| + \sum_{i=1}^{n}{l_i}\le L_{opt}$. \qed\end{proof} \begin{theorem} The approximation factor of Algorithm \ref{alg:pint1} is 2. \end{theorem} \begin{proof} The total edge weight of $G_{\mathcal L}$ is $|T|+\sum_{i=1}^{n}{l_i}$. Now, $|{\mathcal E}| = 2(|T|+\sum_{i=1}^{n}{l_i})$, since the Eulerian tour ${\mathcal E}$ is found by Algorithm \ref{alg:pint1} after doubling each edge of $G_{\mathcal L}$. By Lemma \ref{lem:opt}, $|{\mathcal E}|\le 2L_{opt}$.
Let $N_{opt}$ be the number of mobile sensors required in the optimal solution. Then $N_{opt} \times vt \ge L_{opt}$, {\it i.e.}, $N_{opt} \ge \left\lceil \frac{L_{opt}}{vt}\right\rceil$. The number of mobile sensors used by Algorithm \ref{alg:pint1} is $\left\lceil\frac{|{\mathcal E}|}{vt}\right\rceil$ ($=N$, say). Therefore, the approximation factor of Algorithm \ref{alg:pint1} is $\frac{N}{N_{opt}} \le \left\lceil\frac{2L_{opt}}{vt}\right\rceil\Big/\left\lceil\frac{L_{opt}}{vt}\right\rceil\le 2$. \qed\end{proof} \subsection{Solution for BSCMC} In this section, we propose an algorithm for the BSCMC problem; its description is given below. For $k=1$ to $n$, the following computations are performed. Compute a minimum spanning forest $F_k$ of $G$ with $k$ components. Let $C^1$, $C^2$, $\cdots$, $C^k$ be the connected components of $F_k$. Let $C^i_{{\mathcal L}}$ be the representation of $C^i$ on the set of line segments ${\mathcal L}$ for $i=1$ to $k$, where $C^i_{{\mathcal L}}= \{l_j \,|\, v_j \in C^i\} \cup\{s_{jm} \,|\, (v_j,v_m) \in C^i\}$. Construct the graph $G^i_{{\mathcal L}}$ from $C^i_{{\mathcal L}}$ in the same way that $G_{\mathcal L}$ is constructed from $T_{\mathcal L}$ in section \ref{sec:FLSC}. Clearly, $\sum _{i=1}^k {| G^i_{{\mathcal L}}|}= \sum_{j=1}^n l_j + | F_k |$. Find Eulerian tours ${\mathcal E}^1_{{\mathcal L}}$, ${\mathcal E}^2_{{\mathcal L}}$, $\cdots$, ${\mathcal E}^k_{{\mathcal L}}$ after doubling the edges of $G^1_{{\mathcal L}}$, $G^2_{{\mathcal L}}$, $\cdots$, $G^k_{{\mathcal L}}$, respectively. Partition each ${\mathcal E}^i_{{\mathcal L}}$ into $\left\lceil\frac{|{\mathcal E}^i_{{\mathcal L}}|}{vt}\right\rceil$ parts of length at most $vt$, and let $N_k$ be the total number of partitioning points. Choose the minimum over all $N_k$'s as the number of mobile sensors, and deploy that many mobile sensors, one at each of the partitioning points of the corresponding tours. All mobile sensors then start their movement at the same time along their respective tours in the same direction. \begin{theorem} The approximation factor of the proposed solution for BSCMC is 5. \end{theorem} \begin{proof} Let $opt$ be the number of mobile sensors required in the optimal solution of BSCMC, and let $opt'$ be the minimum number of mobile sensors that can guarantee $t$-sweep coverage of all the vertices of $G$. Then $opt \ge opt'$. Since all the points of each line segment are visited by the mobile sensors in any time period $t$, we also have $opt \ge \frac{\sum_{i=1}^{n}{l_i}}{vt}$. Consider the movement paths of the $opt'$ mobile sensors that sweep cover the vertices of $G$ in a time interval $[t_0,t+t_0]$, and let $Min\_path$ be the total length of these paths. Then $Min\_path \le vt\times opt'$. These $opt'$ movement paths form a spanning forest of $G$ with $opt'$ connected components. Therefore, $| F_{opt'} | \le Min\_path$, as $F_{opt'}$ is the minimum spanning forest of $G$ with $opt'$ components. Consider the iteration of our solution for $k=opt'$. The total number of mobile sensors $N$ in this iteration satisfies\\ $N=\sum _{i=1} ^k \left \lceil \frac{|{\mathcal E}^i_{{\mathcal L}}|}{vt}\right \rceil \le \sum _{i=1} ^k \frac{|{\mathcal E}^i_{{\mathcal L}}|}{vt} + k = 2\sum _{i=1}^k \frac{| G^i_{{\mathcal L}}|}{vt} + k = 2\frac{\sum _{i=1} ^n l_i}{vt}+2\frac{| F_k|}{vt} + k \le 2opt+ 2 \frac{Min\_path}{vt}+k \le 2 opt+ 2 opt' + opt' \le 5opt$. Therefore, the approximation factor of our proposed solution is 5. \qed\end{proof}
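The following compact sketch (an illustration over a hypothetical distance matrix, not the simulation code) makes the iteration concrete: the minimum spanning forest with $k$ components is obtained by stopping Kruskal's algorithm when $k$ components remain, and $N_k$ is accumulated per component as $\lceil 2|G^i_{\mathcal L}|/(vt)\rceil$.
\begin{verbatim}
// Sketch of the BSCMC iteration (illustrative distances and lengths only).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int find(std::vector<int>& p, int x) { return p[x] == x ? x : p[x] = find(p, p[x]); }

int main() {
    // Hypothetical input: 4 segments, lengths len[i], pairwise distances d[i][j].
    int n = 4;
    std::vector<double> len = {4, 3, 5, 2};
    double d[4][4] = {{0, 6, 9, 7}, {6, 0, 5, 8}, {9, 5, 0, 4}, {7, 8, 4, 0}};
    double v = 1.0, t = 10.0;

    struct E { double w; int i, j; };
    std::vector<E> es;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) es.push_back({d[i][j], i, j});
    std::sort(es.begin(), es.end(), [](const E& a, const E& b) { return a.w < b.w; });

    int best = -1, bestK = 0;
    for (int k = 1; k <= n; ++k) {       // minimum spanning forest, k components
        std::vector<int> p(n);
        std::iota(p.begin(), p.end(), 0);
        std::vector<double> w(len);      // per-component weight |G^i_L|
        int comps = n;
        for (const E& e : es) {
            if (comps == k) break;       // Kruskal stops with k components left
            int a = find(p, e.i), b = find(p, e.j);
            if (a != b) { p[a] = b; w[b] += w[a] + e.w; --comps; }
        }
        int Nk = 0;                      // partitioning points over all tours
        for (int i = 0; i < n; ++i)
            if (find(p, i) == i) Nk += (int)std::ceil(2 * w[i] / (v * t));
        if (best < 0 || Nk < best) { best = Nk; bestK = k; }
    }
    std::printf("minimum sensors %d with k = %d components\n", best, bestK);
    return 0;
}
\end{verbatim}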
The two algorithms proposed in this section also work for a set of finite length curves, as explained below. Let ${\mathcal X}$ be a set of finite length curves and $S$ be the set of shortest distance lines between every pair of curves. The complete weighted graph $G$ can be constructed by considering each curve as a vertex, with the shortest distance between every pair of curves as the weight of the corresponding edge. For the MST or any subtree of $G$, an Eulerian tour can be formed in the same way, by introducing vertices at the end points of each curve and of the joining line segments and then doubling the edges. The mobile sensors are deployed at the partitioning points obtained by partitioning the tours, and they follow the same movement strategy as before to guarantee sweep coverage of the multiple finite length curves. \section{Data Gathering by Data Mules}\label{sec:MDMDG} In this section we consider a data gathering problem with a set of data mules \cite{Anastasi08,CelikM10,LevinES14,LevinSS10,ShahRJB03}, which we formulate as a variation of the barrier sweep coverage problem. A set of mobile sensors move along finite length straight lines on a plane, monitoring or sampling data around them. The movement of a mobile sensor along its path is arbitrary, i.e., it may move in any direction along its straight line with arbitrary speed. A set of data mules move with uniform speed $v$ in the same plane to collect data from the mobile sensors. A data mule can collect data from a mobile sensor whenever it meets the mobile sensor on its path, and we assume that the data transfer happens instantaneously at such a meeting. The definition of the problem is given below. \begin{definition} \rm({\it Minimum number of data mules for data gathering \rm(MDMDG\rm)}\rm) A set of mobile sensors move arbitrarily on a plane along line segments. Find the minimum number of data mules, moving with uniform speed $v$, such that data can be collected from each mobile sensor at least once in every time period $t$. \end{definition} The point sweep coverage problem \cite{Li11} is the special instance of MDMDG in which the two end points of every line segment coincide, {\it i.e.}, each mobile sensor behaves like a static sensor. Therefore, the MDMDG problem is NP-hard and cannot be approximated within a factor of 2 unless P=NP. The following Lemma \ref{lem:MDMDG1} shows that, in order to visit the mobile sensors, each point of all the paths must be visited by the data mules. \begin{lemma}\label{lem:MDMDG1} To solve the MDMDG problem, each and every point of all line segments must be visited by the set of data mules. \end{lemma} \begin{proof} We prove the lemma by contradiction. Let $l$ be a line segment not all of whose points are visited by the data mules, i.e., there exists a point $p$ on $l$ that is not visited by any data mule. A mobile sensor may stop its movement for some time, which is permitted by the arbitrary nature of the movements. Now, if the mobile sensor on $l$ remains static at $p$ for more than $t$ time, then it is not visited by any data mule within that period, which contradicts the condition of $t$-sweep coverage. Hence, each and every point of all line segments must be visited by the set of data mules. \qed\end{proof} Now, we find the minimum path traveled by a single data mule to visit all the mobile sensors.
According to the problem definition and Lemma \ref{lem:MDMDG1}, within any time interval $[t_0,t_0+t]$ all points of each line segment are visited by the data mules. Therefore, the total length of the tours traversed by all the data mules within this interval is greater than or equal to the length of the optimal tour traversed by a single data mule that visits all mobile sensors. The following lemma describes the nature of the optimal tour traversed by a single data mule visiting all the mobile sensors. \begin{lemma}\label{lem:MDMDG2} It may not be possible for a data mule to visit a mobile sensor unless the data mule traverses the whole line segment that is the movement path of the mobile sensor, from one end to the other, continuously. \end{lemma} \begin{proof} If a line segment is not visited continuously, the data mule can miss the mobile sensor in the following scenario. Let $l'$ and $l''$ be the two parts of a line segment $l$. The data mule continuously visits $l'$, and during that period the mobile sensor remains on $l''$. Afterwards, when the mobile sensor is on $l'$, the data mule visits $l''$ continuously. In this scenario the data mule never meets the mobile sensor. \qed\end{proof} Let $L_{opt}$ be the length of the optimal tour for visiting all mobile sensors by one data mule. The optimal tour contains two types of movement paths: paths along the line segments and paths between pairs of line segments. By Lemma \ref{lem:MDMDG2}, the paths between pairs of line segments are lines connecting end points of the line segments. We construct a Euclidean complete graph $G_{2n}$ with $2n$ vertices $a_i,b_i$, $i=1,2, \cdots n$, where $a_i$ and $b_i$ are the two end points of the line segment $l_i$. The edge set $E(G_{2n})$ of $G_{2n}$ is given by $E(G_{2n})= \{l_i : i=1,2,\cdots, n\} \cup \{(a_i,a_j): i \ne j\} \cup \{(b_i,b_j): i \ne j\} \cup \{(a_i,b_j): i \ne j\} \cup \{(b_i,a_j): i \ne j\}$. The weight of each edge is the Euclidean distance between its two vertices. Let $T_{2n}$ be an MST of $G_{2n}$ that contains all the edges $l_i \in E(T_{2n})$, $i=1,2,\cdots, n$. We compute $T_{2n}$ using Kruskal's algorithm after including all edges $l_i$, $i=1,2,\cdots,n$, in the initial edge set of $T_{2n}$; Kruskal's algorithm is then applied on the remaining edges $E(G_{2n})\setminus \{l_i : i=1,2,\cdots, n\}$ of $G_{2n}$ until the spanning tree is formed. An Eulerian graph is formed from $T_{2n}$ as described in the Christofides algorithm \cite{christofides76}, and we compute an Eulerian tour ${\mathcal E}_{2n}$ of this Eulerian graph. We cannot directly apply the movement strategy of the mobile sensors used in Algorithm \ref{alg:Energy} and Algorithm \ref{alg:pint1} to the data mules in the MDMDG problem: if there exists a line segment of length greater than $vt$ and the mobile sensor moves with speed $v$ along the line in the same direction as the data mule, then the data mule may not be able to meet the mobile sensor within time $t$. Hence, we apply a new strategy, explained below, to solve the MDMDG problem. Partition the tour ${\mathcal E}_{2n}$ into equal parts of length $vt$ and consider two sets of data mules, $DM_1$ and $DM_2$, each containing $\lceil\frac{|{\mathcal E}_{2n}|}{vt}\rceil$ data mules. Deploy two data mules at each of the partitioning points, one from each set.
Each data mule from the set $DM_1$ then moves in one direction, say the clockwise direction, whereas the data mules of $DM_2$ move in the counterclockwise direction. All data mules, irrespective of their sets, start moving at the same time. Based on the above discussion, we propose Algorithm \ref{alg:MDMDG} ({\textsc{MDMDG}}). \begin{algorithm} \caption{\textsc{MDMDG}} \begin{algorithmic}[1] \STATE{Use Kruskal's algorithm to find an MST $T_{2n}$ of $G_{2n}$ whose initial edge set contains all edges $l_i$ for $i=1,2,\cdots,n$.} \STATE{Construct an Eulerian graph from $T_{2n}$ using the Christofides algorithm \cite{christofides76}.} \STATE{Find an Eulerian tour ${\mathcal E}_{2n}$ of the Eulerian graph. Let $|{\mathcal E}_{2n}|$ be the length of ${\mathcal E}_{2n}$.} \STATE{Partition ${\mathcal E}_{2n}$ into $\left\lceil\frac{|{\mathcal E}_{2n}|}{vt}\right\rceil$ parts of length $vt$. Deploy two data mules, one from $DM_1$ and one from $DM_2$, at each of the partitioning points.} \STATE{All data mules start moving at the same time along ${\mathcal E}_{2n}$, the data mules of $DM_1$ in the clockwise direction and the data mules of $DM_2$ in the anticlockwise direction.} \end{algorithmic}\label{alg:MDMDG} \end{algorithm} \subsection{Analysis} \begin{theorem}[Correctness] According to Algorithm \ref{alg:MDMDG}, each mobile sensor is visited by a data mule at least once in every time period $t$. \end{theorem} \begin{proof} Let $p$ be the position of a mobile sensor when it was last visited by a data mule, at time $t_0$. According to the deployment strategy of the data mules, two data mules visit $p$ again within time $t_0+t$, from the two different directions. The statement of the theorem follows immediately if the mobile sensor remains static at $p$ until $t_0+t$. Now consider the case when the mobile sensor moves in the clockwise direction from $p$ after time $t_0$. Then there exists a data mule moving in the counterclockwise direction that visits it within time $t_0+t$. Similarly, if the mobile sensor moves in the counterclockwise direction from $p$ after time $t_0$, there exists a data mule moving in the clockwise direction that visits it within time $t_0+t$. Hence, irrespective of the nature of its movement, a mobile sensor is visited by a data mule at least once in every time period $t$. \qed\end{proof} \begin{theorem} The approximation factor of Algorithm \ref{alg:MDMDG} is 3. \end{theorem} \begin{proof} By the Christofides algorithm \cite{christofides76}, we can write $|{\mathcal E}_{2n}| \le \frac{3}{2}L_{opt}$. Let $N_{opt}$ be the number of data mules required in the optimal solution.\\ Then $N_{opt} \times vt \ge L_{opt}$, {\it i.e.}, $N_{opt} \ge \left\lceil \frac{L_{opt}}{vt}\right\rceil$. The number of data mules used by Algorithm \ref{alg:MDMDG} is $\left\lceil\frac{2|{\mathcal E}_{2n}|}{vt}\right\rceil$ ($=N$, say). Therefore, the approximation factor of Algorithm \ref{alg:MDMDG} is $\frac{N}{N_{opt}} \le \left\lceil\frac{2 \times\frac{3}{2}L_{opt}}{vt}\right\rceil\Big/\left\lceil\frac{L_{opt}}{vt}\right\rceil\le 3$. \qed\end{proof}
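The constrained MST step of Algorithm \ref{alg:MDMDG}, with the segment edges forced into the tree, can be sketched as follows (an illustration with hypothetical coordinates, not the simulation code; for brevity the sketch doubles the tree edges instead of adding the Christofides matching, which only weakens the tour-length bound).
\begin{verbatim}
// Sketch of Kruskal with forced segment edges (illustrative coordinates).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

struct Pt { double x, y; };
double dist(Pt p, Pt q) { return std::hypot(p.x - q.x, p.y - q.y); }
int find(std::vector<int>& p, int x) { return p[x] == x ? x : p[x] = find(p, p[x]); }

int main() {
    // Hypothetical segments: vertex 2i is a_i, vertex 2i+1 is b_i.
    std::vector<Pt> pts = {{0, 0}, {4, 0}, {6, 3}, {6, 7}, {1, 8}, {3, 10}};
    int n = (int)pts.size() / 2;
    std::vector<int> par(2 * n);
    std::iota(par.begin(), par.end(), 0);
    double w = 0.0;
    for (int i = 0; i < n; ++i) {            // force the segment edges l_i
        par[find(par, 2 * i)] = find(par, 2 * i + 1);
        w += dist(pts[2 * i], pts[2 * i + 1]);
    }
    struct E { double w; int u, v; };
    std::vector<E> es;
    for (int u = 0; u < 2 * n; ++u)          // remaining endpoint-pair edges
        for (int v = u + 1; v < 2 * n; ++v)
            if (u / 2 != v / 2) es.push_back({dist(pts[u], pts[v]), u, v});
    std::sort(es.begin(), es.end(), [](const E& a, const E& b) { return a.w < b.w; });
    for (const E& e : es) {                  // Kruskal on the remaining edges
        int a = find(par, e.u), b = find(par, e.v);
        if (a != b) { par[a] = b; w += e.w; }
    }
    double v = 1.0, t = 10.0;
    // Doubled tree edges bound the Eulerian tour; two fleets of data mules.
    int N = 2 * (int)std::ceil(2 * w / (v * t));
    std::printf("|T_2n| = %.2f, data mules = %d\n", w, N);
    return 0;
}
\end{verbatim}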
\section{Simulation Results} To the best of our knowledge there is no earlier work on the barrier sweep coverage problem in the literature. We therefore compare the performance of our proposed Algorithm \ref{alg:pint1} and the algorithm for BSCMC through simulation. We implemented both algorithms in C++. A set of line segments is randomly generated inside a square region of side 200 meters. The length of each line segment is randomly chosen within 5 meters, and the speed of each mobile sensor is taken as 1 meter per second. \begin{table}[] \centering {\small \begin{tabular}{|c|c|c|} \hline No. of line segments & \multicolumn{2}{c|}{Number of Mobile Sensors}\\ \cline{2-3} & \multicolumn{1}{c|}{Algorithm for BSCMC} &\multicolumn{1}{c|}{Algorithm \ref{alg:pint1}}\\ \hline 5&5&12\\ 15&14&28\\ 25&21&35\\ 35&26&43\\ 45&35&54\\ 55&42&64\\ 65&51&72\\ 75&57&77\\ 85&66&90\\ 95&75&97\\ 105&76&100\\ 115&80&104\\ 125&83&107\\ 135&86&110\\ \hline \end{tabular} \caption{Average number of mobile sensors to achieve sweep coverage, varying the number of line segments for a fixed sweep period}\label{tab2} } \end{table} \begin{figure}[] \centering \includegraphics[width=0.65\textwidth]{untitled1.eps} \caption{Comparison with respect to the number of mobile sensors, varying the number of line segments}\label{fig1:vsn} \end{figure} \begin{table}[] \centering {\small \begin{tabular}{|c|c|c|} \hline Sweep period & \multicolumn{2}{c|}{Number of Mobile Sensors}\\ \cline{2-3} & \multicolumn{1}{c|}{Algorithm for BSCMC} &\multicolumn{1}{c|}{Algorithm \ref{alg:pint1}}\\ \hline 50&75&197\\ 60&67&179\\ 70&62&135\\ 80&58&119\\ 90&56&107\\ 100&54&97\\ 110&52&86\\ 120&51&77\\ 130&51&71\\ 140&50&66\\ 150&50&60\\ \hline \end{tabular} \caption{Comparison of the number of mobile sensors to achieve sweep coverage, varying the sweep period for a fixed number of line segments}\label{tab1} } \end{table} Table \ref{tab2} shows a comparison of the average number of mobile sensors needed to achieve sweep coverage by the two algorithms as the number of line segments varies. The average number of mobile sensors is computed over 100 executions of the algorithms with a fixed sweep period of 50 seconds. A graphical representation of Table \ref{tab2} is given in Fig. \ref{fig1:vsn}. Table \ref{tab2} and Fig. \ref{fig1:vsn} show that, as the number of line segments increases, the algorithm for BSCMC performs better than Algorithm \ref{alg:pint1} with respect to the average number of mobile sensors. \begin{figure}[ht] \centering \includegraphics[width=0.65\textwidth]{untitled.eps} \caption{Comparison with respect to the number of mobile sensors, varying the sweep period (in seconds)}\label{fig1:vs-t} \end{figure} Table \ref{tab1} shows a comparison of the average number of mobile sensors needed to achieve sweep coverage as the sweep period varies. The average number of mobile sensors is computed over 100 executions of the algorithms with the number of line segments fixed at 50. A graphical representation of Table \ref{tab1} is given in Fig. \ref{fig1:vs-t}. Table \ref{tab1} and Fig. \ref{fig1:vs-t} show that, as the sweep period increases, the difference between the average numbers of mobile sensors decreases. In general, the algorithm for BSCMC performs better than Algorithm \ref{alg:pint1}. \section{Conclusion}\label{sec:concl} Unlike traditional coverage, sweep coverage maintains periodic monitoring by mobile sensors instead of continuous monitoring. There are many applications in industry where periodic monitoring is required for specific preventive maintenance; for example, electrical equipment such as motors and generators must be monitored periodically to check for partial discharges \cite{Paoletti99}. In this paper we have introduced the sweep coverage concept for barriers, where the objective is to cover finite length curves on a plane.
In barrier sweep coverage, mobile sensors periodically visit all points of a set of finite length curves. For a single curve, we have solved the problem optimally. To address the limited battery power of mobile sensors, we have introduced an energy source on the plane and proposed a solution that achieves a constant approximation factor of $\frac{13}{3}$. We have proved that finding the minimum number of mobile sensors to sweep cover a set of finite length curves is NP-hard and cannot be approximated within a factor of 2. For a special case of this problem we have proposed a 2-approximation algorithm, which achieves the best possible approximation factor, and for the general problem we have proposed a 5-approximation algorithm. As an application of barrier sweep coverage, we have defined a data gathering problem in which the concept of barrier sweep coverage is applied to gather data using the minimum number of data mules, and we have proposed a 3-approximation algorithm to solve it. In the future we want to investigate sweep coverage problems in the presence of obstacles. Another possible extension of this work is to consider an uneven surface instead of a plane. \bibliographystyle{plain}
\section{Introduction} Entanglement is a vital physical resource for quantum information processing, such as quantum communication \cite{eke91,ben93} and quantum computation \cite{ben00,rau01,llb01}. Therefore, the characterization of the entanglement of a given quantum state is a fundamental problem. Bipartite entanglement is well understood in many aspects \cite{bdj96,san00,vid02,mbp05}. In particular, for two qubits, mixed state entanglement can be characterized with the help of the so-called concurrence \cite{woo01}. However, in the multipartite case, the quantification of entanglement is very complicated and challenging. A fundamental property of multipartite entangled states is that entanglement is monogamous. In a three-qubit composite system $\rho_{ABC}$, monogamy means that there is a trade-off between the amounts of entanglement shared by $\rho_{AB}$ and $\rho_{AC}$. For the pure state $\ket{\Psi}_{ABC}$, Coffman, Kundu, and Wootters proved the inequality $C_{AB}^{2}+C_{AC}^{2}\leq \tau_{A(R_A)}$ \cite{ckw00}, where the square of the concurrence $C_{ij}$ quantifies the entanglement of the subsystem $\rho_{ij}$ and the linear entropy $\tau_{A(R_A)}$ measures the pure state entanglement between qubit $A$ and the remaining qubits $BC$. In particular, the residual quantum correlation in the above equation, \emph{i.e.,} the $3$-tangle \begin{equation}\label{1} \tau(\Psi_{ABC})=\tau_{A(R_A)}-C_{AB}^{2}-C_{AC}^{2}, \end{equation} was proven to be a good measure of genuine three-qubit entanglement \cite{ckw00,dur00}. However, in general, quantum correlation and quantum entanglement are inequivalent, although both are nonnegative and invariant under local unitary (LU) transformations \cite{ved97,dlz06}. For example, in the Werner state $\rho_{z}=\frac{1-z}{4}I+z\proj{\psi}$ with $\ket{\psi}=(\ket{00}+\ket{11})/\sqrt{2}$, the quantum correlation (quantum discord) \cite{oli02} is greater than $0$ whenever $z>0$, but the entanglement (concurrence) is nonzero only when $z>\frac{1}{3}$. The key difference between the two quantities is that entanglement does not increase under local operations and classical communication (LOCC), i.e., entanglement is monotone under LOCC. Recently, Osborne and Verstraete also proved that the distribution of bipartite entanglement in an $N$-qubit quantum state satisfies the relation \cite{tjo06} $C_{A_{1}A_{2}}^{2}+C_{A_{1}A_{3}}^{2}+\cdots+C_{A_{1}A_{N}}^{2}\leq \tau_{A_{1}(A_{2}\cdots A_{N})}$, where $\tau_{A_{1}(A_{2}\cdots A_{N})}$ is the linear entropy of a pure state. Comparing with the three-qubit case, it is natural to ask \emph{whether or not the residual quantum correlation in an $N$-qubit pure state ($N>3$) is a good measure of the genuine multipartite entanglement.} In this paper, we attempt to answer this question. Based on quantitative complementarity relations (QCRs), we analyze the properties of multipartite correlations and entanglement in four-qubit pure states. It is shown that the single residual correlation in the four-qubit case does not satisfy the entanglement monotone property. In addition, the genuine three- and four-qubit correlations are unable to quantify entanglement either. Finally, based on a detailed analysis of the sum of all residual correlations, we conjecture that it is an appropriate quantity for constructing a multipartite entanglement measure for the composite system. The paper is organized as follows. In Sec.
II, the properties of multipartite correlations in four-qubit pure states are analyzed in detail, and as a result a multipartite entanglement measure is conjectured. In Sec. III, we give some remarks and the main conclusions. In addition, three examples are given in the Appendix. \section{Multipartite quantum correlations in four-qubit pure states} Before analyzing the quantum correlations, we first introduce the QCRs. Complementarity \cite{boh28} is an essential principle of quantum mechanics, often referring to the mutually exclusive properties of a single quantum system. As a special quantum property without classical counterpart, entanglement can constitute complementarity relations with local properties \cite{bos02,opp03}. Jakob and Bergou derived a QCR for two-qubit pure states \cite{jab03}, \emph{i.e.}, $C^{2}+S_k^2=1$, in which the concurrence $C$ quantifies the non-local correlation of the two qubits and $S_{k}^{2}=|\overrightarrow{r_{k}}|^{2}$ is a measure of single particle character ($\overrightarrow{r_{k}}$ is the polarization vector of qubit $k$). The experimental demonstration of this relation was given by Peng \emph{et al.} \cite{pzd05} with nuclear magnetic resonance techniques. For an $N$-qubit pure state, generalized QCRs are also available \cite{pzd05,tes05,cho06}: \begin{eqnarray}\label{2} \tau_{k(R_k)}+S_{k}^{2} &=& 1, \end{eqnarray} where the linear entropy $\tau_{k(R_k)}=2(1-\mbox{tr}\rho_k^2)$ \cite{san00} characterizes the total quantum correlation between qubit $k$ and the remaining qubits $R_k$. For a two-qubit pure state, the linear entropy is a bipartite quantum correlation. For a three-qubit pure state, $\tau_{k(R_k)}$ is composed of the two-qubit and genuine three-qubit correlations \cite{ckw00}. For an $N$-qubit pure state \cite{czz06}, we propose the natural generalization that the linear entropy is contributed by different levels of quantum correlations, \emph{i.e.}, \begin{equation}\label{3} \tau_{k(R_k)}=t_{N}(\ket{\Psi}_N)+\cdots+\sum_{i< j\in R_{k}}t_{3}(\rho_{ijk})+\sum_{l\in R_{k}}t_{2}(\rho_{kl}), \end{equation} where $t_{m}$ represents the genuine $m$-qubit quantum correlation, for $m=2,3,\cdots,N$. \begin{figure} \begin{center} \epsfig{figure=fig1.eps,width=0.4\textwidth} \end{center} \caption{(Color online) The correlation Venn diagram for a four-qubit pure state $\ket{\Psi}_{ABCD}$. The overlapping areas $t_{4}$, $t_{3}$'s, and $t_{2}$'s denote the genuine four-, three-, and two-qubit quantum correlations, respectively. The non-overlapping area $S_{k}^{2}$ is the local reality of qubit $k$, for $k=A,B,C,D$.} \end{figure} The Venn diagram, often utilized in set theory, may be employed to depict the quantum correlations in a composite system. In Fig. 1 we schematically draw a correlation Venn diagram for a four-qubit pure state $\ket{\Psi}_{ABCD}$. Qubits $A$, $B$, $C$, and $D$ are represented by four unit circles, respectively, and the quantum correlations are denoted by the overlapping areas of these circles.
According to this diagram, the four-qubit QCRs can be written as \begin{eqnarray}\label{4} t_{4}+t_{3}^{(2)}+t_{3}^{(3)}+t_{3}^{(4)} +\sum_{l\in R_{A}}t_{2}(\rho_{Al})+S_{A}^2=1,\nonumber\\ t_{4}+t_{3}^{(1)}+t_{3}^{(3)}+t_{3}^{(4)} +\sum_{l\in R_{B}}t_{2}(\rho_{Bl})+S_{B}^2=1,\nonumber\\ t_{4}+t_{3}^{(1)}+t_{3}^{(2)}+t_{3}^{(4)} +\sum_{l\in R_{C}}t_{2}(\rho_{Cl})+S_{C}^2=1,\nonumber\\ t_{4}+t_{3}^{(1)}+t_{3}^{(2)}+t_{3}^{(3)} +\sum_{l\in R_{D}}t_{2}(\rho_{Dl})+S_{D}^2=1, \end{eqnarray} where $t_{3}^{(1)}$, $t_{3}^{(2)}$, $t_{3}^{(3)}$ and $t_{3}^{(4)}$ are the three-qubit correlations in the subsystems $\rho_{BCD}$, $\rho_{ACD}$, $\rho_{ABD}$, and $\rho_{ABC}$, respectively. In three-qubit pure states, the quantum correlations $t_2$ (the square of the concurrence) and $t_3$ (the 3-tangle) in the linear entropy are good measures of two- and three-qubit entanglement, respectively. However, it is an open problem whether similar relations also hold in a four-qubit pure state $\ket{\Psi}_{ABCD}$. Before analyzing the multipartite correlations $t_{4}$ and the $t_{3}^{(i)}$'s, we need to consider how to evaluate the two-qubit correlation $t_{2}(\rho_{ij})$ in the pure state $\ket{\Psi}_{ABCD}$. Similar to the three-qubit case, we make use of the square of the concurrence, which is defined as $C_{ij}=\mbox{max}[(\sqrt{\lambda_{1}}-\sqrt{\lambda_{2}}- \sqrt{\lambda_{3}}-\sqrt{\lambda_{4}}), 0]$, where the decreasing positive real numbers $\lambda_{i}$ are the eigenvalues of the matrix $\rho_{ij}(\sigma_y\otimes\sigma_y)\rho_{ij}^{\ast}(\sigma_y\otimes\sigma_y)$ \cite{woo01}. The main reason for this choice is that the relation $\sum_{l\in R_{k}}C_{kl}^{2}=\tau_{k(R_k)}$ holds for the four-qubit $W$ state $\ket{\psi}_{ABCD}=\alpha_1\ket{0001}+\alpha_2\ket{0010}+\alpha_3\ket{0100}+\alpha_4\ket{1000}$, which involves only two-qubit entanglement \cite{ckw00}.
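As a numerical aside (an illustration of the formula above, not part of the analysis), the quantity $C_{ij}$ is directly computable from a two-qubit density matrix; a minimal sketch, assuming the Eigen C++ library for the eigenvalue computation:
\begin{verbatim}
// Minimal sketch of the Wootters concurrence (assumes the Eigen library).
#include <Eigen/Eigenvalues>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

using M4 = Eigen::Matrix4cd;

double concurrence(const M4& rho) {
    M4 yy = M4::Zero();  // sigma_y (x) sigma_y in the |00>,|01>,|10>,|11> basis
    yy(0, 3) = -1; yy(1, 2) = 1; yy(2, 1) = 1; yy(3, 0) = -1;
    M4 R = rho * yy * rho.conjugate() * yy;  // rho (yy rho* yy)
    Eigen::ComplexEigenSolver<M4> es(R);
    double l[4];
    for (int i = 0; i < 4; ++i)  // eigenvalues of R are real and nonnegative
        l[i] = std::sqrt(std::max(0.0, es.eigenvalues()[i].real()));
    std::sort(l, l + 4, std::greater<double>());
    return std::max(0.0, l[0] - l[1] - l[2] - l[3]);
}

int main() {
    M4 bell = M4::Zero();  // |psi><psi| with |psi> = (|00>+|11>)/sqrt(2)
    bell(0, 0) = bell(0, 3) = bell(3, 0) = bell(3, 3) = 0.5;
    std::printf("C = %.3f\n", concurrence(bell));  // prints C = 1.000
    return 0;
}
\end{verbatim}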
In the following, we analyze the properties of the single residual correlation, the genuine three- and four-qubit correlations, and the sum of all residual correlations, respectively. \subsection{Single residual correlation} Under the above evaluation of the two-qubit quantum correlation, the multipartite correlation around qubit $k$ (\emph{i.e.}, the residual correlation) is \begin{equation}\label{5} M_{k}(\ket{\Psi})=\tau_{k(R_k)}-\sum_{l\in R_{k}}t_{2}(\rho_{kl}), \end{equation} in which $t_{2}(\rho_{kl})=C_{kl}^{2}$ and $k=A,B,C,D$. As widely accepted, a good measure of multipartite entanglement should satisfy the following requirements \cite{ved97}: (1) the quantity should be a non-negative real number; (2) it should be unchanged under LU operations; (3) it should not increase on average under LOCC, \emph{i.e.}, the measure should be an entanglement monotone. We now analyze the residual correlation $M_{k}$. According to the monogamy inequality proven by Osborne and Verstraete \cite{tjo06}, $M_k$ is positive semi-definite. In addition, for a fully separable state and for an entangled state involving only two-qubit correlations, it can be verified that $M_{k}=0$. The correlation $M_{k}$ is also LU invariant, which follows from the fact that the linear entropy and the concurrence are invariant under LU transformations. The last condition is that $M_{k}$ should be non-increasing on average under LOCC. It is known that any local protocol can be implemented by a sequence of two-outcome POVMs involving only one party \cite{dur00}. Without loss of generality, we consider the local POVM $\{A_{1}, A_{2}\}$ performed on subsystem $A$, which satisfies $A_{1}^{\dagger}A_{1}+A_{2}^{\dagger}A_{2}=I$. According to the singular value decomposition \cite{dur00}, the POVM operators can be written as $A_{1}=U_{1}diag\{\alpha, \beta\}V$ and $A_{2}=U_{2}diag\{\sqrt{1-\alpha^2}, \sqrt{1-\beta^2}\}V$, in which the $U_{i}$ and $V$ are unitary matrices. Since $M_{k}$ is LU invariant, we need only consider the diagonal matrices in the following analysis. Noting that the linear entropy and the concurrence are invariant under a determinant one stochastic LOCC (SLOCC) \cite{fer03}, we can deduce $M_{A}(\ket{\Phi_1})=M_{A}(\frac{A_{1}\ket{\Psi}}{\sqrt{p_{1}}}) =\frac{\alpha^2\beta^2}{p_{1}^{2}}M_{A}(\ket{\Psi})$ and $M_{A}(\ket{\Phi_2})=M_{A}(\frac{A_{2}\ket{\Psi}}{\sqrt{p_{2}}}) =\frac{(1-\alpha^2)(1-\beta^2)}{p_{2}^{2}}M_{A}(\ket{\Psi})$, where $p_{i}=\mbox{tr}[A_{i}\proj{\Psi}A_{i}^{\dagger}]$ is the normalization factor. After some algebraic deductions similar to those in Refs. \cite{dur00,won01}, the following relation can be derived: \begin{equation}\label{6} p_{1}M_{A}(\ket{\Phi_{1}})+p_{2}M_{A}(\ket{\Phi_{2}})\leq M_{A}(\ket{\Psi}), \end{equation} which means that the multipartite correlation $M_{A}$ is non-increasing on average under local operations performed on subsystem $A$. It should be pointed out that this property is \emph{not} sufficient to show that the parameter $M_{A}$ is monotone under LOCC. This is because, unlike the three-qubit case, the residual correlation $M_{k}$ in a four-qubit state changes after permuting the parties. Therefore, before claiming that $M_{k}$ is an entanglement monotone, one needs to prove that the parameters $M_{B}$, $M_{C}$, and $M_{D}$ are also non-increasing on average under the POVM $\{A_{1}, A_{2}\}$ performed on subsystem $A$. However, this requirement cannot be satisfied in general, because the behaviors of these three parameters are quite different from that of $M_{A}$. For example, in the correlation $M_{C}=\tau_{C(R_{C})}-C_{AC}^{2}-C_{BC}^{2}-C_{CD}^{2}$, only $C_{AC}^{2}$ is invariant under a determinant one SLOCC performed on subsystem $A$; with this property, $C_{AC}^{2}$ is an entanglement monotone. As to the linear entropy $\tau_{C(R_C)}$ and the other concurrences ($C_{BC}^{2}$ and $C_{CD}^{2}$), one can prove that they are, respectively, decreasing and increasing under the POVM $\{A_{1},A_{2}\}$, in terms of the following two facts: first, for the reduced density matrices $\rho_{C}$, $\rho_{BC}$ and $\rho_{CD}$, the effect of the POVM is equivalent to decomposing each of them into a mixture of two states; second, the linear entropy is a concave function and the concurrence is a convex function. Comparing the behaviors of $M_{A}$ and $M_{C}$ under the POVM, we cannot ensure that $M_{C}$ is an entanglement monotone (in the Appendix, we give an example in which the correlation $M_{C}$ increases under a selected POVM performed on subsystem $A$). The cases of $M_{B}$ and $M_{D}$ are similar. For a symmetric quantum state with the property $M_{A}=M_{B}=M_{C}=M_{D}$, is the correlation $M_{k}$ an entanglement monotone? The answer is still negative. Since the symmetry cannot be maintained under an arbitrary POVM, the parameter $M_{k}$ cannot be guaranteed to be monotone under the next level of POVM once the symmetry is broken (see such an example in the Appendix).
Therefore, we conclude that the correlation $M_{k}$ is not an entanglement monotone and hence not a good entanglement measure. \subsection{Three- and four-qubit correlations} Next, we analyze the properties of the correlations $t_{4}$ and $t_{3}^{(i)}$. Note that the QCRs provide only four equations, which in general cannot completely determine the five multipartite parameters. Therefore, a well-defined measure of $t_3$ or $t_4$ is needed in this case. Recently, an attempt was made to introduce an information measure $\xi_{1234}$ for genuine four-qubit entanglement \cite{czz06}, but this measure can hardly characterize the genuine four-qubit correlation/entanglement completely \cite{noteC}. On the other hand, the mixed $3$-tangle $\tau_{3}=\mbox{min}\sum_{p_{x},\phi_{x}}p_{x}\tau(\phi_{x})$ \cite{dur00,uhl00} cannot be chosen as the correlation $t_{3}$ either, because it is not compatible with the QCRs of Eq. (4). As an example, consider the quantum state $\ket{\psi}_{ABCD}=(\ket{0000}+\ket{1011}+\ket{1101}+\ket{1110})/2$ \cite{fer02}, in which the reduced density matrix $\rho_{BCD}$ can be decomposed into a mixture of the two pure states $\ket{\phi}_{1}=\ket{000}$ and $\ket{\phi}_{2}=(\ket{011}+\ket{101}+\ket{110})/\sqrt{3}$. Supposing that $\tau_{3}$ is a good measure of $t_{3}$, we obtain $t_{3}^{(1)}=\tau_{3}(\rho_{BCD})=0$ from the definition of the mixed $3$-tangle. The other multipartite correlations are then determined from Eq. (4), giving $t_{4}=1.5$ and $t_{3}^{(2)}=t_{3}^{(3)}=t_{3}^{(4)}=-0.25$. Because these correlations are not in the reasonable range, the mixed $3$-tangle is not a suitable measure compatible with the QCRs. Although analytical measures of $t_{4}$ and $t_{3}$ are unavailable at present, we may analyze a special kind of quantum state in which $t_{4}$ is zero. The quantum state $\ket{\varphi}=\alpha\ket{0000}+\beta\ket{0101}+\gamma\ket{1000}+\eta\ket{1110}$ is just such a case. Suppose that good correlation measures exist and that their values correspond to the overlapping regions in the Venn diagram (Fig. 1); it is then simple to see that these correlations are non-negative and LU invariant. In the quantum state $\ket{\varphi}$, if we let the $t_{3}^{(i)}$ be the variables, we obtain the relation $t_{3}^{(1)}=-\frac{1}{3}t_{4}$ from the QCRs of Eq. (4). Due to the non-negativity of the two correlations, we conclude that the four-qubit correlation is zero in this state. The other three-qubit correlations can then be solved from the QCRs. In order to test the entanglement monotone property of the $t_{3}^{(i)}$ more clearly, the parameters in $\ket{\varphi}$ are chosen to be $\alpha=\beta=\gamma=\eta=1/2$ (see example 3 in the Appendix). After performing a selected POVM, we find that $t_{3}^{(2)}$ increases on average, which implies that the correlations $t_{3}$ and $t_{4}$ are not suitable for the quantification of entanglement. \subsection{Sum of the residual correlations} Finally, we consider the sum of all residual correlations, which is defined as \begin{equation}\label{7} M=M_{A}+M_{B}+M_{C}+M_{D}=\sum_{k}\tau_{k(R_k)}-2\sum_{p>q}C_{pq}^{2}, \end{equation} in which $k,p,q=A,B,C,D$. It is obvious that $M$ is nonnegative and LU invariant, in terms of the corresponding properties of the $M_{k}$. It is extremely difficult to prove the entanglement monotone property analytically; the main hindrance is that one cannot compare the changes of the concurrences in a general quantum state before and after the POVM.
Nevertheless, we conjecture that the correlation $M$ is an entanglement monotone, for the reasons outlined below. From the definition of $M$, it is seen that $M$ is invariant under permutations of the subsystems. Without loss of generality, suppose that the POVM is performed on the subsystem $A$. In this case, we analyze the behaviors of the components of $M$. According to the analysis leading to Eq.~(6), the component $\xi_{1}=\tau_{A(R_A)}-C_{AB}^{2}-C_{AC}^{2}-C_{AD}^{2}$ is non-increasing on average. Moreover, due to the concavity of the linear entropy and the convexity of the concurrence, the component $\xi_{2}=\tau_{B(R_B)}+\tau_{C(R_C)}+\tau_{D(R_D)}-2(C_{BC}^{2}+C_{BD}^{2}+C_{CD}^{2})$ also decreases on average after the POVM. The only increasing component is $\xi_{3}=-C_{AB}^{2}-C_{AC}^{2}-C_{AD}^{2}$. It is conjectured that the decrease of $\xi_{1}$ and $\xi_{2}$ compensates the increase of $\xi_{3}$, which would imply the entanglement monotone property of $M$. In Fig.~2, the quantity $\Delta M=M(\ket{\Psi})-p_{1}M(\ket{\Phi_{1}})-p_{2}M(\ket{\Phi_{2}})$ is calculated for the nine quantum states $G_{abcd},L_{abc_{2}},L_{a_{2}b_{2}},L_{ab_{3}},L_{a_{4}},L_{a_{2}0_{3\oplus 1}}, L_{0_{5\oplus 3}},L_{0_{7\oplus 1}}$ and $L_{0_{3\oplus1}\overline{0}_{3\oplus 1}}$ (the state parameters we choose are listed in Table I), which are the representative states under the SLOCC classification (cf. Ref. \cite{fer02}). Due to the form of the quantum state $L_{0_{3\oplus1}\overline{0}_{3\oplus 1}}=\ket{0000}+\ket{0111}$, we perform the POVM on its subsystem $B$; for the other states, the POVM is performed on the subsystem $A$. From Fig.~2, we can see that the correlation $M$ does not increase on average under the POVMs, which supports our conjecture (for POVMs performed on the other subsystems, we obtain similar results). In addition, for the symmetric quantum states $G_{abcd},L_{abc_{2}}$ and $L_{ab_{3}}$, the second level of POVM is also calculated and $\Delta M$ remains nonnegative (in the first level of POVM performed on the subsystem $A$, the diagonal elements are $\alpha_{1}=0.4$ and $\beta_{1}=0.7$; in the second level of POVM, $\alpha_{2}$ and $\beta_{2}$ are chosen from 0.05 to 0.95 with an interval of 0.01). \begin{figure} \begin{center} \epsfig{figure=fig2.eps,width=0.5\textwidth} \end{center} \caption{(Color online) The values of $\Delta M$ for nine representative states. In the POVM, the diagonal elements $\alpha$ and $\beta$ are chosen from 0.05 to 0.95 with an interval of 0.01. } \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{|c|c| c| c| c| c|} \hline\hline $G_{abcd}$ &$L_{abc_{2}}$ & $L_{a_{2}b_{2}}$ & $L_{ab_{3}}$ & $L_{a_{4}}$ &$L_{a_{2}0_{3\oplus1}}$ \\\hline $\begin{array}{c} a=c=1 \\ b=d=0.5 \\ \end{array}$ & $\begin{array}{c} a=2 \\ b=c=1 \\ \end{array}$ & $\begin{array}{c} a=1 \\ b=1 \\ \end{array}$ & $\begin{array}{c} a=1 \\ b=1.5 \\ \end{array}$ & $a=1$ & $a=1$ \\\hline \end{tabular} \caption{The parameters we choose in the quantum states $G_{abcd},L_{abc_{2}},L_{a_{2}b_{2}},L_{ab_{3}},L_{a_{4}},L_{a_{2}0_{3\oplus 1}}$ (Ref. \cite{fer02}).} \end{center} \end{table} Mainly based on the above analysis, we conjecture that the multipartite correlation $M$ is an entanglement monotone and is thus a possible candidate for a measure of the total multipartite entanglement in four-qubit pure states.
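The computation behind Fig.~2 can be repeated, in scaled-down form, with the helpers introduced earlier; in the sketch below only the state $G_{abcd}$ of Table I is scanned, and the grid step is coarsened from 0.01 to 0.05 to keep the run short.
\begin{verbatim}
import numpy as np

def total_M(psi):
    return sum(residual_M(psi, k) for k in range(4))

def delta_M(psi, alpha, beta, qubit=0):
    (phi1, p1), (phi2, p2) = povm_outcomes(psi, alpha, beta, qubit)
    return total_M(psi) - p1 * total_M(phi1) - p2 * total_M(phi2)

def G_abcd(a, b, c, d):
    """The (normalized) generic state G_abcd of the SLOCC classification."""
    amp = {"0000": (a + d) / 2, "1111": (a + d) / 2,
           "0011": (a - d) / 2, "1100": (a - d) / 2,
           "0101": (b + c) / 2, "1010": (b + c) / 2,
           "0110": (b - c) / 2, "1001": (b - c) / 2}
    psi = np.zeros(16, dtype=complex)
    for ket, v in amp.items():
        psi[int(ket, 2)] = v
    return psi / np.linalg.norm(psi)

psi = G_abcd(1.0, 0.5, 1.0, 0.5)           # parameters of Table I
grid = np.arange(0.05, 0.951, 0.05)
worst = min(delta_M(psi, al, be) for al in grid for be in grid)
print(f"min Delta M over the grid: {worst:.4f}")  # expected nonnegative
\end{verbatim}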
At this stage, we may also introduce the average multipartite entanglement \begin{eqnarray} E_{ms} &=& \frac{M}{4}=\frac{M_{A}+M_{B}+M_{C}+M_{D}}{4}, \end{eqnarray} to characterize the entanglement per single qubit (ranging in $[0,1]$), insofar as the correlation $M$ is (conjectured to be) entanglement monotone. A remarkable merit of this quantity is its computability. For the quantum state $G_{abcd}=\frac{a+d}{2}(\ket{0000}+\ket{1111})+\frac{a-d}{2}(\ket{0011} +\ket{1100})+\frac{b+c}{2}(\ket{0101}+\ket{1010})+\frac{b-c}{2}(\ket{0110} +\ket{1001})$, which is the generic class under the SLOCC classification, the change of $E_{ms}$ with the real parameters $a$ and $d$ is plotted in Fig.~3 (the parameters $b=0$ and $c=0.5$ are fixed). In the regions near $a=d=0$, $a\gg c,d$, and $d\gg a,c$, the multipartite entanglement $E_{ms}$ tends to zero, which can be explained by the fact that the quantum state tends to a tensor product of two Bell states in these regions. The larger values of $E_{ms}$ appear in the regions near ($a=0$, $d=0.5$), ($a=0.5$, $d=0$), and $a=d\gg c$. This is because the quantum state $G_{abcd}$ approaches the four-qubit GHZ state in these regions (e.g., when $a=0$ and $d=0.5$, $E_{ms}$ equals $1$ and the quantum state can be rewritten as $G_{abcd}=(\ket{\alpha\alpha\alpha\alpha}+\ket{\beta\beta\beta\beta})/\sqrt{2}$ after the local unitary transformation $\ket{\alpha}=(\ket{0}+i\ket{1})/\sqrt{2}$, $\ket{\beta}=(\ket{0}-i\ket{1})/\sqrt{2}$). In this case, the four-partite entanglement is dominant. \begin{figure} \begin{center} \epsfig{figure=fig3.eps,width=0.45\textwidth} \end{center} \caption{(Color online) The average multipartite entanglement $E_{ms}$ for the quantum state $G_{abcd}$, in which the parameters $a$ and $d$ are chosen from 0 to 5 with an interval of 0.05. The parameters $b=0$ and $c=0.5$ are fixed.} \end{figure} Although the operational meaning of $E_{ms}$ for entanglement transformation and distillation is not clear at present, we can use this quantity to rule out certain procedures as impossible (assuming that $E_{ms}$ is validated to be entanglement monotone). For example, if the quantity increases in an LOCC transformation from $\ket{\varphi_{1}}$ to $\ket{\varphi_{2}}$, we can conclude that this procedure is impossible, because entanglement must be non-increasing in a real physical transformation.
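The limiting values of $E_{ms}$ described above are easy to check numerically with the same helpers (this is again our own sketch; $E_{ms}$ is simply $M/4$):
\begin{verbatim}
for a, d in ((0.0, 0.5), (5.0, 0.05), (0.05, 5.0)):
    psi = G_abcd(a, 0.0, 0.5, d)   # b = 0 and c = 0.5 fixed, as in Fig. 3
    E_ms = sum(residual_M(psi, k) for k in range(4)) / 4.0
    print(f"a = {a}, d = {d}:  E_ms = {E_ms:.3f}")
# E_ms is close to 1 at (a, d) = (0, 0.5), where the state is LU-equivalent
# to the four-qubit GHZ state, and close to 0 in the a >> c, d and
# d >> a, c regions, where it approaches a product of two Bell pairs.
\end{verbatim}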
It should be pointed out that the quantity $E_{ms}$ in Eq.~(8) corresponds to the correlation $t_{4}+\frac{3}{4}\sum t_{3}^{(i)}$, which is not the total multipartite correlation $M_{T}=t_{4}+\sum t_{3}^{(i)}$ in the Venn diagram. Whether or not $M_{T}$ is a good candidate for the total multipartite entanglement in the system is worth studying in the future. In order to test the entanglement properties of $M_{T}$, one first needs to find appropriate definitions for the correlations $t_{4}$ and $t_{3}$, respectively. For an $N$-qubit pure state, the sum of all residual correlations is given by \begin{eqnarray}\label{9} M_{N}(\Psi_{N}) &=& Nt_{N}+(N-1)\sum t_{N-1}+\cdots+3\sum t_{3}\nonumber \\ &=& \sum\tau_{k(R_{k})}-2\sum_{i>j} C_{ij}^{2}. \end{eqnarray} Similar to the four-qubit case, this quantity is a non-negative real number by virtue of the monogamy inequality. In addition, the LU invariance of $M_{N}$ is guaranteed by the corresponding properties of the linear entropy and the concurrence. We conjecture that $M_{N}$ is likewise an entanglement monotone. Therefore, the correlation $M_{N}$ may be able to characterize the multipartite entanglement in the system. Similarly, the average over the $N$ qubits, $M_{N}/N$ (ranging in $[0,1]$), can be considered as the entanglement per qubit. \section{Discussion and conclusion} In the correlation Venn diagram of a three-qubit pure state $\ket{\Psi}_{ABC}$ \cite{cho06,cai07}, the quantum correlations at different levels are able to characterize the corresponding quantum entanglements. Therefore, the total entanglement in the system is contributed by the two-qubit entanglement and the genuine three-qubit entanglement. However, in the four-qubit case, the structure of the total entanglement is quite complicated, and how to quantify separately the three- and four-qubit entanglement is still an open problem. It was indicated by Wu and Zhang that the set of two-, three-, and four-partite GHZ states is not a reversible entanglement generating set for four-party pure states \cite{sjw00} (\emph{i.e.}, this set of entangled states cannot generate an arbitrary four-party pure state by LOCC asymptotically \cite{bpr00}), which implies that the GHZ-class entanglements are not sufficient for characterizing the structure of the total entanglement in the system. Recently, Lohmayer \emph{et al.} \cite{loh06} exhibited a kind of rank-2 three-qubit mixed states which are entangled but have vanishing mixed 3-tangle and vanishing pairwise concurrences (such states can be regarded as reduced states of four-qubit pure states). This shows further that the quantification of entanglement in multi-qubit systems is extremely complicated and highly nontrivial. In conclusion, based on the generalized QCRs, we have analyzed the multipartite correlations in four-qubit pure states. Unlike the three-qubit case, we find that the analogous relations no longer hold in the four-qubit system. First, the residual correlation $M_{k}$ is not an entanglement monotone. In addition, the genuine three- and four-qubit correlations are not suitable as entanglement measures either. Finally, the total residual correlation $M$ has been analyzed, and it is conjectured that the average multipartite correlation $E_{ms}$ is able to quantify the multipartite entanglement in the system. \section*{Acknowledgments} The work was supported by the RGC of Hong Kong under grant Nos. HKU 7051/06P, 7012/06P, and HKU 3/05C, the URC fund of HKU, and NSF-China grant No. 10429401. \section*{Appendix} \textbf{Example 1}: Consider the quantum state $\ket{\Psi}_{ABCD}=(\ket{0000}+\ket{0011}+\ket{0101}+\ket{0110} +\ket{1010}+\ket{1111})/\sqrt{6}$, which belongs to the representative class $L_{a_{2}b_{2}}$ (the parameters are chosen as $a=b=1$) under the SLOCC classification \cite{fer02}. The POVM $\{A_{1},A_{2}\}$ is performed on subsystem $A$, with $A_{1}=U_{1}diag\{\alpha,\beta\}V$ and $A_{2}=U_{2}diag\{\sqrt{1-\alpha^2},\sqrt{1-\beta^2}\}V$. Due to the LU invariance of the correlation $M_{k}$, we need only consider the diagonal matrices, in which the parameters are chosen to be $\alpha=0.9$ and $\beta=0.2$. After the POVM, the two outcomes $\ket{\Phi_{1}}=A_{1}\ket{\Psi}/\sqrt{p_{1}}$ and $\ket{\Phi_{2}}=A_{2}\ket{\Psi}/\sqrt{p_{2}}$ occur with probabilities $p_{1}=0.5533$ and $p_{2}=0.4467$, respectively. Some calculated results are listed in Table II.
\begin{table} \begin{center} \begin{tabular}{c|c c c c c} \hline\hline $\begin{array}{cc} & \mbox{correlation} \\ \mbox{state} & \\ \end{array}$ &$\tau_{C(R_C)}$ & $C_{AC}^{2}$ & $C_{BC}^{2}$ & $C_{CD}^{2}$ & $M_{C}$ \\\hline $\ket{\Psi}$ & 8/9 & 4/9 & 0 & 0 & 4/9 \\ $\ket{\Phi_{1}}$ & 0.9994 & 0.04703 & 0 & 0 & 0.9524 \\ $\ket{\Phi_{2}}$ & 0.4867 & 0.4063 & 0 & 0 & 0.08042\\ \hline \end{tabular} \caption{The values of the correlation measures related to subsystem $C$ before and after the POVM.} \label{tab1} \end{center} \end{table} According to these values, we deduce that $M_{C}(\ket{\Psi})-[p_{1}M_{C}(\ket{\Phi_{1}})+p_{2}M_{C}(\ket{\Phi_{2}})]=-0.1185$, which means that the correlation $M_C$ increases on average under this LOCC. \textbf{Example 2}: Consider the symmetric quantum state $\ket{\Psi}=(3\ket{0000}+3\ket{1111}-\ket{0011}-\ket{1100}+3\ket{0101} +3\ket{1010}+\ket{0110}+\ket{1001})/2\sqrt{10}$, which belongs to the representative class $G_{abcd}$ (the state parameters are chosen as $a=c=0.5$ and $b=d=1$) \cite{fer02}. According to the analysis in Sec.~II\,A, the correlation $M_{k}$ is monotone under the first level of POVM. In this example, we show that the correlation $M_{A}$ increases on average under the second level of POVM. The first level of POVM $\{A_{1},A_{2}\}$ is performed on the subsystem $A$, with diagonal elements $\alpha=0.3$ and $\beta=0.8$. After the POVM, the two outcomes $\ket{\Phi_{1}}$ and $\ket{\Phi_{2}}$ are obtained with probabilities $p_{1}=0.3650$ and $p_{2}=0.6350$, respectively. Suppose that $\ket{\Phi_{1}}$ is obtained. We then perform the second level of POVM $\{A_{11},A_{12}\}$ on the subsystem $C$, with diagonal elements $\alpha_{1}=0.9$ and $\beta_{1}=0.2$. The outcomes $\ket{\Phi_{11}}$ and $\ket{\Phi_{12}}$ are obtained with probabilities $p_{11}=0.1929$ and $p_{12}=0.8071$, respectively. The calculated results are presented in Table III. \begin{table}[h] \begin{center} \begin{tabular}{c|c c c c c} \hline\hline $\begin{array}{cc} & \mbox{correlation} \\ \mbox{state} & \\ \end{array}$ &$\tau_{A(R_A)}$ & $C_{AB}^{2}$ & $C_{AC}^{2}$ & $C_{AD}^{2}$ & $M_{A}$ \\\hline $\ket{\Phi_{1}}$ & 0.4324 & 0 & 0.2767 & 0 & 0.1556 \\ $\ket{\Phi_{11}}$ & 0.9960 & 0 & 0.2408 & 0 & 0.7552 \\ $\ket{\Phi_{12}}$ & 0.1565 & 0 & 0.07749 & 0 & 0.07901\\ \hline \end{tabular} \caption{The values of the correlation measures related to subsystem $A$ before and after the second level of the POVM.} \label{tab3} \end{center} \end{table} Comparing the change of $M_{A}$, we get $M_{A}(\ket{\Phi_{1}})-[p_{11}M_{A}(\ket{\Phi_{11}})+p_{12}M_{A}(\ket{\Phi_{12}})] =-0.05382$. This means that the correlation $M_{A}$ increases on average under LOCC, and thus $M_{k}$ is not a good entanglement measure even for symmetric quantum states. \textbf{Example 3:} We analyze the quantum state $\ket{\Psi}_{ABCD}=(\ket{0000}+\ket{0101}+\ket{1000}+\ket{1110})/2$, which is the representative state $L_{0_{5\oplus 3}}$ \cite{fer02}. The POVM $\{A_{1},A_{2}\}$ is performed on the subsystem $B$. Due to the LU invariance of the correlations $t_{4}$ and $t_{3}$, we only consider the diagonal elements of the operators $A_{1}$ and $A_{2}$ (in the form of the singular value decomposition), with the parameters chosen to be $\alpha=0.9$ and $\beta=0.4$. After the POVM, the two outcomes $\ket{\Phi_{1}}$ and $\ket{\Phi_{2}}$ are obtained with probabilities $p_{1}=0.4850$ and $p_{2}=0.5150$, respectively.
In Table IV, the values of $t_{4}$ and $t_{3}^{(i)}$ for $\ket{\Psi}$, $\ket{\Phi_{1}}$ and $\ket{\Phi_{2}}$ are listed. \begin{table}[h] \begin{center} \begin{tabular}{c|c c c c c} \hline\hline $\begin{array}{cc} & \mbox{correlation} \\ \mbox{state} & \\ \end{array}$ &$t_{4}$ & $t_{3}^{(1)}$ & $t_{3}^{(2)}$ & $t_{3}^{(3)}$ & $t_{3}^{(4)}$\\\hline $\ket{\Psi}$ & 0 & 0 & 0.2500 & 0.2500 & 0.2500 \\ $\ket{\Phi_{1}}$ & 0 & 0 & 0.02721 & 0.1377 & 0.1377 \\ $\ket{\Phi_{2}}$ & 0 & 0 & 0.6651 & 0.1504 & 0.1504\\ \hline \end{tabular} \caption{The values of the correlation measures $t_{4}$ and $t_{3}$ before and after the POVM.} \label{tab4} \end{center} \end{table} With these values, we can get $t_{3}^{(2)}(\ket{\Psi})-[p_{1}t_{3}^{(2)}(\ket{\Phi_{1}}) +p_{2}t_{3}^{(2)}(\ket{\Phi_{2}})]=-0.1057$, which means that the correlation $t_{3}$ can increase on average under the LOCC and that it is not a good entanglement measure.
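The numbers in Table IV can be reproduced with the helper functions of the sketches in Sec.~II, under the same assumed recasting $M_k = t_4 + \sum_{i\ne k} t_3^{(i)}$ of the QCRs; here $t_4 = t_3^{(1)} = 0$ for all three states, as argued in Sec.~II\,B, so the remaining three-qubit correlations follow from a small (overdetermined, but consistent) linear system.
\begin{verbatim}
import numpy as np

def t3_values(psi):
    """Solve for (t_3^(2), t_3^(3), t_3^(4)) assuming t_4 = t_3^(1) = 0."""
    M = np.array([residual_M(psi, k) for k in range(4)])
    A = np.array([[1.0, 1.0, 1.0],    # M_A = t_3^(2) + t_3^(3) + t_3^(4)
                  [0.0, 1.0, 1.0],    # M_B
                  [1.0, 0.0, 1.0],    # M_C
                  [1.0, 1.0, 0.0]])   # M_D
    t3, *_ = np.linalg.lstsq(A, M, rcond=None)
    return t3

psi = np.zeros(16, dtype=complex)
for ket in ("0000", "0101", "1000", "1110"):
    psi[int(ket, 2)] = 0.5

(phi1, p1), (phi2, p2) = povm_outcomes(psi, 0.9, 0.4, qubit=1)
for label, state in (("Psi", psi), ("Phi_1", phi1), ("Phi_2", phi2)):
    print(label, np.round(t3_values(state), 4))
gain = p1 * t3_values(phi1)[0] + p2 * t3_values(phi2)[0] - t3_values(psi)[0]
print(f"average change of t_3^(2): {gain:+.4f}")   # about +0.1057
\end{verbatim}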
\section{Introduction} \label{s:intro} Let $G$ be a finite simple graph with $n$ vertices and $m$ edges. A \emph{thrackle drawing} of $G$ on the plane is a drawing $\mathcal{T}:G\rightarrow\reals^2$, in which every pair of edges meets precisely once, either at a common vertex or at a point of proper crossing (see \cite{LPS97} for definitions of a drawing of a graph and a proper crossing). The notion of thrackle was introduced in the late sixties by John Conway, in relation to the following conjecture. \begin{ctc} For a thrackle drawing of a graph on the plane, one has $m\leq n$. \end{ctc} Despite considerable effort \cite{WOO71, LPS97, GY2000, GMY2004, GY2009, PJS2011, FP2011, GY2012, GKY2015, GX2017, FP17, MNajc}, the conjecture remains open. The best known bound for a thrackleable graph with $n$ vertices is $m \le 1.3984 n$ \cite{FP17}. Adding a point at infinity, we can consider a thrackle drawing on the plane as a thrackle drawing on the $2$-sphere $S^2$. The complement of a thrackle drawing on $S^2$ is the disjoint union of open discs. We say that a drawing \emph{belongs to the class $T_d, \; d \ge 1$}, if there exist $d$ open discs $D_1, \dots, D_d$ whose closures are pairwise disjoint such that all the vertices of the drawing lie on the union of their boundaries (a disc may contain no vertices on its boundary). We say that two thrackle drawings of class $T_d$ are \emph{isotopic} if they are isotopic as drawings on $S^2 \setminus (\cup_{k=1}^d D_k)$. We will also occasionally identify a graph $G$ with its thrackle drawing $\mathcal{T}(G)$, speaking, for example, of the vertices and edges of the drawing. Thrackles of class $T_1$ are called \emph{outerplanar}: all their vertices lie on the boundary of a single disc $D_1$. Such thrackles are very well understood. \begin{theorem} \label{t:outer} Suppose a graph $G$ admits an outerplanar thrackle drawing. Then \begin{enumerate}[{\rm (a)}] \item \label{it:outallodd} any cycle in $G$ is odd \cite[Theorem~1]{GY2012}; \item \label{it:outCTC} the number of edges of $G$ does not exceed the number of vertices \cite[Theorem~2]{PJS2011}; \item \label{it:outRei} if $G$ is a cycle, then the drawing is Reidemeister equivalent to a standard odd musquash \cite[Theorem~1]{GY2012}. \end{enumerate} \end{theorem} We say that two thrackle drawings are \emph{Reidemeister equivalent} (or \emph{equivalent up to Reidemeister moves}) if they can be obtained from one another by a finite sequence of Reidemeister moves of the third kind in the complement of vertices (see Section~\ref{ss:R}). A \emph{standard odd musquash} is the simplest example of a thrackled cycle: for $n$ odd, distribute $n$ vertices evenly on a circle and then join by an edge every pair of vertices at the maximal distance from each other. This defines a musquash in the sense of Woodall \cite{WOO71}: an \emph{$n$-gonal musquash} is a thrackled $n$-cycle whose successive edges $e_0,\dots,e_{n-1}$ intersect in the following manner: if the edge $e_0$ intersects the edges $e_{k_1},\dots,e_{k_{n-3}}$ in that order, then for all $j=1,\dots,n-1$, the edge $e_j$ intersects the edges $e_{k_1+j},\dots,e_{k_{n-3}+j}$ in that order, where the edge subscripts are computed modulo $n$. A complete classification of musquashes was obtained in \cite{GD1999,GD2001}: every musquash is either isotopic to a standard $n$-musquash, or is a thrackled six-cycle. In this paper, we study thrackle drawings of the next two classes $T_d$: annular thrackles and pants thrackles.
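As a small computational aside (ours, not part of the cited results), the standard odd musquash is easy to generate and test: the following Python sketch places $n$ vertices on a circle, joins each vertex to a vertex at maximal circular distance, and verifies the thrackle property of the resulting straight-line drawing, namely that every pair of edges meets precisely once, at a common vertex or at a proper crossing.
\begin{verbatim}
import itertools
import math

def standard_musquash(n):
    """Vertices of a regular n-gon, each joined to a vertex at maximal
    circular distance; for odd n this is a single n-cycle."""
    assert n % 2 == 1 and n >= 3
    pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
           for k in range(n)]
    step = (n - 1) // 2
    edges = [(k, (k + step) % n) for k in range(n)]
    return pts, edges

def orient(a, b, c):
    """Sign of the turn a -> b -> c."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, r, s):
    """Proper (interior) crossing test for the segments pq and rs."""
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def is_thrackle(pts, edges):
    """Thrackle condition for a straight-line drawing: adjacent edges
    meet only at the shared vertex (automatic for non-collinear
    segments), and vertex-disjoint edges must cross properly."""
    for (a, b), (c, d) in itertools.combinations(edges, 2):
        if {a, b} & {c, d}:
            continue
        if not segments_cross(pts[a], pts[b], pts[c], pts[d]):
            return False
    return True

for n in (3, 5, 7, 9, 11):
    print(n, is_thrackle(*standard_musquash(n)))   # True for every odd n
\end{verbatim}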
A thrackle drawing of class $T_2$ is called \emph{annular}. Up to isotopy, we can assume that the boundaries of $D_1$ and $D_2$ are two concentric circles on the plane, and that the thrackle drawing, except for the vertices, entirely lies in the open annulus bounded by these circles. Clearly, any outerplanar drawing can be viewed as an annular drawing. Figure~\ref{figure:annulus} shows an example of an annular thrackle drawing which is not outerplanar. Note however that the underlying graph has some vertices of degree $1$ (which must always be the case by Theorem~\ref{t:ann}\eqref{it:annout} below). \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \draw[thick] (0,0) circle (4); \draw[thick] (0,0) circle (1); \coordinate (A3) at (0,-1); \coordinate (A5) at (-1,0); \coordinate (A2) at ({4*cos((pi/6) r)},{4*sin((pi/6) r)}); \coordinate (A1) at ({4*cos((5*pi/6) r)},{4*sin((5*pi/6) r)}); \coordinate (A4) at ({4*cos((11*pi/12) r)},{4*sin((11*pi/12) r)}); \coordinate (A6) at ({4*cos((pi/4) r)},{4*sin((pi/4) r)}); \fill (A1) circle (6.0pt); \fill (A2) circle (6.0pt); \fill (A3) circle (6.0pt); \fill (A4) circle (6.0pt); \fill (A5) circle (6.0pt); \fill (A6) circle (6.0pt); \draw [very thick] (A1) to [out=-100,in=-100] (A3) to [out=-80,in=-80] (A2) to (A1); \draw [very thick] (A2)--(A4); \draw [very thick] (A6) to [out=-90,in=0] (0,-2.5) to [out=180,in=-110] (A5); \draw [very thick] (A2) to [out=-80,in=0] (0,-3.5) to [out=180,in=-120] (A5); \end{tikzpicture} \caption{An annular thrackle drawing.} \label{figure:annulus} \end{figure} We show that the three assertions of Theorem~\ref{t:outer} also hold for annular drawings. \begin{theorem} \label{t:ann} Suppose a graph $G$ admits an annular thrackle drawing. Then \begin{enumerate}[{\rm (a)}] \item \label{it:annodd} any cycle in $G$ is odd; \item \label{it:annC} the number of edges of $G$ does not exceed the number of vertices; \item \label{it:annout} if $G$ is a cycle, then the drawing is, in fact, outerplanar \emph{(}and as such, is Reidemeister equivalent to a standard odd musquash\emph{)}. \end{enumerate} \end{theorem} We next proceed to the thrackle drawings of class $T_3$. We call such drawings \emph{pants thrackle drawings} or \emph{pants thrackles}. Any annular thrackle drawing is trivially a pants thrackle drawing. The pants thrackle drawing of a six-cycle in Figure~\ref{figure:sixcycle} is not annular. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \draw[thick] (0,0) ellipse (5 and 3); \draw[thick] (-2,0) circle (1); \draw[thick] (2,0) circle (1); \coordinate (A1) at ({5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}); \coordinate (A2) at ({5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}); \coordinate (B1) at ({-2+cos((-pi/9) r)},{sin((-pi/9) r)}); \coordinate (B2) at ({-2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C2) at ({2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C1) at ({2+cos((-pi/9) r)},{sin((-pi/9) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] (-1.5,-2) to[out=30,in=-30] (B2); \draw [very thick] (A2) to[out=30,in=150] (-1.5,2) to[out=-30,in=30] (B1); \draw [very thick] (B1) to[out=-30,in=-150] (2.5,-1.5) to[out=30,in=-30] (C2); \draw [very thick] (B2) to[out=30,in=150] (2.5,1.5) to[out=-30,in=30] (C1); \draw [very thick] (A1) to[out=60,in=180] (1,2.5) to[out=0,in=60] (C2); \draw [very thick] (A2) to[out=-60,in=180] (1,-2.5) to[out=0,in=-60] (C1); \end{tikzpicture} \caption{Pants thrackle drawing of a six-cycle.} \label{figure:sixcycle} \end{figure} \medskip We prove the following. \begin{theorem} \label{t:pants} Suppose a graph $G$ admits a pants thrackle drawing. Then \begin{enumerate}[{\rm (a)}] \item \label{it:pantseven} any even cycle in $G$ is a six-cycle, and its drawing is Reidemeister equivalent to the one in Figure~\ref{figure:sixcycle}; \item \label{it:pantsodd} if $G$ is an odd cycle, then the drawing can be obtained from a pants drawing of a three-cycle by a sequence of edge insertions; \item \label{it:pantsC} the number of edges of $G$ does not exceed the number of vertices. \end{enumerate} \end{theorem} The procedure of edge insertion replaces an edge in a thrackle drawing by a three-path such that the resulting drawing is again a thrackle -- see Section~\ref{ss:ir} for details. The ideas of the proofs are roughly as follows. There is a toolbox of operations on a thrackled graph that preserve thrackleability and that have been used in the earlier literature on thrackles; these include edge insertion, edge removal and vertex splitting. We investigate how these operations interact with the more restrictive annular and pants conditions. One key observation (Lemma~\ref{lemma:triangle} below) is that, in order to preserve thrackleability, edge removal hinges on an empty-triangle condition which blends well with the annular and the pants structures. This allows the study of irreducible thrackles, which are those for which no edge removal is possible. We prove that irreducible thrackled cycles are either triangles or, in the case of pants drawings, a particular six-cycle. \section{Thrackle operations} \label{s:pre} \subsection{Edge insertion and edge removal} \label{ss:ir} The operation of edge insertion was introduced in \cite[Figure~14]{WOO71}; given a thrackle drawing, one replaces an edge by a three-path in such a way that the resulting drawing is again a thrackle. All the changes to the drawing are performed in a small neighbourhood of the edge, as shown in Figure~\ref{figure:insertion}.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \x in {0, 11} { \draw[very thick] (\x-1,1)-- (\x,0); \draw[very thick] (\x-1,-1)-- (\x,0); \draw[very thick] (\x-1,0)-- (\x,0); \draw[very thick] (\x+6,0.5)-- (\x+5,0); \draw[very thick] (\x+6,-0.5)-- (\x+5,0); \fill (\x,0) circle (5.0pt); \fill (\x+5,0) circle (5.0pt); \draw[very thick] (\x+0.8,-1) -- (\x+0.8,1.2); \draw[very thick] (\x+1.6,-1) -- (\x+1.6,1.2); \draw[very thick] (\x+4,-1) -- (\x+4,1.2); } \draw[->, very thick] (7,0) -- (9,0); \draw[very thick] (0,0) -- (5,0); \draw [very thick] (11,0) to[out=30,in=40] (16,-0.5); \draw [very thick] (11,-0.5) to[out=140,in=150] (16,0); \draw [very thick] (11,-0.5) to [out=170,in=-120] (10.25,0) to [out=35, in=145] (16.75,0) to [out=-45,in=10] (16,-0.5); \fill (11,-0.5) circle (5.0pt); \fill (16,-0.5) circle (5.0pt); \end{tikzpicture} \caption{The edge insertion operation.} \label{figure:insertion} \end{figure} Edge insertion on a given edge is not uniquely defined, even up to isotopy and Reidemeister moves, as we can choose one of two different orientations of the crossing of the first and the third edges of the three-path by which we replace the edge. We want to formalise and slightly modify the edge insertion procedure. Given an edge $e=uv$, at the first step we remove from it a small segment $Q_1Q_2$ lying in the interior of $e$ and containing no crossings with other edges. At the second step, we slightly extend the segments $uQ_1$ and $Q_2v$ so that they cross (with one of the two possible orientations), and then further extend each of them to cross other edges in such a way that the resulting drawing is again a thrackle. At the third step, we join the two endpoints of degree $1$ of the two edges obtained at the first step so that the resulting drawing is again a thrackle. We make two observations regarding this process of edge insertion. First, it may happen that we change the drawing not only in a small neighbourhood of $e$, but also ``far away'' from it. Figure~\ref{figure:twosevens} shows two Reidemeister inequivalent thrackled seven-cycles obtained from the standard $5$-musquash by edge insertion. Note that the orientations of all the crossings in the two thrackles are the same (we note in passing that, up to isotopy and Reidemeister moves, there exist only three thrackled seven-cycles: the two shown in Figure~\ref{figure:twosevens} and the standard $7$-musquash; this can be proved using the algorithm given at the end of Section~3 of \cite{MNajc}).
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.45] \foreach \x in {0,15} { \coordinate (A0) at (\x,5); \coordinate (A1) at ({\x + 5*cos((2*pi/5+pi/2) r)},{5*sin((2*pi/5+pi/2) r)}); \coordinate (A2) at ({\x + 5*cos((2*pi*2/5+pi/2) r)},{5*sin((2*pi*2/5+pi/2) r)}); \coordinate (A3) at ({\x + 5*cos((2*pi*3/5+pi/2) r)},{5*sin((2*pi*3/5+pi/2) r)}); \coordinate (A4) at ({\x + 5*cos((2*pi*4/5+pi/2) r)},{5*sin((2*pi*4/5+pi/2) r)}); \coordinate (B1) at ($(A4)+(-1.5,-.5)$); \foreach \y in {A0,A1,A2,A3,A4,B1} {\fill (\y) circle (6.67pt);} \draw[very thick] (A1) -- (A3) -- (A0) -- (A2) -- (A4); \draw [very thick] (A1) to [out=20,in=120] ($(A4)+(0.5,0)$) to [out=-60,in=-45] (B1); \ifthenelse{\x = 0} {\coordinate (B2) at ($(A1)+(1,-0.3)$); \fill(B2) circle(6.67pt); \draw [very thick] (A4) to[out=180,in=70] ($(A1)+(-0.5,0)$) to [out=-110,in=-120] (B2); \draw [very thick] (B2) to [out=-90,in=-120] ($(A1)+(-1,0)$) to [out=60,in=120] ($(A4)+(1,0)$) to [out=-60,in=-60] (B1) } {\coordinate (B2) at ($(A3)+(-1,1.2)$); \fill(B2) circle(6.67pt); \draw [very thick] (A4) to[out=180,in=70] ($(A1)+(-0.5,0)$) to [out=-110,in=180] ($(A2)+(0,-0.3)$) to [out=0,in=-120] (B2); \draw [very thick] (B2) to [out=-105,in=0] ($(A2)+(0,-0.5)$) to [out=180,in=-120] ($(A1)+(-1,0)$) to [out=60,in=120] ($(A4)+(1,0)$) to [out=-60,in=-60] (B1) } ; } \end{tikzpicture} \caption{Two seven-cycles obtained by edge insertions on a five-cycle.} \label{figure:twosevens} \end{figure} Our second observation is that edge insertion may not always be possible within the same class $T_d$. For example, in the proof of assertion~\eqref{it:pantseven} of Theorem~\ref{t:pants} in Section~\ref{s:pants}, it will be shown that no edge insertion on the pants thrackle drawing of the six-cycle shown in Figure~\ref{figure:sixcycle} produces a pants thrackle drawing. The operation of \emph{edge removal} is inverse to the edge insertion operation. Let $\mathcal{T}(G)$ be a thrackle drawing of a graph $G$ and let $v_1v_2v_3v_4$ be a three-path in $G$ such that $\deg v_2 = \deg v_3 = 2$. Let $Q = \mathcal{T} (v_1v_2) \cap \mathcal{T} (v_3v_4)$. Removing the edge $v_2v_3$, together with the segments $Qv_2$ and $Qv_3$ we obtain a drawing of a graph with a single edge $v_1v_4$ in place of the three-path $v_1v_2v_3v_4$ (Figure~\ref{figure:2}). \begin{figure}[h] \centering \begin{tikzpicture}[>=triangle 45] \node[coordinate] (v3) at (-2,2) [label=180:$v_3$] {}; \fill (v3) circle (3pt); \node[coordinate] (v2) at (2,2) [label=0:$v_2$] {}; \fill (v2) circle (3pt); \node[coordinate] (v1l) at (-2,0) [label=180:$v_1$] {}; \fill (v1l) circle (3pt); \node[coordinate] (v4l) at (2,0) [label=0:$v_4$] {}; \fill (v4l) circle (3pt); \node[coordinate] (Ql) at (0,1) [label=-90:$Q$] {}; \draw[very thick] (v1l) -- (v2) -- (v3) -- (v4l); \draw[very thick] (-1,-0.2) -- (-0.5,2.2); \draw[very thick] (1.5,-0.2) -- (0.5,2.2); \draw[very thick] (1,-0.2) -- (1.5,2.2); \draw[->, very thick] (3,1) -- (5,1); \node[coordinate] (v1r) at (6,0) [label=180:$v_1$] {}; \fill (v1r) circle (3pt); \node[coordinate] (v4r) at (10,0) [label=0:$v_4$] {}; \fill (v4r) circle (3pt); \node[coordinate] (Qr) at (8,1) [label=-90:$Q$] {}; \draw[very thick] (v1r) -- (Qr) -- (v4r); \draw[very thick] (7,-0.2) -- (7.5,2.2); \draw[very thick] (9.5,-0.2) -- (8.5,2.2); \draw[very thick] (9,-0.2) -- (9.5,2.2); \end{tikzpicture} \caption{The edge removal operation.} \label{figure:2} \end{figure} Edge removal does not necessarily result in a thrackle drawing. 
Consider the triangular domain $\triangle$ bounded by the arcs $v_2v_3, \, Qv_2$ and $v_3Q$ and not containing the vertices $v_1$ and $v_4$ (if we consider the drawing on the plane, $\triangle$ can be unbounded). We have the following lemma. \begin{lemma}[{\cite[Lemma~3]{GY2012}}] \label{lemma:triangle} Edge removal results in a thrackle drawing if and only if $\triangle$ contains no vertices of $\mathcal{T}(G)$. \end{lemma} Note that for a thrackle drawing of class $T_d$, the condition of Lemma~\ref{lemma:triangle} is satisfied if $\triangle$ contains none of the $d$ circles bounding the discs $D_k$. Given a thrackle drawing of class $T_d$ of an $n$-cycle, edge removal, if it is possible, produces a thrackle drawing of the same class $T_d$ of an $(n-2)$-cycle. We call a thrackle drawing \emph{irreducible} if it admits no edge removals, and \emph{reducible} otherwise. To a path in a thrackle drawing of class $T_d$ we can associate a word $W$ in the alphabet $X=\{x_1, \dots, x_d\}$ in such a way that the $i$-th letter of $W$ is $x_k$ if the $i$-th vertex of the path lies on the boundary of the disc $D_k$. For a thrackled cycle, we consider the associated word $W$ to be a cyclic word. For a word $w$ and an integer $m$ we denote by $w^m$ the word obtained by $m$ consecutive repetitions of $w$. We have the following simple observation. \begin{lemma} \label{l:repeatsgen} Let $\mathcal{T}(G)$ be a thrackle drawing of a graph $G$ of class $T_d$. \begin{enumerate}[{\rm (a)}] \item \label{it:noaabbgen} For no two distinct $i, j = 1, \dots, d$ may the drawing contain two edges with the words $x_i^2$ and $x_j^2$. \item \label{it:noaaagen} Suppose that for some $i = 1, \dots, d$, the drawing contains a two-path with the word $x_i^3$ the first two vertices of which have degree $2$. Then the drawing is reducible. \end{enumerate} \end{lemma} \begin{proof} \eqref{it:noaabbgen} is obvious, as otherwise the thrackle condition would be violated by the corresponding two edges. \eqref{it:noaaagen} The complement of the two-path in $S^2 \setminus (\cup_{k=1}^d \overline{D_k})$ is the union of three domains, exactly one of which has the two-path on its boundary. That domain can contain no other vertices of the thrackle inside it or on its boundary, as otherwise the thrackle condition would be violated. But then by Lemma~\ref{lemma:triangle}, edge removal can be performed on the three-path which is the union of the given two-path and the edge of the graph incident to its first vertex. \end{proof} \subsection{Reidemeister moves} \label{ss:R} A \emph{Reidemeister move} can be performed on a triple of pairwise non-adjacent edges of a thrackle drawing if the open triangular domain bounded by the segments on each of the edges between the crossings with the other two contains no points of the drawing -- see Figure~\ref{figure:Reid}.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \x in {0,8} { \draw[very thick] ({\x+2*cos(pi/3 r)},{2*sin(pi/3 r)}) -- ({\x+2*cos(4*pi/3 r)},{2*sin(4*pi/3 r)}); \draw[very thick] ({\x+2*cos(2*pi/3 r)},{2*sin(2*pi/3 r)}) -- ({\x+2*cos(5*pi/3 r)},{2*sin(5*pi/3 r)}); \ifthenelse{\x = 0} {\draw[very thick] (\x-2,0) to [out=20,in=160] (\x+2,0)} {\draw[very thick] (\x-2,0) to [out=-20,in=-160] (\x+2,0)} ; } \draw[->, very thick] (3,0) -- (5,0); \end{tikzpicture} \caption{A Reidemeister move.} \label{figure:Reid} \end{figure} We say that two thrackle drawings are \emph{Reidemeister equivalent} if one can be obtained from the other by a finite sequence of Reidemeister moves. Suppose that two thrackles $\mathcal{T}_1(G)$ and $\mathcal{T}_2(G)$ can be obtained from one another by a Reidemeister move on a triple of edges $e_i, e_j, e_k$. From Lemma~\ref{lemma:triangle} it follows that if $\mathcal{T}_1(G)$ admits edge removal on a three-path not containing these three edges, then $\mathcal{T}_2(G)$ also does; moreover, after edge removals the resulting two thrackles can again be obtained from one another by the same Reidemeister move. However, adding an edge to $\mathcal{T}_1(G)$ may result in a thrackle which is not Reidemeister equivalent to any thrackle obtained from $\mathcal{T}_2(G)$ by adding an edge, as the added edge may end at a vertex inside the triangular domain $\triangle_{ijk}$ bounded by $e_i, e_j, e_k$. The same is true for edge insertion on $\mathcal{T}_1(G)$. Now suppose that $\mathcal{T}_1(G)$ and $\mathcal{T}_2(G)$ belong to a class $T_d$. The domains $\triangle_{ijk}$ in both $\mathcal{T}_1(G)$ and $\mathcal{T}_2(G)$ contain no vertices. If we additionally require that they contain no ``inessential'' discs $D_l$, those having no vertices on their boundaries, then the edge added to $\mathcal{T}_1(G)$ cannot end in $\triangle_{ijk}$ and so we can add a corresponding edge to $\mathcal{T}_2(G)$ such that the resulting two thrackles are again Reidemeister equivalent. \subsection{Forbidden configurations} \label{ss:forbidden} A graph having more edges than vertices always contains one of the following subgraphs: a theta-graph (two vertices joined by three disjoint paths), a dumbbell (two disjoint cycles with a path joining a vertex of one cycle to a vertex of another), or a figure-$8$ graph (two cycles sharing a vertex). To prove Conway's Thrackle Conjecture it is therefore sufficient to show that none of these three graphs admits a thrackle drawing. Repeatedly using the vertex-splitting operation \cite[Figure~1(a)]{MNajc} one can show that the existence of a counterexample of any of these three types implies the existence of a counterexample of the other two types.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \foreach \x in {0,8} { \draw[thick] (\x,0) arc (60:110:6); \coordinate (A1) at ({\x-6*cos(pi/3 r)},{6-6*sin(pi/3 r)}); \coordinate (A2) at ($(A1)-(0,4)$); \foreach \y in {A1,A2} {\fill (\y) circle (6.0pt);} \draw (\x-4.5,1.2) node {$\partial D_k$}; \draw[very thick] (A1)-- (A2); \draw[very thick] ($(A1)+(2,-2)$) -- (A1); \foreach \z in {1,2,3} {\draw[very thick] ($(A2)+(-0.3,\z*0.7)$) -- ($(A2)+(0.7,\z*0.7)$);} \ifthenelse{\x = 0} {\draw[very thick] ($(A1)+(-2,-2)$) -- (A1)} {\coordinate (A3) at ($(A1)+(0,-6)+({6*cos(11*pi/24 r)},{6*sin(11*pi/24 r)})$); \fill(A3) circle(6pt); \draw [very thick] (A2) -- (A3) -- ($(A1)+(-2,-2)$)} ; } \draw[->, very thick] (0.7,-1) -- (2.5,-1); \foreach \x in {16,24} { \draw[thick] (\x,0) arc (60:110:6); \coordinate (A1) at ({\x-6*cos(pi/3 r)},{6-6*sin(pi/3 r)}); \coordinate (A2) at ($(A1)-(0,4)$); \foreach \y in {A1,A2} {\fill (\y) circle (6.0pt);} \draw (\x-4.5,1.2) node {$\partial D_k$}; \draw[very thick] (A1)-- (A2); \draw[very thick] ($(A1)+(2,-2)$) -- (A1); \draw[very thick] ($(A1)+(2,-1)$) -- (A1); \foreach \z in {1,2,3} {\draw[very thick] ($(A2)+(-0.3,\z*0.7)$) -- ($(A2)+(0.7,\z*0.7)$);} \ifthenelse{\x = 24} {\draw[very thick] ($(A1)+(-2,-2)$) -- (A1)} {\coordinate (A3) at ($(A1)+(0,-6)+({6*cos(11*pi/24 r)},{6*sin(11*pi/24 r)})$); \fill(A3) circle(6pt); \draw [very thick] (A2) -- (A3) -- ($(A1)+(-2,-2)$)} ; } \draw[->, very thick] (16.7,-1) -- (18.5,-1); \end{tikzpicture} \caption{Splitting a vertex of degree $3$ and a vertex of degree $4$.} \label{figure:splitting} \end{figure} However, this may not be true for thrackle drawings of class $T_d$, as the required vertex-splitting operation on a vertex of degree $3$ may not be permitted within the class $T_d$. The problem is that in order to remain within the class $T_d$, vertex-splitting on a vertex of degree $3$ may only be performed by doubling the ``middle'' edge (as on the left in Figure~\ref{figure:splitting}), and this is too restrictive; for example, starting with a dumbbell, vertex-splitting within the class $T_d$ might only \emph{increase} the length of the dumbbell handle. So one might not be able to reduce a dumbbell to a figure-$8$ graph. Nevertheless, if we are given a thrackle drawing of class $T_d$ of a figure-$8$ graph, we can always perform the vertex-splitting operation on the vertex of degree $4$ to obtain a thrackle drawing of the same class $T_d$ of a dumbbell, as on the right in Figure~\ref{figure:splitting}. This gives the following lemma. \begin{lemma} \label{l:TCTd} To prove Conway's Thrackle Conjecture for thrackle drawings in a class $T_d$ it is sufficient to prove that no dumbbell and no theta-graph admit a thrackle drawing of class $T_d$. In both cases, the corresponding graph contains an even cycle. \end{lemma} The second assertion is clear for a theta-graph, and for a dumbbell it follows from the fact that a thrackleable graph contains no two vertex-disjoint odd cycles \cite[Lemma~2.1]{LPS97}. \section{Annular thrackles} \label{s:ann} In this section, we prove Theorem~\ref{t:ann}. We can assume that the thrackle drawing lies in the closed annulus bounded by two concentric circles on the plane, the outer circle $A$ and the inner circle $B$; the vertices lie in $A \cup B$, and the rest of the drawing, in the open annulus.
As in Section~\ref{ss:ir} we can associate to a path within a thrackle a word in the alphabet $\{a,b\}$, where the letter $a$ (respectively $b$) corresponds to a vertex lying on $A$ (respectively on $B$). To an annular thrackle drawing of an $n$-cycle there corresponds a word $W$ defined up to cyclic permutation and reversing. The following lemma and the fact that edge removal decreases the length of a cycle by $2$ imply assertion~\eqref{it:annodd}. \begin{lemma} \label{l:3cycle} If an $n$-cycle admits an irreducible annular thrackle drawing, then $n = 3$. \end{lemma} \begin{proof} By Lemma~\ref{l:repeatsgen}\eqref{it:noaabbgen} we can assume that $W$ contains no two consecutive $b$'s. If $W$ contains no letters $b$ at all, then the thrackle is outerplanar and the assertion of the lemma follows from Theorem~\ref{t:outer} \eqref{it:outRei}. Assuming that $W$ contains at least one $b$ we get that $W$ contains a sequence $aba$. Suppose $n>3$; then $n \ge 5$, as no $4$-cycle admits a thrackle drawing on the plane. Consider the next letter in $W$. Up to isotopy, there are three possible ways of adding an extra edge. As the reader may verify, two of them produce a reducible thrackle by Lemma~\ref{lemma:triangle}. The third one is shown in the middle in Figure~\ref{figure:aba}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \foreach \x in {0,10,20} { \draw[thick] (\x,0) circle (4); \draw[thick] (\x,0) circle (1); \coordinate (B) at (\x,1); \coordinate (A2) at ({\x+4*cos((pi/6) r)},{4*sin((pi/6) r)}); \coordinate (A1) at ({\x+4*cos((5*pi/6) r)},{4*sin((5*pi/6) r)}) circle (5.45pt); \fill (A1) circle (6pt); \fill (A2) circle (6pt); \fill (B) circle (6pt); \draw [very thick] (A1)--(B)--(A2); \ifthenelse{\x = 0}{} { \coordinate (A3) at ({\x+4*cos((2*pi/3) r)},{4*sin((2*pi/3) r)}); \fill (A3) circle (6pt); \draw [very thick] (A2) to[out=-90,in=0] ({\x-1},-2) to [out=180,in=-90] (A3); \ifthenelse{\x = 10}{} { \coordinate (A4) at ({\x+4*cos((pi/3) r)},{4*sin((pi/3) r)}); \fill (A4) circle (6pt); \draw [very thick] (A4) to[out=-80,in=0] (\x-0.5,-1.5) to[out=180,in=-90] (A3); } } ; } \draw[->, very thick] (4.5,0) -- (5.5,0); \draw[->, very thick] (14.5,0) -- (15.5,0); \end{tikzpicture} \caption{Adding the third and the fourth edge.} \label{figure:aba} \end{figure} But then there is only one way to add the next edge, as on the right in Figure~\ref{figure:aba} and the resulting thrackle drawing is reducible. \end{proof} By Lemma~\ref{l:TCTd}, if in the class of annular thrackles there exists a counterexample to Conway's Thrackle Conjecture, then there exists such a counterexample whose underlying graph contains an even cycle. So assertion~\eqref{it:annC} follows from assertion~\eqref{it:annodd}. We now prove assertion~\eqref{it:annout}. Suppose a cycle $c$ of an odd length $n$ admits an annular thrackle drawing. We can assume that the corresponding word $W$ contains at least one $b$ and does not contain $b^2$. \begin{lemma} \label{l:alt} Up to cyclic permutation, $W=a^{2p}(ba)^rb$, for some $p \ge 1, \, r \ge 0$. \end{lemma} \begin{proof} As $n$ is odd, $W$ contains a subword $a^2$. Let $a^k, \; k \ge 2$, be a maximal by inclusion string of consecutive $a$'s. If $k=n-1$, we are done. Otherwise, up to cyclic permutation, $W=a^kb w b$ for some word $w$. Consider the edge $e$ defined by the last pair $aa$ in $a^k$. Let $\gamma$ be the arc of $A$ joining the endpoints of $e$ such that the domain bounded by $e \cup \gamma$ does not contain $B$. 
Every edge of the thrackle not sharing a common vertex with $e$ crosses it, so every second vertex counting from the last $a$ in $a^k$ lies in the interior of $\gamma$. It follows that $W=a^kbay_1ay_2a \dots y_q a b$, where $y_i \in \{a, b\}$, and so $k$ is necessarily even. By the same reasoning, any maximal sequence of more than one consecutive $a$'s in $W$ has even length. But then $y_i=b$, for all $i=1, \dots, q$, as otherwise $W$ would contain a maximal sequence of consecutive $a$'s of an odd length greater than one. \end{proof} To prove assertion~\eqref{it:annout} we show that any annular thrackled cycle is alternating; the claim then follows from the fact that alternating thrackles are outerplanar, as was proved in \cite[Theorem~2]{GY2012}. Recall that a thrackled cycle is called \emph{alternating} if for every edge $e$ and every two-path $fg$ vertex-disjoint from $e$, the crossings of $e$ by $f$ and $g$ have opposite orientations. Suppose $c$ is a cycle of the shortest possible length which admits a non-alternating annular thrackle drawing $\mathcal{T}(c)$; the length of $c$ must be at least $7$. An easy inspection shows that any edge vertex-disjoint from a two-path $aba$ (or $bab$) crosses its edges with opposite orientations. The same is true for a two-path $a^3$. It remains to show that any edge vertex-disjoint from a two-path $aab$ also crosses the edges of that two-path with opposite orientations. Up to isotopy, the only drawing for which this is not true is the one shown in Figure~\ref{figure:annaab}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \draw[thick] (0,0) circle (4); \draw[thick] (0,0) circle (1); \node[coordinate] (A1) at ({4*cos((7*pi/9) r)},{4*sin((7*pi/9) r)}) [label=90:$a_1$] {}; \node[coordinate] (A2) at ({4*cos((11*pi/9) r)},{4*sin((11*pi/9) r)}) [label=-90:$a_2$] {}; \node[coordinate] (B1) at (-1,0) [label=0:$b_1$] {}; \node[coordinate] (Bp) at (0,1) [label=-90:$b'$] {}; \node[coordinate] (Ap) at ({-sqrt(15)},1) [label=180:$a'$] {}; \foreach \x in {A1,A2,Ap,Bp,B1} {\fill (\x) circle (6pt);} \draw [very thick] (A1)--(A2); \draw [very thick] (Ap)--(Bp); \draw [very thick] (A2) to [out=0,in=-90] (2.5,0) to [out=90,in=0] (0,2) to [out=180,in=90] (B1); \end{tikzpicture} \caption{A non-alternating crossing.} \label{figure:annaab} \end{figure} Note that the edge which violates the alternating condition necessarily joins an $a$-vertex and a $b$-vertex. We claim that such a drawing cannot be a part of $\mathcal{T}(c)$. To see that, we consider the possible drawings of the four-path in $c$ which extends the path $a_1a_2b_1$. The vertex following $b_1$ must be an $a$-vertex (call it $a_3$); there are two possible cases: $a_3 = a'$ and $a_3 \ne a'$. In the first case, up to isotopy, we get the drawing on the left in Figure~\ref{figure:annaaba1}, and then there is only one possible way to attach an edge at $a_1$, as shown on the right in Figure~\ref{figure:annaaba1}. But then performing edge removal on $a_1a_2$ we get a shorter non-alternating annular thrackled cycle, a contradiction.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \foreach \y in {0,12} { \draw[thick] (\y,0) circle (4); \draw[thick] (\y,0) circle (1); \node[coordinate] (A1) at ({\y+4*cos((7*pi/9) r)},{4*sin((7*pi/9) r)}) [label=90:$a_1$] {}; \node[coordinate] (A2) at ({\y+4*cos((11*pi/9) r)},{4*sin((11*pi/9) r)}) [label=-90:$a_2$] {}; \node[coordinate] (B1) at (\y-1,0) [label=0:$b_1$] {}; \node[coordinate] (Bp) at (\y,1) [label=-90:$b'$] {}; \node[coordinate] (Ap) at ({\y-sqrt(15)},1) [label=180:$a'$] {}; \foreach \x in {A1,A2,Ap,Bp,B1} {\fill (\x) circle (6pt);} \draw [very thick] (A1)--(A2); \draw [very thick] (Ap)--(Bp); \draw [very thick] (A2) to [out=0,in=-90] (\y+2.5,0) to [out=90,in=0] (\y,2) to [out=180,in=90] (B1); \draw [very thick] (Ap)--(B1); \ifthenelse{\y = 12} {\coordinate (AA) at (\y-2,{-sqrt(12)}); \fill(AA) circle(6pt); \draw [very thick] (A1)--(AA); \node[coordinate] (Q) at (\y-2.4,-2.6) [label=45:$Q$] {}; } ; } \draw[->, very thick] (5,0) -- (7,0); \end{tikzpicture} \caption{Path $a_1a_2b_1a_3, \; a_3 = a'$.} \label{figure:annaaba1} \end{figure} Now suppose $a_3 \ne a'$. We have two cases for adding the edge $b_1a_3$, and then by Lemma~\ref{l:alt}, the letter after $a_3$ must be a $b$. In the first case, up to isotopy and a Reidemeister move, we get the drawing on the left in Figure~\ref{figure:annaaba2}, and then we can attach the edge joining $a_3$ to a $b$-vertex uniquely, up to isotopy and a Reidemeister move, as on the right in Figure~\ref{figure:annaaba2}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \foreach \y in {0,12} { \draw[thick] (\y,0) circle (4); \draw[thick] (\y,0) circle (1); \node[coordinate] (A1) at ({\y+4*cos((7*pi/9) r)},{4*sin((7*pi/9) r)}) [label=90:$a_1$] {}; \node[coordinate] (A2) at ({\y+4*cos((11*pi/9) r)},{4*sin((11*pi/9) r)}) [label=-90:$a_2$] {}; \node[coordinate] (B1) at (\y-1,0) [label=0:$b_1$] {}; \node[coordinate] (Bp) at (\y,1) [label=-90:$b'$] {}; \node[coordinate] (Ap) at ({\y-sqrt(15)},1) [label=180:$a'$] {}; \node[coordinate] (A3) at ({\y-sqrt(12)},2) [label=180:$a_3$] {}; \foreach \x in {A1,A2,Ap,Bp,B1,A3} {\fill (\x) circle (6pt);} \draw [very thick] (A1)--(A2); \draw [very thick] (Ap)--(Bp); \draw [very thick] (A2) to [out=0,in=-90] (\y+2.5,0) to [out=90,in=0] (\y,2) to [out=180,in=90] (B1); \draw [very thick] (A3)--(B1); \ifthenelse{\y = 12} {\coordinate (B2) at ({\y-sqrt(2)/2},{sqrt(2)/2}); \fill(B2) circle(6pt); \draw [very thick] (A3) to [out=0,in=90] (B2); \node[coordinate] (Q) at (\y-0.9,1.5) [label=90:$Q$] {}; } ; } \draw[->, very thick] (5,0) -- (7,0); \end{tikzpicture} \caption{Path $a_1a_2b_1a_3, \; a_3 \ne a'$, case 1.} \label{figure:annaaba2} \end{figure} Again, performing edge removal on $b_1a_3$ we get a shorter non-alternating annular thrackled cycle. The second possibility of attaching the edge $b_1a_3, \; a_3 \ne a'$, to the drawing in Figure~\ref{figure:annaab} is the one shown on the left in Figure~\ref{figure:annaaba3}, up to isotopy. Then the edge joining $a_3$ to the next $b$-vertex can be also added uniquely, up to isotopy, as on the right in Figure~\ref{figure:annaaba3}, and yet again, edge removal on $b_1a_3$ results in a shorter non-alternating annular thrackled cycle. This completes the proof of Theorem~\ref{t:ann}. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5,>=triangle 45] \foreach \y in {0,12} { \draw[thick] (\y,0) circle (4); \draw[thick] (\y,0) circle (1); \node[coordinate] (A1) at ({\y+4*cos((7*pi/9) r)},{4*sin((7*pi/9) r)}) [label=90:$a_1$] {}; \node[coordinate] (A2) at ({\y+4*cos((11*pi/9) r)},{4*sin((11*pi/9) r)}) [label=-90:$a_2$] {}; \node[coordinate] (B1) at (\y-1,0) [label=0:$b_1$] {}; \node[coordinate] (Bp) at (\y,1) [label=-90:$b'$] {}; \node[coordinate] (Ap) at ({\y-sqrt(15)},1) [label=180:$a'$] {}; \node[coordinate] (A3) at ({\y-sqrt(15)},-1) [label=180:$a_3$] {}; \foreach \x in {A1,A2,Ap,Bp,B1,A3} {\fill (\x) circle (6pt);} \draw [very thick] (A1)--(A2); \draw [very thick] (Ap)--(Bp); \draw [very thick] (A2) to [out=0,in=-90] (\y+2.5,0) to [out=90,in=0] (\y,2) to [out=180,in=90] (B1); \draw [very thick] (A3) to [out=0,in=-90] (\y+2,0) to [out=90,in=0] (\y,1.5) to [out=180,in=90] (B1); \ifthenelse{\y = 12} {\coordinate (B2) at (\y,-1); \fill(B2) circle(6pt); \draw [very thick] (A3) to [out=-20,in=180] (\y,-2) to [out=0,in=-90] (\y+3,0) to [out=90,in=0] (\y,3) to [out=180,in=90] (\y-2,0) to [out=-90,in=180] (B2); \node[coordinate] (Q) at (\y+1.8,-2.9) [label=90:$Q$] {}; } ; } \draw[->, very thick] (5,0) -- (7,0); \end{tikzpicture} \caption{Path $a_1a_2b_1a_3, \; a_3 \ne a'$, case 2.} \label{figure:annaaba3} \end{figure} \section{Pants thrackles} \label{s:pants} In this section, we prove Theorem~\ref{t:pants}. We represent the pair of pants domain $P$ whose closure contains the drawing as the interior of an ellipse, with two disjoint closed discs removed. To a path in a pants thrackle drawing we associate a word in the alphabet $\{a, b, c\}$, where $a$ corresponds to the vertices on the ellipse, and $b$ and $c$, to the vertices on the circles bounding the discs (e.g., as in Figure~\ref{figure:cabac}). We start with the following proposition which implies assertion~\eqref{it:pantsodd} of Theorem~\ref{t:pants} and will also be used in the proof of assertion~\eqref{it:pantseven}. { \begin{proposition*} If a cycle $C$ admits an irreducible pants thrackle drawing, then $C$ is either a three-cycle or a six-cycle, and in the latter case, the drawing is Reidemeister equivalent to the one in Figure~\ref{figure:sixcycle}. \end{proposition*} \begin{proof} Let $W$ be the (cyclic) word corresponding to an irreducible pants thrackle drawing of a cycle $C$. The following lemma can be compared to Lemma~\ref{l:alt}. { \begin{lemma} \label{l:repeats} If $W$ contains $a^2$, then one of the two domains of the complement of the corresponding edge in $P$ is a disc, the cycle $C$ is odd, and $W=aay_1ay_2 \dots y_{m-1}ay_m$, where $y_i \in \{b, c\}$ for $i=1, \dots, m$. \end{lemma} \begin{proof} Suppose no domain of the complement of an edge $aa$ is a disc. By Lemma~\ref{l:repeatsgen}\eqref{it:noaaagen}, neither the letter which precedes $a^2$ in $W$, nor the next letter after $a^2$ is $a$, and for the corresponding edges to cross, those two letters must be the same, say $b$. If the corresponding three-path $baab$ is irreducible, it has to be isotopic to the path on the left in Figure~\ref{figure:aanonzero}. But then there is a unique, up to isotopy, way to add to the path the starting segment of the next edge, and it produces a reducible three-path, as on the right in Figure~\ref{figure:aanonzero}. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at (\y,3) [label=-45:$a$] {}; \node[coordinate] (A2) at (\y,-3) [label=45:$a$] {}; \node[coordinate] (B1) at (\y-2,-1) [label=90:$b$] {}; \node[coordinate] (B2) at (\y-1,0) [label=180:$b$] {}; \foreach \x in {A1,A2,B1,B2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to (A2); \draw [very thick] (A2) to (B1); \draw [very thick] (A1) to[out=-135,in=90] (\y-4,0) to [out=-90,in=180] (\y-2,-2) to [out=0,in=-45] (B2); } \draw [very thick] (B1) to (13.5,-1); \draw [very thick,dashed] (13.5,-1)--(14.5,-1); \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{Adding the fourth edge produces a reducible path.} \label{figure:aanonzero} \end{figure} It follows that if $W$ contains $a^2$, then one of the two domains of the complement of the corresponding edge in $P$ is a disc. But then by the thrackle condition, every second vertex counting from the second $a$ in $aa$ is again $a$, so $W=aay_1ay_2 \dots y_{m-1}ay_m$, for some $y_i \in \{a, b, c\}$. In particular, $C$ is an odd cycle and furthermore, none of the $y_i$ can be equal to $a$ by Lemma~\ref{l:repeatsgen}\eqref{it:noaaagen}. \end{proof} } \begin{lemma} \label{l:caba} Suppose the word $W$ contains no subwords $bb$ and $cc$. Then it contains no subwords $caba$ or $baca$. \end{lemma} \begin{proof} Arguing by contradiction (and renaming the letters if necessary) suppose that $W$ contains the subword $caba$. The only irreducible three-path corresponding to that subword, up to isotopy, is shown on the left in Figure~\ref{figure:cabac}. Suppose that the next letter in $W$ is not $a$. Then the only irreducible four-path extending $caba$, up to isotopy, is the one shown on the right in Figure~\ref{figure:cabac}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at (\y,3) [label=-90:$a$] {}; \node[coordinate] (A2) at ({\y+5*cos((pi/3) r)},{3*sin((pi/3) r)}) [label=-90:$a$] {}; \node[coordinate] (B1) at (\y-2,1) [label=-90:$b$] {}; \node[coordinate] (C1) at (\y+2,1) [label=-90:$c$] {}; \foreach \x in {A1,A2,B1,C1} {\fill (\x) circle (5.0pt);} \draw [very thick] (C1) to (A1) to (B1); \draw [very thick] (B1) to [out=120,in=90] (\y-3.3,0) to [out=-90,in=180] (\y-2.2,-1.5) to [out=0,in=-150] (A2); } \node[coordinate] (C2) at (15,-1) [label=90:$c$] {}; \fill (C2) circle (5.0pt); \draw [very thick] (A2) to [out=-160,in=90] (9,0) to [out=-90,in=-90] (C2); \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{The irreducible path $caba$ and the next edge ending not in $a$.} \label{figure:cabac} \end{figure} If $C$ is of length five, then $W=cabac$ which contradicts the fact that the (cyclic) word $W$ does not contain a subword $cc$. Otherwise, there are only three possible ways, up to isotopy and a Reidemeister move, to add another edge starting at the last added vertex $c$ in such a way that the resulting drawing is a thrackled path. But one of them results in a reducible drawing, and the other two end in $c$ contradicting the fact that $W$ does not contain a subword $cc$. It follows that the letter following $caba$ in $W$ must be an $a$, so we get a subword $cabaa$. 
If the length of the cycle $C$ is greater than $5$, then by Lemma~\ref{l:repeats}, the letter which precedes $c$ must be $a$, so $W$ contains the subword $acabaa$. But then the above argument applied to the subword $acab$ (if we reverse the direction of $C$ and swap $b$ and $c$) implies that the letter which precedes the starting $a$ is another $a$, so that $W$ contains the subword $aacabaa$, giving a contradiction with Lemma~\ref{l:repeats}. If $C$ is of length five, then $W=cabaa$ and the resulting drawing is reducible by Lemma~\ref{lemma:triangle}, as there is just a single $b$ in $W$, and so the triangular domain corresponding to the three-path $caba$ on the left in Figure~\ref{figure:cabac} contains no other vertices of the thrackle. \end{proof} Now if $W$ contains the subword $aa$, then by Lemma~\ref{l:repeats} and Lemma~\ref{l:caba}, the word $W$ may contain only one of the letters $b$ or $c$. Then the drawing is annular, and hence by Lemma~\ref{l:3cycle} is reducible unless $C$ is a three-cycle. Suppose $W$ contains no letter repetitions. Then Lemma~\ref{l:caba} applies to any subword $xyzy$ such that $\{x, y, z\} = \{a, b, c\}$. Furthermore, up to renaming the letters we can assume that $W$ starts with $ab$. If $W$ contains no subword $abc$, then $W=(ab)^m$, and so the drawing is annular. We can therefore assume that $W$ contains a subword $abc$. Then the following letter cannot be any of $b$ or $c$, so it must be an $a$. Repeating this argument we obtain that $W=(abc)^m$. We now modify the word $W$ by attaching to every letter a subscript plus (respectively minus) if the tangent vector to the drawing in the direction of the cycle $C$ makes a positive (respectively negative) turn at the corresponding vertex; in other words, the subscript is a plus (respectively a minus) if the path turns left (respectively right) at the vertex. We will occasionally omit the subscript when it is unknown or unimportant. Note that if the length of $C$ is greater than $3$, then no two consecutive subscripts in the word $W=(abc)^m$ can be the same. Indeed, assume that $W$ contains a subword $ab_+c_+a$. Then the corresponding irreducible three-path is unique up to isotopy, as shown on the left in Figure~\ref{figure:ab+c+a}, and the only possible way to attach an edge $ab$ results in a reducible drawing, as on the right in Figure~\ref{figure:ab+c+a}. By reflection, a similar comment applies to subwords of the form $ab_-c_-a$. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at (\y-2,{3*sqrt(21)/5}) [label=-45:$a$] {}; \node[coordinate] (A2) at ({\y+5*cos((pi/3) r)},{3*sin((pi/3) r)}) [label=-90:$a$] {}; \node[coordinate] (B1) at (\y-2,1) [label=-90:$b$] {}; \node[coordinate] (C1) at (\y+2,1) [label=-90:$c$] {}; \foreach \x in {A1,A2,B1,C1} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to (B1) to (C1); \draw [very thick] (C1) to [out=60,in=90] (\y+3.5,0) to [out=-90,in=-30] (\y-3.3,-1) to [out=150,in=-180] (A2); } \node[coordinate] (B2) at (12,0) [label=180:$b$] {}; \fill (B2) circle (5.0pt); \draw [very thick] (A2) to [out=170,in=150] (9,-1.2) to [out=-30,in=-90] (17,0) to [out=90,in=90] (13,1) to [out=-90,in=0] (B2); \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{The path $ab_+c_+a$ extends to a reducible drawing.} \label{figure:ab+c+a} \end{figure} It follows that the subscripts in $W$ alternate and, in particular, the length of $C$ is divisible by $6$. There are two drawings of the three-path $ab_+c_-a$, both irreducible, as shown in Figure~\ref{figure:ab+c-a1}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \draw[thick] (0,0) ellipse (5 and 3); \draw[thick] (-2,0) circle (1); \draw[thick] (2,0) circle (1); \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) [label=-30:$a$] {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) [label=180:$b$] {}; \node[coordinate] (C1) at ({\y + 2+cos((pi/9) r)},{sin((pi/9) r)}) [label=180:$c$] {}; \draw [very thick] (A1) to[out=30,in=150] (\y -1.5,2) to[out=-30,in=30] (B1); \draw [very thick] (B1) to[out=-30,in=-150] (\y + 2.5,-1.5) to[out=30,in=-30] (C1); \ifthenelse{\y = 0} {\node[coordinate] (A2) at ({\y + 5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}) [label=0:$a$] {}; \draw [very thick] (A2) to[out=60,in=180] (\y + 1,2.5) to[out=0,in=60] (C1)} {\node[coordinate] (A2) at (\y,3) [label=-90:$a$] {}; \draw [very thick] (C1) to [out=0,in=45] (\y+3,-1.8) to [out=-135,in=-90] (\y-3.5,0) to [out=90,in=-160] (A2)} ; \foreach \x in {A1,A2,B1,C1} {\fill (\x) circle (5.0pt);} } \end{tikzpicture} \caption{Two paths $ab_+c_-a$.} \label{figure:ab+c-a1} \end{figure} They differ by the orientation of the crossing of the edges $ab$ and $ca$. If we change the direction on $C$ and swap the letters $b$ and $c$, the subword $ab_+c_-a$ does not change. By reflection, a similar comment applies to subwords of the form $ab_-c_+a$. Hence the whole word $W$ is unchanged, with all the subscripts, but the orientations of the crossings of the edges $ab$ and $ca$ are reversed. We therefore lose no generality by assuming that the subword $ab_+c_-a$ is represented by the three-path on the left in Figure~\ref{figure:ab+c-a1}. We can then uniquely, up to isotopy, add an edge $ca$ to the starting vertex $a$, as on the left in Figure~\ref{figure:ca-b+c-a}, which produces the four-path corresponding to the subword $ca_-b_+c_-a$. Furthermore, up to isotopy and a Reidemeister move, we can uniquely add an edge $bc$ to the starting vertex $c$, as on the right in Figure~\ref{figure:ca-b+c-a}. We get the five-path corresponding to the subword $bc_+a_-b_+c_-a$. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \draw[thick] (0,0) ellipse (5 and 3); \draw[thick] (-2,0) circle (1); \draw[thick] (2,0) circle (1); \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) [label={[xshift=0.3cm, yshift=-0.25cm]:$a$}] {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) [label=180:$b$] {}; \node[coordinate] (C1) at ({\y + 2+cos((pi/9) r)},{sin((pi/9) r)}) [label=180:$c$] {}; \node[coordinate] (A2) at ({\y + 5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}) [label={[xshift=0.3cm, yshift=-0.2cm]:$a$}] {}; \node[coordinate] (C2) at ({\y+2+cos((-pi/9) r)},{sin((-pi/9) r)}) [label=180:$c$] {}; \draw [very thick] (A1) to[out=30,in=150] (\y -1.5,2) to[out=-30,in=30] (B1); \draw [very thick] (B1) to[out=-30,in=-150] (\y + 2.5,-1.5) to[out=30,in=-30] (C1); \draw [very thick] (A2) to[out=60,in=180] (\y + 1,2.5) to[out=0,in=60] (C1); \draw [very thick] (A1) to[out=-60,in=180] (\y+1,-2.5) to[out=0,in=-60] (C2); \foreach \x in {A1,A2,B1,C1,C2} {\fill (\x) circle (5.0pt);} } \node[coordinate] (B2) at ({13 -2+cos((pi/9) r)},{sin((pi/9) r)}) [label=180:$b$] {}; \draw [very thick] (B2) to[out=30,in=150] (13+2.5,1.5) to[out=-30,in=30] (C2); \fill (B2) circle (5.0pt); \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{The four-path $ca_-b_+c_-a$ and the five-path $bc_+a_-b_+c_-a$.} \label{figure:ca-b+c-a} \end{figure} One possibility for completing the cycle would be to now join the degree-one vertices $a$ and $b$ of the five-path by an edge. This can be done uniquely up to isotopy and produces an irreducible pants thrackle drawing of a six-cycle corresponding to the word $W=b_-c_+a_-b_+c_-a_+$, as in Figure~\ref{figure:sixcycle}. Any other such drawing is equivalent to that up to isotopy and Reidemeister moves (which were possible at the intermediate steps of our construction). Otherwise, we can extend the five-path to a six-path corresponding to the subword $ab_-c_+a_-b_+c_-a$ by adding an edge $ab$ at the start. The resulting six-path is equivalent, up to isotopy and Reidemeister moves, to the one on the left in Figure~\ref{figure:abcabca}. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \node[coordinate] (A1) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) [label={[xshift=0.3cm, yshift=-0.25cm]:$a$}] {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) [label=180:$b$] {}; \node[coordinate] (C1) at ({\y + 2+cos((pi/9) r)},{sin((pi/9) r)}) [label=180:$c$] {}; \node[coordinate] (A2) at ({\y + 5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}) [label={[xshift=0.3cm, yshift=-0.2cm]:$a$}] {}; \node[coordinate] (C2) at ({\y+2+cos((-pi/9) r)},{sin((-pi/9) r)}) [label=180:$c$] {}; \node[coordinate] (B2) at ({\y -2+cos((pi/9) r)},{sin((pi/9) r)}) [label=180:$b$] {}; \node[coordinate] (A3) at ({\y - 5},0) [label={[xshift=0.3cm, yshift=-0.25cm]:$a$}] {}; \draw [very thick] (B1) to[out=-30,in=-150] (\y + 2.5,-1.5) to[out=30,in=-30] (C1); \draw [very thick] (A2) to[out=60,in=180] (\y + 1,2.5) to[out=0,in=60] (C1); \draw [very thick] (B2) to[out=30,in=150] (\y+2.5,1.5) to[out=-30,in=30] (C2); \draw [very thick] (A1) to[out=30,in=150] (\y -1.5,2) to[out=-30,in=30] (B1); \draw [very thick] (A1) to[out=-60,in=180] (\y+1,-2.5) to[out=0,in=-60] (C2); \draw [very thick] (A3) to[out=-30,in=-150] (\y-1.5,-2) to[out=30,in=-30] (B2); \foreach \x in {A1,A2,A3,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} } \node[coordinate] (X) at (13,-1.5) {}; \draw [thick] ($ (X) + (-0.02,-0.15) $) circle (4.0pt); \draw [very thick] (A3) to[out=45,in=180] (11,1.5) to[out=0,in=75] (X); \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{The six-path $ab_-c_+a_-b_+c_-a$ cannot be extended to a seven-path $ca_+b_-c_+a_-b_+c_-a$.} \label{figure:abcabca} \end{figure} But then no edge $ca$ (with the correct orientation at $a$) can be added at the start of the six-path: up to isotopy and Reidemeister moves, the only edge we can add does not start at $c$, as on the right in Figure~\ref{figure:abcabca}. This completes the proof of the Proposition. \end{proof} } By the Proposition, if an even cycle of length greater than $6$ has a pants thrackle drawing, then it must be reducible. Hence, to prove assertion~\eqref{it:pantseven} of Theorem~\ref{t:pants}, it suffices to show that the pants thrackle drawing of the six-cycle in Figure~\ref{figure:sixcycle} (or one Reidemeister equivalent to it) admits no edge insertion such that the resulting thrackle drawing of the eight-cycle is again a pants thrackle drawing. One possible way is to consider all edge insertions following the procedure in Section~\ref{ss:ir}. But as the resulting thrackles are sufficiently small, all these cases can be treated by computer. Using the algorithm given at the end of Section~3 of \cite{MNajc}, we found that up to isotopy and Reidemeister moves, there exist exactly three thrackled eight-cycles; they are shown in Figure~\ref{figure:alleight}. Each of them is obtained by edge insertion in a thrackled six-cycle and belongs to class $T_4$, but none of them is a pants thrackle. 
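The purely combinatorial part of such case analysis, namely the subword conditions on the cyclic word $W$ established above, is also easy to check mechanically. The following is a minimal sketch in Python; the helper names are ours, and this is not the enumeration algorithm of \cite{MNajc}, which works with the drawings themselves:
\begin{verbatim}
def cyclic_subwords(word, k):
    # All length-k subwords of a cyclic word.
    n = len(word)
    return {tuple(word[(i + j) % n] for j in range(k)) for i in range(n)}

def admissible(word):
    # Encode two conditions used above: no repeated 'b' or 'c', and no
    # subword of the pattern xyzy with {x, y, z} = {a, b, c}.
    for x, y in cyclic_subwords(word, 2):
        if x == y and x in ('b', 'c'):
            return False        # forbidden subword 'bb' or 'cc'
    for s in cyclic_subwords(word, 4):
        if s[1] == s[3] and len(set(s)) == 3:
            return False        # forbidden pattern 'xyzy', e.g. 'caba'
    return True

assert admissible('abcabc')      # W = (abc)^m passes the tests
assert not admissible('cabacb')  # contains the forbidden subword 'caba'
\end{verbatim}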
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,8,16} { \begin{scope}[rotate=90] \coordinate (A1) at ({5*cos((10*pi/9) r)},{\y+3*sin((10*pi/9) r)}); \coordinate (A2) at ({5*cos((8*pi/9) r)},{\y + 3*sin((8*pi/9) r)}); \coordinate (B1) at ({-2+cos((-pi/5) r)},{\y + sin((-pi/5) r)}); \coordinate (B2) at ({-2+cos((pi/5) r)},{\y + sin((pi/5) r)}); \coordinate (C2) at ({1.3+cos((pi/5) r)},{\y+sin((pi/5) r)}); \coordinate (C1) at ({1.3+cos((-pi/5) r)},{\y+sin((-pi/5) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] ({-1.5},\y-2) to[out=30,in=-30] (B2); \draw [very thick] (B1) to[out=-30,in=-150] ({2.5},\y-1.5) to[out=30,in=-30] (C2); \draw [very thick] (B2) to[out=30,in=150] ({2.5},\y+1.5) to[out=-30,in=30] (C1); \draw [very thick] (A1) to[out=60,in=180] ({1},\y+2.5) to[out=0,in=60] (C2); \draw [very thick] (A2) to[out=30,in=150] ({-1.5},\y+2) to[out=-30,in=30] (B1); \ifthenelse{\y = 16} {\node[coordinate] (A21) at ($(A2)+(0.7,0.1)$) {}; \fill (A21) circle (5.0pt); \coordinate (C12) at ({1.3+cos((-pi/5) r)},\y); \fill (C12) circle (5.0pt); \draw [very thick] (A2) to [out=-60,in=210] (2,\y-2.2) to [out=25,in=-60] (C12); \draw [very thick] (A21) to [out=105,in=120] ($(A2)+(-0.4,-0.2)$) to [out=-60,in=180] (0.8,\y-3) to [out=0,in=-90] (C1); \draw [very thick] (A21) to [out=105,in=120] ($(A2)+(-0.6,-0.2)$) to [out=-60,in=180] (0.8,\y-3.2) to [out=0,in=-90] ($(C1)+(0.6,-0.4)$) to [out=90,in=30] (C12); } { \ifthenelse{\y = 8} {\node[coordinate] (A21) at ($(A2)+(0.2,0.5)$) {}; \fill (A21) circle (5.0pt); \coordinate (C12) at ($(C1)+ (0.5,0)$); \fill (C12) circle (5.0pt); \draw [very thick] (A21) to [out=-60,in=210] (2.2,\y-2.2) to [out=25,in=0] (C1); \draw [very thick] (A2) to [out=-60,in=200] (0.6,\y-1.5) to [out=20,in=180] ($0.5*(C1)+0.5*(C2)$) to [out=0,in=-90] (C12); \draw [very thick] (A21) to [out=-40,in=200] (0.6,\y-1.1) to [out=20,in=180] ($0.5*(C1)+0.5*(C2)+(0,0.3)$) to [out=0,in=-100] (C12); } {\node[coordinate] (A21) at ($(A2)+(0.2,0.5)$) {}; \fill (A21) circle (5.0pt); \coordinate (C12) at (0.5,2); \fill (C12) circle (5.0pt); \draw [very thick] (A21) to [out=-60,in=210] (2.2,\y-2.2) to [out=25,in=0] (C1); \draw [very thick] (A2) to [out=-60,in=200] (0.6,\y-1.5) to [out=20,in=-40] (C12); \draw [very thick] (A21) to [out=-40,in=200] (0.6,\y-1.1) to [out=20,in=-60] (C12); } } ; \end{scope} } \end{tikzpicture} \caption{All thrackled eight-cycles up to Reidemeister equivalency.} \label{figure:alleight} \end{figure} This proves assertion~\eqref{it:pantseven} of Theorem~\ref{t:pants}. It remains to prove assertion~\eqref{it:pantsC}. By Lemma~\ref{l:TCTd}, it suffices to show that if $G$ is either a theta-graph or a dumbbell, then it admits no pants thrackle drawing. We also know from Lemma~\ref{l:TCTd} that in both cases, $G$ contains an even cycle which by assertion~\eqref{it:pantseven} must be a six-cycle whose thrackle drawing is Reidemeister equivalent to the one in Figure~\ref{figure:sixcycle}. The proof goes as follows: we explicitly construct pants thrackle drawings of a six-cycle with certain small trees attached to one of its vertices and first show that in a pants thrackle drawing of a three-path attached to a six-cycle, the drawing of the three-path is reducible. 
Repeatedly performing edge removals, we get a pants thrackle drawing either of a theta-graph obtained from a six-cycle by joining two of its vertices by a path of length at most $2$, or of a dumbbell consisting of a six-cycle and some other cycle joined by a path of length at most $2$. The resulting theta-graphs are very small, and from \cite{FP2011, MNajc} we know that they admit no thrackle drawing at all, and in particular, no pants thrackle drawing (the latter fact will also be confirmed in the course of the proof). Every resulting dumbbell contains one of two subgraphs obtained from the six-cycle by attaching a small tree, as in Figure~\ref{figure:sixtree}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \node[coordinate] (A1) at (\y + 2,0) {}; \node[coordinate] (A2) at ({\y + 2*cos((pi/3) r)},{2*sin((pi/3) r)}) {}; \node[coordinate] (A3) at ({\y + 2*cos((2*pi/3) r)},{2*sin((2*pi/3) r)}) {}; \node[coordinate] (A4) at ({\y - 2},0) {}; \node[coordinate] (A5) at ({\y + 2*cos((-2*pi/3) r)},{2*sin((-2*pi/3) r)}) {}; \node[coordinate] (A6) at ({\y + 2*cos((-pi/3) r)},{2*sin((-pi/3) r)}) {}; \foreach \x in {A1,A2,A3,A4,A5,A6} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) -- (A2) -- (A3) -- (A4) -- (A5) -- (A6) -- (A1); \ifthenelse{\y = 0} {\node[coordinate] (B1) at ({\y + 4},0) [label=0:$v$] {}; \fill (B1) circle (5.0pt); \node[coordinate] (C1) at ({\y + 4 + 2*cos((pi/3) r)},{2*sin((pi/3) r)}) [label=0:$v_1$] {}; \fill (C1) circle (5.0pt); \node[coordinate] (C2) at ({\y + 4 + 2*cos((-pi/3) r)},{2*sin((-pi/3) r)}) [label=0:$v_2$] {}; \fill (C2) circle (5.0pt); \draw [very thick] (C1) -- (B1) -- (C2); \draw [very thick] (A1) -- (B1); } {\node[coordinate] (B1) at ({\y + 4},0) {}; \fill (B1) circle (5.0pt); \node[coordinate] (B2) at ({\y + 6},0) [label=0:$v$] {}; \fill (B2) circle (5.0pt); \node[coordinate] (C1) at ({\y + 6 + 2*cos((pi/3) r)},{2*sin((pi/3) r)}) [label=0:$v_1$] {}; \fill (C1) circle (5.0pt); \node[coordinate] (C2) at ({\y + 6 + 2*cos((-pi/3) r)},{2*sin((-pi/3) r)}) [label=0:$v_2$] {}; \fill (C2) circle (5.0pt); \draw [very thick] (C1) -- (B2) -- (C2); \draw [very thick] (A1) -- (B1) -- (B2); } } \end{tikzpicture} \caption{The six-cycle with a tree attached.} \label{figure:sixtree} \end{figure} We show that in a pants thrackle drawing of each of these two subgraphs, it is not possible to attach another edge to at least one of the two vertices $v_1, v_2$ so that the resulting drawing is a pants thrackle drawing. We start with the pants thrackle drawing of the six-cycle and attach a path to one of its vertices. By cyclic symmetry, we can choose any vertex to attach a path. Moreover, from the arguments in Section~\ref{ss:R} it follows that Reidemeister moves on the original six-cycle and on the intermediate steps of adding edges will result in a Reidemeister equivalent drawing in the end. So we can attach a path edge by edge, choosing one of the Reidemeister equivalent drawings arbitrarily at each step. Up to isotopy and Reidemeister moves, there are two ways to attach an edge to a vertex of the drawing of the six-cycle, as in Figure~\ref{figure:sixplusone}. Note that the second endpoint of this edge is not one of the vertices of the six-cycle (so that no theta-graph obtained by joining two vertices of a six-cycle by an edge admits a pants thrackle drawing) and that in the two cases shown in Figure~\ref{figure:sixplusone}, it lies on different boundary components of $P$. 
It follows that by renaming $b$ and $c$ and changing the direction on the cycle and the orientation of the plane, we obtain two Reidemeister equivalent drawings. We continue with the one on the left in Figure~\ref{figure:sixplusone} and attach another edge at the vertex of degree $1$. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \coordinate (A1) at ({\y+5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}); \node[coordinate] (A2) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) {}; \coordinate (B2) at ({\y-2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C2) at ({\y+2+cos((pi/6) r)},{sin((pi/6) r)}); \coordinate (C1) at ({\y+2+cos((-pi/6) r)},{sin((-pi/6) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] ({\y-1.5},-2) to[out=30,in=-30] (B2); \draw [very thick] (B1) to[out=-30,in=-150] ({\y+2.5},-1.5) to[out=30,in=-15] (C2); \draw [very thick] (A2) to[out=-60,in=180] ({\y+1},-2.5) to[out=0,in=-60] (C1); \draw [very thick] (A2) to[out=30,in=150] ({\y-1.5},2) to[out=-30,in=30] (B1); \draw [very thick] (B2) to[out=30,in=150] ({\y+2.5},1.5) to[out=-30,in=15] (C1); \draw [very thick] (A1) to[out=60,in=180] ({\y+1},2.5) to[out=0,in=60] (C2); \ifthenelse{\y = 0} { \coordinate (B3) at ({\y-2},-1); \fill (B3) circle (5.0pt); \draw [very thick] (A2) to [out=45,in=180] ({\y-1.5},2.5) to [out=0,in=90] ({\y+0.5},0) to [out=-90,in=-60] (B3); } { \coordinate (C3) at ({\y+3},0); \fill (C3) circle (5.0pt); \draw [very thick] (A2) to[out=-80,in=180] ({\y+1},-2.7) to[out=0,in=-60] (C3); } ; } \end{tikzpicture} \caption{Six-cycle with an edge attached.} \label{figure:sixplusone} \end{figure} This can be done uniquely up to isotopy and Reidemeister equivalence, resulting in the drawing on the left in Figure~\ref{figure:sixplustwo}. Again, the second endpoint of the attached edge cannot be one of the vertices of the six-cycle (so that no theta-graph obtained by joining two vertices of a six-cycle by a two-path admits a pants thrackle drawing). Then we can attach another edge at that vertex. This can be done uniquely up to isotopy and Reidemeister equivalence, as on the right in Figure~\ref{figure:sixplustwo}. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \coordinate (A1) at ({\y+5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}); \node[coordinate] (A2) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) {}; \coordinate (B2) at ({\y-2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C2) at ({\y+2+cos((pi/6) r)},{sin((pi/6) r)}); \coordinate (C1) at ({\y+2+cos((-pi/6) r)},{sin((-pi/6) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] ({\y-1.5},-2) to[out=30,in=-30] (B2); \draw [very thick] (B1) to[out=-30,in=-150] ({\y+2.5},-1.5) to[out=30,in=-15] (C2); \draw [very thick] (A2) to[out=-60,in=180] ({\y+1},-2.5) to[out=0,in=-60] (C1); \draw [very thick] (A2) to[out=30,in=150] ({\y-1.5},2) to[out=-30,in=30] (B1); \draw [very thick] (B2) to[out=30,in=150] ({\y+2.5},1.5) to[out=-30,in=15] (C1); \draw [very thick] (A1) to[out=60,in=180] ({\y+1},2.5) to[out=0,in=60] (C2); \node[coordinate] (B3) at ({\y-2+cos((pi/3) r)},{sin((-pi/3) r)}) {}; \fill (B3) circle (5.0pt); \draw [very thick] (A2) to [out=45,in=180] ({\y-1.5},2.5) to [out=0,in=90] ({\y+0.5},0) to [out=-90,in=-60] (B3); \coordinate (A3) at ({\y+5*cos((19*pi/18) r)},{3*sin((19*pi/18) r)}); \fill (A3) circle (5.0pt); \draw [very thick] (A3) to [out=45,in=180] ({\y-1.5},2.2) to [out=0,in=90] ({\y+0.2},0) to [out=-90,in=-30] (B3); \ifthenelse{\y = 0}{} { \coordinate (B4) at ({\y-2+cos((-pi/2-pi/9) r)},{sin((-pi/2-pi/9) r)});\fill (B4) circle (5.0pt); \draw [very thick] (A3) to [out=60,in=180] ({\y-1.2},2.7) to [out=0,in=90] ({\y+0.7},0) to [out=-90,in=-45] (B4); } ; } \draw[->, very thick] (5.5,0) -- (7.5,0); \end{tikzpicture} \caption{Six-cycle with two- and three-paths attached.} \label{figure:sixplustwo} \end{figure} But if the two vertices other than the endpoints in the so attached three-path have degree $2$ in $G$, then the three-path is reducible by Lemma~\ref{lemma:triangle}. Now if $G$ is a theta-graph, then by repeatedly performing edge removals we obtain a pants thrackle drawing of a theta-graph obtained by joining two vertices of a six-cycle by a path of length at most $2$, which is impossible, as we have shown above. If $G$ is a dumbbell, then by repeatedly performing edge removals we obtain a pants thrackle drawing of a dumbbell consisting of the six-cycle and a cycle $C'$, with a vertex of the six-cycle joined to the vertex $v$ of $C'$ by either an edge or a two-path. Such a dumbbell contains one of the two subgraphs given in Figure~\ref{figure:sixtree}. So it remains to deal with these two cases. The vertex $v$ has degree $3$ in $G$, so we have to attach two edges to it. In the first case, we start with the drawing on the left in Figure~\ref{figure:sixplusone} and attach two edges to the vertex $v$. We obtain a unique drawing, up to isotopy and Reidemeister moves, as on the right in Figure~\ref{figure:sixplusonev}. But then no edge can be attached to the vertex $a_2$ in such a way that the resulting drawing is a pants thrackle drawing. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,14} { \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \coordinate (A1) at ({\y+5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}); \node[coordinate] (A2) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) {}; \coordinate (B2) at ({\y-2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C2) at ({\y+2+cos((pi/6) r)},{sin((pi/6) r)}); \coordinate (C1) at ({\y+2+cos((-pi/6) r)},{sin((-pi/6) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] ({\y-1.5},-2) to[out=30,in=-30] (B2); \draw [very thick] (B1) to[out=-30,in=-150] ({\y+2.5},-1.5) to[out=30,in=-15] (C2); \draw [very thick] (A2) to[out=-60,in=180] ({\y+1},-2.5) to[out=0,in=-60] (C1); \draw [very thick] (A2) to[out=30,in=150] ({\y-1.5},2) to[out=-30,in=30] (B1); \draw [very thick] (B2) to[out=30,in=150] ({\y+2.5},1.5) to[out=-30,in=15] (C1); \draw [very thick] (A1) to[out=60,in=180] ({\y+1},2.5) to[out=0,in=60] (C2); \node[coordinate] (B3) at ({\y-2},-1) [label=90:$v$] {}; \fill (B3) circle (5.0pt); \draw [very thick] (A2) to [out=45,in=180] ({\y-1.5},2.5) to [out=0,in=90] ({\y+0.5},0) to [out=-90,in=-60] (B3); \ifthenelse{\y = 0} {} {\node[coordinate] (A3) at ({\y+5*cos((19*pi/18) r)},{3*sin((19*pi/18) r)}) [label=180:$a_1$] {}; \fill (A3) circle (5.0pt); \draw [very thick] (A3) to [out=45,in=180] ({\y-1.5},1.8) to [out=0,in=90] ({\y-0.3},0) to [out=-90,in=0] (B3); \node[coordinate] (A4) at ({\y+5*cos((35*pi/36) r)},{3*sin((35*pi/36) r)}) [label=180:$a_2$] {}; \fill (A4) circle (5.0pt); \draw [very thick] (A4) to [out=45,in=180] ({\y-1.5},2.1) to [out=0,in=90] ({\y},0) to [out=-90,in=-40] (B3); } ; } \draw[->, very thick] (5.75,0) -- (7.75,0); \end{tikzpicture} \caption{Pants drawing of the graph on the left in Figure~\ref{figure:sixtree}.} \label{figure:sixplusonev} \end{figure} Similarly, in the second case, we start with the drawing on the left in Figure~\ref{figure:sixplustwo} and attach two edges to the vertex $v$. We obtain a unique drawing, up to isotopy and Reidemeister moves, as on the right in Figure~\ref{figure:sixplustwov}. But then no edge can be attached to the vertex $b_1$ in such a way that the resulting drawing is a pants thrackle drawing. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6,>=triangle 45] \foreach \y in {0,13.8} { \pgfmathparse{abs(\y) < 0.001 ? 
int(1) : int(0)} \draw[thick] (\y,0) ellipse (5 and 3); \draw[thick] (\y-2,0) circle (1); \draw[thick] (\y+2,0) circle (1); \coordinate (A1) at ({\y+5*cos((10*pi/9) r)},{3*sin((10*pi/9) r)}); \node[coordinate] (A2) at ({\y + 5*cos((8*pi/9) r)},{3*sin((8*pi/9) r)}) {}; \node[coordinate] (B1) at ({\y -2+cos((-pi/9) r)},{sin((-pi/9) r)}) {}; \coordinate (B2) at ({\y-2+cos((pi/9) r)},{sin((pi/9) r)}); \coordinate (C2) at ({\y+2+cos((pi/6) r)},{sin((pi/6) r)}); \coordinate (C1) at ({\y+2+cos((-pi/6) r)},{sin((-pi/6) r)}); \foreach \x in {A1,A2,B1,B2,C1,C2} {\fill (\x) circle (5.0pt);} \draw [very thick] (A1) to[out=-30,in=-150] ({\y-1.5},-2) to[out=30,in=-30] (B2); \draw [very thick] (B1) to[out=-30,in=-150] ({\y+2.5},-1.5) to[out=30,in=-15] (C2); \draw [very thick] (A2) to[out=-60,in=180] ({\y+1},-2.5) to[out=0,in=-60] (C1); \draw [very thick] (A2) to[out=30,in=150] ({\y-1.5},2) to[out=-30,in=30] (B1); \draw [very thick] (B2) to[out=30,in=150] ({\y+2.5},1.5) to[out=-30,in=15] (C1); \draw [very thick] (A1) to[out=60,in=180] ({\y+1},2.5) to[out=0,in=60] (C2); \node[coordinate] (B3) at ({\y-2+cos((pi/3) r)},{sin((-pi/3) r)}) {}; \fill (B3) circle (5.0pt); \draw [very thick] (A2) to [out=45,in=180] ({\y-1.5},2.5) to [out=0,in=90] ({\y+0.5},0) to [out=-90,in=-60] (B3); \node[coordinate] (A3) at ({\y+5*cos((19*pi/18) r)},{3*sin((19*pi/18) r)}) [label=180:$v$] {}; \fill (A3) circle (5.0pt); \draw [very thick] (A3) to [out=45,in=180] ({\y-1.5},2.2) to [out=0,in=90] ({\y+0.2},0) to [out=-90,in=-30] (B3); \ifnum\pgfmathresult=0 { \node[coordinate] (B4) at ({\y-2+cos((-pi/2-pi/9) r)},{sin((-pi/2-pi/9) r)}) [label=90:$b_1$] {};\fill (B4) circle (5.0pt); \draw [very thick] (A3) to [out=55,in=180] ({\y-1.2},2.6) to [out=0,in=45] ({\y+0.6},0) to [out=-135,in=-45] (B4); \node[coordinate] (B5) at ({\y-2+cos((-pi/2-3*pi/9) r)},{sin((-pi/2-3*pi/9) r)}) [label=135:$b_2$] {};\fill (B5) circle (5.0pt); \draw [very thick] (A3) to [out=70,in=180] ({\y-1.2},2.75) to [out=0,in=80] ({\y+1},0.4) to [out=-100,in=20] ({\y-1.25},-1.55) to [out=-160,in=-90] (B5); } \fi } \draw[->, very thick] (5.75,0) -- (7.75,0); \end{tikzpicture} \caption{Pants drawing of the graph on the right in Figure~\ref{figure:sixtree}.} \label{figure:sixplustwov} \end{figure} This completes the proof of Theorem~\ref{t:pants}. \acknowledgements \label{sec:ack} We express our deep gratitude to Grant Cairns for his generous contribution to this paper, at all the stages, from mathematics to presentation. We are thankful to the reviewer for their kind permission to include a brief description of the ideas underlying the proof borrowed from their report. \nocite{*} \bibliographystyle{alpha}
\section{Introduction} \label{sec:introduction} Raster images often have distortions connected with their raster structure. These distortions can, for example, be undersampling, distorted intensity response curves, or processing like sharpening or unsharp masking. Upsampling the distorted images, using for example bicubic interpolation \cite{keys1981bicubic, mitchell1988reconstruction}, might in effect substantially reproduce the raster structure of the original image, which is known in image processing as aliasing \cite{mitchell1988reconstruction}. Additionally, upsampling methods that attempt to produce sharp images might have an intrinsic trait of introducing aliasing \cite{mitchell1988reconstruction}. The presented method attempts to remove the aliasing artifacts using frequency filters based on the fast Fourier transform, applied directionally in certain regions placed along the edges detected in the image. The selective, directional application of these filters serves to estimate the presence of aliasing in the places where it is likely to occur, and where it is at the same time unlikely that objects in the image will be confused with aliasing. The special feature of the method is that it aims to selectively reduce aliasing while trying to preserve the sharpness of image details. This makes it different from typically used interpolations like the bilinear or bicubic ones \cite{keys1981bicubic, mitchell1988reconstruction}, which produce images that are blurry or aliased, and from various anisotropic smoothing methods like those described in \cite{tschumperle2005regularization, tschumperle2006anisotropic}, which aim to smooth objects in the image generally and might lead, as illustrated in the tests, to very unnatural-looking images. One of the more widely used complex image restoration methods, NEDI \cite{xinli2001interpolation}, also makes some textures look unnatural and still produces substantial aliasing in some images. The following sections discuss, in order, aliasing, a custom sub--pixel precision edge detection method used to direct the filtering, and the frequency filtering. Finally, some tests are presented. \section{Aliasing} The discussed aliasing in the upsampled images is connected with the raster of the source image, not with that of the upsampled image. In Fig.~\ref{fig:raster-artifacts}, a schematic example of an object upsampled four times in each direction is shown. Bold lines show borders of the original pixels; the smallest rectangles show borders of the pixels in the upsampled image. \psfigure{raster-artifacts} {1.4in} {A schematic example of upsampling. } The image shows a dark object on a white background. The object boundary in the original image consisted of pixels whose brightness changed approximately periodically, with the period connected to the period of the boundary passing the horizontal lines between the pixels in the original raster. It can be seen in the upsampled image -- the brighter boundary pixels in the original image have corresponding \(4 \times 4\) pixel blocks in the upsampled image that consist of mostly white pixels, and conversely, the darker pixels in the original image have corresponding blocks of mostly dark pixels. Similarly, of course, if the boundary were closer to a vertical one, the period of passing the vertical raster lines would be important in turn. 
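The dependence of this period on the orientation of the boundary can be made concrete with a small sketch in Python (the function name is ours; the formula itself is stated precisely in the next section):
\begin{verbatim}
def artifact_period(x0, y0, xl, yl, U):
    # Approximate period l0 of the aliasing 'waving' along a straight
    # boundary from (x0, y0) to (xl, yl), both given in the coordinates
    # of the upsampled image; U is the scale of the upsampling.
    dx, dy = abs(xl - x0), abs(yl - y0)
    if min(dx, dy) == 0:
        # An axis-aligned border never crosses the raster lines.
        return float('inf')
    # Length of border per crossing of a horizontal (or vertical) line
    # of the original raster, for a border closer to the horizontal
    # (or vertical) direction.
    return U * dx / dy if dx >= dy else U * dy / dx
\end{verbatim}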
As can be seen in the example in Fig.~\ref{fig:raster-artifacts-example}, various distortions of the image may cause `waving' of the location, color or sharpness of the upsampled boundaries, depending on the particular distortion and the upscaling method. What is important here, though, is that the period \(l_{0}\) of the `waving' for a straight boundary is the same as the period of the brightness variability of the pixels in the original image, which in turn, as discussed and as can also be seen in Fig.~\ref{fig:raster-artifacts-example}, is approximately equal to the length of the object border between two subsequent horizontal or vertical lines of the original raster, depending on whether the border is closer to, respectively, the horizontal or the vertical direction. For a straight border, \(l_{0}\) is thus as follows: \begin{equation} l_{0} = \left\{ \begin{array}{ll}\displaystyle U \left|\frac{x_{l} - x_{0}}{y_{l} - y_{0}}\right| & \textrm{if \(|x_{l} - x_{0}| \ge |y_{l} - y_{0}|\)} \\[8pt] \displaystyle U \left|\frac{y_{l} - y_{0}}{x_{l} - x_{0}}\right| & \textrm{if \(|x_{l} - x_{0}| < |y_{l} - y_{0}|\)} \\ \end{array} \right. \end{equation} where \(U\) is the scale of the upsampling, \((x_{0}, y_{0})\) is the first pixel of a straight fragment of a boundary and \((x_{l}, y_{l})\) is the last pixel of the fragment, using the coordinates of the upsampled image. If the fragment is only approximately straight, the equation gives an approximate common \(l_{0}\), while local periods can vary along the fragment. An example of such an approximately straight fragment is illustrated in Fig.~\ref{fig:frequency-filtering}. Thus, estimating the period on the basis of the orientation of a border might be a good way of detecting the corresponding artifacts, which in turn might be the first stage of reducing these detected artifacts. This is the basic premise of the presented method. \begin{figure} \begin{center} \begin{tabular}{cccc} \includegraphics[width=0.4in]{artifacts-n.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{artifacts-n-s4-b.eps} \\ \includegraphics[width=0.4in]{artifacts-c.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{artifacts-c-s4-b.eps}\\ \includegraphics[width=0.4in]{artifacts-s.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{artifacts-s-s4-b.eps}\\ \end{tabular} \vspace{-0.1in} \caption{An example of aliasing. The first column contains original \(64 \times 64\) images that have, in subsequent rows: only small distortions, distorted gray response curves, and sharpening applied. The second column contains the corresponding \(128 \times 128\) images upsampled using the bicubic interpolation.} \label{fig:raster-artifacts-example} \end{center} \end{figure} \section{Sub--pixel precision edge detection} Edge detection \cite{marr1980edge, canny1986edge, jain1989fundamentals, ziou1998over} in raster images is one of the basic methods of feature extraction from images. This paper employs a simple low--level definition of an edge described in \cite{martin2002learning}: an abrupt change in some low--level image feature such as brightness or color, as opposed to a boundary, described in the cited paper as a higher--level feature. The presented edge detection method is designed to give edges with sub--pixel precision, and to detect even small discontinuities in the image. 
This is because the aim is, as opposed to typical edge detection methods, not to extract the more prominent edges, but to get a precise edge map for frequency filtering. Additionally, the edge detector employed must have a high resistance to the image distortions discussed, like undersampling. This is why a custom edge detector was designed. \subsection{Finding edges} \label{sec:edge_detection} In the first step of the edge detection, a Sobel operator \cite{sobel1970camera} is applied to the upsampled image. If the image has multiple bands, each one is processed separately and then the resulting images are averaged into one single--band image. Then, the roof edges \cite{perona1991detecting, baker1998parametric} are searched for in that resulting image. As the Sobel operator produces a gradient map--like image, the roof edges obtained are effectively the discontinuity edges as discussed in \cite{martin2002learning}. To detect the roof edges, an operator called peakiness detection is used. The edge detection basically works by finding `bumps' in the gradient image, at various angles. The computational complexity is kept low by using the following approach. For each of the angles \(a_{i} = (i + 0.5)\frac{1}{2}\pi/N\), \(i = 0,\,\ldots\,N - 1\), scan the image along lines that are at the angle \(a_{i}\) to the horizontal axis, so that: \begin{itemize} \item if \(a_{i} \le \pi/4\), let the consecutive lines be one vertical pixel apart; \item if \(a_{i} > \pi/4\), let the consecutive lines be one horizontal pixel apart; \end{itemize} \psfigure{scan_lines} {1.4in} {A series of scan lines for a given angle.} and let the lines cover such a range that, together, they cover the entire area of the image. An example case for \(a_{i} \le \pi/4\) is illustrated in Fig.~\ref{fig:scan_lines}. As can be seen, such a way of aligning subsequent lines ensures that all pixels are covered in the scans for each \(a_{i}\). Yet, there is no separate searching for `bumps' around each pixel at an angle \(a_{i}\) -- sequential searching for `bumps' on a single line at an angle \(a_{i}\) covers searching for `bumps' for each pixel on that line, which decreases the mentioned computational complexity. The value of \(N = 7\) was chosen as precise enough while keeping the scanning reasonably fast. The `bump' criterion is as follows. Let \(p_{0},\,p_{1},\,\ldots\,p_{M - 1}\) be intensities of subsequent pixels on a given scanned line of \(M\) pixels. The searching for `bumps' within a single line works as follows: for the \(n\)th pixel, if its intensity is larger by \(d\) than both the intensity of the \((n - r)\)th pixel and the intensity of the \((n + r)\)th pixel, then increase the `peakiness' of the pixel by 1. The coefficients \(d\) and \(r\) should be large enough to reduce single--pixel level noise, and small enough to maintain good edge location. To improve the detection of edges at various scales, \(p_{\max} = 3\) passes of the edge detection are performed, each modifying the common `peakiness' of a pixel, with three different sets of values for \(d\) and \(r\): \begin{equation} \begin{array}{c} p = 1, 2, \ldots p_{\max} \\ r_{p} = p + 2 \\ d_{p} = 0.015 + 0.005p \\ \end{array} \end{equation} where the index \(p\) denotes a respective pass. If the image processed is very blurry, \(r\) might require an appropriate increase. 
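As a concrete illustration, below is a minimal sketch in Python of the `bump' test on a single scan line (the names are ours, and the gradient intensities are assumed to be normalized to \([0, 1]\), which matches the magnitudes of \(d_{p}\)):
\begin{verbatim}
import numpy as np

def add_peakiness(line, peakiness, r, d):
    # Accumulate 'bump' votes along one scan line: the n-th pixel gets
    # a vote if it is brighter by d than both pixels r positions away.
    for n in range(r, len(line) - r):
        if line[n] > line[n - r] + d and line[n] > line[n + r] + d:
            peakiness[n] += 1

# Three passes with the parameters r_p = p + 2, d_p = 0.015 + 0.005 p:
line = np.random.rand(64)            # stand-in for one gradient scan line
peakiness = np.zeros(len(line), dtype=int)
for p in (1, 2, 3):
    add_peakiness(line, peakiness, r=p + 2, d=0.015 + 0.005 * p)
\end{verbatim}
In the method itself the votes are accumulated over the scan lines of all \(N\) angles, and only pixels reaching the threshold \(e_{\min}\) described below are kept as edge pixels.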
Because \(p_{\max}N > 1\) scanning lines pass through each pixel -- one for each angle in each pass -- the `peakiness' is an aggregate of several tests for `bumps', which may reduce single--pixel level noise. Only pixels whose accumulated peakiness value is equal to or larger than a given threshold \(e_{\min}\) are regarded as edge pixels, to avoid detecting image noise rather than real edges. The value of \(e_{\min} = 6\) was adjusted in tests. It can be decreased for images with low noise and weak edges, and increased for images with a high level of noise. The roof edges obtained using this method can be thick, while the needed edges must be one--pixel wide. To correct that, centers of the roof edges are extracted using a simple thinning method, for example that described in \cite{gonzales1993image}, with the 8--neighborhood criterion. \subsection{Correction of the edges} \label{sec:undersampling} The discussed distortions, the edge detection method itself, or image noise may decrease the quality of the obtained edges. Therefore, cleaning of the edges from small branches and protruding pixels, and the reduction of `waving' of the edges, are performed. \subsubsection{Cleaning the edges} \label{sec:cleaning} Both the method of the waving reduction, and the finding of approximately straight fragments discussed later in Sec.~\ref{sec:frequency-filtering}, are sensitive to two kinds of `noise' of the edges -- small branches and single protruding pixels. An example of such distortions is shown in Fig.~\ref{fig:edge-distortions}. The work--around is straightforward -- edges below a given length are deleted, where each pixel connecting three or more branches is considered a boundary between the edges. In the first iteration, edges of length \(1\) are deleted, then edges of length \(2\), and so on, up to some value \(L_{\min} - 1\), with the edge lengths re--measured after each iteration. If, instead, we immediately began with deleting all edges of length less than or equal to \(L_{\min}\), then edges like the grayed one in Fig.~\ref{fig:edge-distortions} would be deleted, instead of only the two small branches visible in the image. The small protruding single pixels, like that seen in Fig.~\ref{fig:edge-distortions}, are moved back to the edge, using a trivial method. \psfigure{edge-distortions} {1.4in} {Example edge `noise' to be cleaned.} \subsubsection{Reduction of waving} \label{sec:reduction-of-waving} Aliasing in the upsampled image may produce variously `waving' edges. An example of `waving', and its correction, is shown in Fig.~\ref{fig:undersampling}. The edge detector should be resistant to the aliasing artifacts; thus, the reduction of the waving is performed. 
\begin{figure} \begin{center} \begin{tabular}{ccccc} \includegraphics[width=0.4in]{rr.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{rr-e-1.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{rr-e-2.eps} \\[-5pt] \includegraphics[width=0.4in]{rr-u.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{rr-u-e-1.eps} & \includegraphics[width=0.15in]{arrow.eps} & \includegraphics[width=0.4in]{rr-u-e-2.eps} \\ \end{tabular} \vspace{-0.1in} \caption{An example of the reduction of waving for an image with two different gray response curves.} \label{fig:undersampling} \end{center} \end{figure} The procedure to reduce waving has the following steps: \begin{enumerate}\addtolength{\itemsep}{-0.3\baselineskip} \item Find junctions, that is, corners of pixels that have two neighboring edge pixels. \item For each of these two pixels, find the length of the rectangular sequence \(S\) that begins at that edge pixel. A rectangular sequence is a sequence of 4--neighboring pixels that is either vertical or horizontal. \item If one of these cases occurs -- both rectangular sequences \(S\) are horizontal, or both are vertical, with one consisting of a single pixel and the other having more than one pixel -- then such a junction can be classified as, accordingly, either horizontal or vertical. In such a case, then, it is assumed that the junction is a part of some edge \(E\) that can, respectively, be classified as being locally closer to either the horizontal or the vertical direction. \item If \(E\) could be classified as locally closer to the horizontal or the vertical direction, the junction is marked as movable along that closer direction, that is, able to modify \(E\) by shortening the longer \(S\) and extending the shorter \(S\), as shown in the example in Fig.~\ref{fig:junctions}. \psfigureab{junctions} {junctions-1} {junctions-2} {0.9in} {(a) Examples of junctions. The edge pixels are marked with rectangles. All junctions are marked with crosses. Only two junctions, for clarity, have their sequences \(S\) marked with gray color. The possible moving directions of these two junctions are marked with arrows. The junction \(H\) is the horizontal one, and the junction \(V\) is the vertical one. (b) The same edge after the junctions \(H\) and \(V\) were moved by the length of one pixel.} \item For each movable junction, set the maximum length \(l_{\max}\) the junction is allowed to move. This constraint exists to prevent the edges from moving too far. Let the two rectangular sequences \(S\) neighboring a junction have the respective lengths \(s_{1}\) and \(s_{2}\). Then, \begin{equation} \label{eq:junction-move} l_{\max} = \min\big[\max\left(s_{1}, s_{2}\right),\, l_{1}\min\left(s_{1}, s_{2}\right) + l_{2}\big] \end{equation} The basic limitation in (\ref{eq:junction-move}) is that \(l_{\max} \le \max\left(s_{1}, s_{2}\right)\); thus, an approximately straight edge cannot be moved aside by more than about one pixel. The coefficients \(l_{1}\) and \(l_{2}\) precisely regulate \(l_{\max}\). It was found in tests that \(l_{1} = 3\) and \(l_{2} = 1\) give a good trade--off between effective waving reduction and none-to-moderate displacement of the edges. 
\item After \(l_{\max}\) was determined for each movable junction, the following sub--procedure \(W\) is repeatedly performed, each time for the whole image, until either the number of repetitions of \(W\) reaches a given number \(N_{w} = 50\), or stability is reached, that is, a given run of \(W\) does not change anything in the image. The limit \(N_{w}\) exists only to prevent the waving reduction from taking too long. \(W\) is as follows. For each movable junction, if \(|s_{1} - s_{2}| > 1\) and the junction did not reach its \(l_{\max}\) value, move the junction so as to shorten its longer sequence \(S\) by one pixel and extend its shorter sequence \(S\) by one pixel. The condition \(|s_{1} - s_{2}| > 1\) exists to make the rectangular sequences \(S\) more similar in length, while preventing a junction from being moved back and forth in subsequent executions of the procedure \(W\). \end{enumerate} \section{Frequency filtering} \label{sec:frequency-filtering} The frequency filtering has two stages: in the first stage, find approximately straight fragments of edges, and in the second stage, do frequency filtering directed along each such fragment. Because the fragments are approximately straight, a common approximate base period of aliasing artifacts can be determined for each fragment. Such a base period \(l_{0}\) is then used in filtering the frequency spectra. \subsection{Finding approximately straight fragments} \label{sec:approximately-straight-fragments} An approximately straight fragment \(F\) is an edge or a part of an edge. A fragment \(F\) cannot include branch pixels, that is, those that have more than two neighboring edge pixels. The criterion for a fragment to be approximately straight is very simple: no pixel of the fragment may be further from the straight line between the two endings of \(F\) than \(d = s_{d}U\). \(U\) is the scale of upsampling, and it occurs in the formula because image features are linearly proportional to \(U\); \(s_{d}\) is a coefficient regulating how approximately straight \(F\) should be. It was determined in tests that the value of \(s_{d} = 0.4\) is a good trade--off between many short fragments for small \(s_{d}\) and a bad approximation of the common \(l_{0}\) for the whole \(F\) for large \(s_{d}\). There is yet another condition, \(Q\), for \(F\): its subsequent pixels must all have either always increasing or always decreasing \(x\) coordinates or \(y\) coordinates. If the condition applies to the \(x\) coordinates, the fragment \(F\) is called a horizontal one; otherwise it is called a vertical one. An example of a horizontal \(F\) is shown in Fig.~\ref{fig:frequency-filtering}. The need for the condition \(Q\) will be explained in Sec.~\ref{sec:fragment-directed}. The fragments \(F\) are searched for as follows: find an edge pixel \(P\) that is a part of an edge whose pixels are not assigned to any fragment \(F\) yet. Trace the unassigned edge pixels from the pixel \(P\), using the 8--neighborhood criterion, the same as used during thinning of the edges. Do that until the end of the unassigned edge pixels is found, or a branch pixel is found. Then trace the pixels back to search for the other end of these unassigned pixels, until the other end is reached or the criterion of approximate straightness stops being fulfilled, and in this way find a new \(F\). 
Then mark the pixels of the new \(F\) as assigned to \(F\), and continue searching for fragments \(F\) until all pixels, excluding the branch pixels, are assigned to some \(F\). \subsection{Fragment-directed frequency filtering} \label{sec:fragment-directed} It is important for the frequency filtering to be applied, as far as possible, along object boundaries. Applying it, for example, across fence pales may alter important image matter, as the periodic occurrence of the pales might be confused with aliasing. This is why the edge detector is employed, and then the fragments \(F\) are extracted. With each \(F\), the filtering strength \(S_{f}\) is estimated. \(S_{f}\) is directly related to the size of the region along \(F\) that is filtered, as illustrated in Fig.~\ref{fig:frequency-filtering} -- the fragment is moved \(S_{f}\) times up and \(S_{f}\) times down for horizontal \(F\), or \(S_{f}\) times left and \(S_{f}\) times right for vertical \(F\). For each of the resulting placements, the brightness of subsequent pixels in the upsampled image, covered by the moved \(F\), determines the brightness functions \(B^{b}_{i}(x)\), \(x = 0,\, \ldots\, N_{B} - 1\), where \(N_{B}\) is the number of pixels in \(F\) and \(i = -S_{f}, -S_{f} + 1,\, \ldots\, S_{f}\) is assigned for each move of \(F\) as shown in the example in Fig.~\ref{fig:frequency-filtering}. The index \(b = 0, 1,\, \ldots\, c - 1\) determines one of the \(c\) bands of the upsampled image. Each of these functions is subject to frequency filtering as described in the next section. \psfigure{frequency-filtering}{2.3in} {An example of a region filtered along a fragment. The fragment is marked black.} The pixels across different \(i\) do not overlap, that is, each band within each pixel is frequency filtered once, thanks to the condition \(Q\) described in Sec.~\ref{sec:approximately-straight-fragments}. The variability of \(S_{f}\) comes from the premise that an aliasing artifact is better detectable if it has the size of at least several lengths of \(l_{0}\). This is because the artifact might otherwise be too easily mistaken for something that is not such an artifact. For example, a region of normal image matter without any artifacts might well have brightness approximately given by a fragment of a single lobe of a sine function with the period \(l_{0}\), yet it is much less likely that the brightness of that region is approximately given by as many as several lobes of such a sine function. The formula for computing \(S_{f}\) on the basis of the number of pixels \(N_{B}\) in \(F\) is as follows: \begin{equation} S_{f} = \left\{ \begin{array}{ll}\displaystyle 0 & \textrm{if \(N_{B} < s_{l}l_{0}\)} \\ \displaystyle s_{u}N_{B} & \textrm{if \(N_{B} \ge s_{l}l_{0}\)} \\ \end{array} \right. \end{equation} As can be seen, \(S_{f} = 0\) if the fragment is too short, to decrease the probability of confusing an artifact with image matter, as discussed earlier in this section. Otherwise, \(S_{f}\) gradually increases with \(s_{u}N_{B}\). The coefficients \(s_{l}\) and \(s_{u}\) were tuned in a series of tests. Small \(s_{l}\) means a greater probability of an undesired distortion caused by the frequency filtering of image objects that are not artifacts. Conversely, large \(s_{l}\) means that more artifacts might be left uncorrected. The coefficient \(s_{u}\) regulates the strength \(S_{f}\), which in turn is connected with the range along \(F\) that is filtered. 
Thus, small \(s_{u}\) means that an artifact might be corrected only in a small part close to \(F\), and large \(s_{u}\) means that some regions lying further from \(F\) might be undesirably distorted by the filtering. It was found experimentally that \(s_{l} = 2\) and \(s_{u} = 0.25\) give relatively good results. \subsection{Filtering of the brightness function} The FFT requires the number of pairs in the transformed function to be a power of 2, which is untrue in general for \(B^{b}_{i}(x)\). To prevent spurious high frequency components and to fulfill the requirement for the number of pairs, \(B^{b}_{i}(x)\) is padded with additional elements to create \(C^{b}_{i}(x)\). Let the number of pairs in \(C^{b}_{i}(x)\) be the smallest possible value \(N_{C}\) that is a power of 2 and for which the number of pad pairs \(N_{C} - N_{B}\) to add to \(B^{b}_{i}(x)\) to create \(C^{b}_{i}(x)\) is greater than or equal to \(\lfloor N_{B}/2 \rfloor\). Let the mean value of \(B^{b}_{i}(x)\) be \(t^{b}_{i}\). The function \(C^{b}_{i}(x)\) is defined as follows: \begin{equation} \label{eq:padding} \begin{array}{c} x = 0 \ldots N_{C} - 1 \\ m_{C} = \lfloor (N_{C} - N_{B})/2 \rfloor \\ e_{C} = m_{C} + N_{B} - 1 \\ w_{l} = x/(m_{C} - 1) \\ w_{r} = (N_{C} - 1 - x)/(N_{C} - 2 - e_{C}) \\[8pt] C^{b}_{i}(x) = \left\{ \begin{array}{ll} w_{l}B^{b}_{i}(m_{C} - x) + (1 - w_{l})t^{b}_{i} & \textrm{for \(x < m_{C}\)} \\ B^{b}_{i}(x - m_{C}) & \textrm{for \(x \ge m_{C} \,\land\)} \\ & \textrm{\(x \le e_{C}\)} \\ w_{r}B^{b}_{i}(2e_{C} - x - m_{C}) + (1 - w_{r})t^{b}_{i} & \textrm{for \(x > e_{C}\)} \\ \end{array}\right. \end{array} \end{equation} Thus, two mirrored margins are added, each of width at least \(\lfloor N_{B}/4 \rfloor\), that converge to the mean value of \(B^{b}_{i}(x)\) at the lowest and the highest arguments of \(C^{b}_{i}(x)\). The requirement for the minimum width of the margins and the common convergence value minimize spurious high frequency components in the spectrum of \(C^{b}_{i}(x)\). There is an example of the function \(C^{b}_{i}(x)\) in Fig.~\ref{fig:chart-buffer}. \pslatexfigurepst{chart-buffer} {An example of the functions \(C^{b}_{i}(x)\) and \(C'^{b}_{i}(x)\). Their fragments from index 18 to index 44 are exact repetitions of, respectively, \(B^{b}_{i}(x - 18)\) and \(B'^{b}_{i}(x - 18)\).} Let the brightness function \(C^{b}_{i}(x)\) after transforming it with the FFT be \(F^{b}_{i}(f)\), \(f = 0, \ldots\, N_{C} - 1\). Because \(C^{b}_{i}(x)\) is real, it holds true that \(F^{b}_{i}(f) = F^{b}_{i}(N_{C} - 1 - f)\) for the whole domain of \(F^{b}_{i}(f)\). Further, because of that symmetry, each operation on \(F^{b}_{i}(f)\) will also implicitly be applied to \(F^{b}_{i}(N_{C} - 1 - f)\), and charts will show \(F^{b}_{i}(f)\) only for \(f = 0, \ldots\, N_{C}/2 - 1\). Let \(f_{0} = N_{C}/l_{0}\) be the frequency corresponding to \(l_{0}\). If \(B^{b}_{i}(x)\) contains aliasing, peaks are expected near \(f_{0}\) and its harmonics in \(F^{b}_{i}(f)\). Because altering only the peak at \(f_{0}\) appeared to be very effective, the peaks at harmonics of \(f_{0}\) are ignored. Simply setting the values of \(F^{b}_{i}(f)\) at or near \(f_{0}\) to 0 might produce a valley that would distort fragments that do not have any artifacts, because the valley might appear in the frequency spectrum where there was no peak resulting from aliasing at all. 
The solution is to compute the mean \(m\) around the expected peak at \(f_{0}\), and if the moduli of the peak values exceed \(m\), flatten the peak down to the mean \(m\). The mean \(m\) is weighted using the weight function \(W(f)\), chosen so that it has its maximum values at approximately \(1/2\) and \(3/2\) of \(f_{0}\); thus, these regions of maximum values are placed away from both \(f_{0}\) and the harmonics of \(f_{0}\), which could contain peaks resulting from the aliasing and thus skew the value of \(m\). The corresponding formulas for computing \(m\) are as follows: \begin{equation} \begin{array}{c}\displaystyle \begin{array}{rl}\displaystyle W(f) = & 1/\big[1 + w_{s}(f - \frac{1}{2}f_{0})^{2}\big] + \\[6pt] & 1/\big[1 + w_{s}(f - \frac{3}{2}f_{0})^{2}\big] \\[6pt] \end{array} \\ \displaystyle S = \sum_{f = 0}^{f < N_{C}}W(f) \\[16pt] \displaystyle m = \frac{\sum_{f = 0}^{f < N_{C}}W(f)\left|F^{b}_{i}(f)\right|}{S} \end{array} \end{equation} The coefficient \(w_{s}\) determines the width of each of the two peaks and was tuned to 3 using test images. The value \(S\) is computed to normalize the weight function \(W(f)\). An example diagram of \(W(f)\) is shown in Fig.~\ref{fig:chart-frequency-filtering}. \pslatexfigurepst{chart-frequency-filtering} {An example of filtering of a function \(B^{b}_{i}(x)\) from Fig.~\ref{fig:chart-buffer}. The functions \(W(f)\) and \(M(f)\) are so distorted because of the low value of \(N_{C}\).} The reduction of the peak is performed in detail as follows. First, the peak is located by a function \(M(f)\). The value of the function is interpreted as follows: \(1\) means no alteration of the spectrum at \(f\); \(0\) means maximum alteration of the spectrum at \(f\), that is, lowering its modulus to \(m\) if greater. Values of \(M(f)\) between \(0\) and \(1\) determine a respective partial alteration. The function \(M(f)\) is computed as follows: \begin{equation} M(f) = \left\{\begin{array}{ll} 1 & \textrm{if \(f = 0\)} \\ \tanh\left[m_{s}\left(N_{C}/f - l_{0}\right)^2\right] & \textrm{if \(f > 0\)} \\ \end{array}\right. \end{equation} The function is constructed so that \(M(f)\) creates a valley at and near the peak with the lowest value close to \(0\), is equal to \(1\) at the constant component \(f = 0\), and is almost equal to \(1\) for frequencies substantially lower or higher than \(f_{0}\). The coefficient \(m_{s} = 0.03\) was tuned to regulate the width and slopes of the valley. The limited steepness of the slopes of \(M(f)\) reduces the possible distortions in the space domain caused by the frequency filtering. The left slope is made steep in order to decrease the reduction of low frequencies. Reducing them, because of their usually large values, appeared to produce strong discontinuity effects between the filtered region and the rest of the image. The discussed alteration of \(F^{b}_{i}(f)\), creating \(F'^{b}_{i}(f)\), is given by the following equation: \begin{equation} \forall_{f}\quad |F'^{b}_{i}(f)| = \left\{\begin{array}{ll} M(f)|F^{b}_{i}(f)| + & \\ \quad + \big[1 - M(f)\big]m & \textrm{if \(|F^{b}_{i}(f)| > m\)} \\ |F^{b}_{i}(f)| & \textrm{if \(|F^{b}_{i}(f)| \le m\)} \\ \end{array}\right. \end{equation} An example of filtering of \(F^{b}_{i}(f)\) is shown in Fig.~\ref{fig:chart-frequency-filtering}. The function \(F'^{b}_{i}(f)\) is transformed using the inverse FFT into \(C'^{b}_{i}(x)\), from which is extracted \(B'^{b}_{i}(x) = C'^{b}_{i}(x + m_{C})\), \(x = 0\, \ldots\, N_{B} - 1\), to remove the padding introduced in (\ref{eq:padding}). 
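Putting the steps of this section together, below is a compact sketch in Python of the spectral filtering of a single brightness function. The names are ours; a standard complex FFT is used instead of the paired real transform assumed above, so both half-spectra are treated symmetrically through a mirrored frequency index:
\begin{verbatim}
import numpy as np

def filter_brightness(B, l0, w_s=3.0, m_s=0.03):
    # Suppress the aliasing peak near f0 = N_C / l0 in one brightness
    # function B (1-D float array) with expected artifact period l0.
    N_B = len(B)
    t = B.mean()
    N_C = 1
    while N_C - N_B < N_B // 2:    # total padding at least floor(N_B / 2)
        N_C *= 2
    m_C = (N_C - N_B) // 2
    e_C = m_C + N_B - 1
    C = np.full(N_C, t)
    C[m_C:e_C + 1] = B
    for x in range(m_C):                       # left mirrored margin
        w = x / (m_C - 1) if m_C > 1 else 0.0
        C[x] = w * B[m_C - x] + (1 - w) * t
    for x in range(e_C + 1, N_C):              # right mirrored margin
        w = (N_C - 1 - x) / (N_C - 2 - e_C) if N_C - 2 > e_C else 0.0
        C[x] = w * B[2 * e_C - x - m_C] + (1 - w) * t
    F = np.fft.fft(C)
    f0 = N_C / l0
    f = np.minimum(np.arange(N_C), N_C - np.arange(N_C))  # symmetric index
    W = 1 / (1 + w_s * (f - f0 / 2) ** 2) \
        + 1 / (1 + w_s * (f - 1.5 * f0) ** 2)
    mag = np.abs(F)
    m = (W * mag).sum() / W.sum()
    M = np.where(f > 0,
                 np.tanh(m_s * (N_C / np.maximum(f, 1) - l0) ** 2), 1.0)
    new_mag = np.where(mag > m, M * mag + (1 - M) * m, mag)
    F = F * np.where(mag > 1e-12, new_mag / np.maximum(mag, 1e-12), 1.0)
    return np.real(np.fft.ifft(F))[m_C:m_C + N_B]
\end{verbatim}
Only magnitudes exceeding \(m\) are flattened, which is what keeps fragments without artifacts effectively unchanged.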
\(B'^{b}_{i}(x)\) is thus a frequency--filtered \(B^{b}_{i}(x)\), and is written back to the upsampled image, to the exact pixels from which \(B^{b}_{i}(x)\) was constructed. \section{Tests} \label{sec:tests} An example image processed with the presented method is shown in Fig.~\ref{fig:edge-detection}(d). The image was upsampled four times using bicubic interpolation employing the Catmull--Rom spline \cite{mitchell1988reconstruction}. The edge map of the image is shown in Fig.~\ref{fig:edge-detection}(b). As can be seen, the image with reduced aliasing is visually radically improved over the image obtained using plain upscaling without the frequency filtering, shown in Fig.~\ref{fig:edge-detection}(c). The aliasing in Fig.~\ref{fig:edge-detection}(d) is almost completely removed, without any substantial blur, loss of small details, or other visible distortions. This distinguishes the presented method from the anisotropic smoothing \cite{tschumperle2006anisotropic} shown in Fig.~\ref{fig:flower_128_greyc}, which, while reducing the aliasing, distorts the image so that it looks very unnatural and blurred. For example, most of the details in the center of the petal in Fig.~\ref{fig:flower_128_greyc} are almost lost. \psfigureabcd{edge-detection} {flower_128} {flower_128_e} {flower_128_s2} {flower_128_s2_f} {1.8in} {An example of filtering a photograph: (a) original image, (b) subpixel precision edges found, thickened in the illustration to make them better visible, (c) upsampled image without the frequency filtering, (d) upsampled image with the frequency filtering.} \psfigure{flower_128_greyc}{1.8in} {The image from Fig.~\ref{fig:edge-detection}(a) upsampled using a GREYC anisotropic smoothing.} It can be seen in the image that the introduced method works well for various non--straight curves, even though it splits them into approximately straight fragments before the frequency filtering. \section{Conclusion} The presented method can be applied to images upsampled using different interpolation methods, and can radically reduce aliasing, with a very good preservation of the rest of the filtered image. The method has the side effect of producing a subpixel precision edge map, which can be used in various edge processing algorithms, like the sharpening of edges in the upsampled image, for further improvement of its quality. {\scriptsize
1,116,691,501,064
arxiv
\section{Introduction} The Quadratic Unconstrained Binary Optimization (QUBO) modeling format, $\max \; x'Qx, \; x \in \{0,1\}^{N}$, has grown in popularity in the last decade and it has been shown that all of Karp's NP-complete problems as well as many constrained problems can be transformed to QUBO (see \cite{kochenberger2014unconstrained} for more details). More recently, QUBO instantiations are a requirement for quantum annealers (\cite{hauke2020perspectives}), which has led to significant interest from the research community. Many QUBO heuristics rely on a starting set of elite solutions (\cite{wang2013backbone,wang2012path,glover2010diversification,samorani2019clustering}) and these starting solutions are key to their performance. The set of starting solutions is either generated randomly or, more commonly, through improvement heuristics such as path relinking, restarts and scatter search (\cite{samorani2019clustering,boros2007local,wang2012path}). This process is limited by the heuristics' ability to find local optima and ensure diversity in the elite set. In this paper, we address this shortcoming via a constraint programming (CP) approach for the generation of local optima. Additionally, we present a learning-based method that utilizes the set of local optima to enhance the performance of an existing QUBO solver. A one-flip local optimum $\hat{x}$ of a maximization problem $\max \; x'Qx$ has the following characteristic: \begin{align} & \hat{x}' Q \hat{x} \geq y' Q y \; \forall y \in S_1 (\hat{x}) ; \; \hat{x},y \in \{0,1\}^{N} \label{cons} \end{align} where $S_1 (\hat{x})$ represents the set of all one-flip neighbors. Hence the solution vectors $y$ and $\hat{x}$ differ by exactly one bit. The total number of such one-flip neighbors is $N$, where $N$ represents the number of variables and $Q$ is an $N \times N$ matrix of integer or real coefficients. The $i$-th one-flip neighbor $y$ of $\hat{x}$ is given by $y_i = 1- \hat{x_i}$ and $y_k = \hat{x_k} \; \forall k \in [1,N] : \; k \neq i$. Thus, Equation \ref{cons} leads to $N$ inequalities. $x'Qx$ can be rewritten as $\sum_{i=1}^N (q_{i} x_i + \sum_j q_{ij} x_i x_j)$. We can isolate the impact of flipping the bit corresponding to variable $x_i$ and transform Equation \ref{cons} into: \begin{align} & q_i \hat{x_i} + \sum_j q_{ij} \hat{x_i} \hat{x_j} \geq q_i (1-\hat{x_i}) + \sum_j q_{ij} (1-\hat{x_i}) \hat{x_j} \; \forall i \in [1,N] \end{align} Note that terms not involving variable $\hat{x_i}$ are eliminated on both sides. Rearranging the terms, we get: \begin{align} & 2 q_i \hat{x_i} + 2 \sum_j q_{ij} \hat{x_i} \hat{x_j} \geq q_i + \sum_j q_{ij} \hat{x_j} \; \forall i \in [1,N] \label{final1} \end{align} Upon further simplification, each inequality is reduced to $2 \hat{x_i} \; expr \geq expr$ where $expr = q_i + \sum_j q_{ij} \hat{x_j}$, which further reduces to $\hat{x_i} \geq \frac{1}{2}$ if $expr$ is positive and $\hat{x_i} \leq \frac{1}{2}$ if $expr$ is negative. The following lemma helps us in identifying the local optima based on the value of $expr$: \begin{lemma} If $expr < 0$ then $\hat{x_i} = 0$ and if $expr > 0$ then $\hat{x_i} = 1$, else $\hat{x_i}$ can be either $0$ or $1$. \end{lemma} The lemma can be enforced by the following set of linear constraints: \begin{align} & q_i + \sum_j q_{ij} \hat{x_j} \leq M \hat{x_i} \; \forall i \in [1,N]\\ & q_i + \sum_j q_{ij} \hat{x_j} \geq -M(1-\hat{x_i}) \; \forall i \in [1,N] \end{align} where $M$ is a large positive number.
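To make the formulation concrete, the big-M constraints above can be handed to an off-the-shelf CP solver. The sketch below is ours, not part of the original study; it uses Google OR-Tools CP-SAT (a recent version is assumed) and assumes $Q$ is integer, with the linear terms $q_i$ on the diagonal and each pair coefficient $q_{ij}$ stored once in the upper triangle.
\begin{verbatim}
from ortools.sat.python import cp_model

def one_flip_local_optima(Q, max_solutions=100):
    # Enumerate one-flip local optima of max x'Qx via big-M constraints.
    n = len(Q)
    big_m = sum(sum(abs(v) for v in row) for row in Q) + 1
    model = cp_model.CpModel()
    x = [model.NewBoolVar(f"x{i}") for i in range(n)]
    for i in range(n):
        # expr = q_i + sum_j q_ij * x_j  (pair coefficients stored once)
        expr = Q[i][i] + sum((Q[i][j] if i < j else Q[j][i]) * x[j]
                             for j in range(n) if j != i)
        model.Add(expr <= big_m * x[i])         # expr > 0  =>  x_i = 1
        model.Add(expr >= -big_m * (1 - x[i]))  # expr < 0  =>  x_i = 0

    solutions = []

    class Collector(cp_model.CpSolverSolutionCallback):
        def on_solution_callback(self):
            solutions.append([self.Value(v) for v in x])
            if len(solutions) >= max_solutions:
                self.StopSearch()

    solver = cp_model.CpSolver()
    solver.parameters.enumerate_all_solutions = True
    solver.Solve(model, Collector())
    return solutions
\end{verbatim}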
\begin{comment} \begin{lemma} $\hat{x_i} = 0$ for a maximization problem if $q_i + \sum_j q_{ij} \hat{x_j} \leq 0$; $1$ otherwise \end{lemma} \begin{proof} For the first case, let $expr = q_i + \sum_j q_{ij} \hat{x_j} = -k$ where $k > 0$. Thus, Equation \ref{final1} is reduced to $2 \hat{x_i} (-k) \geq (-k)$ which is equivalent to $2 \hat{x_i} k \leq k$. If $\hat{x_i} = 1$, this is transformed into $2 k \leq k$ which is a contradiction. Hence $\hat{x_i} = 0$ is the correct assignment which in turn yields $0 \leq k$. On the contrary, let $expr = q_i + \sum_j q_{ij} \hat{x_j} = k$ where $k > 0$. Thus, Equation \ref{final1} is reduced to $2 \hat{x_i} k \geq k$. If $\hat{x_i} = 0$, this leads to a contradiction of the form $0 \geq k$. Hence, $\hat{x_i} = 1$ is the correct assignment which transforms the equation to $2k \geq k$. \end{proof} The lemma could be enforced by the following set of linear constraints: \begin{align} & q_i + \sum_j q_{ij} \hat{x_j} \leq M \hat{x_i} \; \forall i \in [1,N]\\ & q_i + \sum_j q_{ij} \hat{x_j} \geq -M(1-\hat{x_i}) \; \forall i \in [1,N] \end{align} where $M$ is a large positive number. These set of linear constraints could replace the non-linear (\ref{final1}). \end{comment} \begin{comment} Moreover, we could substitute $z_{ij} = x_i x_j$ and use a linear programming solver with the additional set of constraints given by: \begin{align} & \hat{z_{ij}} \geq \hat{x_i} + \hat{x_j} -1 \\ & \hat{z_{ij}} \leq \hat{x_i} \\ & \hat{z_{ij}} \leq \hat{x_j} \end{align} This introduces $O(N^2)$ additional binary variables and increase the size of the model significantly. However, more constraints are sometimes favored by the CP solvers In a similar manner, we could determine a set of two-flip local optimal solutions satisfying the following set of constraints: \begin{align} & \hat{x}' Q \hat{x} \geq y' Q y \; \forall y \in S_2 (\hat{x}) \label{cons2} \end{align} Here $S_2 (\hat{x})$ represents the set containing all two-flip neighbors of a solution vector $\hat{x}$. Thus, $\hat{x}$ and $y$ differ by exactly two bits. If we assume that the position of these two difference bits are $i$ and $j$ respectively, then Equation \ref{cons2} is reduced to: \begin{align} \begin{split} &q_i \hat{x_i} + q_j \hat{x_j} + \sum_k q_{ik} \hat{x_i} \hat{x_k} + \sum_k q_{jk} \hat{x_j} \hat{x_k} - q_{ij} \hat{x_i} \hat{x_j} \geq \\ &q_i (1-\hat{x_i}) + q_j (1-\hat{x_j}) + \sum_k q_{ik} (1-\hat{x_i}) \hat{x_k} + \sum_k q_{jk} (1-\hat{x_j}) \hat{x_k} - q_{ij} (1-\hat{x_i})(1-\hat{x_j}) \; \forall (i,j) \in E \end{split} \end{align} Here $E$ represents the set of all pairs of variables with $|E| = M$. Note that we subtract $q_{ij} \hat{x_i} \hat{x_j}$ on left hand side to account for double counting in the summation involving variables $\hat{x_i}$ and $\hat{x_j}$. Also, the terms not involving $\hat{x_i}$ and $\hat{x_j}$ cancel each other on both sides. 
Collecting the common terms on both sides, we get: \begin{align} \begin{split} & 2 q_i \hat{x_i} + 2 q_j \hat{x_j} + 2 \sum_k q_{ik} \hat{x_i} \hat{x_k} + 2 \sum_k q_{jk} \hat{x_j} \hat{x_k} \geq \\ & q_i + q_j + \sum_k q_{ik} \hat{x_k} + \sum_k q_{jk} \hat{x_k} - q_{ij} + q_{ij} \hat{x_i} + q_{ij} \hat{x_j} \; \forall (i,j) \in E \label{final2} \end{split} \end{align} Further, we could simplify the condition for a two-flip local optima as follows: \begin{align} (q_i + \sum_k q_{ik} \hat{x_k}) (2 \hat{x_i} - 1) + (q_j + \sum_k q_{jk} \hat{x_k}) (2 \hat{x_j} - 1) \geq 0 \; \forall (i,j) \in E \label{final3} \end{align} We could solve these system of equations to yield the set of two-flip local optimal solution given by $\hat{x}$. Note that we have been interested in non-strict local optima so far. However, it is easy to adapt the analysis to strict local optima. Assuming integral coefficients, a strict local optima $\hat{x}$ characterized by $\hat{x}' Q \hat{x} > y' Q y \; \forall y \in S (\hat{x})$ is transformed into $\hat{x}' Q \hat{x} \geq y' Q y + 1 \; \forall y \in S (\hat{x})$. Here $S(\hat{x})$ denotes the one-flip or two-flip neighborhood depending on the type of the local optima. Next we will discuss some of the computational challenges with the constraint programming formulation. Finding a one-flip local optima involves $O(N)$ constraints where $N$ is the number of variables. It is somewhat easier for any CP solver to handle these constraints. However, search for a two-flip local optima includes $O(N^2)$ constraints for each pair of variables. This makes task of the solver to find feasible solutions very daunting. We could reduce the total number of constraints from $O(N^2)$ to $O(M)$ such that the solver focuses on only the off-diagonal entries present in the $Q$ matrix. Note that this corresponds to a solution approximation. DISCUSS OTHER STRATEGIES LIKE SLACK VARIABLES. NOTE THAT THESE ARE NEEDED ONLY FOR APPROXIMATING THE two-flip LOCAL OPTIMAS. THE one-flip LOCAL OPTIMAS ARE SOMEWHAT EASIER TO FIND. To aid the solver, we could also include an objective function that corresponds to maximize the number of satisfied constraints. For this purpose, we need to include additional variables that track whether a specific constraint is satisfied. For instance, the variable $u_i$ determines whether the constraint $\sum_i a_i x_i \geq b_i$ is active through the transformation $\sum_i a_i x_i \geq b_i - M_i (1-u_i)$. Note that $u_i = 1$ enforces the corresponding inequality to hold. On the other hand, $u_i = 0$ leads to a trivial constraint through a big positive number $M_i$. The choice of $M_i$ could have a significant impact on the computation performance. A lower bound on $M_i$ established as $\sum_i^{a_i < 0} a_i$ leads to a tighter model. In this way, we include an additional variable $u_i$ as an indicator variable for each of the $O(m)$ constraints. An objective function of $max \; \sum_i u_i$ guarantees that a maximum number of constraints are satisfied and leads to a good approximation. Note that we could also use a Mixed Integer Programming (MIP) solver instead of a CP solver to solve the set of equations given by Equations \ref{final1} and \ref{final2}. \end{comment} A problem instantiated by this model is solved by a CP solver yielding multiple solutions for one-flip local optima. 
While a similar set of expressions could also be derived for a two-flip local optimum (and, in general, an r-flip local optimum), the number of constraints and the associated computational complexity increases significantly. Consider the following $Q$ matrix involving three variables where the coefficients have been doubled and moved to its upper triangular portion: \[ \begin{bmatrix} -4 & 12 & -12\\ 0 & -8 & -8\\ 0 & 0 & 9 \end{bmatrix} \] We are interested in obtaining the set of one-flip local optima $\hat{x}$ that satisfy the following constraints based on (\ref{final1}): \begin{align*} -8 \hat{x_1} -12 \hat{x_2} + 12 \hat{x_3} + 24 \hat{x_1} \hat{x_2} -24 \hat{x_1} \hat{x_3} & \geq -4 \\ -12 \hat{x_1} -16 \hat{x_2} + 8 \hat{x_3} + 24 \hat{x_1} \hat{x_2} - 16 \hat{x_2} \hat{x_3} & \geq -8 \\ 12 \hat{x_1} + 8 \hat{x_2} + 18 \hat{x_3} - 24 \hat{x_1} \hat{x_3} -16 \hat{x_2} \hat{x_3} & \geq 9 \end{align*} Solving yields two one-flip local optima, $[0,0,1]$ and $[1,1,0]$. Verifying the one-flip optimality of $[0,0,1]$: its objective function value of $9$ is greater than those of the one-flip neighbors $[1,0,1],[0,1,1]$ and $[0,0,0]$, with corresponding objective function evaluations of $-7,-7$ and $0$ respectively. \begin{comment} \begin{align*} 2*-4*\hat{x_1} + 2*12*\hat{x_1}*\hat{x_2} + 2*-12*\hat{x_1}*\hat{x_3} & \geq -4 + 12*\hat{x_2} - 12*\hat{x_3} \\ 2*-8*\hat{x_2} + 2*12*\hat{x_1}*\hat{x_2} + 2*-8*\hat{x_2}*\hat{x_3} & \geq -8 + 12*\hat{x_1} - 8*\hat{x_3} \\ 2*9*\hat{x_3} + 2*-12*\hat{x_1}*\hat{x_3} + 2*-8*\hat{x_2}*\hat{x_3} & \geq 9 - 12*\hat{x_1} - 8*\hat{x_2} \end{align*} which can be further reduced to the following set of inequalities: \end{comment} \begin{comment} Next we will study the relationship between two-flip and one-flip local optima. The possible scenarios for a maximization problem are captured in Figure 1. Note that the dashed values represent the range of solutions for the set of neighbors. Given a two-flip local optima $x$, we know that $Obj(x)) \geq Obj(S_2 (x))$. Hence the objective function at the solution vector $x$ is greater than or equal to its two-flip neighbors. However, we do not have any advance information about the behavior of its one-flip neighbors given by the set $S_1 (x)$. Thus, all the objective function evaluation could be lower than those of $x$ thereby establishing that the two-flip local optima is also the one-flip local optima (Fig 1(a)). However, there could be some neighbor $x'$ of the two-flip local optima $(x)$ such that its function evaluation is better than all of its neighbors. Hence, it could be the case that a one-flip neighbor of $x$ could be a one-flip local optima (Fig 1(b)). In an uninteresting case, the function evaluation at neighbors of $x$ could not lead to any insights because the dominating solution vectors could not be identified. Hence, there is no clear connection between two-flip and one-flip local optima. \begin{figure}[htbp] \centerline{\includegraphics[scale=0.46]{one-flip-vs-two-flip.png}} \caption{Scenarios for the relationship between two-flip and one-flip local optima} \label{fig:fig1} \end{figure} \end{comment} It is worth noting that all \textit{global} optima are also locally optimal with respect to all possible r-flips, and the impact of r-flips has been studied extensively. The authors in \cite{alidaee2010theorems} present theoretical formulas based on partial derivatives for quickly determining the effects of r-flips on the objective function.
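Small instances like this one are easy to cross-check exhaustively. The following brute-force sketch (helper names ours, using the same upper-triangular storage convention for $Q$ with linear terms on the diagonal) confirms the two local optima of the example:
\begin{verbatim}
from itertools import product

def qubo_value(Q, x):
    # x'Qx with linear terms on the diagonal, pair terms above it.
    n = len(x)
    return sum(Q[i][i] * x[i] for i in range(n)) + \
           sum(Q[i][j] * x[i] * x[j]
               for i in range(n) for j in range(i + 1, n))

def brute_force_one_flip_optima(Q):
    # All one-flip local optima of max x'Qx by exhaustive enumeration.
    n = len(Q)
    optima = []
    for x in product((0, 1), repeat=n):
        flips = (x[:i] + (1 - x[i],) + x[i + 1:] for i in range(n))
        if all(qubo_value(Q, x) >= qubo_value(Q, y) for y in flips):
            optima.append(x)
    return optima

Q = [[-4, 12, -12],
     [ 0, -8,  -8],
     [ 0,  0,   9]]
print(brute_force_one_flip_optima(Q))   # [(0, 0, 1), (1, 1, 0)]
\end{verbatim}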
\cite{anacleto2020closed} proposed two formulas for quickly evaluating r-flip moves. However, the number of possible r-flip moves to evaluate grows exponentially, and one-flip moves are the most commonly implemented approach. Elite sets of high-quality solutions are often used in the design of algorithms for fixing variables. For example, \cite{chardaire1995thermostatistical} fix the variables as the temperature associated with simulated annealing decreases. Their learning process also relies on thresholds and requires some parameter tuning. \cite{wang2011effective} investigated two variable fixing strategies inside their tabu search (TS) routine for QUBO, and \cite{zhou2017data} uses a data mining routine to learn frequent patterns from a set of high-quality solutions. Fixing/freeing variables has also been explored in the context of a quantum annealer by \cite{karimi2017boosting}. The authors reduce the QUBO by fixing some variables to values that have a high probability of occurrence in the sample set of solutions. In contrast to fixing, some of the approaches learn to avoid local optima in the search process. \cite{basharu2007escaping} compare two strategies of escaping local optima: (a) assigning penalties to violated constraints, and (b) assigning penalties to individual variable values participating in a constraint violation. Their results quantify the impact of penalties on the solution landscape. The reference/elite set and other problem features are also useful in designing various metaheuristics. For example, \cite{wang2013backbone} apply backbone guided TS to QUBO, alternating between a TS phase and a phase that fixes/frees strongly determined variables. \cite{voudouris2003guided} and \cite{whittley2004attribute} utilize problem features to guide the local search routine, where the objective function is augmented with penalty terms based on those features. The contribution of our paper is twofold. First, we present a new constraint programming approach to obtain a set of one-flip local optima for QUBO. These high-quality samples can be used as a starting elite solution set and can also be utilized for the construction of Local Optima Networks (see \cite{ochoa2014local} for more details) for a wide variety of combinatorial problems that fit into the QUBO framework. Second, we provide an approach to utilize the information contained in the set of local optima through penalties and rewards by transforming the $Q$ matrix using two variants that favor or avoid the set $L$ of locally optimal solutions. Our reformulations could be used to improve the performance of existing QUBO solvers (like \cite{verma2020penalty} and \cite{verma2020optimal}). \section{Learning Approach} There are various ways to utilize the information provided by the set of local optima $L$. Note that $L$ can be obtained by satisfying the constraints corresponding to one-flip local optima detailed in Section 1. Herein we present a simple approach that relies on the number of times a specific variable $x_i$ is set to $0$ or $1$. If the variable $x_i$ takes a specific value more frequently in the set of local optima, there are two schools of thought in the literature on how to handle it. First, favor local optima and hypothesize that there is a high chance that the global optimum would also have such a variable $x_i$ set to $0$ or $1$ respectively. Second, design heuristics to avoid the set of local optima and help the solver explore new areas of the solution landscape.
Our approach to calculate the frequency of occurrence for each variable $x_i$ in the set of local optima is outlined in Algorithm 1. \begin{algorithm} \scriptsize \caption{Frequency calculation based on the set of local optima} \label{learning} \begin{algorithmic}[1] \Procedure{Frequency}{$L$} \Comment{Returns the relative frequency of setting $x_i = 0/1$ in $L$} \State $freq_0 \gets \mathbf{0}$ \State $freq_1 \gets \mathbf{0}$ \For{$i = [1,N]$} \Comment{For all variables} \For{$k = [1,|L|]$} \Comment{For all locally optimal solutions} \State $If \; L_k[i] == 0, freq_0[i] \gets freq_0[i] + 1$ \State $If \; L_k[i] == 1, freq_1[i] \gets freq_1[i] + 1$ \EndFor \EndFor \State $freq_0 \gets freq_0/|L|$ \State $freq_1 \gets freq_1/|L|$ \State \textbf{return} $freq_0$ and $freq_1$\Comment{The chance of setting a variable $x_i$ to $0/1$} \EndProcedure \end{algorithmic} \end{algorithm} At the end of this process, we return the relative frequency by dividing each $freq$ entry by the number of locally optimal solutions $|L|$. We use the information contained in $freq$ in multiple ways. Noting that $freq$ is the chance of setting a specific variable $x_i$ to $0/1$ in the set $L$, if the value of $freq_1[i]$ is close to $1$, the variable $x_i$ is set to $1$ in the majority of the locally optimal solutions. Thus, we consider setting a variable $x_i = 1$ if $freq_1[i] \geq \alpha$, where $\alpha$ is a user-defined parameter. On the other hand, the solver should escape the locally optimal solutions by disincentivizing $x_i = 1$ if $freq_1[i] \geq \alpha$, so that the solver avoids replicating the behavior observed in the set of local optima. We explore both variants of the transformation approach, designed to (i) favor local optima and (ii) escape local optima. For this purpose, we adjust the linear coefficients of the original $Q$ matrix to generate $Q_1$ and $Q_2$ for the two strategies, with typical values of $\alpha$ ranging from $95\%$ to $100\%$. The technique for generating the two transformed matrices $Q_1$ and $Q_2$ based on the strategies of favoring and escaping local optima is implemented as soft constraints and summarized in Algorithm 2. Specifically, if the chance of a variable $x_i$ being set to $1$ (given by $freq_1[i]$) is greater than or equal to $\alpha$, we add a reward $\delta$ to the linear coefficient $q_i$ for favoring local optima. Thus, for strategy (i), we use the transformed $Q_1$ matrix involving $q_i^{1} \leftarrow q_i^1 + \delta$. This change incentivizes any solver to set $x_i=1$. Similarly, we make updates to every linear coefficient in the transformed matrix whenever $freq_1[i] \geq \alpha$. A large value of $\delta$ enforces the constraint $x_i = 1$ strictly. However, it could also alter the solution landscape for the solver. Conversely, a penalty term $-\delta$ added to the linear coefficient $q_i$ is utilized as a proxy for the constraint $x_i = 0$ in a maximization problem (if $freq_{0}[i] \geq \alpha$). The changes are reversed for the second strategy of avoiding local optima.
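Both procedures reduce to a few lines of array arithmetic. The following compact Python rendering of Algorithms 1 and 2 is a sketch (function name ours):
\begin{verbatim}
import numpy as np

def transform_q(Q, L, alpha=0.99, delta=2):
    # Relative frequency of x_i = 0/1 over the local optima (Algorithm 1).
    L = np.asarray(L)                  # |L| x N matrix of 0/1 solutions
    freq1 = L.mean(axis=0)
    freq0 = 1.0 - freq1
    # Adjust the linear (diagonal) coefficients (Algorithm 2):
    # Q1 favors the local optima, Q2 pushes the search away from them.
    Q1 = np.array(Q, dtype=float)
    Q2 = np.array(Q, dtype=float)
    for i in range(L.shape[1]):
        if freq0[i] >= alpha:
            Q1[i, i] -= delta          # reward x_i = 0 (maximization)
            Q2[i, i] += delta
        if freq1[i] >= alpha:
            Q1[i, i] += delta          # reward x_i = 1
            Q2[i, i] -= delta
    return Q1, Q2
\end{verbatim}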
\begin{algorithm} \scriptsize \caption{Transformation Approach} \label{transform} \begin{algorithmic}[1] \Procedure{Transformation}{$Q,freq,\alpha,\delta$} \Comment{Returns the transformed matrices $Q_1$ and $Q_2$} \State $Q_1 \gets Q$ \State $Q_2 \gets Q$ \For{$i = [1,N]$} \Comment{For all variables} \State $If \; freq_0[i] \geq \alpha, Q_1[i,i] \gets Q_1[i,i] - \delta$ \State $If \; freq_1[i] \geq \alpha, Q_1[i,i] \gets Q_1[i,i] + \delta$ \State $If \; freq_0[i] \geq \alpha, Q_2[i,i] \gets Q_2[i,i] + \delta$ \State $If \; freq_1[i] \geq \alpha, Q_2[i,i] \gets Q_2[i,i] - \delta$ \EndFor \State \textbf{return} $Q_1$ and $Q_2$ \Comment{The transformed matrices based on strategies (i) and (ii)} \EndProcedure \end{algorithmic} \end{algorithm} \section{Computational Experiments} \label{expt} For testing we use the QUBO instances presented in \cite{glover2018logical} and \cite{beasley1990or}. The algorithms were implemented in Python 3.6. The experiments were performed on a 3.40 GHz Intel Core i7 processor with 16 GB RAM running a 64-bit Windows 7 OS. The datasets described in \cite{glover2018logical} have $1000$ nodes, while the ORLIB instances \cite{beasley1990or} have $1000$ and $2500$ nodes. Our experiments utilize a path relinking and tabu search based QUBO solver. A one-flip tabu search with path relinking was modified from (\cite{rna}). The primary power of a one-flip search is its ability to quickly evaluate the effect of flipping a single bit, $x_i \rightarrow 1- x_i$, allowing selection of the variable having the greatest effect on a local solution in $O(n)$ time (\cite{kochenberger2004unified}), as opposed to directly evaluating $x'Qx$, which is $O(n^2)$. The search used in this paper accepts an input $Q$ matrix as well as a starting elite set of solutions of size $S$. It performs path relinking between the solutions in $S$ to derive a starting solution, where path relinking is implemented as a greedy search of the restricted solution space defined by the difference bits of a solution pair. The relinking generates a starting solution from which a greedy search is performed by repeatedly selecting the single non-tabu variable that has the largest positive impact on the current solution. Variables selected to be flipped are given a tabu tenure to avoid cycling. When there are no non-tabu variables available to improve the current solution, a backtracking operation is performed to undo previous flips. When no variable (tabu or not) is available to improve the current solution, a local optimum has been encountered and backtracking is performed. The diversity attributes are measured as the mean Hamming distance between all pairs of solution vectors. Note that the Hamming distance $d(a,b)$ between two binary vectors $a$ and $b$ is given by the number of difference bits. The mean Hamming distance $\mu_d$ is the average of $d(a,b)$ over all $\binom{|L|}{2}$ unordered pairs $a \neq b$ in $L$. Similarly, we can measure the quality of the elite set by the mean objective function, $\mu_{Obj}$. For benchmarking, we utilize a common approach presented in the literature (\cite{wang2013backbone,wang2012path,glover2010diversification,samorani2019clustering}) to generate elite sets: a randomized solution is improved by a greedy heuristic until a one-flip local optimum is reached, the solution is added to the elite set, and the process is repeated. For both the CP solver and the greedy heuristic, we allot $600$ seconds and extract the top $500$ local optima sorted by the objective function to favor high-quality solutions.
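The two elite-set statistics are equally direct to compute. A sketch (\texttt{qubo\_value} is the helper from the enumeration sketch above):
\begin{verbatim}
from itertools import combinations

def elite_set_stats(L, Q):
    # Mean pairwise Hamming distance and mean objective of an elite set L.
    pairs = list(combinations(L, 2))
    mu_d = sum(sum(ai != bi for ai, bi in zip(a, b))
               for a, b in pairs) / len(pairs)
    mu_obj = sum(qubo_value(Q, x) for x in L) / len(L)
    return mu_d, mu_obj
\end{verbatim}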
\begin{table}[htbp] \centering \caption{Diversity Attributes of CP Approach} \scalebox{0.67}{ \begin{tabular}{r|r|r|r|r} \hline \multicolumn{1}{l}{bqp2500} & \multicolumn{2}{c}{CP Solver} & \multicolumn{2}{c}{Greedy Search Heuristic} \\ \multicolumn{1}{l}{Instance} & \multicolumn{1}{l}{$\mu_d$} & \multicolumn{1}{l}{$\mu_{Obj}$} & \multicolumn{1}{l}{$\mu_d$} & \multicolumn{1}{l}{$\mu_{Obj}$} \\ \hline 1 & 521.5 & 996297.9 & 266.0 & 1504173.5 \\ 2 & 500.0 & 1008336.9 & 286.5 & 1460415.7 \\ 3 & 535.3 & 941038.3 & 266.6 & 1403657.1 \\ 4 & 547.8 & 979253.4 & 221.0 & 1499321.1 \\ 5 & 510.7 & 1009030.4 & 240.4 & 1481973.9 \\ 6 & 473.4 & 991556.1 & 237.2 & 1460578.2 \\ 7 & 492.8 & 993305.4 & 292.7 & 1467269.2 \\ 8 & 569.5 & 969050.3 & 216.5 & 1476406.9 \\ 9 & 458.0 & 1014991.4 & 251.5 & 1472053.8 \\ 10 & 462.7 & 1002726.8 & 295.2 & 1470710.0 \\ \hline \end{tabular}}% \label{tab:tab1}% \end{table}% The results are presented in Table 1. The CP approach leads to more diverse solutions, since $\mu_d$ for the CP solver is almost double that of the greedy heuristic. While the greedy approach obtains solutions with higher objective values, they are less diverse; hence the CP approach provides a compromise between solution diversity and the objective value. To assess the impact of different soft constraint thresholds, we experiment with the following values of $\alpha$: (a) $0.99$ (b) $0.975$ (c) $0.95$, and the following settings of $\delta$: (a) $2\%$ (b) $5\%$ (c) $10\%$, wherein the percentage is expressed in terms of the maximum value of the coefficients of the $Q$ matrix. For example, the coefficients of the ORLIB datasets lie in the range $[-100,100]$. Thus, the linear coefficients of the $Q$ matrix are adjusted by $\delta = 2, 5$ or $10$ units. For the nine different parameter combinations of $(\alpha,\delta)$, we allotted $100$ seconds each. For a fairer comparison, the benchmark experiments on the original $Q$ matrix are run for a total of $900$ seconds. We present two different versions of our heuristic based on $Q_1$ and $Q_2$ in Table 2.
The columns ``$Obj_Q$'', ``$Improv_{Q1}$'' (and ``$Improv_{Q2}$'') represent the best objective function value obtained by the QUBO solver using the $Q$ matrix within $900$ seconds, and the percentage improvement in the best objective function among the nine parameter combinations of $(\alpha,\delta)$ using the $Q_1$ (and $Q_2$) matrix, respectively. \begin{table}[htbp] \centering \caption{Results of Algorithm 2} \scalebox{0.67}{ \begin{tabular}{|lrrr|lrrr|} \hline Instance & \multicolumn{1}{l}{$Obj_Q$} & \multicolumn{1}{l}{$Improv_{Q1}$} & \multicolumn{1}{l}{$Improv_{Q2}$} & Instance & \multicolumn{1}{l}{$Obj_Q$} & \multicolumn{1}{l}{$Improv_{Q1}$} & \multicolumn{1}{l}{$Improv_{Q2}$} \\ \hline 1000\_5000\_1 & 25934 & 0.38 & 0.28 & 1000\_10000\_1 & 42920 & 0.91 & 1.35 \\ 1000\_5000\_2 & 483289 & 0.03 & 0.06 & 1000\_10000\_2 & 893493 & 0.59 & 0.63 \\ 1000\_5000\_3 & 52469 & 0.03 & 0.02 & 1000\_10000\_3 & 96764 & 0 & -0.01 \\ 1000\_5000\_4 & 214726 & -0.39 & 0.24 & 1000\_10000\_4 & 371621 & 1.66 & 0.9 \\ 1000\_5000\_5 & 18644 & 0.01 & -0.02 & 1000\_10000\_5 & 29870 & -0.03 & -0.01 \\ 1000\_5000\_6 & 275332 & 0.2 & 0.37 & 1000\_10000\_6 & 476253 & 0.31 & 0.26 \\ 1000\_5000\_7 & 32141 & 0.26 & 0.31 & 1000\_10000\_7 & 55732 & 0.17 & 0.19 \\ 1000\_5000\_8 & 155738 & 0.12 & -0.13 & 1000\_10000\_8 & 250964 & 0.17 & -0.02 \\ 1000\_5000\_9 & 270749 & 0.49 & 0.25 & 1000\_10000\_9 & 479986 & -0.17 & 0.14 \\ 1000\_5000\_10 & 18385 & 0.05 & -0.04 & 1000\_10000\_10 & 29624 & 0 & 0.02 \\ 1000\_5000\_11 & 158718 & 0.02 & 0.07 & 1000\_10000\_11 & 255999 & 0.83 & 0.96 \\ 1000\_5000\_12 & 32297 & 0.01 & 0.01 & 1000\_10000\_12 & 54825 & 0.01 & 0.01 \\ 1000\_5000\_13 & 477743 & 0.54 & 0.08 & 1000\_10000\_13 & 870231 & 0.04 & 0.26 \\ 1000\_5000\_14 & 25848 & 0.04 & 0.01 & 1000\_10000\_14 & 43236 & 0.12 & 0.01 \\ 1000\_5000\_15 & 214435 & -0.01 & 0.56 & 1000\_10000\_15 & 374992 & 0.62 & 0.85 \\ 1000\_5000\_16 & 52686 & 0 & -0.01 & 1000\_10000\_16 & 97105 & 0.02 & 0.19 \\ \hline bqp\_1000\_1 & 371155 & 0.07 & 0.07 & bqp\_2500\_1 & 1512444 & 0.23 & 0.18 \\ bqp\_1000\_2 & 354822 & 0.02 & 0.03 & bqp\_2500\_2 & 1469553 & 0.03 & 0.09 \\ bqp\_1000\_3 & 371236 & 0 & 0 & bqp\_2500\_3 & 1413186 & 0.04 & 0.04 \\ bqp\_1000\_4 & 370638 & -0.01 & 0 & bqp\_2500\_4 & 1506521 & 0.07 & 0.07 \\ bqp\_1000\_5 & 352730 & 0 & 0 & bqp\_2500\_5 & 1491700 & 0.01 & -0.01 \\ bqp\_1000\_6 & 359629 & 0 & 0 & bqp\_2500\_6 & 1468745 & -0.01 & -0.05 \\ bqp\_1000\_7 & 370718 & 0.13 & 0.11 & bqp\_2500\_7 & 1478073 & 0.02 & -0.03 \\ bqp\_1000\_8 & 351975 & 0 & 0.01 & bqp\_2500\_8 & 1483757 & 0.03 & 0.02 \\ bqp\_1000\_9 & 349044 & 0.06 & 0.08 & bqp\_2500\_9 & 1482091 & 0.01 & 0.02 \\ bqp\_1000\_10 & 351272 & 0.04 & -0.01 & bqp\_2500\_10 & 1482220 & -0.02 & 0 \\ \hline \end{tabular}}% \label{tab:addlabel}% \end{table}%
In summary, favoring or escaping local optima based on $Q_1$ and $Q_2$ leads to an improvement in solution quality in the majority of the instances. Moreover, utilizing both the $Q_1$ and $Q_2$ results (i.e., looking at $max(Improv_{Q1},Improv_{Q2})$) leads to a guaranteed improvement in all but two instances (1000\_10000\_5 and bqp\_2500\_6). Future research will explore this dynamic through a parallelized tabu search with alternating phases between $Q$, $Q_1$ and $Q_2$. We conducted a paired two-sample t-test between the ``$Improv_{Q1}$'' and ``$Improv_{Q2}$'' columns to determine whether the population mean of the $Q_1$ results was different from that of $Q_2$, and found that there is no statistically significant difference between the two techniques. Moreover, no specific combination of $(\alpha,\delta)$ was dominant over all others. \begin{comment} \begin{figure}[htbp] \centerline{\includegraphics[scale=0.9]{Vars_Freq_1.png}} \caption{Number of Variables vs Relative Frequency of setting $x_i = 1$} \label{fig:fig11} \end{figure} \end{comment} \section{Conclusions} \label{conc} We present a Constraint Programming approach to obtain a diverse set of local optima, which can be utilized in elite sets or local optima networks, and we present a learning-based technique that modifies the linear coefficients of the $Q$ matrix while favoring or avoiding local optima. Testing indicates that this technique leads to an improvement in solution quality for benchmark QUBO instances. Future work involves combining the effects of $Q_1$ and $Q_2$ in an alternating-phase tabu search heuristic. \bibliographystyle{elsarticle-num}
1,116,691,501,065
arxiv
\section{Introduction} In the areas of computer communications and electronic transactions, one important topic is how to send data in a confidential and authenticated way. Usually, the confidentiality of delivered data is provided by encryption algorithms, and the authentication of messages is guaranteed by digital signatures. In the traditional paradigm, these two cryptographic operations are performed in the order of encrypt-then-sign. In \cite{HMP94}, Horster et al. proposed an efficient scheme such that messages can be encrypted and authenticated simultaneously. Later, Lee and Chang \cite{LC95} improved Horster et al.'s authenticated encryption scheme so that no hash function is needed. However, neither scheme provides the property of non-repudiation, i.e., the receiver cannot prove to a third party that some messages are indeed originated from a specific sender. In \cite{Zhe97}, Zheng introduced signcryption schemes such that unforgeability, confidentiality, and non-repudiation can be provided {\it simultaneously}. Since the non-repudiation protocols in \cite{Zhe97a} are based on zero-knowledge proofs, Zheng's schemes are inefficient when there are disputes between the receiver and the sender. In \cite{MC03}, Ma and Chen proposed an efficient authenticated encryption scheme with public verifiability. That is, in their scheme the receiver Bob can efficiently prove to a third party that a message is indeed originated from the sender Alice. However, this paper shows that their scheme is insecure, since a dishonest Bob can forge a valid ciphertext so that it looks as if it were generated by Alice. In our attack, the only assumption is that Bob registers his public key with a certification authority (CA) after he knows Alice's public key. This assumption almost always holds in the existing public key infrastructures (PKIs). Another problem in their scheme is that their public verification protocol does not work due to a mathematical error. To overcome these weaknesses in the Ma-Chen scheme, we propose a new scheme based on the Schnorr signature. In addition, technical discussions are provided to show that our scheme is secure and efficient. The rest of this paper is organized as follows. Section 2 first reviews the Ma-Chen scheme. Then, the security analysis is presented in Section 3. After that, we propose a new scheme and analyze it in Section 4. Finally, the conclusion is given in Section 5. \section{Review of the Ma-Chen Scheme} A trusted third party (TTP) selects a triple $(p,q,g)$, where $p$ and $q$ are two large primes satisfying $q|(p-1)$, and $g\in \mathbb{Z}_p^*$ is an element of order $q$. It is assumed that the Decisional Diffie-Hellman (DDH) problem is difficult in the cyclic group $G_q=\langle g\rangle$. That is, given $g, g^a, g^b, g^c\in G_q$ where $a$, $b$ and $c$ are unknown random numbers, it is infeasible to determine whether $g^{ab}\mod p$ equals $g^{c} \mod p$ \footnote{There are two other related computational assumptions, i.e., that the discrete logarithm (DL) problem and the computational Diffie-Hellman (CDH) problem are difficult in the cyclic group $G_q=\langle g\rangle$. That is, given $g, g^a, g^b\in G_q$ where $a$ and $b$ are unknown random numbers, it is infeasible to compute $a$ or $g^{ab}\mod p$. Actually, it is easy to see that the DDH assumption is at least as strong as the CDH assumption, and the CDH assumption is at least as strong as the DL assumption.}. In addition, the TTP publishes a secure hash function $H(\cdot)$.
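For concreteness, the setup can be instantiated as follows (a toy sketch only: the tiny primes have no cryptographic strength, and real deployments would use sizes such as $|p| = 1024$ and $|q| = 160$ bits):
\begin{verbatim}
from random import randrange

# Toy Schnorr-group parameters (p, q, g) with q | (p - 1) and g of order q.
p, q = 2579, 1289             # p = 2q + 1, both prime
g = pow(2, (p - 1) // q, p)   # g = 4 here; g != 1, so g has order q

def keygen():
    # A secret/public key pair (x, y = g^x mod p).
    x = randrange(1, q)
    return x, pow(g, x, p)

x_A, y_A = keygen()   # Alice's key pair
x_B, y_B = keygen()   # Bob's key pair
\end{verbatim}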
We assume that Alice and Bob have the certified secret/public key pairs $(x_A,y_A=g^{x_A}\mod p)$ and $(x_B,y_B=g^{x_B}\mod p)$, respectively. \vskip 1mm To send a message $m\in {\mathbb Z}_p^*$ to the receiver Bob, the sender Alice does as follows. \begin{description} \item [(A-1)] Pick a random number $k\in \mathbb Z_q^*$, and then compute $v=(g\cdot y_B)^k \mod p$, $e=v \mod q$. \item [(A-2)] Set $c=m\cdot H(v)^{-1} \mod p$, $r=H(e,H(m))$, and $s=k-x_A\cdot r \mod q$. \item [(A-3)] Send the triple $(c,r,s)$ to Bob via a public channel. \end{description} Upon receiving $(c,r,s)$ from Alice, the receiver Bob does the following: \begin{description} \item [(B-1)] Compute $v=(g\cdot y_B)^s\cdot y_A^{r(x_B+1)} \mod p$ and $e=v \mod q$. \item [(B-2)] Recover the message $m=c\cdot H(v) \mod p$, and check whether $r\equiv H(e,H(m))$. \item [(B-3)] If $r\equiv H(e,H(m))$ holds, Bob concludes that $(c,r,s)$ is indeed encrypted by Alice. \end{description} For public verification, Bob first computes $K_1=(y_B^s \cdot y_A^{r\cdot x_B} \mod p)\mod q$, and then forwards $(H(m),K_1,r,s)$ to the arbitrator TTP. The TTP performs as follows: \begin{description} \item [(TTP-1)]\quad Compute $e'=(g^s\cdot y_A^r\cdot K_1\mod p)\mod q$. \item [(TTP-2)]\quad If $r\equiv H(e',H(m))$, the TTP knows that Alice is the originator of the encryption and signature. \end{description} \section{Security Analysis of the Ma-Chen Scheme} We now give some explanations and remarks on the Ma-Chen scheme. Note that to decrypt and verify the triple $(c,r,s)$, one needs to know the value of $x_B$ or $y_{AB}$, where $y_{AB}=g^{x_A\cdot x_B} \mod p$. Actually, using $y_{AB}$ the ciphertext $(c,r,s)$ can be easily decrypted and verified as follows: (a) compute $v=(g\cdot y_B)^s \cdot y_{AB}^{r}\cdot y_A^r\mod p$, and $e=v \mod q$; (b) recover $m=c\cdot H(v)\mod p$, and check whether $r\equiv H(e,H(m))$. Therefore, the value of $y_{AB}$ cannot be revealed to anybody other than Alice and Bob. The authors of \cite{MC03} noticed this problem. Therefore, in their scheme only the tuple $(H(m),K_1,r,s)$ (not including the value of $v$) is revealed to the TTP, so that even the TTP cannot derive the value of $y_{AB}$. The reason is that if one value of $v$ is known by the TTP (or anybody else), then $y_{AB}$ can be obtained easily by $y_{AB}=y_A^{-1}\cdot v^{r^{-1}}\cdot (g\cdot y_B)^{-s\cdot r^{-1}}\mod p$. In addition, recall that nobody can derive the value of $y_{AB}$ directly from Alice's public key $y_A$ and Bob's public key $y_B$, since the DDH assumption is assumed to hold in the multiplicative cyclic subgroup $G_q=\langle g\rangle$. Based on the above observations and the fact that the value of $s$ in the Ma-Chen scheme is computed in a very similar way to that in the provably secure Schnorr signature scheme \cite{Sch91}, the authors of \cite{MC03} analyzed the security of their scheme, and claimed that their scheme satisfies the following security properties: \begin{itemize} \item [(1)] {\it Unforgeability}: No attacker other than Alice (including Bob) can generate a valid ciphertext $(c,r,s)$ for a message $m$ such that the verification procedure (B-1 to B-3) or the public verification procedure (TTP-1 to TTP-2) is satisfied. \item [(2)] {\it Confidentiality}: Under the DDH assumption, no third party can derive the message $m$ from the ciphertext $(c,r,s)$. \item [(3)] {\it Non-repudiation}: Once Bob reveals $(H(m),K_1,r,s)$, anybody can verify that $(r,s)$ is Alice's signature.
Therefore, the TTP can settle possible disputes between Alice and Bob. \end{itemize} However, in the following we show that their scheme is actually forgeable. Moreover, we identify a design error in their public verification procedure. That is, even if all parties follow the specifications of their scheme honestly, the TTP cannot conclude that $(H(m), K_1,r,s)$ is generated by Alice. Therefore, the Ma-Chen scheme does not meet the properties of unforgeability, non-repudiation and public verifiability. \vskip 1mm {\bf Forgeability}. Firstly, the authors observed that Bob is the strongest attacker when it comes to forging a triple $(c,r,s)$, since he knows $x_B$, which is used in the verification procedure. Then, they claimed that their scheme is {\it equivalent} to the Schnorr signature \cite{Sch91}. So they concluded that their scheme is unforgeable against adaptive attacks, as the Schnorr signature is proved to be unforgeable (in the random oracle model) \cite{PS00}. Unfortunately, we notice that this is not the case, though the value of $s$ is indeed calculated in a very similar way as in the Schnorr signature \cite{Sch91}. To show this fact directly, we now demonstrate a concrete attack on the Ma-Chen scheme. In our attack, we assume that Bob registers his public key $y_B$ with a certification authority (CA) after he knows Alice's public key $y_A$. This assumption almost always holds in the existing public key infrastructures (PKIs). Anyway, in the original paper \cite{MC03}, it is not specified that Alice and Bob have to register their public keys simultaneously. Moreover, in many scenarios it seems worthwhile, from the point of view of the (malicious) verifier Bob, to register or update a (new) public key, even if such an action only enables him to forge one valid ciphertext for one message. \vskip 1mm To mount this attack, the verifier Bob forges a valid ciphertext $(c,r,s)$ for a message $m$ of his choice as follows. \begin{itemize} \item [(1)] Pick two random numbers $a, b\in \mathbb Z_q^*$, and compute $v=g^a\cdot y_A^b \mod p$. \item [(2)] Compute $e=v \mod q$, $c=m\cdot H(v)^{-1} \mod p$, $r=H(e,H(m))$, and $s=rab^{-1}\mod q$. \item [(3)] Set his secret key $x_B=br^{-1}-1 \mod q$, and then register the public key $y_B=g^{x_B}\mod p$ with a certification authority (CA). \end{itemize} We now show that the forged triple $(c,r,s)$ is a valid ciphertext for message $m$ with respect to the public keys $y_A$ and $y_B$. Firstly, the following equalities hold: $$\begin{array}{lcl} (g\cdot y_B)^s\cdot y_A^{r(x_B+1)}\mod p & \equiv & g^{(1+x_B)s} \cdot y_A^b\mod p\\ & \equiv & g^a \cdot y_A^b\mod p \\ & \equiv & v. \end{array}$$ Then, we have $e\equiv v\mod q$, $m \equiv c\cdot H(v) \mod p$, and $r\equiv H(e,H(m))$. So $(c,r,s)$ is a valid triple. \vskip 1mm In addition, note that even if Alice later realizes that a triple $(c,r,s)$ was forged via the above attacking procedure, she cannot defend herself by computing Bob's secret $x_B$. We explain the reasons as follows. Since the hash function $H(\cdot)$ is usually modelled as a random function, to derive the secret key $x_B$ the useful information for Alice is the following three equations: $v=g^a\cdot y_A^b \mod p$, $s=rab^{-1}\mod q$, and $x_B=br^{-1}-1 \mod q$. Firstly, note that Alice (with her secret key $x_A$) cannot derive the values of $a$ and $b$ from the equation $v=g^a\cdot y_A^b \mod p$, as there are $q-1$ candidates for such pairs $(a,b)$. More specifically, for any given $a\in Z_q^*$, there is a fixed value of $b$ such that $v=g^a\cdot y_A^b \mod p$.
Secondly, Alice can try to derive the value of $x_B$ by eliminating $a$ and $b$ in the above three equations. To do so, she knows that $a=s(1+x_B)\mod q$ and $b=r(1+x_B) \mod q$. Consequently, Alice gets the equation $v=(g^s\cdot y_A^r)^{(1+x_B)} \mod p$. To get the value of $x_B$ from this equation, Alice has to face the difficult discrete logarithm problem, which is widely believed to be intractable. \vskip 2mm {\bf Design Error}. We note that the TTP cannot validate a valid tuple $(H(m),K_1,r,s)$ by the public verification procedure, i.e., TTP-1 to TTP-2. The reason is that $e'\neq e$ even if Alice, Bob, and the TTP all are honest. Namely, $$[(g\cdot y_B)^s\cdot y_A^{r(x_B+1)} \mod p] \mod q \neq [g^s\cdot y_A^r\cdot K_1\mod p ] \mod q,$$ where $K_1=(y_B^s \cdot y_A^{r\cdot x_B} \mod p)\mod q$, and $p$ and $q$ are two primes such that $q|(p-1)$. In the original paper \cite{MC03}, however, those two expressions are treated as equivalent. This problem was also identified independently by Wen et al. \cite{WLH03}. For more details, please refer to their paper. \section{Improved Scheme} In this section, we propose an improvement of the Ma-Chen scheme by exploiting a similar idea to that of Bao and Deng \cite{BD98}. However, different from Bao and Deng's work, we use the provably secure Schnorr signature as the underlying signature scheme. Furthermore, technical discussions are provided to show that our scheme is secure and efficient. \subsection{Description of the Scheme} In our scheme, we assume that $(E_K(\cdot), D_K(\cdot))$ is a pair of ideal symmetric key encryption/decryption algorithms under the session key $K$. In addition, $h(\cdot)$ is another suitable hash function, which maps a number of $\mathbb{Z}_p$ to a session key for our symmetric key encryption/decryption algorithms. Other notations are the same as in Section 2. \vskip 1mm To send a message $m\in {\mathbb Z}_p^*$ to the receiver Bob in an authenticated and encrypted way, the sender Alice does as follows. \begin{description} \item [(A-1)] Pick a random number $k\in \mathbb Z_q^*$, and then compute $t_1=g^k \mod p$, and $t_2=y_B^k \mod p$. \item [(A-2)] Set $c=E_{h(t_2)}(m)$, $r=H(m,t_1)$, and $s=k+r\cdot x_A\mod q$. \item [(A-3)] Send the triple $(c,r,s)$ to Bob via a public channel. \end{description} Upon receiving $(c,r,s)$ from Alice, the receiver Bob does the following: \begin{description} \item [(B-1)] Compute $t_1=g^s y_A^{-r} \mod p$, and $t_2=t_1^{x_B} \mod p$. \item [(B-2)] Recover the message $m=D_{h(t_2)}(c)$, and check whether $r\equiv H(m,t_1)$. \item [(B-3)] If $r\equiv H(m,t_1)$ holds, Bob concludes that $(c,r,s)$ is indeed encrypted by Alice. \end{description} For public verification, Bob just needs to release $(m,r,s)$. Then, any verifier can check whether $(r,s)$ is a standard Schnorr signature for message $m$ as follows: \begin{description} \item [(V-1)] Compute $t_1=g^s y_A^{-r} \mod p$. \item [(V-2)] $(r,s)$ is Alice's valid signature for message $m$ if and only if $r\equiv H(m,t_1)$. \end{description} \subsection{Security} Our scheme has a very simple logical structure. That is, we use exactly the Schnorr signature scheme to generate the pair $(r,s)$, i.e., the standard Schnorr signature for a message $m$. Therefore, according to the provable security of the Schnorr signature given in \cite{PS00}, any adaptive attacker (including Bob) cannot forge a valid ciphertext $(c,r,s)$ for any message $m$ such that $m=D_{h(t_2)}(c)$ and $r\equiv H(m,t_1)$, where $t_1=g^s y_A^{-r} \mod p$, $t_2=t_1^{x_B} \mod p$.
Otherwise, this would imply that the attacker has successfully forged a valid Schnorr signature $(r,s)$ for a message $m$, which is in turn contrary to the provable security of the Schnorr signature scheme. So our scheme satisfies unforgeability. Now we discuss the confidentiality, i.e., except for the receiver Bob, no one else can extract the plaintext $m$ from the ciphertext $(c,r,s)$. Firstly, note that an attacker cannot extract the plaintext $m$ from the equality $r\equiv H(m,t_1)$ after recovering $t_1=g^s y_A^{-r} \mod p$, since the secure hash function $H(m,t_1)$ hides the information of $m$. Another way of getting the message $m$ is to decrypt the ciphertext $c$ directly. To do so, the attacker has to obtain the session key $h(t_2)$, since $(E_K(\cdot), D_K(\cdot))$ is assumed to be an ideal (and thus secure) symmetric key encryption/decryption algorithm pair. This means that to get the session key $h(t_2)$, the attacker has to get the value of $t_2$ first, since $h(\cdot)$ is also a secure hash function. However, the attacker cannot get the value of $t_2$ from the values $t_1$ and $y_B$. In fact, the latter problem is the CDH problem, which is widely believed to be intractable in the security community. Therefore, we conclude that our scheme meets the confidentiality requirement. Finally, the property of non-repudiation is also satisfied in our scheme due to the following two facts: (a) only Alice can generate a valid ciphertext $(c,r,s)$; and (b) anybody can verify that $(r,s)$ is a standard Schnorr signature if the receiver Bob releases the triple $(m,r,s)$. Consequently, a TTP can easily settle potential disputes between Alice and Bob by checking whether $r\equiv H(m,t_1)$, where $t_1$ is computed by $t_1=g^s y_A^{-r} \mod p$. \subsection{Efficiency} In our scheme and the Ma-Chen scheme, the length of the ciphertext $(c,r,s)$ is the same, i.e., $|p|+2|q|$ bits. For a real system, $p$ and $q$ can be selected as primes with lengths of 1024 bits and 160 bits, respectively. In this setting, the length of the ciphertext $(c,r,s)$ is 1344 bits. \vskip 1mm We now discuss the computation overhead. We only count the number of exponentiations performed by each party, due to the fact that exponentiation is the most time-consuming operation in most cryptosystems. In the Ma-Chen scheme, 3 exponentiations are needed to generate and verify a ciphertext (by Alice and Bob). In our scheme, this number is 5, slightly more. However, to convert a ciphertext $(c,r,s)$ for public verification, Bob does not need to perform any exponentiation in our scheme, while in the Ma-Chen scheme 2 exponentiations are required. In addition, in both schemes the TTP needs to perform 2 exponentiations in the public verification procedure. In a word, the two schemes do not differ much in performance. \section{Conclusion} In this paper, we identified two security weaknesses in the Ma-Chen authenticated encryption scheme proposed in \cite{MC03}. Our results showed that their scheme is insecure. Moreover, based on the Schnorr signature scheme, we proposed a new scheme such that all desired security requirements are satisfied.
1,116,691,501,066
arxiv
\section{Background} We earlier described two taggers for French: the statistical one having an accuracy of 95--97~\% and the constraint-based one 97--99~\% (see \cite{CT94,CT95}). The disambiguation has already been described, and here we discuss the other stages of the process, namely the definition of the tagset, transforming a current lexicon into a new one and guessing the words that do not appear in the lexicon. Our lexicon is based on a finite-state transducer lexicon \cite{KKZ92}. The French description was originally built by Annie Zaenen and Carol Neidle, and later refined by Jean-Pierre Chanod \shortcite{Ch94}. Related work on French can be found in \cite{AMD85}. \section{Tagset} We describe in this section criteria for selecting the tagset. The following is based on what we noticed to be useful during the development of the taggers. \subsection{The size of the tagset} Our basic French morphological analyser was not originally designed for a (statistical) tagger and the number of different tag combinations it has is quite high. The size of the tagset is only 88. But because a word is typically associated with a sequence of tags, the number of different combinations is higher, 353 possible sequences for single French words. If we also consider words joined with clitics, the number of different combinations is much higher, namely 6525. A big tagset does not cause trouble for a constraint-based tagger because one can refer to a combination of tags as easily as to a single tag. For a statistical tagger, however, a big tagset may be a major problem. We therefore used two principles for forming the tagset: (1) the tagset should not be big and (2) the tagset should not introduce distinctions that cannot be resolved at this level of analysis. \subsection{Verb tense and mood} As distinctions that cannot be resolved at this level of analysis should be avoided, we do not have information about the tense of the verbs. Some of this information can be recovered later by performing another lexicon lookup after the analysis. Thus, if the verb tense is not ambiguous, we have not lost any information and, even if it is, a part-of-speech tagger could not resolve the ambiguity very reliably anyway. For instance, {\em dort} (present; {\em sleeps}) and {\em dormira} (future; {\em will sleep}) have the same tag {\em VERB-SG-P3}, because they are both singular, third-person forms and they can both be the main verb of a clause. If needed, we can do another lexicon lookup for words that have the tag {\em VERB-SG-P3} and assign a tense to them after the disambiguation. Therefore, the tagset and the lexicon together may make finer distinctions than the tagger alone. On the other hand, the same verb form {\em dit} can be third person singular present indicative or third person singular past historic (pass\'{e} simple) of the verb {\em dire} ({\em to say}). We do not introduce the distinction between those two forms, both tagged as {\em VERB-SG-P3}, because determining which of the two tenses is to be selected in a given context goes beyond the scope of the tagger. However, we do keep the distinction between {\em dit} as a finite verb (present or past) on one side and as a past participle on the other, because this distinction is properly handled with a limited contextual analysis.
Morphological information concerning mood is also collapsed in the same way, so that a large class of ambiguity between present indicative and present subjunctive is not resolved: again this is motivated by the fact that the mood is determined by remote elements such as, among others, connectors that can be located at (theoretically) any distance from the verb. For instance, a conjunction like {\em quoique} requires the subjunctive mood: \begin{quote} Quoique, en principe, ce cas {\bf soit} fr\'{e}quent. (Though, in principle, this case {\bf is} [subjunctive] frequent.) \end{quote} The polarity of the main verb to which a subordinate clause is attached also plays a role. For instance, compare: \begin{quote} Je pense que les petits enfants {\bf font} de jolis dessins. (I think that small kids {\bf make} [indicative] nice drawings.) \\ \\ Je ne pense pas que les petits enfants {\bf fassent} de jolis dessins. (I do not think that small kids {\bf make} [subjunctive] nice drawings.) \\ \end{quote} Consequently, forms like {\em chante} are tagged as VERB-SG-P3 regardless of their mood. In the case of {\em faire} (to do, to make), however, the mood information can easily be recovered, as the third person plural forms are {\em font} and {\em fassent} for the indicative and subjunctive moods respectively. \subsection{Person} The person seems to be problematic for a statistical tagger (but not for a constraint-based tagger). For instance, the verb {\em pense}, ambiguous between the first- and third-person, in the sentence {\em Je ne le pense pas} (I do not think so) is disambiguated wrongly because the statistical tagger fails to see the first-person pronoun {\em je} and selects the more common third-person reading for the verb. We chose to collapse the first- and second-person verbs together but not the third person. The reason why we cannot also collapse the third person is that we have an ambiguity class that contains adjective and first- or second-person verbs. In a sentence like {\em Le secteur mati\`{e}res (NOUN-PL) plastiques (ADJ-PL/NOUN-PL/VERB-P1P2)\ldots} the verb reading for {\em plastiques} is impossible. Because a sequence of a noun followed by a third-person verb is relatively common, collapsing also the third person would cause trouble in parsing. Because we use the same tag for first- and second-person verbs, the first- and second-person pronouns are also collapsed together to keep the system consistent. Determining the person after the analysis is also quite straightforward: the personal pronouns are not ambiguous, and the verb form, if it is ambiguous, can be recovered from its subject pronoun. \subsection{Lexical word-form} Identical surface forms were also collapsed under the same lexical item when they can be attached to different lemmata (lexical forms) while sharing the same category, such as {\em peignent}, derived from the verb {\em peigner} ({\em to comb}) or {\em peindre} ({\em to paint}). Such coincidental situations are very rare in French \cite{Elb93}. However, in the case of {\em suis}, first person singular of the auxiliary {\em \^{e}tre} ({\em to be}) or of the verb {\em suivre} ({\em to follow}), the distinction is maintained, as we introduced special tags for auxiliaries. \subsection{Gender and number} We have not introduced gender distinctions as far as nouns and adjectives (and incidentally determiners) are concerned. Thus a feminine noun like {\em chaise} ({\em chair}) and a masculine noun like {\em tabouret} ({\em stool}) both receive the same tag {\em NOUN-SG}.
However, we have introduced distinctions between singular nouns ({\em NOUN-SG}), plural nouns ({\em NOUN-PL}) and number-invariant nouns ({\em NOUN-INV}) such as {\em taux} ({\em rate/rates}). Similar distinctions apply to adjectives and determiners. The main reason for this choice is that number, unlike gender, plays a major role in French with respect to subject/verb agreement, and the noun/verb ambiguity is one of the major cases that we want the tagger to resolve. \subsection{Discussion on Gender} Ignoring the gender distinction for a French tagger is certainly counterintuitive. There are three major objections against this choice: \begin{itemize} \item Gender information would provide better disambiguation, \item Gender ambiguous nouns should be resolved, and \item Displaying gender provides more information. \end{itemize} There is obviously a strong objection against leaving out gender information, as this information may provide better disambiguation in some contexts. For instance in {\em le diffuseur diffuse}, the word {\em diffuse} is ambiguous as a verb or as a feminine adjective. This last category is unlikely after a masculine noun like {\em diffuseur}. However, one may observe that gender agreement between nouns and adjectives often involves long distance dependencies, due for instance to coordination or to the adjunction of noun complements, as in {\em une envie de soleil diffuse}, where the feminine adjective {\em diffuse} agrees with the feminine noun {\em envie}. In other words, introducing linguistically relevant information such as gender into the tagset is fine, but if this information is not used in the linguistically relevant context, the benefit is unclear. Therefore, if a (statistical) tagger is not able to use the relevant context, it may produce some extra errors by using the gender. An interesting, albeit minor, benefit of not introducing the gender distinction is that there is then no problem with tagging phrases like {\em mon allusion} ({\em my allusion}), where the masculine form of the possessive determiner {\em mon} precedes a feminine singular noun that begins with a vowel, for euphonic reasons. Our position is that situations where the gender distinction would help are rare, and that the expected improvement could well be impaired by new errors in some other contexts. On a test suite \cite{CT95} extracted from the newspaper Le Monde (12~000 words) tagged with either of our two taggers, we counted only three errors that violated gender agreement. Two could have been avoided by other means, i.e.~they belong to other classes of tagging errors. The problematic sentence was: \begin{quote} L'arm\'{e}e interdit d'autre part le passage\ldots\\ (The army forbids the passage\ldots) \end{quote} where {\em interdit} is mistakenly tagged as an adjective rather than a finite verb, while {\em arm\'{e}e} is a feminine noun and {\em interdit} a masculine adjective, which makes the {\em noun--adjective} sequence impossible in this particular sentence\footnote{We have not systematically compared the two approaches, i.e.~with or without gender distinction, but previous experiences \cite{Ch93} with broad coverage parsing of possibly erroneous texts have shown that gender agreement is not as essential as one may think when it comes to French parsing.}. Another argument in favour of gender distinction is that some nouns are ambiguously masculine or feminine, with possible differences in meaning, e.g.~{\em poste}, {\em garde}, {\em manche}, {\em tour}, {\em page}.
A tagger that carried this distinction through would then provide sense disambiguation for such words. Actually, such gender-ambiguous words are not very frequent. On the same 12~000-word test corpus, we counted 46 occurrences of words which have different meanings for the masculine and the feminine noun readings. This number could be further reduced if extremely rare readings were removed from the lexicon, like masculine {\em ombre} (a kind of fish, while the feminine reading means shadow or shade) or feminine {\em litre} (a religious ornament). We also counted 325 occurrences of nouns (proper nouns excluded) which do not have different meanings in the masculine and the feminine readings, e.g.~{\em \'{e}l\`{e}ve}, {\em camarade}, {\em jeune}. A reason not to distinguish the gender of such nouns, besides their sparsity, is that the immediate context does not always suffice to resolve the ambiguity. Basically, disambiguation is possible if there is an unambiguous masculine or feminine modifier attached to the noun, as in {\em le poste} vs.~{\em la poste}. This is often not the case, especially for {\em preposition + noun} sequences and for plural forms, as plural determiners themselves are often ambiguous with respect to gender. For instance, in our test corpus, we find expressions like {\em en 225 pages}, {\em\`{a} leur tour}, {\em\`{a} ces postes} and {\em pour les postes de responsabilit\'{e}} for which the contextual analysis does not help to disambiguate the gender of the head noun. Finally, carrying the gender information does not in itself increase the disambiguation power of the tagger. A disambiguator that explicitly marked gender distinctions in the tagset would not necessarily provide more information. A reasonable way to assess the disambiguating power of a tagger is to consider the ratio between the initial number of possible tags and the final number of tags after disambiguation. For instance, it does not make any difference whether the ambiguity class for a word like {\em table} is {\em [feminine-noun, finite-verb]} or {\em [noun, finite-verb]}: in both cases the tagger reduces the ambiguity by a ratio of 2 to 1. The information that can be derived from this disambiguation is a matter of associating the tagged word with any relevant information, like its base form, morphological features such as gender, or even its definition or its translation into some other language. This can be achieved by looking up the disambiguated word in the appropriate lexicon. Providing this derived information is not an intrinsic property of the tagger. Our point is that the objections do not hold very strongly. Gender information is certainly important in itself. We only argue that ignoring it at the level of part-of-speech tagging has no measurable effect on the overall quality of the tagger. On our test corpus of 12~000 words, only three errors violate gender agreement. This indicates how little the accuracy of the tagger could be improved by introducing gender distinctions. On the other hand, we do not know how many errors would have been introduced if we had distinguished between the genders.
\subsection{Remaining categories}
We avoid categories that are too small, i.e.~rare words that do not fit into an existing category are collapsed together. Making a distinction between categories is not useful if there are not enough occurrences of them in the training sample. We made a category {\em MISC} for all those miscellaneous words that do not fit into any existing category.
This accounts for words such as: the interjection {\em oh}, the salutation {\em bonjour}, the onomatopoeia {\em miaou}, and word parts, i.e.~words that exist only as part of a multi-word expression, such as {\em priori} in {\em a priori}.
\subsection{Dividing a category}
In a few instances, we introduced new categories for words that have a specific syntactic distribution. For instance, we introduced a word-specific tag {\em PREP-DE} for the words {\em de}, {\em des} and {\em du}, and a tag {\em PREP-A} for the words {\em \`{a}}, {\em au} and {\em aux}. Word-specific tags for other prepositions could be considered too. The other readings of these words were not removed, e.g.~{\em de} is, ambiguously, still a determiner as well as {\em PREP-DE}. When we have only one tag for all the prepositions, a sequence like
\begin{quote} determiner noun noun/verb preposition \end{quote}
is frequently disambiguated in the wrong way by the statistical tagger, e.g.~{\em Le train part \`a cinq heures} ({\em The train leaves at 5 o'clock}). The word {\em part} is ambiguous between a noun and a verb (singular, third person), and the tagger seems to prefer the noun reading between a singular noun and a preposition. We succeeded in fixing this without modifying the tagset, but the side-effect was that the overall accuracy deteriorated. The main problem is that the preposition {\em de}, comparable to English {\em of}, is the most common preposition and also has a specific distribution. When we added new tags, say {\em PREP-DE} and {\em PREP-A}, for the specific prepositions while the other prepositions remained marked with {\em PREP}, we got the correct result, with no noticeable change in overall accuracy.
\section{Building the lexicon}
We have a lexical transducer for French \cite{KKZ92} which was built using Xerox Lexical Tools \cite{Xt92,Xl93}. In our work we do not modify the corresponding source lexicon, but we employ our finite-state calculus to map the lexical transducer into a new one. Writing rules that map a tag or a sequence of tags into a new tag is rather straightforward, whereas redefining the source lexicon would imply complex and time-consuming work. The initial lexicon contains all the inflectional information. For instance, the word {\em danses} (the plural of the noun {\em danse}, or a second-person form of the verb {\em danser} ({\em to dance})) has the following analyses\footnote{The tags represent: {\em present indicative, singular, second person, verb}; {\em present subjunctive, singular, second person, verb}; and {\em feminine, plural, noun}}:
\begin{verbatim}
danser +IndP +SG +P2 +Verb
danser +SubjP +SG +P2 +Verb
danse +Fem +PL +Noun
\end{verbatim}
Forms that include clitics are analysed as a sequence of items separated by the symbols $<$ or $>$ depending on whether the clitics precede or follow the head word. For instance {\em vient-il} ({\em does he come}, lit. {\em comes-he}) is analysed as\footnote{The tags for {\em il} represent: {\em nominative, masculine, singular, third person, clitic pronoun}.}:
\begin{verbatim}
venir +IndP +SG +P3 +Verb > il +Nom +Masc +SG +P3 +PC
\end{verbatim}
From this basic morphological transducer, we derived a new lexicon that matches the reduced tagset described above. This involved two major operations:
\begin{itemize} \item handling cliticised forms appropriately for the tagger's needs, and \item switching tagsets. \end{itemize}
In order to reduce the number of tags, cliticised items (like {\em vient-il}) are split into independent tokens for the tagging application.
This splitting is performed at an early stage by the tokeniser, before dictionary lookup. Keeping track of the fact that the tokens were initially agglutinated reduces the overall ambiguity. For instance, if the word {\em danses} is derived from the expression {\em danses-tu} ({\em do you dance}, lit. dance-you), then only the verb reading is possible. This is why forms like {\em danses-tu} are tokenised as {\em danses-} and {\em tu}, and forms like {\em chante-t-il} are tokenised as {\em chante-t-} and {\em il}. This in turn requires that forms like {\em danses-} and {\em chante-t-} be introduced into the new lexicon. With respect to switching tagsets, we use contextual two-level rules that turn the initial tags into new tags, or into the void symbol if old tags must simply disappear. For instance, the symbol {\em +Verb} is transformed into {\em +VERB-P3SG} if the immediate left context consists of the symbols {\em +SG +P3}. The symbols {\em +IndP}, {\em +SG} and {\em +P3} are then transduced to the void symbol, so that {\em vient} (or even the new token {\em vient-}) gets analysed merely as {\em +VERB-P3SG} instead of {\em +IndP +SG +P3 +Verb}. A final transformation consists in associating a given surface form with its ambiguity class, i.e.~with the alphabetically ordered sequence of all its possible tags. For instance {\em danses} is associated with the ambiguity class {\em [+NOUN-PL +VERB-P1P2]}, i.e.~it is either a plural noun or a verb form that belongs to the collapsed first- or second-person paradigm.
\section{Guesser}
Words not found in the lexicon are analysed by a separate finite-state transducer, the guesser. We developed a simple, extremely compact and efficient guesser for French. It is based on the general assumption that neologisms and uncommon words tend to follow regular inflectional patterns. The guesser is thus based on productive endings (like {\em ment} for adverbs, {\em ible} for adjectives, {\em er} for verbs). A given ending may point to various categories, e.g.~{\em er} identifies not only infinitive verbs but also nouns, due to possible borrowings from English. For instance, the ambiguity class for {\em killer} is {\em [NOUN-SG VERB-INF]}. These endings belong to the most frequent ending patterns in the lexicon, where every rare word weighs as much as any frequent word. Endings are not selected according to their frequency in running texts, because highly frequent words tend to have irregular endings, as shown by adverbs like {\em jamais}, {\em toujours}, {\em peut-\^{e}tre}, {\em hier}, {\em souvent} ({\em never}, {\em always}, {\em maybe}\ldots). Similarly, verb neologisms belong to the regular conjugation paradigm characterised by the infinitive ending {\em er}, e.g.~{\em d\'{e}balladuriser}. With respect to nouns, we first selected productive endings ({\em iste}, {\em eau}, {\em eur}, {\em rice}\ldots), until we realised that a better choice was to assign a noun tag to all endings, with the exception of those previously assigned to other classes. In the latter case, two situations may arise: either the ending is shared between nouns and some other category (such as {\em ment}), or it must be barred from the list of noun endings (such as {\em aient}, an inflectional marking of third-person plural verbs). We in fact introduced some hierarchy into the endings: e.g.~{\em ment} is shared by adverbs and nouns, while {\em iquement} is assigned to adverbs only.
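As an illustration, the selection logic just described can be sketched as follows in Python (the suffixes and tags shown here are only a small, made-up sample for exposition; the actual guesser is a compiled finite-state transducer rather than procedural code):
\begin{verbatim}
# Illustrative sketch of the ending-based guesser. The suffix table
# below is a toy sample; the real system encodes such patterns in a
# finite-state transducer.
SUFFIX_TAGS = {
    "iquement": ["ADV"],                  # more specific than "ment"
    "ment":     ["ADV", "NOUN-SG"],       # shared by adverbs and nouns
    "aient":    ["VERB-P3PL"],            # barred from the noun fallback
    "er":       ["NOUN-SG", "VERB-INF"],  # infinitives, English borrowings
}

def guess(word):
    """Return the ambiguity class (list of tags) for an unknown word."""
    # The longest matching ending wins, so "iquement" takes
    # precedence over "ment", mimicking the hierarchy of endings.
    for suffix in sorted(SUFFIX_TAGS, key=len, reverse=True):
        if word.endswith(suffix):
            return SUFFIX_TAGS[suffix]
    # Fallback: assign a noun tag to all remaining endings.
    return ["NOUN-SG"]

print(guess("ironiquement"))  # ['ADV']
print(guess("killer"))        # ['NOUN-SG', 'VERB-INF']
\end{verbatim}
In the transducer implementation, this longest-match precedence is encoded in the lexical patterns themselves; the procedural loop above only mimics the hierarchy of endings described in the text.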
Guessing based on endings offers some side advantages: unknown words often result from alternations, which occur at the beginning of the word, the rest remaining the same, e.g.~derivational prefixes as in {\em isra\'{e}lo-jordano-palestinienne}, but also oral transcriptions such as {\em les z'oreilles} ({\em the ears}), with {\em z'} marking the phonological liaison. Similarly, spelling errors, which account for many of the unknown words, actually affect the ending less than the internal structure of the word, e.g.~the misspelt verb forms {\em appellaient, geulait}. Hyphens used to emphasise a word, e.g.~{\em har-mo-ni-ser}, also leave endings unaltered. These side advantages do not, however, operate when the alternation (prefix, spelling error) applies to a frequent word that does not follow regular ending patterns. For instance, the verb {\em construit} and the adverb {\em tr\`{e}s} are respectively misspelt as {\em constuit} and {\em tr\'{e}s}, and are not properly recognised. Generally, the guesser does not recognise words belonging to closed classes (conjunctions, prepositions, etc.), under the assumption that closed classes are fully described in the basic lexicon. A possible improvement to the guesser would be to incorporate frequent spelling errors for words that are not otherwise recognised.
\subsection{Testing the guesser}
We extracted, from a corpus of newspaper articles (Lib\'{e}ration), a list of 13~500 words unknown to the basic lexicon\footnote{On various large newspaper corpora, an average of 18~\% of the words are unknown: this is mostly due to the high frequency of proper nouns.}. Of those unknown words, 9385 (i.e.~about 70~\%) are capitalised words, which are correctly and unambiguously analysed by the guesser as proper nouns with more than 95~\% accuracy. Errors are mostly due to foreign capitalised words which are not proper nouns (such as {\em Eight}) and onomatopoeia (such as {\em Ooooh}). The test on the remaining 4000 non-capitalised unknown words is more interesting. We randomly selected 800 of these words and ran the guesser on them. 1192 tags were assigned to those 800 words by the guesser, which gives an average of 1.5 tags per word. For 113 words, at least one required tag was missing (118 tags were missing as a whole; 4 words were lacking more than one tag: they are misspelt irregular verbs that have not been recognised as such). This means that 86~\% of the words got all the required tags from the guesser. 273 of the 1192 tags were classified as irrelevant. This concerned 244 words, which means that 70~\% of the words did not get any irrelevant tags. Finally, 63~\% of the words got all the required tags and only those. If we combine the evaluation on capitalised and non-capitalised words, 85~\% of all unknown words are perfectly tagged by the guesser, and 92~\% get all the necessary tags (with possibly some unwanted ones). The test on the non-capitalised words was tough enough, as we counted as irrelevant any tag that would be morphologically acceptable on general grounds, but which is not acceptable for the specific word at hand. For instance, the misspelt word {\em statisiticiens} is tagged as {\em [ADJ-PL NOUN-PL]}; we count the {\em ADJ-PL} tag as irrelevant, on the grounds that the underlying correct word {\em statisticiens} is a noun only (compare with the adjective {\em platoniciens}). The same occurs with words ending in {\em ement} that are systematically tagged as {\em [ADV NOUN-SG]}, unless a longer ending like {\em iquement} is recognised.
This often, but not always, makes the {\em NOUN-SG} tag irrelevant. As for missing tags, more than half are adjective tags for words that are otherwise correctly tagged as nouns or past participles (which somewhat reduces the importance of the error, as the syntactic distribution of adjectives overlaps with that of nouns and past participles). The remaining words that lack at least one tag include misspelt words belonging to closed classes ({\em come, tr\'{e}s, vavec}) or to irregular verbs ({\em constuit}), and barbarisms resulting from the omission of blanks ({\em proposde}) or from the adjunction of superfluous blanks or hyphens ({\em quand-m\^{e}me, so ci\'{e}t\'{e}}). We also had a few examples of compound nouns improperly tagged as singular nouns, e.g.~{\em rencontres-t\'{e}l\'{e}}, where the plural marking only appears on the first element of the compound. Finally, foreign words represent another class of problematic words, especially if they are not nouns. We found various English examples ({\em at, born, of, enough, easy}), but also Spanish ones, e.g.~{\em levantarse}, and Italian ones, e.g.~{\em palazzi}.
\section{Conclusion}
We have described the tagset, lexicon and guesser that we built for our French tagger. In this work, we re-used an existing lexicon and composed it with finite-state transducers (mapping rules) in order to produce a new lexical transducer with the new tagset. The guesser, which handles the words that are not in the lexicon, was described in more detail, and some test results were given. The disambiguation itself is described in \cite{CT95}.
\vspace{5mm}
\begin{acknowledgments} We want to thank Irene Maxwell and anonymous referees for useful comments. \end{acknowledgments}
\section{Introduction} \label{sec:Introduction} Reinforcement learning (RL) has achieved impressive success at complex tasks, from mastering the game of Go~\citep{silver2016mastering,silver2017mastering} to robotics~\citep{gu2017deep,openai2020learning}. However, this success has been mostly limited to solving a single problem given enormous amounts of experience. In contrast, humans learn to solve a myriad of tasks over their lifetimes, becoming better at solving them and faster at learning them over time. This ability to handle diverse problems stems from our capacity to accumulate, reuse, and recombine perceptual and motor abilities in different manners to handle novel circumstances. In this work, we seek to endow artificial agents with a similar capability to solve RL tasks using {\em functional compositionality} of their knowledge. In lifelong RL, the agent faces a sequence of tasks and must strive to transfer knowledge to future tasks and avoid forgetting how to solve earlier tasks. We formulate the novel problem of lifelong RL of functionally compositional tasks, where tasks can be solved by recombining modules of knowledge in various ways. While temporal compositionality has long been studied in RL, such as in the options framework, the type of {\em functional} compositionality we study here has not been explored in depth, especially not in the more realistic lifelong learning setting. Functional compositionality involves a decomposition into subproblems, where the outputs of one subproblem become inputs to others. This moves beyond standard temporal composition to functional compositions of layered perceptual and action modules, akin to programming where functions are used in combination to solve different problems. For example, a typical robotic manipulation solution interprets perceptual inputs via a sensor module, devises a path for the robot using a high-level planner, and translates this path into motor controls with a robot driver. Each of these modules can be used in other combinations to handle a variety of problems. We present a new method for continually training deep modular architectures, which enables efficient learning of the underlying compositional structures. The modular lifelong RL agent will encounter a sequence of compositional tasks, and must strive to solve them as quickly as possible. Our proposed solution separates the learning process into three stages. First, the learner discovers how to combine its existing modules to solve the current task to the best of its abilities by interacting with the environment with various module combinations. Next, the agent accumulates additional information about the current task via standard RL training with the optimal module combination. Finally, the learner incorporates any newly discovered knowledge from the current task into existing modules, making them more suitable for future learning. We demonstrate that this separation enables faster training and avoids forgetting, even though the agent is not allowed to revisit earlier tasks for further experience. Our main contributions include:% \vspace{-0.5em} \begin{enumerate}[leftmargin=*,noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item We {\bf formally define the lifelong compositional RL problem} as a compositional problem graph, encompassing both zero-shot generalization and fast adaptation to new compositional tasks. 
\item We propose two {\bf compositional evaluation domains}: a discrete 2-D domain and a realistic robotic manipulation suite, both of which exhibit compositionality at multiple hierarchical levels. \item We create the {\bf first lifelong RL algorithm for functionally compositional structures} and show empirically that it learns meaningful compositional structures in our evaluation domains. While our evaluations focus on explicitly compositional RL tasks, the concept of functional composition is broadly applicable and could be used in future solutions to general lifelong RL. \item We propose to use {\bf batch RL} techniques for avoiding catastrophic forgetting in a lifelong setting and show that this approach is superior to existing lifelong RL methods. \end{enumerate} \vspace{-0.5em} \section{Related work} \label{sec:relatedWork} \paragraph{Lifelong or continual learning} Most work in lifelong learning has focused on the supervised setting, and in particular, on avoiding catastrophic forgetting. This has typically been accomplished by imposing data-driven regularization schemes that discourage parameters from deviating far from earlier tasks' solutions~\citep{zenke2017continual,li2017learning,ritter2018online}, or by replaying real~\citep{lopez2017gradient,nguyen2018variational,chaudhry2018efficient,aljundi2019gradient} or hallucinated~\citep{achille2018life,rao2019continual,van2020brain} data from earlier tasks during the training of future tasks. Other methods have instead aimed at solving the problem of model saturation by increasing the model capacity~\citep{yoon2018lifelong,li2019learn,rajasegaran2019random}. A few works have addressed lifelong RL by following the regularization~\citep{kirkpatrick2017overcoming} or replay~\citep{isele2018selective,rolnick2019experience} paradigms from the supervised setting, exacerbating the stability-plasticity tension. Others have instead proposed multi-stage processes whereby the agent first transfers existing knowledge to the current task and later incorporates newly obtained knowledge into a shared repository~\citep{schwarz2018progress,mendez2020lifelong}. We follow the latter strategy for exploiting and subsequently improving accumulated modules over time. \paragraph{Compositional supervised learning} While deep nets in principle enable learning arbitrarily complex task relations, monolithic agents struggle to find such complex relations given limited data. Compositional multi-task learning (MTL) methods use explicitly modular deep architectures to capture compositional structures that arise in real problems. Such approaches either require the compositional structure to be provided~\citep{andreas2016neural,arad2018compositional} or automatically discover the structure in a hard~\citep{rosenbaum2018routing,alet2018modular,chang2018automatically} or soft~\citep{kirsch2018modular,meyerson2018beyond} manner. A handful of approaches have been proposed that operate in the lifelong setting, under the assumption that each component can be fully learned by training on a single task and then reused for other tasks~\citep{reed2016neural,fernando2017pathnet,valkov2018houdini}. Unfortunately, this is infeasible if the agent has access to little data per task. Recent work proposed a framework for lifelong supervised compositional learning~\citep{mendez2021lifelong}, similar in high-level structure to our proposed method for RL. Typical evaluations of compositional learning use standard benchmarks such as ImageNet or CIFAR-100. 
While this enables fair performance comparisons, it fails to give insight about the ability to find meaningful compositional structures. Some notable exceptions exist for evaluating compositional generalization in supervised learning~\citep{bahdanau2018systematic,lake2018generalization,sinha2020evaluating}. We extend the ideas of compositional generalization to RL, and introduce the separation of zero-shot compositional generalization and fast adaptation, which is particularly relevant in RL. \paragraph{Compositional RL} A handful of works have considered functionally compositional RL, either assuming full knowledge of the correct compositional structure of the tasks~\citep{devin2017learning} or automatically learning the structure simultaneously with the modules themselves~\citep{goyal2021recurrent,mittal2020learning,yang2020multi}. We propose a method that can handle both of these settings. Two main aspects distinguish our work from existing approaches: 1) our method works in a lifelong setting, where tasks arrive sequentially and the agent is not allowed to revisit previous tasks, and 2) we evaluate our models on tasks that are explicitly compositional at multiple hierarchical levels. A closely related problem that has long been studied in RL is that of temporally extended actions, or options, for hierarchical RL~\citep{sutton1999between,bacon2017option}. Crucially, the problem we consider here differs in that the functional composition occurs at every time step, instead of the temporal chaining considered in the options literature. These two orthogonal dimensions capture real settings in which composition could improve the RL process. Appendix~
\section{Introduction}
Supersymmetric field theories in flat backgrounds can be generalized to curved backgrounds by coupling them with supergravity and taking the gravity multiplets to be non-dynamical \cite{Festuccia Seiberg}. Preserved supersymmetries in those cases are found by calculating the vanishing condition of the gravitino variation under supersymmetry transformations. Similarly, superconformal field theories can also be extended to curved backgrounds by coupling to conformal supergravity, and they are also studied by using holography \cite{Klare Tomasiello Zaffaroni, Hristov Tomasiello Zaffaroni, Cassani Martelli, Cassani Klare Martelli Tomasiello Zaffaroni}. In all of these cases, supersymmetry generators are determined by the same equation, which comes from the variation of the gravitino. It is called the gauged twistor equation or charged conformal Killing spinor equation. It is a generalization of the twistor equation written in terms of the gauged covariant derivative and the gauged Dirac operator \cite{de Medeiros1, de Medeiros2, Lischewski}. Because of the existence of the gauge fields, the solutions of the spinor field equations correspond to $\text{Spin}^c$ spinors, namely spinor fields that are sections of the bundle written as a product of the spinor bundle and the gauge bundle. The solutions of the gauged twistor equation are called gauged twistor spinors and generate the preserved supersymmetries of supersymmetric and superconformal field theories in curved backgrounds. One of the methods for finding the solutions of an equation is to construct its symmetry operators. Symmetry operators take a solution of the equation and give another solution. A set of mutually commuting symmetry operators is used for finding a general solution by the method of separation of variables \cite{Miller}. Symmetry operators of some basic spinor field equations can be constructed from the hidden symmetries of the background manifold. Hidden symmetries are defined as the antisymmetric generalizations of Killing vector fields and conformal Killing vector fields to higher degree differential forms. For Killing vector fields, those generalizations are called Killing-Yano (KY) forms, and for conformal Killing vector fields, they are conformal Killing-Yano (CKY) forms. The symmetry operators of the massless and massive Dirac equations are constructed out of CKY forms and KY forms, respectively \cite{Benn Charlton, Benn Kress, Acik Ertem Onder Vercin, Houri Kubiznak Warnick Yasui, Kubiznak Warnick Krtous, Cariglia Krtous Kubiznak, Cariglia}. Similarly, symmetry operators of geometric Killing spinors are written in terms of odd degree KY forms in constant curvature backgrounds \cite{Ertem1}. CKY forms are used in the construction of the symmetry operators of twistor spinors in constant curvature backgrounds, and normal CKY forms play the same role in Einstein manifolds \cite{Ertem2}. Moreover, the symmetry operators of Killing and twistor spinors are also used in the definitions of more general structures such as extended Killing superalgebras and extended conformal superalgebras \cite{Ertem1, Ertem2}. In this paper, we consider the gauged twistor equation and find its integrability conditions in general $n$ dimensions. We write the spinor bilinears of gauged twistor spinors and show that they correspond to gauged CKY forms, which are generalizations of CKY forms with respect to a gauged covariant derivative.
We propose a symmetry operator for the gauged twistor equation in terms of CKY forms and prove that it satisfies the required conditions for being a symmetry operator in constant curvature manifolds. Since the spinor bilinears of ordinary twistor spinors correspond to CKY forms, this also opens a way to find the solutions of the gauged twistor equation by using ordinary twistor spinors. This provides a way to find the supersymmetry generators of supersymmetric and superconformal field theories in constant curvature backgrounds. The paper is organized as follows. We define the gauged twistor equation and find its integrability conditions in Section 2. In Section 3, we construct the spinor bilinears of gauged twistor spinors and show that they correspond to gauged CKY forms. A symmetry operator of gauged twistor spinors is proposed in Section 4, and it is proved that it satisfies the symmetry operator requirements in constant curvature backgrounds. We also show that the symmetry operator can be written in terms of ordinary twistor spinors. Section 5 concludes the paper.
\section{Gauged twistor spinors}
On a manifold $M$ with $\text{Spin}^c$-structure, one can define a bundle of $U(1)$-valued spinors $S\otimes\Sigma$ where $S$ is the spinor bundle and $\Sigma$ is the $U(1)$ bundle. These types of manifolds can be used to model the backgrounds of supersymmetric field theories on curved space-time. Supersymmetric field theories in flat space-time can be extended to curved space-times by coupling with conformal supergravity and then fixing the gravity multiplets. To preserve some amount of supersymmetry in curved backgrounds, one obtains a condition on the supersymmetry parameters that comes from the variation of the gravitino. The supersymmetry parameters must satisfy the following gauged twistor (or charged conformal Killing spinor) equation in $n$ dimensions:
\begin{equation} \widehat{\nabla}_X\psi=\frac{1}{n}\widetilde{X}.\widehat{\displaystyle{\not}D}\psi \end{equation}
with respect to any vector field $X$ and its metric dual $\widetilde{X}$, where $\psi$ is a $\text{Spin}^c$ (or $U(1)$-valued) spinor. The gauged spinor covariant derivative $\widehat{\nabla}_X$ with respect to $X$ is defined in terms of the spinor covariant derivative $\nabla_X$ and the gauge connection 1-form $A$, which is generally complex, as
\begin{equation} \widehat{\nabla}_X:=\nabla_X+i_XA \end{equation}
where $i_X$ is the interior derivative or contraction operation with respect to $X$. The Dirac operator $\displaystyle{\not}D$ is defined from the spinor covariant derivative, the frame basis $\{X_a\}$ and the co-frame basis $\{e^a\}$ with the property $e^a(X_b)=\delta^a_b$ as $\displaystyle{\not}D=e^a.\nabla_{X_a}$, where $.$ denotes the Clifford product. So, the gauged Dirac operator $\widehat{\displaystyle{\not}D}$ in (1) is written as follows
\begin{eqnarray} \widehat{\displaystyle{\not}D}:=e^a.\widehat{\nabla}_{X_a}=\displaystyle{\not}D+A \end{eqnarray}
where we have used the expansion of the Clifford product in terms of the wedge product and the interior derivative as $x.\alpha=x\wedge\alpha+i_{\widetilde{x}}\alpha$ for any 1-form $x$, its metric dual $\widetilde{x}$ and any differential $p$-form $\alpha$. We have also used the property $e^a\wedge i_{X_a}\alpha=p\alpha$ \cite{Benn Tucker}.
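To spell out this step explicitly: since $i_{X_a}A$ is a 0-form, these two properties give the last equality in (3),
\[ e^a.\widehat{\nabla}_{X_a}\psi=e^a.\nabla_{X_a}\psi+e^a.(i_{X_a}A)\psi=\displaystyle{\not}D\psi+\left(e^a\wedge i_{X_a}A\right).\psi=\displaystyle{\not}D\psi+A.\psi, \]
where the interior derivative term of the Clifford expansion vanishes on the 0-form $i_{X_a}A$, and $e^a\wedge i_{X_a}A=A$ is the property above with $p=1$ applied to the 1-form $A$.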
The exterior derivative $d$ and co-derivative $\delta$ can be written in terms of the covariant derivative (with vanishing torsion) as
\begin{equation} d=e^a\wedge\nabla_{X_a}\quad\quad,\quad\quad\delta=-i_{X^a}\nabla_{X_a} \end{equation}
and the gauged exterior derivative $\widehat{d}$ and co-derivative $\widehat{\delta}$ can be written in terms of them as
\begin{eqnarray} \widehat{d}&:=&e^a\wedge\widehat{\nabla}_{X_a}=d+A\wedge\\ \widehat{\delta}&:=&-i_{X^a}\widehat{\nabla}_{X_a}=\delta-i_{\widetilde{A}} \end{eqnarray}
where $\widetilde{A}$ is the vector field that is the metric dual of the 1-form $A$. However, contrary to the case of $d$ and $\delta$, which satisfy $d^2=\delta^2=0$, the squares of the gauged exterior derivative and co-derivative take the following form
\begin{eqnarray} \widehat{d}^2&=&F\wedge\\ \widehat{\delta}^2&=&-(i_{X^a}i_{X^b}F)i_{X_a}i_{X_b} \end{eqnarray}
where $F=dA$ is the curvature of the gauge connection 1-form $A$ \cite{Charlton}.
\subsection{Integrability conditions}
The existence of gauged twistor spinors in a manifold depends on some integrability conditions of (1), which constrain the curvature characteristics of the background manifold. They can be obtained by taking second covariant derivatives of the gauged twistor equation and by using the following definition of the curvature operator of the gauged covariant derivative
\begin{equation} \widehat{R}(X,Y)=[\widehat{\nabla}_X,\widehat{\nabla}_Y]-\widehat{\nabla}_{[X,Y]} \end{equation}
where $X$ and $Y$ are arbitrary vector fields. From the definition (2), it can be written in terms of the curvature operator $R(X,Y)$ of the Levi-Civita connection and the curvature $F$ of the gauge connection as follows
\begin{equation} \widehat{R}(X_a,X_b)=R(X_a,X_b)+i_{X_b}i_{X_a}F \end{equation}
where $\{X_a\}$ is an orthonormal frame. The action of the curvature operator $R(X_a,X_b)$ on a spinor $\psi$ can be written in terms of the curvature 2-forms $R_{ab}$ as $R(X_a, X_b)\psi=\frac{1}{2}R_{ab}.\psi$ \cite{Benn Tucker, Charlton}. From the curvature 2-forms $R_{ab}$, the Ricci 1-forms $P_a$ and the curvature scalar ${\cal{R}}$ are defined as $P_a=i_{X^b}R_{ba}$ and ${\cal{R}}=i_{X^a}P_a$, respectively. The action of the operator in (10) on a gauged twistor spinor $\psi$ must be equal to the action of the right hand side of (9) on the same gauged twistor spinor. By using (1), we obtain
\begin{equation} R(X_a,X_b)\psi+(i_{X_b}i_{X_a}F)\psi=\frac{1}{n}\widehat{\nabla}_{X_a}(e_b.\widehat{\displaystyle{\not}D}\psi)-\frac{1}{n}\widehat{\nabla}_{X_b}(e_a.\widehat{\displaystyle{\not}D}\psi). \end{equation}
So, one can write the action of the curvature 2-forms $R_{ab}$ on gauged twistor spinors as follows
\begin{equation} R_{ab}.\psi=\frac{2}{n}\left(e_b.\widehat{\nabla}_{X_a}\widehat{\displaystyle{\not}D}\psi-e_a.\widehat{\nabla}_{X_b}\widehat{\displaystyle{\not}D}\psi\right)-2(i_{X_b}i_{X_a}F)\psi. \end{equation}
For zero torsion, we have the equalities $R_{ab}\wedge e^a=0$ and $e^a.R_{ab}=P_b$.
By using them, the action of the Ricci 1-forms $P_a$ on gauged twistor spinors can be calculated from (12)
\begin{eqnarray} P_b.\psi&=&\frac{2}{n}\left(e^a.e_b.\widehat{\nabla}_{X_a}\widehat{\displaystyle{\not}D}\psi-e^a.e_a.\widehat{\nabla}_{X_b}\widehat{\displaystyle{\not}D}\psi\right)-2e^a.(i_{X_b}i_{X_a}F)\psi\nonumber\\ &=&-\frac{2}{n}e_b.\widehat{\displaystyle{\not}D}^2\psi-\frac{2(n-2)}{n}\widehat{\nabla}_{X_b}\widehat{\displaystyle{\not}D}\psi+2(i_{X_b}F).\psi \end{eqnarray}
where we have used the Clifford algebra identity $e^a.e_b+e_b.e^a=2g^a_b$ for the metric $g_{ab}$ and the definition (3). Similarly, the Ricci 1-forms satisfy the equalities $P_a\wedge e^a=0$ and $e^a.P_a={\cal{R}}$. So, we can write the action of the curvature scalar ${\cal{R}}$ on gauged twistor spinors from (13)
\begin{eqnarray} {\cal{R}}\psi&=&-\frac{2}{n}e^a.e_a.\widehat{\displaystyle{\not}D}^2\psi-\frac{2(n-2)}{n}e^a.\widehat{\nabla}_{X_a}\widehat{\displaystyle{\not}D}\psi+2e^a.i_{X_a}F.\psi\nonumber\\ &=&-\frac{4(n-1)}{n}\widehat{\displaystyle{\not}D}^2\psi+4F.\psi. \end{eqnarray}
By combining (13) and (14), one can obtain the following two integrability conditions of the gauged twistor equation
\begin{equation} \widehat{\displaystyle{\not}D}^2\psi=-\frac{n}{4(n-1)}{\cal{R}}\psi+\frac{n}{n-1}F.\psi \end{equation}
\begin{equation} \widehat{\nabla}_{X_a}\widehat{\displaystyle{\not}D}\psi=\frac{n}{2}K_a.\psi-\frac{n}{(n-1)(n-2)}e_a.F.\psi+\frac{n}{n-2}i_{X_a}F.\psi \end{equation}
where the 1-form $K_a$ is defined as follows
\begin{equation} K_a=\frac{1}{n-2}\left(\frac{\cal{R}}{2(n-1)}e_a-P_a\right). \end{equation}
Moreover, from the definition of the conformal 2-forms
\begin{equation} C_{ab}=R_{ab}-\frac{1}{n-2}\left(P_a\wedge e_b-P_b\wedge e_a\right)+\frac{1}{(n-1)(n-2)}{\cal{R}}e_{ab} \end{equation}
where $e_{ab}=e_a\wedge e_b$, the third integrability condition, which corresponds to the action of $C_{ab}$ on gauged twistor spinors, can be found from (12), (13) and (14) as
\begin{equation} C_{ab}.\psi=2(i_{X_a}i_{X_b}F)\psi+\frac{2}{n-2}\left(e_b.i_{X_a}F-e_a.i_{X_b}F\right).\psi+\frac{4}{(n-1)(n-2)}e_a.e_b.F.\psi. \end{equation}
For $A=0$, (15), (16) and (19) reduce to the integrability conditions of the ordinary twistor spinor equation \cite{Baum Friedrich Grunewald Kath, Baum Leitner, Benn Kress2}. Besides being necessary conditions on gauged twistor spinors, the equalities (15), (16) and (19) also determine the existence conditions for gauged twistor spinors. For the Spin$^c$ bundle $S'$, by defining the bundle $E=S'\oplus S'$ and obtaining the curvature operator of the bundle, one can see that the action of the curvature operator on $(\psi, \displaystyle{\not}{\widehat{D}}\psi)$ vanishes for a gauged twistor spinor $\psi$, because of (15), (16) and (19) \cite{Lischewski}. From this result, a partial classification of manifolds admitting gauged twistor spinors can be determined. For example, Lorentzian Einstein-Sasaki manifolds, Fefferman spaces and products of Lorentzian Einstein-Sasaki manifolds and Riemannian Einstein manifolds can admit gauged twistor spinors \cite{Lischewski}.
\section{Spinor bilinears and gauged CKY forms}
The tensor product of the spinor space $S$ and the dual spinor space $S^*$ corresponds to the algebra of endomorphisms over the spinor space, $S\otimes S^*=\text{End}(S)$, which is isomorphic to the Clifford algebra of the relevant dimension and hence to the exterior algebra $\Lambda M$ of differential forms on $M$.
So, the tensor products of spinors and their duals, which are called spinor bilinears, can be written as a sum of different degree differential forms
\begin{equation} \psi\otimes\overline{\psi}=(\psi,\psi)+(\psi, e_a.\psi)e^a+(\psi, e_{ba}.\psi)e^{ab}+...+(\psi, e_{a_p...a_2a_1}.\psi)e^{a_1a_2...a_p}+...+(-1)^{\lfloor{n/2}\rfloor}(\psi, z.\psi)z \end{equation}
where $e^{a_1a_2...a_p}=e^{a_1}\wedge e^{a_2}\wedge...\wedge e^{a_p}$, $\lfloor{ }\rfloor$ is the floor function that takes the integer part of the argument, $z$ is the volume form and $(\,,\,)$ denotes the spinor inner product. Every $p$-form component on the right hand side of (20) is called a $p$-form Dirac current, as a generalization of the Dirac current, which corresponds to the metric dual of the 1-form part of the spinor bilinear \cite{Acik Ertem}. $p$-form Dirac currents will be denoted as follows
\begin{equation} (\psi\overline{\psi})_p=(\psi, e_{a_p...a_2a_1}.\psi)e^{a_1a_2...a_p}. \end{equation}
For a gauged twistor spinor $\psi$, by requiring that the connection $\widehat{\nabla}$ is compatible with the spinor inner product $(\,,\,)$, we will show that the $p$-form Dirac currents of gauged twistor spinors satisfy the gauged CKY equation, the generalization to a gauged covariant derivative of the CKY equation, which is itself the antisymmetric generalization of the conformal Killing equation to higher degree forms. After applying the gauged covariant derivative to (21) and doing some manipulations, one obtains
\begin{eqnarray} \widehat{\nabla}_{X_a}(\psi\overline{\psi})_p&=&\left((\widehat{\nabla}_{X_a}\psi)\overline{\psi}\right)_p+\left(\psi\overline{\widehat{\nabla}_{X_a}\psi}\right)_p\nonumber\\ &=&\frac{1}{n}\left((e_a.\widehat{\displaystyle{\not}D}\psi)\overline{\psi}\right)_p+\frac{1}{n}\left(\psi\overline{e_a.\widehat{\displaystyle{\not}D}\psi}\right)_p\nonumber\\ &=&\frac{1}{n}\left(e_a.\widehat{\displaystyle{\not}d}(\psi\overline{\psi})\right)_p-\frac{1}{n}\left(e_a.e_b.\psi\overline{\widehat{\nabla}_{X_b}\psi}\right)_p+\frac{1}{n}\left(\psi\overline{\widehat{\nabla}_{X_b}\psi}.e_b.e_a\right)_p\nonumber \end{eqnarray}
where we have used (1) and $(\widehat{\nabla}_{X_a}\psi)\overline{\psi}=\widehat{\nabla}_{X_a}(\psi\overline{\psi})-\psi(\overline{\widehat{\nabla}_{X_a}\psi})$ with the definition $\widehat{\displaystyle{\not}d}=e_a.\widehat{\nabla}_{X^a}$ on differential forms. From (5) and (6), one can write $\widehat{\displaystyle{\not}d}=\widehat{d}-\widehat{\delta}$ and obtain the following equality by using the definition of the Clifford product of a 1-form with any $p$-form in terms of the wedge product and the interior derivative
\begin{eqnarray} \widehat{\nabla}_{X_a}(\psi\overline{\psi})_p&=&\frac{1}{n}\left(e_a\wedge\widehat{d}(\psi\overline{\psi})_{p-2}+i_{X_a}\widehat{d}(\psi\overline{\psi})_p\right)-\frac{1}{n}\left(e_a\wedge(e_b.\psi\overline{\widehat{\nabla}_{X^b}\psi})_{p-1}+i_{X_a}(e_b.\psi\overline{\widehat{\nabla}_{X^b}\psi})_{p+1}\right)\nonumber\\ &&-\frac{1}{n}\left(e_a\wedge\widehat{\delta}(\psi\overline{\psi})_p+i_{X_a}\widehat{\delta}(\psi\overline{\psi})_{p+2}\right)\pm\frac{1}{n}\left(e_a\wedge(\psi\overline{\widehat{\nabla}_{X^b}\psi}.e_b)_{p-1}+i_{X_a}(\psi\overline{\widehat{\nabla}_{X^b}\psi}.e_b)_{p+1}\right)\nonumber\\ \end{eqnarray}
where the $\pm$ sign depends on the chosen inner automorphism of the Clifford algebra which is used in the definition of the duality operation $\bar{\quad}$.
By wedge multiplying (22) with $e^a$ from the left and using (5), we can write \begin{equation} \widehat{d}(\psi\overline{\psi})_p=\frac{p+1}{n}\left(\widehat{d}(\psi\overline{\psi})_p-\widehat{\delta}(\psi\overline{\psi})_{p+2}\right)-\frac{p+1}{n}\left((e_b.\psi\overline{\widehat{\nabla}_{X^b}\psi})_{p+1}\mp(\psi\overline{\widehat{\nabla}_{X^b}\psi}.e_b)_{p+1}\right) \end{equation} and similarly by taking the interior derivative of (22) with respect to $X_a$ and using (6), it can also be written \begin{equation} \widehat{\delta}(\psi\overline{\psi})_p=-\frac{n-p+1}{n}\left(\widehat{d}(\psi\overline{\psi})_{p-2}-\widehat{\delta}(\psi\overline{\psi})_p\right)+\frac{n-p+1}{n}\left((e_b.\psi\overline{\widehat{\nabla}_{X^b}\psi})_{p-1}\mp(\psi\overline{\widehat{\nabla}_{X^b}\psi}.e_b)_{p-1}\right). \end{equation} So, by comparing (22), (23) and (24), one can see that the $p$-form Dirac currents of gauged twistor spinors satisfy the following equation \begin{equation} \widehat{\nabla}_{X_a}(\psi\overline{\psi})_p=\frac{1}{p+1}i_{X_a}\widehat{d}(\psi\overline{\psi})_p-\frac{1}{n-p+1}e_a\wedge\widehat{\delta}(\psi\overline{\psi})_p. \end{equation} This equation is called the gauged CKY equation. In general, a $p$-form $\omega$ is called a gauged CKY $p$-form, if it satisfies the following gauged CKY equation \begin{equation} \widehat{\nabla}_{X_a}\omega=\frac{1}{p+1}i_{X_a}\widehat{d}\omega-\frac{1}{n-p+1}e_a\wedge\widehat{\delta}\omega. \end{equation} From (2), it can also be written in terms of the Levi-Civita connection $\nabla$ and the gauge potential 1-form $A$ as \begin{eqnarray} \nabla_{X_a}\omega-\frac{1}{p+1}i_{X_a}d\omega+\frac{1}{n-p+1}e_a\wedge\delta\omega\nonumber\\ =-\frac{p}{p+1}(i_{X_a}A)\omega-\frac{1}{p+1}A\wedge i_{X_a}\omega+\frac{1}{n-p+1}e_a\wedge i_{\widetilde{A}}\omega. \end{eqnarray} For $A=0$, it reduces to the ordinary CKY equation which is the antisymmetric generalization of the conformal Killing equation to higher degree forms \begin{equation} \nabla_{X_a}\omega=\frac{1}{p+1}i_{X_a}d\omega-\frac{1}{n-p+1}e_a\wedge\delta\omega. \end{equation} For $p=1$, (27) reduces to the shear-free vector field equation which is the generalization of the conformal Killing equation and describes the vector fields that constitute shear-free congruences \cite{Charlton}. Integrability conditions of the gauged CKY equation can be calculated by taking second covariant derivatives of (26). After some manipulations, they can be obtained as follows \begin{eqnarray} \widehat{\nabla}_{X_b}\widehat{d}\omega&=&\frac{p+1}{p}R_{ab}\wedge i_{X^a}\omega+\frac{p+1}{p(n-p+1)}e_b\wedge\widehat{d}\widehat{\delta}\omega\nonumber\\ &&+i_{X_b}F\wedge\omega-\frac{1}{p}F\wedge i_{X_b}\omega+\frac{p+1}{p(n-p+1)}e_b\wedge A\wedge\widehat{\delta}\omega \end{eqnarray} \begin{eqnarray} \widehat{\nabla}_{X_b}\widehat{\delta}\omega&=&\frac{n-p+1}{n-p}\bigg((i_{X_a}P_b)i_{X^a}\omega+i_{X_a}R_{cb}\wedge i_{X^c}i_{X^a}\omega+(i_{X_b}i_{X_a}F)i_{X^a}\omega\bigg)\nonumber\\ &&-\frac{n-p+1}{(p+1)(n-p)}i_{X_b}\widehat{\delta}\widehat{d}\omega-(i_{X_b}A)\widehat{\delta}\omega-\frac{1}{n-p}e_b\wedge\bigg(i_{\widetilde{A}}\widehat{\delta}\omega+(i_{X_a}i_{X_c}F)i_{X^a}i_{X^c}\omega\bigg)\nonumber\\ \end{eqnarray} and their combination gives \begin{eqnarray} \frac{p}{p+1}\widehat{\delta}\widehat{d}\omega+\frac{n-p}{n-p+1}\widehat{d}\widehat{\delta}\omega&=&P_a\wedge i_{X^a}\omega+R_{ab}\wedge i_{X^a}i_{X^b}\omega\nonumber\\ &&-\frac{n-p}{n-p+1}A\wedge\widehat{\delta}\omega+i_{X^a}F\wedge i_{X_a}\omega. 
\end{eqnarray}
For $A=0$, they reduce to the integrability conditions of the ordinary CKY equation, which read as \cite{Semmelmann, Ertem3}
\begin{equation} \nabla_{X_b}d\omega=\frac{p+1}{p}R_{ab}\wedge i_{X^a}\omega+\frac{p+1}{p(n-p+1)}e_b\wedge d\delta\omega \end{equation}
\begin{equation} \nabla_{X_b}\delta\omega=\frac{n-p+1}{n-p}\bigg((i_{X_a}P_b)i_{X^a}\omega+i_{X_a}R_{cb}\wedge i_{X^c}i_{X^a}\omega\bigg)-\frac{n-p+1}{(p+1)(n-p)}i_{X_b}\delta d\omega \end{equation}
\begin{equation} \frac{p}{p+1}\delta d\omega+\frac{n-p}{n-p+1}d\delta\omega=P_a\wedge i_{X^a}\omega+R_{ab}\wedge i_{X^a}i_{X^b}\omega. \end{equation}
\section{Symmetry operators}
Solutions of the gauged twistor equation (1) give the supersymmetry parameters of supersymmetric field theories coupled with conformal supergravity. So, finding a solution generating technique for (1) is an important problem. Rather than solving an equation directly, one can also construct its symmetry operators, which give new solutions of the equation from a known solution. For example, symmetry operators of the massless and massive Dirac equations can be constructed from CKY and KY forms, respectively \cite{Benn Charlton, Benn Kress}. Similarly, symmetry operators of geometric Killing spinors and ordinary twistor spinors can also be written in terms of KY and CKY forms, respectively, in constant curvature manifolds \cite{Ertem1, Ertem2}. We can search for the symmetry operators of the gauged twistor equation in terms of gauged or ordinary CKY forms. We propose the following operator
\begin{eqnarray} L_{\omega}&=&-(-1)^p\frac{p}{n}\omega.\widehat{\displaystyle{\not}D}+\frac{p}{2(p+1)}d\omega+\frac{p}{2(n-p+1)}\delta\omega\nonumber\\ &=&-(-1)^p\frac{p}{n}\omega.\displaystyle{\not}D+\frac{p}{2(p+1)}d\omega+\frac{p}{2(n-p+1)}\delta\omega-(-1)^p\frac{p}{n}\omega.A \end{eqnarray}
written in terms of ordinary CKY $p$-forms $\omega$. Note that $d$ and $\delta$ are the exterior and co-derivatives with respect to the Levi-Civita connection and $\widehat{\displaystyle{\not}D}$ is the gauged Dirac operator (3). Eq. (35) reduces to the symmetry operators of ordinary twistor spinors for $A=0$. To prove that (35) is a symmetry operator for the gauged twistor equation, we need to show that if $\psi$ is a gauged twistor spinor, then $L_{\omega}\psi$ is a solution of the gauged twistor equation, namely that it satisfies the following equality
\begin{equation} \widehat{\nabla}_{X_a}L_{\omega}\psi=\frac{1}{n}e_a.\widehat{\displaystyle{\not}D}L_{\omega}\psi \end{equation}
which can also be written in terms of the Levi-Civita connection as
\begin{equation} \nabla_{X_a}L_{\omega}\psi-\frac{1}{n}e_a.\displaystyle{\not}DL_{\omega}\psi=-\frac{n-2}{2n}e_a.A.L_{\omega}\psi-\frac{1}{2}A.e_a.L_{\omega}\psi. \end{equation}
So, we will expand all the terms in (37) to check the equality. By using (35), the first term on the left hand side of (37) can be obtained as follows
\begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&-(-1)^p\frac{p}{n}\nabla_{X_a}\omega.\widehat{\displaystyle{\not}D}\psi-(-1)^p\frac{p}{n}\omega.\nabla_{X_a}\widehat{\displaystyle{\not}D}\psi+\frac{p}{2(p+1)}\nabla_{X_a}d\omega.\psi\nonumber\\ &&+\frac{p}{2(p+1)}d\omega.\nabla_{X_a}\psi+\frac{p}{2(n-p+1)}\nabla_{X_a}\delta\omega.\psi+\frac{p}{2(n-p+1)}\delta\omega.\nabla_{X_a}\psi.
\end{eqnarray}
Here, we can use (1) and (16), which are written in terms of the Levi-Civita connection as
\begin{equation} \nabla_{X_a}\psi=\frac{1}{n}e_a.\widehat{\displaystyle{\not}D}\psi-\frac{1}{2}(e_a.A+A.e_a).\psi \end{equation}
\begin{equation} \nabla_{X_a}\widehat{\displaystyle{\not}D}\psi=\frac{n}{2}K_a.\psi-\frac{n}{(n-1)(n-2)}e_a.F.\psi+\frac{n}{n-2}i_{X_a}F.\psi-\frac{1}{2}(e_a.A+A.e_a).\widehat{\displaystyle{\not}D}\psi. \end{equation}
Hence, (38) transforms into the following form
\begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&\bigg[-(-1)^p\frac{p}{n}\nabla_{X_a}\omega+(-1)^p\frac{p}{2n}(e_a.A+A.e_a).\omega\nonumber\\ &&+\frac{p}{2n(p+1)}d\omega.e_a+\frac{p}{2n(n-p+1)}\delta\omega.e_a\bigg].\widehat{\displaystyle{\not}D}\psi\nonumber\\ &&+\bigg[-(-1)^p\frac{p}{2}\omega.K_a+(-1)^p\frac{p}{(n-1)(n-2)}\omega.e_a.F-(-1)^p\frac{p}{n-2}\omega.i_{X_a}F\nonumber\\ &&+\frac{p}{2(p+1)}\nabla_{X_a}d\omega-\frac{p}{4(p+1)}(e_a.A+A.e_a).d\omega+\frac{p}{2(n-p+1)}\nabla_{X_a}\delta\omega\nonumber\\ &&-\frac{p}{4(n-p+1)}(e_a.A+A.e_a).\delta\omega\bigg].\psi \end{eqnarray}
where we have used the fact that $e_a.A+A.e_a=2i_{X_a}A$ is a function and it commutes with the differential forms $\omega$, $d\omega$ and $\delta\omega$. The second term on the left hand side of (37) can also be written from (41) and we obtain
\begin{eqnarray} -\frac{1}{n}e_a.\displaystyle{\not}DL_{\omega}\psi&=&-\frac{1}{n}e_a.e^b.\nabla_{X_b}L_{\omega}\psi\nonumber\\ &=&\bigg[(-1)^p\frac{p}{n^2}e_a.e^b.\nabla_{X_b}\omega-(-1)^p\frac{p}{n^2}e_a.A.\omega\nonumber\\ &&-\frac{p}{2n^2(p+1)}e_a.e^b.d\omega.e_b-\frac{p}{2n^2(n-p+1)}e_a.e^b.\delta\omega.e_b\bigg].\widehat{\displaystyle{\not}D}\psi\nonumber\\ &&+\bigg[(-1)^p\frac{p}{2n}e_a.e^b.\omega.K_b-(-1)^p\frac{p}{n(n-1)(n-2)}e_a.e^b.\omega.e_b.F\nonumber\\ &&+(-1)^p\frac{p}{n(n-2)}e_a.e^b.\omega.i_{X_b}F-\frac{p}{2n(p+1)}e_a.e^b.\nabla_{X_b}d\omega+\frac{p}{2n(p+1)}e_a.A.d\omega\nonumber\\ &&-\frac{p}{2n(n-p+1)}e_a.e^b.\nabla_{X_b}\delta\omega+\frac{p}{2n(n-p+1)}e_a.A.\delta\omega\bigg].\psi\nonumber\\ \end{eqnarray}
where we have simplified the terms by using again the relation $e_a.A+A.e_a=2i_{X_a}A$ and $A=(i_{X_a}A)e^a$. Similarly, by using (35), we can write the terms on the right hand side of (37) in the following way
\begin{eqnarray} -\frac{n-2}{2n}e_a.A.L_{\omega}\psi&=&(-1)^p\frac{p(n-2)}{2n^2}e_a.A.\omega.\widehat{\displaystyle{\not}D}\psi\nonumber\\ &&-\bigg[\frac{p(n-2)}{4n(p+1)}e_a.A.d\omega+\frac{p(n-2)}{4n(n-p+1)}e_a.A.\delta\omega\bigg].\psi \end{eqnarray}
and
\begin{eqnarray} -\frac{1}{2}A.e_a.L_{\omega}\psi&=&(-1)^p\frac{p}{2n}A.e_a.\omega.\widehat{\displaystyle{\not}D}\psi\nonumber\\ &&-\bigg[\frac{p}{4(p+1)}A.e_a.d\omega+\frac{p}{4(n-p+1)}A.e_a.\delta\omega\bigg].\psi. \end{eqnarray}
Now, we have written all the terms in (37) explicitly and we are in a position to check the correctness of (37) by comparing the equalities in (41)-(44). We will do this in two steps, since we can consider the coefficients of $\widehat{\displaystyle{\not}D}\psi$ and $\psi$ separately in each equality. So, as a first step, the coefficients of $\widehat{\displaystyle{\not}D}\psi$ on the left hand side of (37) must be equal to the coefficients of $\widehat{\displaystyle{\not}D}\psi$ on the right hand side of (37).
We know that $\omega$ is an ordinary CKY $p$-form and satisfies (28), so the sum of the coefficients of $\widehat{\displaystyle{\not}D}\psi$ in (41) and (42) (corresponding to the left hand side of (37)) can be written as
\begin{eqnarray} &&-(-1)^p\frac{p}{n(p+1)}i_{X_a}d\omega+(-1)^p\frac{p}{n(n-p+1)}e_a\wedge\delta\omega+\frac{p}{2n(p+1)}d\omega.e_a+\frac{p}{2n(n-p+1)}\delta\omega.e_a\nonumber\\ &&+(-1)^p\frac{p}{n^2(p+1)}e_a.e^b.i_{X_b}d\omega-(-1)^p\frac{p}{n^2(n-p+1)}e_a.e^b.(e_b\wedge\delta\omega)\nonumber\\ &&-\frac{p}{2n^2(p+1)}e_a.e^b.d\omega.e_b-\frac{p}{2n^2(n-p+1)}e_a.e^b.\delta\omega.e_b\nonumber\\ &&+(-1)^p\frac{p}{2n}(e_a.A+A.e_a).\omega-(-1)^p\frac{p}{n^2}e_a.A.\omega\nonumber\\ &=&(-1)^p\frac{p}{2n}(e_a.A+A.e_a).\omega-(-1)^p\frac{p}{n^2}e_a.A.\omega \end{eqnarray}
where we have used the expansion of the Clifford product in terms of the wedge product and interior derivative as follows
\begin{eqnarray} d\omega.e_a&=&-(-1)^pe_a\wedge d\omega+(-1)^pi_{X_a}d\omega,\nonumber\\ \delta\omega.e_a&=&-(-1)^pe_a\wedge\delta\omega+(-1)^pi_{X_a}\delta\omega \end{eqnarray}
and from the equality $e^a.\omega.e_a=(-1)^p(n-2p)\omega$
\begin{eqnarray} e_a.e^b.i_{X_b}d\omega&=&(p+1)e_a\wedge d\omega+(p+1)i_{X_a}d\omega,\nonumber\\ e_a.e^b.(e_b\wedge\delta\omega)&=&(n-p+1)e_a\wedge\delta\omega+(n-p+1)i_{X_a}\delta\omega,\nonumber\\ e_a.e^b.d\omega.e_b&=&-(-1)^p(n-2(p+1))e_a\wedge d\omega-(-1)^p(n-2(p+1))i_{X_a}d\omega,\\ e_a.e^b.\delta\omega.e_b&=&-(-1)^p(n-2(p-1))e_a\wedge\delta\omega-(-1)^p(n-2(p-1))i_{X_a}\delta\omega.\nonumber \end{eqnarray}
So, the terms that do not contain $A$ on the left hand side of (45) cancel each other and we obtain the right hand side of (45). On the other hand, one can easily see that the sum of the coefficients of $\widehat{\displaystyle{\not}D}\psi$ in (43) and (44) (corresponding to the right hand side of (37)) is exactly equal to the right hand side of (45). Hence, we prove the first step, that is, the coefficients of $\widehat{\displaystyle{\not}D}\psi$ in the equalities (41)-(44) satisfy (37). As a second step, we will consider the coefficients of $\psi$ for the terms in (37) by using (41)-(44). We can write the coefficients of $\psi$ in (37) in the following form
\begin{eqnarray} &&-(-1)^p\frac{p}{2}\omega.K_a+\frac{p}{2(p+1)}\nabla_{X_a}d\omega+\frac{p}{2(n-p+1)}\nabla_{X_a}\delta\omega\nonumber\\ &&+(-1)^p\frac{p}{2n}e_a.e^b.\omega.K_b-\frac{p}{2n(p+1)}e_a.e^b.\nabla_{X_b}d\omega-\frac{p}{2n(n-p+1)}e_a.e^b.\nabla_{X_b}\delta\omega\nonumber\\ &&+(-1)^p\frac{p}{(n-1)(n-2)}\omega.e_a.F-(-1)^p\frac{p}{n-2}\omega.i_{X_a}F\nonumber\\ &&-(-1)^p\frac{p}{n(n-1)(n-2)}e_a.e^b.\omega.e_b.F+(-1)^p\frac{p}{n(n-2)}e_a.e^b.\omega.i_{X_b}F\nonumber\\ &&-\frac{p}{4(p+1)}(e_a.A.d\omega+A.e_a.d\omega)-\frac{p}{4(n-p+1)}(e_a.A.\delta\omega+A.e_a.\delta\omega)\nonumber\\ &&+\frac{p}{2n(p+1)}e_a.A.d\omega+\frac{p}{2n(n-p+1)}e_a.A.\delta\omega+\frac{p(n-2)}{4n(p+1)}e_a.A.d\omega\nonumber\\ &&+\frac{p(n-2)}{4n(n-p+1)}e_a.A.\delta\omega+\frac{p}{4(p+1)}A.e_a.d\omega+\frac{p}{4(n-p+1)}A.e_a.\delta\omega\nonumber\\ &=&0 \end{eqnarray}
and we need to check that the left hand side of (48) is equal to zero.
As can easily be seen, the terms that contain $A$ on the left hand side of (48) cancel each other, and we obtain
\begin{eqnarray} &&-(-1)^pp\omega.\bigg[\frac{K_a}{2}-\frac{1}{(n-1)(n-2)}e_a.F+\frac{1}{n-2}i_{X_a}F\bigg]\nonumber\\ &&+(-1)^p\frac{p}{n}e_a.e^b.\omega.\bigg[\frac{K_b}{2}-\frac{1}{(n-1)(n-2)}e_b.F+\frac{1}{n-2}i_{X_b}F\bigg]\nonumber\\ &&+\frac{p}{2(p+1)}\bigg[\nabla_{X_a}d\omega-\frac{1}{n}e_a.e^b.\nabla_{X_b}d\omega\bigg]\nonumber\\ &&+\frac{p}{2(n-p+1)}\bigg[\nabla_{X_a}\delta\omega-\frac{1}{n}e_a.e^b.\nabla_{X_b}\delta\omega\bigg]\nonumber\\ &=&0. \end{eqnarray}
We know that $\omega$ is an ordinary CKY $p$-form and it satisfies the integrability conditions in (32)-(34). Considering this and the following equalities (by using (34))
\begin{eqnarray} e_a.e^b.\nabla_{X_b}d\omega&=&-e_a\wedge\delta d\omega-i_{X_a}\delta d\omega\nonumber\\ &=&\frac{(p+1)(n-p)}{p(n-p+1)}e_a\wedge d\delta\omega-i_{X_a}\delta d\omega\nonumber\\ &&-\frac{p+1}{p}\bigg[e_a\wedge P_b\wedge i_{X^b}\omega+e_a\wedge R_{bc}\wedge i_{X^b}i_{X^c}\omega\bigg] \end{eqnarray}
\begin{eqnarray} e_a.e^b.\nabla_{X_b}\delta\omega&=&e_a\wedge d\delta\omega+i_{X_a}d\delta\omega\nonumber\\ &=&e_a\wedge d\delta\omega-\frac{p(n-p+1)}{(p+1)(n-p)}i_{X_a}\delta d\omega\nonumber\\ &&+\frac{n-p+1}{n-p}\bigg[i_{X_a}(P_b\wedge i_{X^b}\omega)+i_{X_a}(R_{bc}\wedge i_{X^b}i_{X^c}\omega)\bigg] \end{eqnarray}
(49) transforms into
\begin{eqnarray} &&-(-1)^pp\omega.\bigg[\frac{K_a}{2}-\frac{1}{(n-1)(n-2)}e_a.F+\frac{1}{n-2}i_{X_a}F\bigg]\nonumber\\ &&+(-1)^p\frac{p}{n}e_a.e^b.\omega.\bigg[\frac{K_b}{2}-\frac{1}{(n-1)(n-2)}e_b.F+\frac{1}{n-2}i_{X_b}F\bigg]\nonumber\\ &&+\frac{1}{2}R_{ba}\wedge i_{X^b}\omega+\frac{1}{2n}\bigg(e_a\wedge P_b\wedge i_{X^b}\omega+e_a\wedge R_{bc}\wedge i_{X^b}i_{X^c}\omega\bigg)\nonumber\\ &&+\frac{p}{2(n-p)}\bigg((i_{X_b}P_a)i_{X^b}\omega+i_{X_b}R_{ca}\wedge i_{X^c}i_{X^b}\omega\bigg)\nonumber\\ &&-\frac{p}{2n(n-p)}\bigg(i_{X_a}(P_b\wedge i_{X^b}\omega)+i_{X_a}(R_{bc}\wedge i_{X^b}i_{X^c}\omega)\bigg)\nonumber\\ &=&0. \end{eqnarray}
In general, this is a very restrictive condition relating the curvature characteristics of the manifold and the gauge curvature $F$ to the ordinary CKY forms of the background. However, a simplification in (52) occurs if we consider constant curvature manifolds. In this case, the curvature 2-forms are written as $R_{ab}=\frac{\cal{R}}{n(n-1)}e_a\wedge e_b$ and the Ricci 1-forms are $P_a=\frac{\cal{R}}{n}e_a$, while (17) gives $K_a=\frac{1}{n-2}\left(\frac{\cal{R}}{2(n-1)}-\frac{\cal{R}}{n}\right)e_a=-\frac{\cal{R}}{2n(n-1)}e_a$. By substituting them in (52) and using the equality $e^a\wedge i_{X_a}\omega=p\omega$, one can see that the terms that contain curvature characteristics (the terms that do not contain $F$) cancel each other and only the terms that contain $F$ remain. Moreover, constant curvature manifolds are conformally flat, namely the conformal 2-forms defined in (18) vanish, $C_{ab}=0$. In Lorentzian space-times, the gauge curvature $F$ can be determined from $C_{ab}$ by using the integrability condition (19) of the gauged twistor equation \cite{Cassani Martelli}. Indeed, in constant curvature manifolds, the gauge curvature is $F=0$ with non-zero $A$, so we have flat connections in the definition of the gauged covariant derivative \cite{Cassani Martelli, de Medeiros2}. This means that the remaining terms that contain $F$ in (52) will be equal to zero, and the vanishing condition on the left hand side of (52) is satisfied. Hence, the terms in the coefficients of $\psi$ satisfy (37).
In that way, we prove that the operator defined in (35) is a symmetry operator for the gauged twistor equation in constant curvature manifolds. So, by using the ordinary CKY forms of the constant curvature background in (35), one can construct gauged twistor spinors, which are the supersymmetry generators of supersymmetric field theories coupled to supergravity, from a known solution. Constant curvature manifolds such as anti-de Sitter (AdS) space-times are important since they occur as backgrounds of supergravity theories and supersymmetric field theories. On the other hand, CKY forms and gauged twistor spinors can still exist in non-constant curvature manifolds. However, the construction of symmetry operators is more complicated in that case. In some algebraically special space-times, such as those of Petrov type II, III or D, some components of the gauge curvature $F$ can vanish even though $F$ is not identically zero \cite{de Medeiros2}. By considering the components of the curvature characteristics and the gauge curvature, one can still find some CKY forms that satisfy (52) to construct the symmetry operators of gauged twistor spinors in more general cases. \subsection{From ordinary twistors to gauged twistors} The construction of the symmetry operator (35) gives rise to a relation between ordinary twistor spinors and gauged twistor spinors. The spinor bilinears of ordinary twistor spinors correspond to the ordinary CKY forms \cite{Acik Ertem}. This means that the symmetry operators in (35) can be written in terms of the ordinary twistor spinors. For an ordinary twistor spinor $\phi$, the $p$-form Dirac currents defined in (21) \begin{equation} (\phi\overline{\phi})_p=(\phi, e_{a_p...a_2a_1}.\phi)e^{a_1a_2...a_p} \end{equation} can be replaced with the CKY $p$-forms in the symmetry operator (35). So, the symmetry operator of a gauged twistor spinor $\psi$ can be constructed from the $p$-form Dirac currents of ordinary twistor spinors as \begin{equation} L_{\phi\overline{\phi}}\psi=-(-1)^p\frac{p}{n}(\phi\overline{\phi})_p.\widehat{\displaystyle{\not}D}\psi+\frac{p}{2(p+1)}d(\phi\overline{\phi})_p.\psi+\frac{p}{2(n-p+1)}\delta(\phi\overline{\phi})_p.\psi \end{equation} and this means that the ordinary twistor spinors generate solutions of the gauged twistor equation. The exterior derivative and co-derivative of the $p$-form Dirac currents of the ordinary twistor spinors can be found as \cite{Acik Ertem} \begin{equation} d(\phi\overline{\phi})_p=\frac{p+1}{n}\bigg(\displaystyle{\not}d(\phi\overline{\phi})-2i_{X^a}(\phi\overline{\nabla_{X_a}\phi})\bigg)_{p+1} \end{equation} \begin{equation} \delta(\phi\overline{\phi})_p=-\frac{n-p+1}{n}\bigg(\displaystyle{\not}d(\phi\overline{\phi})-2e^a\wedge(\phi\overline{\nabla_{X_a}\phi})\bigg)_{p-1} \end{equation} and (54) can also be written in the following form \begin{eqnarray} L_{\phi\overline{\phi}}\psi&=&-\frac{p}{n}\bigg[(-1)^p(\phi\overline{\phi})_p.\widehat{\displaystyle{\not}D}\psi+\frac{1}{n}\bigg(\big(i_{X^a}(\phi\overline{e_a.\displaystyle{\not}D\phi})\big)_{p+1}-\big(e^a\wedge(\phi\overline{e_a.\displaystyle{\not}D\phi})\big)_{p-1}\bigg).\psi\bigg]\nonumber\\ &&+\frac{p}{2n}\bigg[\big(\displaystyle{\not}d(\phi\overline{\phi})\big)_{p+1}-\big(\displaystyle{\not}d(\phi\overline{\phi})\big)_{p-1}\bigg].\psi \end{eqnarray} For $A=0$, this reduces to the symmetry operators of ordinary twistor spinors written in terms of ordinary twistor spinors \cite{Ertem2}.
The symmetry operators of the gauged twistor equation are defined in constant curvature manifolds, and in that case the set of CKY forms is of maximal dimension. The maximum number of CKY $p$-forms in $n$ dimensions is \cite{Semmelmann} \begin{equation} C_p=\left( \begin{array}{c} n \\ p-1 \\ \end{array} \right)+2\left( \begin{array}{c} n \\ p \\ \end{array} \right)+\left( \begin{array}{c} n \\ p+1 \\ \end{array} \right) \end{equation} and the dimension of the space of ordinary twistor spinors in $n$ dimensional constant curvature manifolds is given as \cite{Lichnerowicz} \begin{equation} t=2^{\lfloor n/2\rfloor+1}. \end{equation} These sets of CKY forms or ordinary twistor spinors generate gauged twistor spinors through (35) or (54). As an example, let us consider the AdS$_4$ spacetime. The explicit forms of geometric Killing spinors, which are twistor spinors corresponding to eigenspinors of the Dirac operator, are calculated for AdS$_4$ in \cite{OFarrill Gutowski Sabra}. For the coordinates $t, x, r, \rho$ and a relevant coframe basis $\{e^a\}$ with $a=0,...,3$, the spinor \[ \kappa=e^{it}\left(\cosh\rho+i\sinh\rho\right)1-e^{it}\left(\sinh\rho+i\cosh\rho\right)e^2 \] is a geometric Killing spinor. One can construct more general twistor spinors from geometric Killing spinors by using the Killing reversal method \cite{Fujii Yamagishi, Acik}. So, the following spinor \[ \phi=\kappa+z.\kappa \] is a twistor spinor in AdS$_4$, where $z$ is the volume form. The properties of the spinor inner product in four dimensional Lorentzian manifolds imply that the only non-zero spinor bilinears correspond to the 1-form and 2-form Dirac currents \cite{Alexeevsky et al}. So, we can construct the following CKY 1- and 2-forms from the twistor spinor $\phi$ \begin{eqnarray} \omega_1&=&(\phi, e_a.\phi)e^a\nonumber\\ \omega_2&=&(\phi, e_{ba}.\phi)e^{ab}\nonumber. \end{eqnarray} Then, we have the symmetry operators written in terms of the CKY forms $\omega_1$ and $\omega_2$ as \[ L_{\omega_i}=-(-1)^p\frac{p}{4}\omega_i.\widehat{\displaystyle{\not}D}+\frac{p}{2(p+1)}d\omega_i+\frac{p}{2(4-p+1)}\delta\omega_i \] where $p=1,2$ for $i=1,2$. These operators transform solutions of the gauged twistor equation into other solutions in AdS$_4$. So, one can investigate the algebra structure of these symmetry operators constructed from all twistor spinors in AdS$_4$ to find a mutually commuting set and construct a general solution for the gauged twistor equation, or one can use them to find new gauged twistor spinors from known ones. \section{Conclusion} Symmetry operators of the gauged twistor equation in terms of ordinary CKY forms are constructed in constant curvature backgrounds. Since the existence of CKY forms or gauged twistor spinors is a restrictive condition on the underlying manifold, the construction of symmetry operators is constrained to constant curvature manifolds. This is expected, since constant curvature manifolds admit the maximal numbers of CKY forms and ordinary twistor spinors, so that constructing symmetry operators out of them is more feasible there than in other cases. Construction of those symmetry operators provides a new way to obtain the supersymmetry generators of supersymmetric and superconformal field theories in curved backgrounds. The spinor bilinears of gauged twistor spinors correspond to gauged CKY forms. However, the symmetry operators contain ordinary CKY forms and not gauged CKY forms.
This means that the extended superalgebras that contain gauged twistor spinors and gauged CKY forms cannot be obtained by using the constructed symmetry operators, while they can be constructed in the cases of ordinary twistor spinors and geometric Killing spinors \cite{Ertem1, Ertem2}. So, one can search for other types of symmetry operators of the gauged twistor equation constructed out of gauged CKY forms and try to build extended superalgebras from them. These superalgebra structures are important in the classification problem of supergravity and supersymmetric field theory backgrounds. The methods described in the paper can be used for the explicit construction of the symmetry operators of gauged twistor spinors in various constant curvature backgrounds. In that way, one can obtain the supersymmetry generators of supersymmetric field theories in those backgrounds. By investigating the algebra structure of those symmetry operators, one can also obtain a commuting set of symmetry operators which provides a general solution for the gauged twistor equation. Moreover, the procedure for the construction of the symmetry operators can be extended to more general twistor equations. For example, in the presence of supergravity fluxes, the supergravity twistor equations are coupled to these fluxes and the symmetry operators of those twistor spinors can be investigated in a similar way.
\section{INTRODUCTION} The application of quantum field theory to curved space has resulted in a large array of interesting and important results. These include black hole evaporation\cite{Hawk1} and its implications for black hole thermodynamics\cite{GH}, the dissipation of anisotropy by particle production in cosmological spacetimes\cite{Z,ZS,H1,HFP,FPH,BB,LS,HP,HH}, and the removal of cosmological singularities by vacuum polarization effects\cite{PF,S,FHH,A,AZ}. One of the places for which quantum effects have been studied the least is the interior of a black hole. One might think that such studies are not interesting because no observer from the exterior region can probe the interior region unless they choose to fall into the hole. However, the existence of black hole evaporation makes it quite possible to eventually learn about quantum effects in the interior of a black hole\footnote{By interior we mean here the region inside the apparent horizon.}. This is because, as a black hole evaporates, more and more of its interior is exposed. Thus not only can quantum effects in the interior of a black hole eventually be detected, but they may also have a significant influence on the evaporation process. Quantum effects in the interior may in fact have a direct bearing on two of the most fundamental outstanding issues relating to the quantum mechanics of black holes. One of these is the question of what happens during the late stages of black hole evaporation, that is, what is the end point of the evaporation process? The other is the question of what happens to the information about how the black hole formed. There are at least two ways in which quantum effects in the interior could affect the answers to these questions. One is that if quantum effects remove the singularity predicted by general relativity then it is very likely that the evolution will be unitary and information will not be destroyed. A second possibility is that quantum effects could cause the evaporation process to cease, leaving a zero temperature black hole remnant. If the remnant has an event horizon, the information would very likely be trapped inside the black hole. Since the temperature of a black hole is determined by the surface gravity at its horizon, and since the evaporation process causes the horizon to be at points which were previously in the (apparent) interior, it is clear that the geometry of the interior is likely to influence the evaporation process as it progresses. One interesting quantum effect that seems likely to occur inside the horizon of a black hole is the dissipation of anisotropy and possibly inhomogeneity due to particle production. This is because the interior of such a black hole can be thought of as an anisotropic and possibly inhomogeneous cosmology. For example, the interior of a Schwarzschild black hole can be thought of as a homogeneous, anisotropic cosmology of the Kantowski-Sachs family\cite{KS}. It has been well established that particle production dissipates anisotropy in Bianchi Type I spacetimes\cite{Z,ZS,H1,HFP,FPH,BB,LS,HP,HH}. If the process of anisotropy dissipation occurs, it will certainly alter the geometry in the interior of a black hole. For these reasons it is interesting to examine quantum effects in the interior of a black hole. To do so for either the interior or exterior of an evaporating black hole would be an enormously difficult task at present, due to problems that one would encounter in computing the stress-energy tensors for quantized fields in the relevant spacetime.
However, computing the stress-energy tensors for these fields in the case of a spherically symmetric black hole in thermal equilibrium with radiation in a cavity, {\it i.e.,} with the fields in the Hartle-Hawking state, is a much more tractable problem. The reason is that there are then four Killing vector fields in the spacetime, which makes the mode equations separable. For a black hole in equilibrium with fields in the Hartle-Hawking state, analytical approximations for the stress-energy tensors of various types of quantized fields have been obtained. The derivations of most of these approximations have been for the exterior region, but, as is discussed later, they all can easily be extended to the interior region. These approximations include those of Page, Brown, and Ottewill\cite{P,BO,PBO} for conformally invariant fields in Schwarzschild spacetime, that of Frolov and Zel'nikov\cite{FZ1} for conformally invariant fields in a general static spacetime, that of Anderson, Hiscock and Samuel\cite{AHS} for massless arbitrarily coupled scalar fields in a general static spherically symmetric spacetime, and the DeWitt-Schwinger approximation for massive fields, which was derived by Frolov and Zel'nikov\cite{FZ82,FZ84} for Kerr spacetime, by Anderson, Hiscock, and Samuel\cite{AHS} for a scalar field in a general static spherically symmetric spacetime, and most recently by Herman and Hiscock\cite{RH} for an arbitrary spacetime. In this paper the various approximations mentioned above are used to investigate quantum effects in the interior of a Schwarzschild black hole when the fields are in the Hartle-Hawking state. The resulting semiclassical backreaction equations are linearized about the classical geometry and their solutions are found. The questions of whether backreaction effects tend to isotropize the spacetime and whether they tend to ``soften'' the geometry as the singularity is approached are addressed. Although the questions of whether the anisotropy is completely dissipated or whether the singularity is removed cannot be answered by examining linear perturbations, the results do provide insight into these issues. In Section II the interior geometry of a Schwarzschild black hole is reviewed and in Section III the various analytical approximations are reviewed and discussed. Solutions to the linearized backreaction equations which are derived using these approximations are displayed in Section IV. In Section V the dissipation of anisotropy is computed and in Section VI the change in the curvature is computed. The results are summarized and discussed in Section VII. \section{SCHWARZSCHILD BLACK HOLE INTERIOR} The Schwarzschild black hole is described by the metric: \begin{equation} ds^2 = - \left( 1 - {2M \over r} \right) dt^2 + \left( 1 - {2M \over r} \right)^{-1} dr^2 + r^2 d{\Omega}^2 , \label{SchMetric} \end{equation} where $d{\Omega}^2$ is the metric of the two-sphere. The coordinate $r$ runs from 0 to $\infty$, and $t$ from $-\infty$ to $+\infty$. We are thus considering the complete Schwarzschild manifold, as is appropriate with the Hartle-Hawking vacuum state. The black hole interior is the region in which $0 \leq r \leq 2M$. In the interior, the vector field $\partial/\partial r$ is timelike and the vector field $\partial/\partial t$ is spacelike; hence, the coordinate $t$ is a spatial coordinate, while $r$ is a time coordinate.
The nature of the interior is more easily visualized if new coordinate names are adopted to reflect the physical nature of the coordinates in the region of interest. Defining new coordinates by setting \begin{equation} T \equiv r \qquad, \qquad x \equiv t , \label{CoordNames} \end{equation} the metric takes the form \begin{equation} ds^2 = - \left( {2M \over T} - 1 \right)^{-1} dT^2 + \left( {2M \over T} - 1 \right) dx^2 + T^2 d{\Omega}^2 . \label{InsideMetric} \end{equation} The metric given by Eq. (\ref{InsideMetric}) is clearly an anisotropic homogeneous cosmology. The vector field $\partial/\partial t$ is, in the interior, one of the spacelike Killing vector fields (along with those on the two-sphere) which guarantee spatial homogeneity. The spatial coordinate $x$ here runs from $-\infty$ to $+\infty$, while $T$ runs from $2M$ down to zero at the curvature singularity in the black hole interior. The Schwarzschild manifold contains both an anisotropic expanding universe, the ``white hole'' portion of the extended geometry, and an anisotropic collapsing universe, the black hole interior. In this paper we shall base our discussion on the black hole interior portion of the geometry, but all conclusions may be restated in terms of the expanding white hole geometry due to the time reversal symmetry of both the Schwarzschild geometry and the Hartle-Hawking state we shall use to perturb it. However, the boundary conditions for the fields in the two cases are very different. In the black hole case they are ``initial'' conditions, while in the white hole case they are ``final'' conditions for the interior region. While it is conventional to write homogeneous cosmological metrics in terms of a proper time coordinate, i.e., \begin{equation} \tau = \int {dT \over {\left( {2M \over T} - 1 \right)^{1/2}}} \quad , \label{ProperTime} \end{equation} in the present case the spatial metric components cannot be expressed in closed algebraic form in terms of such a coordinate. Upon carrying out the integral in Eq. (\ref{ProperTime}) (the substitution $T = 2M \sin^2 \chi$ reduces it to $\int_0^{\pi/2} 4M \sin^2 \chi \, d\chi$), one finds that the range of the coordinate $T$ from $2M$ down to $0$ corresponds to an interval of proper time equal to $\pi M$. The spacetime described by the metric of Eq. (\ref{InsideMetric}), viewed as a cosmological model, is an anisotropic but homogeneous spacetime in which (as $T$ proceeds from $2M$ down to zero) two spatial dimensions are collapsing while one is expanding. The interior Schwarzschild cosmology is a special case of a Type I Kantowski-Sachs model\cite{KS}. Since the Schwarzschild metric is a vacuum solution, there is no naturally defined four-velocity of cosmological ``matter''; however, to explore the properties of the solution as an anisotropic cosmology, it is helpful to define a set of fiducial geodesic observers with four-velocities given by \begin{equation} u^{\alpha} = \left( \left( {2M \over T} - 1 \right)^{1/2},0,0,0 \right) . \label{FourVel} \end{equation} These observers travel along world lines with $x$, $\theta$, and $\phi$ constant. In terms of the conserved quantities normally used to describe geodesics in the exterior Schwarzschild metric, these observers have zero angular momentum and zero energy at infinity. The proper volume of a cube defined by a set of fiducial observers at the corners, separated by coordinate distances $\Delta x$, $\Delta \theta$, and $\Delta \phi$ is given by \begin{equation} V \left( T \right) = \left( {2M \over T} - 1 \right)^{1/2} T^2 \Delta x \Delta \theta \Delta \phi \qquad . 
\label{ProperVolume} \end{equation} Since the fiducial observers have four-velocities given by Eq. (\ref{FourVel}), the quantities $\Delta x$, $\Delta \theta$, and $\Delta \phi$ are constant. The volume goes to zero at both $T = 0$ and $T = 2M$. Near the singularity at $T = 0$, the Schwarzschild metric of Eq. (\ref{InsideMetric}) may be put into a form which is locally asymptotic to a Kasner universe. Let coordinates $y$ and $z$ be defined, locally in the neighborhood of a point $\left( \theta_0, \phi_0 \right)$, by \begin{equation} y = 2 M (\theta-\theta_0) \qquad , \qquad z = 2 M \sin \left( \theta_0 \right) (\phi-\phi_0) \quad . \label{yzCoords} \end{equation} While these coordinates cannot be extended to cover the two-sphere, they are perfectly adequate to describe the expansion and contraction of the cosmology in a local neighborhood. Near the singularity, the Schwarzschild metric then takes the form of a Kasner universe with exponents $p_1 = -1/3$, $p_2 = p_3 = 2/3$ (which satisfy the Kasner conditions $p_1 + p_2 + p_3 = p_1^2 + p_2^2 + p_3^2 = 1$): \begin{equation} ds^2 = - d\tau^2 + \left( {\tau \over \tau_0} \right)^{-2/3} dx^2 + \left( {\tau \over \tau_0} \right)^{4/3} \left( dy^2 + dz^2 \right) , \label{KasnerSing} \end{equation} where $\tau_0 = 4M/3$ and $\tau = (2T^3/M)^{1/2}/3$. In a similar fashion, the metric may be approximated by a flat Kasner $\left( p_1 = 1, p_2 = p_3 = 0 \right)$ solution near $T = 2M$. There the cosmological proper time has the asymptotic form $\tau = 4M(1-T/2M)^{1/2}$, and the asymptotic form of the metric is \begin{equation} ds^2 = - d\tau^2 + {\tau^2 \over {16 M^2}} dx^2 + \left( dy^2 + dz^2 \right) \qquad , \label{KasnerHorz} \end{equation} as $\tau \rightarrow 0$. The singular behavior of Eq. (\ref{KasnerHorz}) is of course only apparent; the surface $\tau = 0$ is actually the black hole event horizon. \section{APPROXIMATE STRESS-ENERGY TENSORS} \subsection{Massless fields} To calculate the linearized metric perturbations to the Schwarzschild geometry resulting from the presence of quantized fields, it is necessary to know the values of the stress-energy tensors of those fields. Calculating the stress-energy tensor for a quantized field on a black hole background spacetime is an arduous task, which has been carried to completion only for a few cases. Howard and Candelas have computed the stress-energy of a conformally invariant scalar field in the Schwarzschild geometry \cite{HC,H}. Jensen and Ottewill have computed the vacuum stress-energy of a massless vector field in Schwarzschild \cite{JO}. More recently, Anderson, Hiscock, and Samuel have developed a method for computing the vacuum stress-energy for a general (arbitrary curvature coupling and mass) scalar field in an arbitrary static spherical spacetime and have applied their method to the Reissner-Nordstr\"{o}m geometry\cite{AHS,AHL}. In each of these studies, an analytic expression for $\langle T_{\mu \nu} \rangle$ has been developed as a consequence of the procedure used to compute the exact values for $\langle T_{\mu \nu} \rangle$. These approximate expressions are generated by using a fourth order WKB expansion for the field modes in the unrenormalized expression for $\langle T_{\mu \nu} \rangle$ and then subtracting off the DeWitt-Schwinger counterterms \cite{C} to renormalize the stress-tensor. The resulting analytic expressions are closely related to approximate expressions for the vacuum stress-energy derived by Page, Brown, and Ottewill (PBO) \cite{P,BO,PBO} and Frolov and Zel'nikov (FZ) \cite{FZ1}. 
The analytic approximation found by Howard and Candelas is identical to the PBO approximation for the conformal scalar field's stress-energy in Schwarzschild spacetime; further, their numerical results show that the approximation is quite accurate for all values of $r$ down to the horizon. In the case of the vector field, the analytic expression derived by Jensen and Ottewill is equal to the PBO approximation for a conformal vector field plus a traceless term proportional to $r^{-4}$; the resulting expression yields a good match to the numerical results for the vector field \cite{JO}. The analytic approximation developed by Anderson, Hiscock, and Samuel reduces to the FZ approximation when restricted to conformal coupling; it has generally been shown to be valid for arbitrary curvature coupling when compared to numerical results in the Reissner-Nordstr\"{o}m geometry (which, of course, includes Schwarzschild as a special case). Each of these expressions has been derived in the exterior region of the black hole. There is good reason to believe they are valid in the interior also. The components of the curvature tensors in an orthonormal frame are analytic functions of $r$ near the event horizon. Each of the approximations is also an analytic function of the radial coordinate $r$ near the event horizon. Thus the analytic extension of these approximations into the interior region is trivial to obtain. Further, Candelas and Jensen\cite{CJ} have numerically computed $\langle \phi^2 \rangle$ in the interior of a Schwarzschild black hole when the field is in the Hartle-Hawking state. They find that Page's approximation\cite{P} for $\langle \phi^2 \rangle$ arises in a natural way from the calculation of the renormalized Feynman Green function in the interior region and that it is a good approximation in much of the interior region. In this paper the Anderson, Hiscock, Samuel approximate analytic stress-energy tensor will be used to describe the effects of quantized massless scalar fields with arbitrary curvature coupling in the Schwarzschild interior. The Jensen-Ottewill analytic approximation will be used for the stress-energy tensor of massless vector fields. Massless spinor fields will be treated using the PBO approximation. It should be kept in mind, however, that the spinor field expression has not yet been tested against an accurate numerical computation to establish its validity. The components of the stress-energy tensor in Schwarzschild coordinates may then be expressed as follows \begin{equation} \langle T_{\mu \nu} \rangle= C_{\mu \nu} + \left(\xi - 1/6 \right) D_{\mu \nu} , \label{SchwarzTmn} \end{equation} where $C_{\mu \nu}$ represents the conformally invariant contribution to the vacuum stress-energy from all the fields, and $D_{\mu \nu}$ represents the non-conformal contribution due to the scalar fields, which we allow to have arbitrary curvature coupling. Applying the approximations discussed above: \begin{eqnarray} C_{T}^{T}& = &{\epsilon \over {\lambda M^2}} \left\{ a \left[1 + 2 \left({2M \over T}\right) + 3 \left({2M \over T}\right)^2 \right] + a_3 \left({2M \over T} \right)^3 \right . \nonumber \\ & & + \left . 
a_4 \left({2M \over T}\right)^4 + a_5 \left({2M \over T}\right)^5 + a_6 \left({2M \over T} \right)^6 \right\} \qquad, \label{CTT} \end{eqnarray} where \begin{equation} a = h \left( 0 \right) + {7 \over 8}h \left( 1/2 \right) + h \left( 1 \right)\qquad, \label{a} \end{equation} \begin{equation} a_3 = 4h \left( 0 \right) - {13 \over 2}h \left( 1/2 \right) - 76h \left( 1 \right)\qquad, \label{a3} \end{equation} \begin{equation} a_4 = 5h \left( 0 \right) - {35 \over 8}h \left( 1/2 \right) + 295h \left( 1 \right)\qquad, \label{a4} \end{equation} \begin{equation} a_5 = 6h \left( 0 \right) - {9 \over 4}h \left( 1/2 \right) - 54h \left( 1 \right)\qquad, \label{a5} \end{equation} \begin{equation} a_6 = 15h \left( 0 \right) + {15 \over 8}h \left( 1/2 \right) + 285h \left( 1 \right)\qquad, \label{a6} \end{equation} \begin{eqnarray} C_{x}^{x}& =& {\epsilon \over {\lambda M^2}} \left\{ -a \left[1 + 2\left({2M \over T}\right) + 3\left({2M \over T} \right)^2 + 4\left({2M \over T}\right)^3 \right] \right . \nonumber \\ & & + \left . b_4 \left({2M \over T}\right)^4 + b_5 \left({2M \over T}\right)^5 + b_6 \left({2M \over T} \right)^6 \right\} \qquad, \label{Cxx} \end{eqnarray} where \begin{equation} b_4 = -5h \left( 0 \right) - {45 \over 8}h \left( 1/2 \right) + 105h \left( 1 \right)\qquad, \label{b4} \end{equation} \begin{equation} b_5 = -6h \left( 0 \right) - {31 \over 4}h \left( 1/2 \right) - 26h \left( 1 \right)\qquad, \label{b5} \end{equation} \begin{equation} b_6 = 33h \left( 0 \right) + {161 \over 8}h \left( 1/2 \right) + 83h \left( 1 \right)\qquad, \label{b6} \end{equation} and \begin{eqnarray} C_{\theta}^{\theta} = C_{\phi}^{\phi} & = & {\epsilon \over {\lambda M^2}} \left\{ a \left[1 + 2\left({2M \over T}\right) + 3 \left({2M \over T} \right)^2 \right] + c_3 \left({2M \over T}\right)^3 \right . \nonumber \\ & & + \left . c_4 \left({2M \over T}\right)^4 + c_5 \left({2M \over T}\right)^5 + c_6 \left({2M \over T}\right)^6 \right\} \qquad, \label{Cpp} \end{eqnarray} \begin{equation} c_3 = 4h \left( 0 \right) + {17 \over 2}h \left( 1/2 \right) + 44h \left( 1 \right)\qquad, \label{c3} \end{equation} \begin{equation} c_4 = 5h \left( 0 \right) + {85 \over 8}h \left( 1/2 \right) - 305h \left( 1 \right)\qquad, \label{c4} \end{equation} \begin{equation} c_5 = 6h \left( 0 \right) + {51 \over 4}h \left( 1/2 \right) + 66h \left( 1 \right)\qquad, \label{c5} \end{equation} \begin{equation} c_6 = -9h \left( 0 \right) + {87 \over 8}h \left( 1/2 \right) - 579h \left( 1 \right)\qquad. \label{c6} \end{equation} The constants $\epsilon$ and $\lambda$ are defined by $\epsilon = \hbar/M^2$, $\lambda = 45 \cdot 2^{13} \cdot \pi^2$, and $h(s)$ is the number of helicity states in, respectively, the scalar, spinor, and vector fields present. Explicitly, $h(0)$ simply counts the number of scalar fields present, $h(1/2)$ is equal to 2 (or 4) for each two- (or four-) component spinor field present; $h(1)$ is equal to 2 times the number of vector fields present. 
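For example, for a field content consisting of one scalar field, one four-component Dirac spinor field, and one vector field, we have $h(0) = 1$, $h(1/2) = 4$, and $h(1) = 2$, so that Eq. (\ref{a}) gives \begin{equation} a = 1 + {7 \over 8} \cdot 4 + 2 = {13 \over 2} \qquad, \end{equation} and the remaining coefficients in Eqs. (\ref{a3})-(\ref{c6}) follow in the same manner.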
The nonconformal contribution to the scalar field stress-energy is given by: \begin{equation} D_{T}^{T} = -60 h(0){\epsilon \over {\lambda M^2}} \left({2M \over T} \right)^3 \left[4 - 3 \left({2M \over T}\right)\right]\left[1 + 2\left({2M \over T}\right) + 3 \left({2M \over T}\right)^2 \right] \qquad, \label{DTT} \end{equation} \begin{equation} D_{x}^{x} = 180 h(0){\epsilon \over {\lambda M^2}} \left({2M \over T} \right)^4 \left[1 + 2 \left({2M \over T}\right) - 5 \left({2M \over T}\right)^2 \right] \qquad, \label{Dxx} \end{equation} \begin{equation} D_{\theta}^{\theta} = 120 h(0){\epsilon \over {\lambda M^2}}\left({2M \over T} \right)^3 \left[1 + 2 \left({2M \over T}\right) + 3 \left({2M \over T} \right)^2 - 12 \left({2M \over T}\right)^3 \right] \qquad. \label{Dthth} \end{equation} These expressions exhibit a variety of interesting behavior in the black hole interior. The energy density, $\rho = -\langle T_{T}^{T}\rangle$, is negative at the horizon for the conformally coupled scalar field and the vector field; it is positive there, however, for the spinor field and for any scalar field with $\xi > 1/4$. The energy density diverges negatively as the singularity is approached for all conformal fields; however, the density diverges positively for scalar fields with $\xi < 5/36$, which includes the minimally coupled scalar field. There is a particular surface, $T = 3M/2$, on which the energy density of the scalar field is independent of the curvature coupling. The spatial stress in the $x$-direction, $\langle T_{x}^{x} \rangle$, is positive at the horizon for all scalar fields with $\xi < 4/15$, which includes both the minimally coupled and conformally coupled cases, and for the conformal vector field. The stress is negative at the horizon for the spinor field. This stress diverges in a positive fashion as the singularity is approached for all conformal fields and also for the minimally coupled scalar field. The tangential stress, $\langle T_{\theta}^{\theta} \rangle$, is everywhere positive in the domain of interest for the minimally coupled scalar field and the spinor field; it is everywhere negative for the vector field. The conformal scalar field has $\langle T_{\theta}^{\theta} \rangle$ positive at the horizon, but it diverges negatively as the singularity is approached. \subsection{Massive fields} The technique of choice for computing an approximate renormalized stress-energy tensor in the massive case is the DeWitt-Schwinger approximation for $\langle T_{\mu}^{\nu} \rangle$. This is obtained by performing the DeWitt-Schwinger expansion of the stress-energy tensor, in inverse square powers of the field mass, $m$, and then subtracting off the first, divergent terms of the expansion \cite{dtgf}. The remaining terms of the asymptotic series may be used as an analytic approximation to $\langle T_{\mu}^{\nu} \rangle$. In this paper, approximations for the stress-energy tensor of massive quantized fields have been derived from the previous work of Frolov and Zel'nikov \cite{FZ84}, who used the DeWitt-Schwinger approximation to find the renormalized stress-energy for massive fields in the Kerr spacetime. For the massive scalar field in the Schwarzschild limit, Frolov and Zel'nikov's Kerr results have been found to reduce to the stress-energy obtained by other renormalization methods \cite{FZ82}. 
By taking the zero angular momentum limit ($a \rightarrow 0$) of the Kerr results, the DeWitt-Schwinger approximation to the stress-energy in Schwarzschild may be found for an arbitrary collection of scalar, spinor, and vector fields. The resulting stress-energy tensor may again be decomposed into the contributions of the conformally invariant fields, $C_{\mu}^{\nu}$, and the contribution of a possibly nonconformal scalar field, $D_{\mu}^{\nu}$, according to Eq.(\ref{SchwarzTmn}). The components of the approximate stress-energy tensor for conformally coupled massive fields are: \begin{eqnarray} C_{T}^{T}&= &{M^2 \over {1440\pi^2 T^8}}\left\{\left[15-11 \left({{2M} \over T}\right)\right]{1 \over {m_{0}^2}} +\left[36-28\left({{2M} \over T}\right)\right]{1 \over {m_{1/2}^2}} \right . \nonumber \\ & & \left . +\left[-99+75\left({{2M} \over T}\right)\right]{1 \over {m_{1}^2}} \right\} \qquad , \label{CTTDS} \end{eqnarray} \begin{eqnarray} C_{x}^{x}& = & {M^2 \over {10080\pi^2 T^8}}\left\{\left[-285+ 313 \left({{2M} \over T}\right)\right]{1 \over {m_{0}^2}} + \left[-540+596\left({{2M} \over T}\right)\right] {1 \over {m_{1/2}^2}} \right . \nonumber \\ & & \left . + \left[1665-1833\left({{2M} \over T}\right)\right] {1 \over {m_{1}^2}}\right\} \qquad , \label{CxxDS} \end{eqnarray} \begin{eqnarray} C_{\theta}^{\theta} & = & {M^2 \over {10080\pi^2 T^8}}\left\{\left[ -315+367\left({{2M} \over T}\right)\right]{1 \over {m_{0}^2}}+\left[-756+884\left({{2M} \over T}\right)\right] {1 \over {m_{1/2}^2}} \right . \nonumber \\ & & \left . +\left[2079-2427 \left({{2M} \over T}\right)\right]{1 \over {m_{1}^2}} \right\}, \label{CththDS} \end{eqnarray} where $m_{0}$, $m_{1/2}$, and $m_{1}$ are the ``effective masses'' of the scalar, Dirac spinor, and vector fields present. If there is no field present for a particular spin, then its effective mass is set equal to infinity. If there are multiple fields with a given spin, possibly with differing masses ({\it e.g.}, the massive spin $1/2$ fields in nature, representing the differing leptons and quarks), then the effective mass is calculated according to: \begin{equation} {1 \over m_{eff}^{2}} = \sum_{i=1}^{n} {1 \over m_{i}^{2}}, \label{emass} \end{equation} where the sum on the right hand side is taken over the $n$ fields of given spin present. The nonconformal scalar stress-energy contribution is given by \begin{equation} D_{T}^{T} = {M^2 \over {20\pi^2 m_{0}^{2} T^8}}\left[-4+3 \left({{2M} \over T}\right)\right] , \label{DTTDS} \end{equation} \begin{equation} D_{x}^{x} = {M^2 \over {20\pi^2 m_{0}^{2} T^8}}\left[10-11 \left({{2M} \over T}\right)\right] , \label{DxxDS} \end{equation} \begin{equation} D_{\theta}^{\theta} = {M^2 \over {10\pi^2 m_{0}^{2} T^8}} \left[6-7\left({{2M} \over T}\right)\right] . \label{DththDS} \end{equation} The DeWitt-Schwinger approximation for the stress-energy will be valid for sufficiently massive fields, when the Compton wavelength of the field, $\lambdabar = \hbar/m$, is much smaller than the horizon radius of the black hole. As was the case with the massless fields, these expressions show interesting behavior in the interior of the black hole. At the horizon, the energy density, $\rho = - \langle T_T^T \rangle$, is negative for all scalar fields with $\xi < 2/9$, which includes the conformally and minimally coupled scalar fields. The spinor field has negative energy density at the horizon as well, whereas the vector field has positive energy density. 
As the singularity is approached, the energy density diverges in a positive fashion for scalar fields with $\xi < 47/216$, which again includes both the conformally and minimally coupled scalar fields. The energy density of the spinor field has a similar positive divergence, while the vector field energy density diverges negatively. Just as in the massless field case, the energy density of the scalar field is independent of the curvature coupling on the surface $T = 3M/2$. The spatial stress in the $x$-direction, $\langle T_x^x \rangle$, is positive on the horizon for all scalar fields with $\xi < 2/9$, including the minimal and conformally coupled cases. As the singularity is approached, the stress shows a positive divergence for all scalar fields with $\xi < 1237/5544$. For the spinor field, the spatial stress is also positive on the horizon and diverges in a positive direction as the singularity is approached. The vector field has negative stress in both limits. The tangential stress, $\langle T_{\theta}^{\theta} \rangle$, is positive for all scalar fields with $\xi < 55/252$, including the conformal scalar field. Again in this case, the stress for the spinor field is positive on the horizon and as the singularity is approached, and the vector field has negative stress in both cases. \section{SEMICLASSICAL BLACK HOLE INTERIORS} The linearized perturbations to the Schwarzschild metric resulting from the stress-energy of a quantized field (within the various analytic approximation schemes discussed in the previous section) have been described for the massless conformal scalar field by York \cite{Y}, for the massless vector field by Hochberg and Kephart \cite{HK}, and for the massless spinor field by Hochberg, Kephart and York \cite{HKY}. The perturbed geometry associated with a quantized massless scalar field with arbitrary curvature coupling has been analyzed by Anderson, Hiscock, Whitesell, and York \cite{AHWY}. In these previous calculations it was most convenient to describe the metric perturbations in ingoing Eddington-Finkelstein coordinates, $\left(v, r, \theta, \phi \right)$. The study of the interior semiclassical effects proceeds most naturally, however, in terms of the original Schwarzschild coordinates (albeit with new names in the interior). In those coordinates, the perturbed metric may be written in the form \begin{equation} ds^2 = - \left( {2M \over T} - 1 \right)^{-1} \left[1 + \epsilon \eta (T)\right] dT^2 +\left( {2M \over T} - 1 \right) \left[1 + \epsilon \sigma (T)\right] dx^2 + T^2 d{\Omega}^2 . \label{MetricCosmo} \end{equation} The Einstein equations, to first order in $\epsilon$, then have the form: \begin{equation} {d \over {dT}} \left[\left(2M - T \right)\eta\right] = { {8 \pi T^2 \langle T_{x}^{x}\rangle} \over \epsilon}\qquad, \label{EtaEEqn} \end{equation} \begin{equation} {{d \sigma} \over {dT}} = -{ {8 \pi T^2 \langle T_{T}^{T}\rangle} \over {\epsilon\left(2 M - T \right)}} - {\eta \over {2 M - T}} \qquad. 
\label{SigEEqn} \end{equation} \subsection{Massless fields} Integrating Eqs.(\ref{EtaEEqn},\ref{SigEEqn}) using the approximate stress-energy tensor for a collection of massless quantized fields given in Eqs.(\ref{CTT}-\ref{Dthth}), one obtains \begin{eqnarray} K \eta& =& A \left[ \left({T \over {2M}}\right)^2 + 4\left( { T\over {2M}} \right) + 12 \left(1 - {T \over {2M} }\right)^{-1} \ln\left({{2M} \over T} \right) \right] \nonumber \\ & & +A_0 + A_1 \left({{2M} \over T} \right) + A_2 \left({{2M} \over T} \right)^2 + A_3 \left({{2M} \over T} \right)^3 \qquad, \label{EtaSoln} \end{eqnarray} \begin{eqnarray} K \sigma& =& A \left[ \left({T \over {2M}}\right)^2 + 8 \left( {T \over {2M}} \right) - 24 \left({{3M - T} \over {2M - T}}\right) \ln\left( {2M} \over T \right) \right] \nonumber \\ & & + B_0 + B_1 \left({{2M} \over T} \right) + B_2 \left({{2M} \over T} \right)^2 + B_3 \left({{2M} \over T} \right)^3 \qquad, \label{SigSoln} \end{eqnarray} where $K = 3840 \pi$, and the coefficients $A_i$, $B_i$ are given by \begin{equation} A = { {8 h \left( 0 \right) + 7 h \left( 1/2 \right) + 8 h \left( 1 \right)} \over 24 }\qquad, \label{A} \end{equation} \begin{equation} A_0 = {1 \over 24}\left[8\left(109 - 360 \xi \right) h \left( 0 \right) + 43 h \left( 1/2 \right) +375 h \left( 1 \right) \right] \qquad, \label{A0} \end{equation} \begin{equation} A_1 = {1 \over 24}\left[ 8 \left(1 - 60 \xi \right) h \left( 0 \right) + 67 h \left( 1/2 \right) - 2872 h \left( 1 \right) \right]\qquad, \label{A1} \end{equation} \begin{equation} A_2 = {1 \over 6} \left[ 8 \left(-11 + 30 \xi \right) h \left( 0 \right) - 17 h \left( 1/2 \right) - 88 h \left( 1 \right) \right] \qquad, \label{A2} \end{equation} \begin{equation} A_3 = {1 \over 24} \left[ 8 \left(-83 + 300 \xi \right) h \left( 0 \right) - 161 h \left( 1/2 \right) - 664 h \left( 1 \right) \right] \qquad, \label{A3} \end{equation} \begin{equation} B_0 = {1 \over 24} \left[ 8 \left(155 - 720 \xi \right) h \left( 0 \right) + 365 h \left( 1/2 \right) - 565 h \left( 1 \right) \right] + k_0\qquad, \label{B0} \end{equation} \begin{equation} B_1 = {1 \over 8} \left[ 8 \left(-27 + 100 \xi \right) h \left( 0 \right) - 89 h \left( 1/2 \right) + 1064 h \left( 1 \right) \right]\qquad, \label{B1} \end{equation} \begin{equation} B_2 = {1 \over 12} \left[ 8 \left(-23 + 120 \xi \right) h \left( 0 \right) - 41 h \left( 1/2 \right) + 296 h \left ( 1 \right) \right]\qquad, \label{B2} \end{equation} and \begin{equation} B_3 = { 5 \over 24} \left[ 8 \left(-5 + 36 \xi \right) h \left( 0 \right) + h \left( 1/2 \right) + 152 h \left( 1 \right) \right]\qquad. \label{B3} \end{equation} The form of Eq. (\ref{B0}) has been chosen so that the integration constant in $\sigma$ is expressed in terms of the integration constant, $k_0$, which has appeared in previous papers\footnote{In each of these papers the black hole was surrounded by a thin perfectly reflecting cavity. The specific value of the integration constant $k_0$ was obtained in those cases by requiring $g_{tt}$ to be continuous at the cavity wall. In the present work, none of our results will depend on the numerical value chosen for $k_0$.} \cite{Y}, \cite{HK}, \cite{HKY}, \cite{AHWY}. The integration constant which is associated with $\eta$ has been absorbed via renormalization into $M$; the constant $M$ which appears in these equations is thus to be interpreted as the ``dressed'' mass of the black hole. The semiclassical metric of Eq. 
(\ref{MetricCosmo}) is valid only when the perturbations, $\epsilon \eta$ and $\epsilon \sigma$, are small compared to unity. The perturbations are small at the horizon, $T = 2 M$, for black hole masses greater than or equal to the Planck mass (recall $\epsilon = \hbar/M^2 = M_{P}^{2}/M^2$). Of course, the perturbations can always be made large by taking the large-N limit, where N is the number of quantized fields present. For reasonable numbers of fields, and black hole masses greater than the Planck mass, it is possible to approach the singularity at $T = 0$ fairly closely. As an example, if we take $h(0) = 0$, $h(1/2) = 6$, $h(1) = 2$, representing three massless neutrino fields and one massless vector field, and a black hole mass of $M = M_P$, then the perturbations reach a strength of $10^{-1}$ at about $T = M$; for a solar mass black hole, however, the perturbation does not reach this strength until $T \approx 3 \times 10^{-21} \mbox{cm} = 2 \times 10^{-26} M$. \subsection{Massive fields} Integrating Eqs.(\ref{EtaEEqn},\ref{SigEEqn}) using the approximate stress-energy tensor for a collection of massive quantized fields given in Eqs.(\ref{CTTDS}-\ref{DththDS}), one obtains \begin{equation} K\eta = E\left[\left({2M \over T}\right)+\left({2M \over T}\right)^2+\left({2M \over T}\right)^3+\left({2M \over T}\right)^4+\left({2M \over T}\right)^5\right]+\tilde{E} \left({2M \over T}\right)^6 , \label{etads} \end{equation} \begin{eqnarray} K\sigma& =& k_0-E\left[-5+\left({2M \over T}\right)+\left({2M \over T}\right)^2+\left({2M \over T}\right)^3+\left({2M \over T}\right)^4+\left({2M \over T}\right)^5\right] \nonumber \\ & & +F\left[\left({2M \over T}\right)^6-1\right] , \label{sigds} \end{eqnarray} where $K$ is again equal to $3840\pi$, and \begin{equation} E={1 \over {126 M^2}}\left[\left(113-504\xi\right) {1 \over {m_{0}^{2}}}+52{1 \over {m_{1/2}^{2}}}-165 {1 \over {m_{1}^{2}}}\right], \label{Eeta} \end{equation} \begin{equation} \tilde{E}={1 \over {126 M^2}}\left[\left(-1237 +5544\xi\right) {1 \over {m_{0}^{2}}}-596{1 \over {m_{1/2}^{2}}}+1833{1 \over {m_{1}^{2}}}\right], \label{Eteta} \end{equation} \begin{equation} F={1 \over {18 M^2}}\left[\left(-47+216\xi\right) {1 \over {m_{0}^{2}}}-28{1 \over {m_{1/2}^{2}}}+75 {1 \over {m_{1}^{2}}}\right]. \label{Fsig} \end{equation} The integration constants in Eqs.(\ref{etads},\ref{sigds}) are handled in the same manner as in the massless case; in particular, the black hole mass $M$ is the ``dressed'' or renormalized mass. The field masses $m_0$, $m_{1/2}$, $m_{1}$, are effective masses defined as described in Sec. III. The perturbations of the Schwarzschild metric caused by the presence of massive fields are small, and the DeWitt-Schwinger approximation valid, so long as the Compton wavelength of the field is significantly less than the local radius of curvature of the spacetime. In the Schwarzschild interior, this will be true so long as $T \gg (M/m^2)^{1/3}$. \section{ANISOTROPY OF THE SCHWARZSCHILD INTERIOR} Since the Schwarzschild interior represents a highly anisotropic cosmology, it is natural to ask whether semiclassical effects dampen or strengthen the anisotropy. Many studies over the last quarter century have established that particle production can rapidly isotropize an anisotropic cosmology \cite{Z,ZS,H1,HFP,FPH,BB,LS,HP,HH}. As mentioned in the introduction, the analytical approximations for massless fields are nonlocal and thus probably take particle production into account to some extent. 
However, it is completely unknown at this point how well they do this. The DeWitt-Schwinger approximation for the massive fields does not take particle production into account at all, because it is a local approximation and particle production is an intrinsically nonlocal phenomenon. Thus whatever dissipation of anisotropy is found using these approximations is likely to be less than what would occur if full numerical solutions to the nonlinear backreaction equations were obtained. One measure of the anisotropy of the interior is the ratio of the Hubble expansion rates in the differing spatial directions. In the present case, since the two spatial directions on the two-spheres of symmetry are equivalent, there is only one ratio to calculate, say \begin{equation} \alpha = { H_x \over H_\theta } = { {g_{\theta \theta} { {d g_{x x}} \over {d \tau}}} \over {g_{x x} { {d g_{\theta \theta}} \over {d \tau}}} } = { {g_{\theta \theta} { {d g_{x x}} \over {dT}}} \over {g_{x x} { {d g_{\theta \theta}} \over {dT}}} } \qquad. \label{Aniso} \end{equation} The sign of $\alpha$ is positive if the cosmology is expanding (or contracting) in all three spatial directions. If the cosmology is expanding or contracting isotropically, then $\alpha = 1$. Evaluating $\alpha$ for the metric of Eq. (\ref{MetricCosmo}), we find, to first order in $\epsilon$ \begin{equation} \alpha = \alpha_{Sch} + \epsilon \delta \alpha \qquad, \label{AnisoExpnd} \end{equation} where $\alpha_{Sch}$ is the ordinary Schwarzschild value, \begin{equation} \alpha _{Sch} = { -M \over {2 M - T} } \qquad, \label{AnisoSch} \end{equation} and \begin{equation} \delta \alpha = {1 \over 2} T { {d \sigma} \over {dT}} \qquad. \label{AnisoPert} \end{equation} Taking Eq. (\ref{SigEEqn}) with Eq. (\ref{AnisoPert}), the perturbation to the anisotropy can be written explicitly in terms of components of the stress-energy as \begin{equation} \delta \alpha = -{1 \over 2} T \left[ {\eta \over \left( 2M - T \right)} + { {8 \pi T^2 \langle T_T^T\rangle } \over {\epsilon \left( 2M - T \right) }} \right] \qquad. \label{FullPert} \end{equation} If the overall sign of the perturbation to the anisotropy is positive, then the semiclassical effects tend to isotropize the interior. Negative values of $\delta \alpha$ push the spacetime towards greater anisotropy. Since the anisotropy is the ratio of the expansion rates along different spatial directions, careful consideration must be given to the method of spacetime slicing used to compare the perturbed and unperturbed spacetimes. One choice would be to consider slices which sit at equal proper times away from the horizon. Another choice, used in this paper, is to consider surfaces with equal values of the Schwarzschild area coordinate $T$. Taking the stress-energy tensors described in the previous section for the quantized fields of interest, the contributions described in Eq. (\ref{FullPert}) can then be computed for various spin fields on the Schwarzschild background. It should be noted when considering these results that the perturbation expansions become less reliable as one proceeds away from the horizon and towards the singularity, but the exact point at which the perturbation should no longer be trusted is a matter of choice. 
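As a point of reference, Eq. (\ref{AnisoSch}) follows directly from Eq. (\ref{Aniso}): for the unperturbed metric of Eq. (\ref{InsideMetric}), $g_{xx} = 2M/T - 1$ and $g_{\theta \theta} = T^2$, so that \begin{equation} \alpha_{Sch} = { {T^2 \left( - 2M/T^2 \right)} \over { \left( {2M / T} - 1 \right) 2T }} = { -M \over {2M - T} } \qquad , \end{equation} which diverges as the horizon is approached and tends to $-1/2$ as $T \rightarrow 0$; the negative sign reflects the fact that the $x$ direction expands while the two-spheres contract.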
The perturbation to the anisotropy in the presence of a massless scalar field is \begin{eqnarray} \delta \alpha & = & {1 \over {\pi \left( 2M - T \right)^2}} \left\{ M^2 \left[ {\xi \over 48} - {17 \over 2880} \right] + {M^3 \over T} \left[ {\xi \over 24} - {7 \over 720} \right] + {M^4 \over T^2} \left[{ {5 \xi} \over 12} - {29 \over 720} \right] \right. \nonumber \\ & & + {M^5 \over T^3} \left[ {5 \over 48} - { {3 \xi} \over 4} \right] + MT \left[ {29 \over 11520} - { {5 \xi} \over 192} - { {\ln \left( 2M/T \right)} \over 960} \right] \nonumber \\ & & + \left. {T^2 \over 2304} + {T^3 \over {11520 M}} + {T^4 \over {46080 M^2}} \right\} \qquad . \label{ScalarMsLs} \end{eqnarray} The sign of $\delta \alpha$ clearly depends on the value of the scalar curvature coupling, $\xi$. For values of $\xi < 5/36$ the perturbation is positive, and the field tends to isotropize the spacetime. For values of $\xi > 12/55$ the perturbation is negative and the spacetime tends to more anisotropy. Between these two values, $5/36 < \xi < 12/55$, the perturbation isotropizes in some regions of the interior and anisotropizes in other regions, as shown in Figure 1. For values of $\xi$ and $T$ above the solid line, the spacetime is pushed towards anisotropy. Values of $\xi$ and $T$ below the solid line make $\delta \alpha > 0$, and the spacetime is isotropized in the presence of the scalar field. In this case, the minimally coupled scalar field ($\xi = 0$) always isotropizes the spacetime, whereas the conformally coupled field ($\xi = 1/6$) only isotropizes in the interior regions near the horizon. The perturbation due to the massless spin-$1/2$ field is \begin{eqnarray} \delta \alpha &=& {1 \over {\pi \left( 2M - T \right)^2}} \left\{ - {M^5 \over T^3} {1 \over 192} + { {M^4 \over T^2} {97 \over 2880}} - { {M^3 \over T}{19 \over 2880}} - {M^2 {59 \over 11520}} \right. \nonumber \\ & & \left. - MT \left[ {97 \over 46080} + { {7 \ln \left( 2M/T \right)} \over 3840} \right] + { {7 T^2} \over 9216} + { {7 T^3} \over {46080 M} } + { {7 T^4} \over {184320 M^2} } \right\} \qquad . \label{SpinorMsLs} \end{eqnarray} For $T > 0.5$, which is the region where the perturbation expansion can be trusted, this always pushes the spacetime towards greater isotropy. The massless vector field perturbation to the anisotropy is \begin{eqnarray} \delta \alpha & = & {1 \over {\pi \left( 2M - T \right)^2}} \left\{ - { {19 M^5} \over {24 T^3}} + { {211 M^4} \over {360 T^2}} - { {97 M^3} \over {360 T}} + { {343 M^2} \over 1440} \right. \nonumber \\ & & \left. - MT \left[ {451 \over 5760} + { { \ln \left( 2M/T \right)} \over 480} \right] + { T^2 \over 1152} + { T^3 \over {5760 M} } + { { T^4 \over {23040 M^2} }} \right\} \label{VectorMsLs} \end{eqnarray} and pushes towards anisotropy for all values of $T$ in the interior. The impact of massive fields of varying spin can be considered as well. For the massive scalar field \begin{eqnarray} \delta \alpha & = & { 1 \over {\pi m^2} } \left\{ {M^4 \over T^6} \left[ {47 \over 360} - { {3 \xi} \over 5} \right] + {M^3 \over T^5} \left[ {113 \over 6048} - {\xi \over 12} \right] + {M^2 \over T^4} \left[ {113 \over 15120} - {\xi \over 240} \right]\right. \nonumber \\ & & \left. + {M \over T^3} \left[ {113 \over 40320} - {\xi \over 80} \right] + {1 \over T^2} \left[ {113 \over 120960} - {\xi \over 240} \right] + {1 \over {MT} } \left[ {113 \over 483840} - {\xi \over 960} \right] \right\} \label{ScalarMsve} \end{eqnarray} where $m$ is the effective field mass defined in Eq.(\ref{emass}). 
Similar to the case of the massless scalar field, the exact sign of the perturbation depends on the value of the scalar curvature coupling. When $\xi > 1223/5544$, the presence of the field makes the spacetime more anisotropic, and when $\xi < 47/216$ the push is always towards isotropy. As shown in Figure 2, there exists a range of values $47/216 < \xi < 1223/5544$ over which some interior regions are isotropized and others are not. As before, values of $\xi$ and $T$ above the solid line have $\delta \alpha < 0$ and the spacetime tends towards anisotropy. For values below the solid line, $\delta \alpha > 0$, and the tendency is towards isotropy. Both minimal and conformal coupling fall within this regime. The perturbation due to a massive spinor field is \begin{equation} \delta \alpha = { 1 \over {\pi m^2} } \left\{ {M^4 \over T^6} {7 \over 90} + {M^3 \over T^5}{113 \over 1512} + {M^2 \over T^4} {13 \over 3780} + {M \over T^3}{13 \over 10080} + {1 \over T^2} {13 \over 30240} + {1 \over {MT} }{13 \over 120960} \right\} \label{SpinorMsve} \end{equation} which is manifestly positive for all values of $T$, and hence decreases the anisotropy. Similarly, the massive vector field perturbation to the anisotropy is \begin{equation} \delta \alpha = -{ 1 \over {\pi m^2} } \left\{ {M^4 \over T^6} {5 \over 24} + {M^3 \over T^5}{55 \over 2016} + {M^2 \over T^4} {11 \over 1008} + {M \over T^3}{11 \over 2688} + {1 \over T^2} {11 \over 8064} + {1 \over {MT} }{11 \over 32256} \right\} \label{VectorMsve} \end{equation} which is manifestly negative for all $T$, and so always tends to increase the anisotropy. \section{APPROACHING THE FINAL SINGULARITY} Ever since it was realized that singularities could not be avoided in physically plausible spacetimes, quantum effects have been invoked as the physical instrument which might restore regularity to spacetime, by banishing singular behavior. While it is impossible for a perturbative analysis to determine whether quantum effects will eradicate the singularity, it is possible to determine how the growth of curvature as one approaches the singularity is affected by the semiclassical perturbation. The simplest way to see the effect of quantized fields on the growth of curvature as one approaches the singularity is to examine the perturbations of curvature scalars. One such scalar is the Kretschmann scalar, which for unperturbed Schwarzschild is \begin{equation} K_{Sch} = R^{\alpha \beta \mu \nu} R_{\alpha \beta \mu \nu} = { {48 M^2} \over T^6} - { {32 M} \over T^5} + {16 \over T^4} \qquad . \label{KScalar} \end{equation} The Kretschmann scalar is perfectly well behaved near the horizon $T = 2M$, but diverges strongly as $T \rightarrow 0$. Evaluating $K$ to first order in $\epsilon$ for the metric of Eq. (\ref{MetricCosmo}) yields \begin{equation} K = K_{Sch} + \epsilon \delta K \qquad . \label{KExpand} \end{equation} The first order correction to the Kretschmann scalar can be written in terms of the perturbation functions $\eta$ and $\sigma$ as \begin{eqnarray} \delta K& =& {8 \over T^3} \left[ -12 \eta {M^2 \over T^3} + 6 \eta {M \over T^2} - 2 \eta {1 \over T} + 3 \eta^\prime {M^2 \over T^2} - \eta^\prime {M \over T} \right . \nonumber \\ & & \left . - 5 \sigma^\prime {M^2 \over T^2} + \sigma^\prime {M \over T} + 2 \sigma^{\prime\prime} {M^2 \over T} - \sigma^{\prime\prime} M \right] \label{DeltaK} \end{eqnarray} where primes denote differentiation with respect to $T$. 
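A useful check of Eq. (\ref{DeltaK}) is provided by a pure mass perturbation: shifting $M \rightarrow M + \epsilon \, \delta M$ in Eq. (\ref{InsideMetric}) corresponds, in the form of Eq. (\ref{MetricCosmo}), to \begin{equation} \sigma = - \eta = { {2 \, \delta M} \over {2M - T} } \qquad , \end{equation} and substituting these functions into Eq. (\ref{DeltaK}) yields $\delta K = 96 M \, \delta M / T^6 - 32 \, \delta M / T^5$, which is precisely the first order variation of Eq. (\ref{KScalar}) under the same shift of mass.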
If the sign of $\delta K$ is positive, the divergence as one approaches the curvature singularity will be strengthened. If $\delta K$ is negative, then the divergence will be weakened. The perturbation to $K_{Sch}$ in the presence of a massless scalar field is \begin{eqnarray} \delta K & = & {{128 \pi} \over {\lambda T^2}} \left\{ {M^5 \over T^7} \left[ 12288 + 11520 \xi \right] - {M^4 \over T^6} \left[ 7712 - 24960 \xi \right] - {M^3 \over T^5} \left[ 192 + 2880 \xi \right] \right. \nonumber \\ & & + {M^2 \over T^4} \left[ 816 - 7200 \xi - 288 \ln \left( 2M/T \right) \right] + {M \over T^3} \left[ 168 + 480 \xi + 96 \ln \left( 2M/T \right) \right] \nonumber \\ & & - \left. {30 \over T^2} - {6 \over MT} - {1 \over M^2} \right\} \qquad . \label{KScalarMsLs} \end{eqnarray} The sign of $\delta K$ depends on the value of the scalar curvature coupling, $\xi$. Figure 3 shows a plot of the curvature coupling $\xi$ vs. $T$ over the interior. The solid line represents values of $\xi$ and $T$ for which $\delta K = 0$. For points below the solid line, the perturbation to the Kretschmann scalar is negative, and for values above the line the perturbation is positive. For all nonnegative values of the curvature coupling, the contribution is positive, and hence curvature grows faster than in the unperturbed metric. The massless spinor field perturbs $K_{Sch}$ by \begin{eqnarray} \delta K & = & {{224 \pi} \over {\lambda T^2}} \left\{ {57216 \over 7} {M^5 \over T^7} - {29024 \over 7} {M^4 \over T^6} - {7584 \over 7} {M^3 \over T^5} \right. \nonumber \\ & & + {M^2 \over T^4} \left[ {48 \over 7} - 288 \ln \left( 2M/T \right) \right] + {M \over T^3} \left[ {696 \over 7} + 96 \ln \left( 2M/T \right) \right] \nonumber \\ & & - \left. {30 \over T^2} - {6 \over MT} - {1 \over M^2} \right\} \qquad . \label{KSpinorMsLs} \end{eqnarray} The perturbation of Eq. (\ref{KSpinorMsLs}) changes sign in the interior, yielding a negative contribution to the Kretschmann scalar for $T > 1.435$, and a positive contribution to the Kretschmann scalar for $T < 1.435$. The massless vector field perturbation to the Kretschmann scalar is \begin{eqnarray} \delta K & = & {{256 \pi} \over {\lambda T^2}} \left\{ 87168 {M^5 \over T^7} - 15392 {M^4 \over T^6} + 31008 {M^3 \over T^5} \right. \nonumber \\ & & + {M^2 \over T^4} \left[ 15024 - 288 \ln \left( 2M/T \right) \right] + {M \over T^3} \left[ 3048 + 96 \ln \left( 2M/T \right) \right] \nonumber \\ & & - \left. {30 \over T^2} - {6 \over MT} - {1 \over M^2} \right\} \qquad . \label{KVectorMsLs} \end{eqnarray} which is positive for all values of $T$ in the interior; the massless vector field thus seems to strengthen the growth of curvature as the singularity is approached. Similar considerations can be given to massive fields. In the case of the massive scalar field \begin{eqnarray} \delta K & = & {1 \over {\pi m^2 T^5}} \left\{ -{1 \over M} \left[ {113 \over 15120} - {\xi \over 30} \right] + {1 \over T} \left[ {113 \over 5040} - {\xi \over 10} \right] + {M^4 \over T^5} \left[ {20 \over 7} - {{64 \xi} \over 5} \right] \right. \nonumber \\ & & - \left. {M^5 \over T^6} \left[ {1076 \over 189} - {{352 \xi} \over 15} \right] - {M^6 \over T^7} \left[ {44 \over 105} - {{32 \xi} \over 5} \right] \right\} \qquad . \label{KScalarMsve} \end{eqnarray} As in the massless scalar field case, the exact sign of the perturbation depends on the scalar curvature coupling. Figure 4 shows a plot of the curvature coupling $\xi$ vs. $T$ over the interior. 
Values of $\xi$ and $T$ below the solid line yield $\delta K < 0$. For points above the solid line, $\delta K > 0$. The minimally coupled massive scalar field always weakens the growth of curvature; for the conformally coupled field, the rate of curvature growth is initially less than in Schwarzschild, but near the singularity the perturbation causes the curvature to grow more rapidly than in the unperturbed metric. For the massive spinor field \begin{eqnarray} \delta K & = & {1 \over {\pi m^2 T^5}} \left\{ -{1 \over M} {13 \over 3780} + {1 \over T} {13 \over 1260} + {M^4 \over T^5} {48 \over 35} - {M^5 \over T^6} {656 \over 945} - {M^6 \over T^7} {496 \over 105} \right\} \qquad \label{KSpinorMsve} \end{eqnarray} which is negative for all interior values of the coordinate $T$; hence the massive spinor field softens the approach to the singularity, decreasing the rate of increase of the curvature. In contrast, the massive vector field has \begin{eqnarray} \delta K & = & {1 \over {\pi m^2 T^5}} \left\{ {1 \over M} {11 \over 1008} - {1 \over T} {11 \over 336} - {M^4 \over T^5} {148 \over 35} + {M^5 \over T^6} {2012 \over 315} + {M^6 \over T^7} {36 \over 7} \right\} \qquad \label{KVectorMsve} \end{eqnarray} which is positive for all values of $T$ in the interior; hence the massive vector field strengthens the growth of curvature as the singularity is approached. \section{DISCUSSION AND SUMMARY} In this paper we have calculated the linearized perturbations of the Schwarzschild black hole interior due to a collection of quantized matter fields. The stress-energy tensor of the matter fields has been described using analytic approximations. For massless fields, we have used the approximations of Page, Brown, and Ottewill \cite{PBO} for the spinor field, the approximation of Jensen and Ottewill \cite{JO} for the vector field, and that of Anderson, Hiscock, and Samuel \cite{AHS} for the scalar field. Massive fields have been treated using the DeWitt-Schwinger approximation, as developed by Frolov and Zel'nikov \cite{FZ84} and Anderson, Hiscock, and Samuel \cite{AHS}. These calculations provide virtually all of the useful information about semiclassical effects in the interior of a black hole that can be obtained using the various analytical approximations. One could attempt to construct fully self-consistent solutions to the semiclassical equations using the DeWitt-Schwinger approximation for massive fields or the approximations of Frolov and Zel'nikov \cite{FZ1} or Anderson, Hiscock, and Samuel \cite{AHS} for massless fields. However, serious problems arise in such calculations. For massless fields the analytical approximations diverge logarithmically on the event horizon in any static non-Ricci-flat spacetime. Numerical computations of the stress-energy tensor in Reissner-Nordstr\"{o}m spacetimes \cite{AHS,AHL} indicate that these divergences are not real. They are simply an indication that it is only for Schwarzschild spacetime that the analytical approximations are valid near the event horizon. For massive fields the DeWitt-Schwinger approximation gives no divergent behavior on the event horizon of any black hole. However, this approximation is valid only in the limit that the Compton wavelength of the massive field is much smaller than the radius of curvature of the spacetime. Thus the best that can be done when using the DeWitt-Schwinger approximation is to solve the semiclassical equations perturbatively, in which case the first order term is definitely the most important.
Therefore, it will be necessary to numerically compute the stress-energy tensor to study semiclassical interior effects beyond the level of linear perturbation theory. We have addressed the question of whether anisotropy is dissipated in the interior by treating the black hole interior as an anisotropic, homogeneous cosmology and examining whether the perturbed metric has greater or lesser anisotropy than the background Schwarzschild metric. We find that minimally and conformally coupled scalar fields, and the spinor field, decrease the anisotropy as one approaches the singularity, while vector fields increase the anisotropy. These results are described from the point of view of the black hole interior, which as a cosmology is a universe approaching a final singularity. If one instead interpreted our results in terms of the white hole portion of the Schwarzschild Penrose diagram, then scalar and spinor fields would enhance anisotropy as one moves away from the singularity, while vector fields would reduce it. However, as previously mentioned, in this case the boundary conditions for the fields are ``final'' rather than ``initial'' conditions. We have also examined whether there is any evidence for the semiclassical perturbation modifying the approach to the singularity. While it is impossible, within the context of perturbation theory, to determine whether quantum effects might substantially change the character of the singularity (perhaps even eliminating it), one can ask whether an observer approaching the final black hole singularity will measure larger or smaller curvature in the perturbed metric than in the classical Schwarzschild case. We find that massless fields of all spins, and massive vector fields, generally strengthen the singularity (curvature grows faster than in Schwarzschild), while the massive scalar and spinor fields weaken the growth of curvature. \acknowledgements This work was supported in part by NSF Grants Nos. PHY92-07903 and PHY95-11794 (W.\ A.\ H.), and PHY95-12686 (P.\ R.\ A.).
\section{Introduction} \label{sec1} The DUNE experiment aims at addressing key questions in neutrino physics and astroparticle physics~\cite{duneCDRv2}. It includes precision measurements of the parameters that govern neutrino oscillations, with the goal of measuring the CP violating phase and the neutrino mass ordering, nucleon decay searches, and the detection and measurement of the electron neutrino flux from a core-collapse supernova within our galaxy. DUNE will consist of a near detector placed at Fermilab, close to the production point of the muon neutrino beam of the Long-Baseline Neutrino Facility (LBNF), and four 10\,kt fiducial mass LAr TPCs as the far detector in the Sanford Underground Research Facility (SURF), at a depth of 4300\,m.w.e. and 1300\,km from Fermilab~\cite{duneCDRv4}. In order to gain experience in building and operating such large-scale LAr detectors, an R\&D programme is currently underway at the CERN Neutrino Platform~\cite{ProtoDUNEs}. This programme will operate two prototypes with the specific aim of testing the design, assembly, and installation procedures, the detector operations, as well as data acquisition, storage, processing, and analysis. The two prototypes will employ LAr TPCs as detection technology. One prototype, called ProtoDUNE Single-Phase, will use only LAr, while the other will use argon in both its gaseous and liquid states, hence the name ProtoDUNE Dual-Phase (DP). Both detectors will have similar sizes. In particular, ProtoDUNE-DP~\cite{wa105}, also known as WA105 (NP02), will have an active volume of 6$\times$6$\times$6 m$^{3}$, corresponding to a mass of 300\,t. A schematic drawing of ProtoDUNE-DP is shown in Figure~\ref{fig:ProtoDUNEDP}. In ProtoDUNE-DP the charge is extracted, amplified, and detected in gaseous argon above the liquid surface, allowing a finer readout pitch, a lower energy threshold, and better pattern reconstruction of the events. \begin{figure}[ht] \includegraphics[width=0.6\textwidth]{ProtoDUNEDP2.pdf} \centering \caption{Schematic drawing of ProtoDUNE-DP where the 36 PMTs are installed at the bottom of the active volume.} \label{fig:ProtoDUNEDP} \end{figure} When a charged particle traverses the LAr, the medium is ionized and, at the same time, photons are emitted. LAr scintillation light is in the far vacuum ultraviolet, with a wavelength centered at 127\,nm and a width of 8\,nm~\cite{heindl}. The scintillation light signal is used as a trigger for non-beam events, to determine precisely the event time, and for cosmic background rejection; there is also the possibility to perform calorimetric measurements and particle identification. The prompt scintillation light (usually referred to as the S1 signal) in LAr has two components: the fast one (S1 fast) of $\sim$6\,ns, and the slow one (S1 slow) of $\sim$1.6\,$\mu$s. In addition, electroluminescence (secondary scintillation light, called S2) is produced in the gas phase of the detector when electrons, extracted from the liquid, are accelerated in the electric field between the liquid phase and the anode. The duration of S2 reflects the maximum drift time of the original ionization from the liquid phase up to the gas phase; thus, for a $\sim$1\,kV/cm electric field, the time scale of S2 is of the order of hundreds of microseconds. The photon detection system of ProtoDUNE-DP~\cite{protoDUNElight} is formed by 36 8-inch cryogenic photomultipliers (PMTs) placed at the bottom of the active volume, submerged in LAr, behind the cathode grid. As a wavelength shifter, tetraphenyl butadiene (TPB) is coated directly on the PMTs.
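For orientation, the S1 time structure quoted above can be summarized with a simple double-exponential model. The sketch below is illustrative only: the fast/slow amplitude ratio is an assumed value, since in practice it depends on the ionizing particle.
\begin{verbatim}
import math

TAU_FAST = 6e-9    # s, fast S1 component (~6 ns, quoted above)
TAU_SLOW = 1.6e-6  # s, slow S1 component (~1.6 us, quoted above)

def s1_fraction(gate, frac_fast=0.3):
    """Fraction of S1 light inside an integration gate of length
    'gate' (in s), for an assumed fast-component fraction."""
    return (frac_fast * (1.0 - math.exp(-gate / TAU_FAST))
            + (1.0 - frac_fast) * (1.0 - math.exp(-gate / TAU_SLOW)))

# E.g. a 200 ns gate (cf. the QDC window used in the tests below)
# collects all of the fast light but only part of the slow light.
print(s1_fraction(200e-9))
\end{verbatim}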
The photon detection system must have precision timing capabilities of a few ns, a wide dynamic range to record from a few photo-electrons (p.e.) to thousands of p.e., a linear response even under high-frequency or high-intensity light conditions, and must be able to operate at cryogenic temperature (94\,K). This paper describes the PMT and base circuit to be used in ProtoDUNE-DP in section~\ref{sec2}. The dedicated facility used to carry out the PMT studies is presented in section~\ref{sec3}. Section~\ref{sec4} introduces the approach and procedure used in the measurements. The PMT validation measurements are detailed in section~\ref{sec5}, and the characterization results of the 36 PMTs to be used in ProtoDUNE-DP are described in section~\ref{sec6}. Finally, the results of the PMT performance at cryogenic temperatures that can be valuable for other experiments are summarized in the conclusions. \section{PMT unit} \label{sec2} The photon detection system of the ProtoDUNE-DP experiment is based on Hamamatsu R5912-20Mod cryogenic photomultipliers~\cite{ham}. This 8-inch diameter PMT was selected as the light sensor since a large sensor coverage of the 36\,m$^{2}$ cathode area must be achieved, and also due to its proven reliability in other cryogenic detectors. This PMT was successfully operated in the WA105 3$\times$1$\times$1\,m$^3$ detector~\cite{311}. Also, similar PMTs were used in other LAr experiments such as MicroBooNE~\cite{microboone1}, MiniCLEAN~\cite{clean2}, ArDM~\cite{ArDM2} and ICARUS T600~\cite{icarus}. The selected PMT has a 14-stage dynode chain which provides a nominal gain of 10$^9$ at room temperature (RT), required to compensate for the gain loss at cryogenic temperatures (CT). The dynode structure is Box \& Line, a combination of the so-called box type, which has a large collection area at the first dynode, and the linear-focused configuration, whose dynodes are designed to ensure progressive focusing of the electron paths through the multiplier. The higher number of dynodes also has the advantage of requiring a lower operating voltage for a given gain, which reduces the potential risk of sparks and the heat dissipation in the PMT bases. As the PMTs are designed to operate at CT, a thin platinum layer was added between the bi-alkali photocathode and the borosilicate glass envelope to preserve the conductivity of the photocathode at these temperatures. The cathode sensitivity provides a spectral response from 300 to 650\,nm. It is worth noting that the PMT tests described in this paper were done prior to the TPB coating. An individual PMT support structure was designed, manufactured and assembled at CIEMAT. This structure is mainly composed of 304L stainless steel, with some small Teflon (PTFE) 6.6 pieces, assembled with A4 stainless steel screws; it minimizes the mass while ensuring the PMT is held to the membrane. The design takes into account the shrinking of the different materials during the cooling process to avoid breaking the PMT glass. The PMTs are placed at the bottom of the ProtoDUNE-DP membrane, fixed to a stainless steel plate which is glued to the bottom of the cryostat. A picture of one PMT assembled on its support is shown in Figure~\ref{fig:PMT}. \begin{figure}[ht!]
\includegraphics[width=0.25\textwidth]{PMT.jpg} \centering \caption{Picture of the R5912-20Mod PMT with the mechanical support, the PMT base, and its cable.} \label{fig:PMT} \end{figure} The PMT base was designed with passive components due to the degraded performance of semiconductors under cryogenic conditions. The components were carefully selected and tested in order to minimize their variations with temperature. The circuit resistors were chosen following the voltage ratio recommended by the manufacturer to increase the linearity range. The resistors used on the base are SMD 0805 thin film from the Vishay TNPW e3 series\footnote{www.vishay.com} and the capacitors are MLCC 1812 C0G dielectric from the TDK C series\footnote{www.tdk.com}, except the high-voltage (HV) filtering capacitor (C3 in Figure~\ref{fig:basediagram}), which is a polypropylene capacitor from the Kemet R76 series. The total resistance of the circuit was set to 13.4\,M$\Omega$ with an average current of $\sim$100\,$\mu$A (depending on the required voltage for each PMT), and the power dissipated per PMT base is $\sim$0.14\,W. In order to choose the optimal PMT base design for ProtoDUNE-DP, two possible voltage configurations (positive and negative bias) were considered, see Figure~\ref{fig:bases}. In the so-called positive base (PB)~\cite{microboone1}, see Figure~\ref{fig:PB}, a positive HV is applied at the anode and the photocathode is grounded, which reduces the noise; whereas in the negative base (NB)~\cite{clean1, icarus}, see Figure~\ref{fig:NB}, a negative HV is applied at the cathode and the anode is grounded. In the negative bias configuration, the photocathode is connected to HV, and special care must be taken to prevent spurious pulses due to HV leakage through the glass tube envelope to nearby grounded structures. The NB configuration requires two cables, one for HV and the other for the signal readout. Nevertheless, in this configuration the PMT signal is easier to read, as it is referenced to ground. On the other hand, one advantage of the PB is that only one coaxial cable is required to carry the positive HV and to receive the signal from the PMT, but a decoupling circuit is needed to split the HV and the PMT signal. A dedicated splitter circuit was designed at CIEMAT to perform this function outside the cryostat. The splitter affects the actual voltage delivered to the PB, which is expected to be $\sim$7\% lower than the value read on the power supply. After some validation measurements, see section~\ref{sec5.5}, the PB design was selected for ProtoDUNE-DP, as the total number of cables and feedthroughs in the detector is reduced and its behavior in terms of linearity and dark current is slightly better than that of the NB. All results presented in this study were obtained using the PB design, with the exception of the results discussed in section~\ref{sec5.5}. \begin{figure}[ht!] \includegraphics[width=0.75\textwidth]{HVdividers.pdf} \centering \caption{Schematic drawing of the two HV divider options for the base.} \label{fig:bases} \end{figure} \begin{figure}[ht!] \subfigure[]{\includegraphics[width=0.75\textwidth]{PBdiagram.pdf}\label{fig:PB}} \subfigure[]{\includegraphics[width=0.75\textwidth]{NBdiagram.pdf}\label{fig:NB}} \centering \caption{Diagrams of the two bases considered: (a) positive base and (b) negative base.} \label{fig:basediagram} \end{figure} Once the PB was selected, all the bases were assembled, cleaned and tested at CIEMAT in air, Ar gas and liquid nitrogen (LN$_2$).
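As an aside, the divider figures quoted above follow directly from Ohm's law; a minimal numerical check (the operating voltage used here is illustrative, as the exact value differs from PMT to PMT):
\begin{verbatim}
R_TOTAL = 13.4e6  # ohm, total divider resistance (from the text)
V_OP = 1400.0     # V, illustrative operating voltage (PMT dependent)

i_divider = V_OP / R_TOTAL   # polarization current through the chain
p_base = V_OP * i_divider    # power dissipated in the base

print("I = %.0f uA, P = %.2f W" % (i_divider * 1e6, p_base))
# -> about 100 uA and 0.15 W, consistent with the ~100 uA and
#    ~0.14 W quoted above.
\end{verbatim}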
Two tests were performed on the bases before they were soldered to the PMTs and tested at CT: a resistance value of 13.4$\pm$0.1\,M$\Omega$ was confirmed, and 2000\,V were applied to the bases in Ar gas to verify the absence of sparks. \section{Testing set-up} \label{sec3} A dedicated test bench was designed at CIEMAT for the PMT characterization. The measurements are performed on 10 PMTs at a time inside a 300\,L vessel, at RT or filled with LN$_2$ at 77\,K for the CT tests. A schematic drawing is shown in Figure~\ref{fig:setup}. The vessel set-up and the 10 installed PMTs are shown in Figure~\ref{fig:setup2}. \begin{figure}[ht] \includegraphics[width=0.8\textwidth]{PMTdiagram2.pdf} \centering \caption{Schematic drawing of the PMT testing experimental set-up.} \label{fig:setup} \end{figure} \begin{figure}[ht] \subfigure[]{\includegraphics[height=0.25\textheight]{setup4.pdf}\label{fig:setup2a}} \subfigure[]{\includegraphics[height=0.25\textheight]{setup3.pdf}\label{fig:setup2b}} \subfigure[]{\includegraphics[height=0.25\textheight]{PMTs_Vessel_10pms.pdf}\label{fig:setup2c}} \centering \caption{(a) 300\,L vessel used in the testing set-up. (b) 10 PMTs being installed in the vessel. (c) Schematic drawing showing the distribution of the 10 PMTs, temperature sensors (T1-T4), diffusers, and level sensor inside the vessel.} \label{fig:setup2} \end{figure} A 400\,L tank supplies LN$_2$ at $\sim$2\,atm pressure through a pipe directly to the 300\,L vessel, where the PMTs are located at ambient pressure. A diagram of this system is shown in Figure~\ref{fig:setup3}. The system is filled automatically, using electro-valves controlled by level and temperature probes through a PC. The 10 PMTs are distributed in two levels, five per level, fixed to the lid through an internal structure, see Figure~\ref{fig:setup2c}. The cables of the PMTs, the temperature and level sensors, and the optical fibers pass through several CF40 ports. Each PMT is fixed to the internal structure with two M6 screws, using the same support as will be used in ProtoDUNE-DP. The PMTs are connected to the vessel feed-through by means of the same 21\,m cable that will be used in the final installation. The opening and closing of the lid is carried out with a small crane. \begin{figure}[ht] \includegraphics[width=0.7\textwidth]{setup.pdf} \centering \caption{Detailed cryogenics set-up for testing 10 PMTs.} \label{fig:setup3} \end{figure} The main PMT features to measure are dark current, gain, and linearity. For each feature, a different electronics set-up is arranged using VME and NIM modules. To perform the dark current acquisition, a V895 discriminator and V560E scaler tandem from CAEN\footnote{http://www.caen.it/} is used. In order to calculate the gain and study the PMT response to light, the PMT output is measured with a V965A CAEN Charge-to-Digital Converter (QDC) in a 200\,ns window. The PMTs are biased using a CAEN N1470 power supply. The DAQ is remotely controlled with the aim of automating the data acquisition with LabVIEW software\footnote{http://www.ni.com/}. The light sources used are a PicoQuant GmbH\footnote{https://www.picoquant.com/} laser head, with a 405\,nm wavelength and a pulse width of less than 500\,ps FWHM, and an LED pulser, with a 460\,nm wavelength and a pulse width of $\sim$40\,ns. An example of the PMT response using the laser and the LED is shown in Figure~\ref{fig:sources}. The QDC integration window and the light source are synchronized using a two-output signal generator.
The light is driven inside the dewar using hard-clad silica multimode optical fibers from Thorlabs\footnote{https://www.thorlabs.com/}. The amount of light is tuned using a set of UV optical filters, and the light is diffused in the detector volume. \begin{figure}[ht] \subfigure[]{\includegraphics[width=0.45\textwidth]{Laser.pdf}} \subfigure[]{\includegraphics[width=0.45\textwidth]{LED.pdf}} \centering \caption{PMT response to a $\sim$10\,p.e. light input provided by the laser (a) and LED (b). The integration window is shown in purple.} \label{fig:sources} \end{figure} \section{PMT measurement methodology} \label{sec4} In this section, the approach and procedures used in the measurements are detailed, together with the expected results. The studies carried out to fully understand the PMT behavior at CT include the dark current, the gain, and the linearity. \subsection{Dark Current} \label{sec4.1} The dark current (DC) rate is the response of the PMT in the absence of light. It is known that the main contribution to the DC at RT is thermionic emission~\cite{ham2}. However, a non-thermal contribution increases the DC rate at CT~\cite{meyer2,meyer3}. The DC rate is estimated as the average rate of detected signals larger than 7\,mV, which ensures single photo-electron (SPE) triggering at an operating gain of 10$^7$ in a completely dark state. At RT, measurements are taken after at least 15 hours of complete darkness inside the vessel. The DC evolution with time right after immersing the PMTs in LN$_2$ and closing the vessel is shown in Figure~\ref{fig:dcvstime}. It is clearly observed that initially the DC rate drops by more than one order of magnitude, as the PMT was exposed to ambient light before the measurement. Then, the DC rate settles to a stable value, with a small increase corresponding to the gain increase (see Figure~\ref{fig:gvstime}). As the PMT needs time to stabilize, DC measurements at CT are taken after $\sim$3~days. The same behavior is observed for all the tested PMTs. \begin{figure}[ht] \includegraphics[width=0.49\textwidth]{fig7.pdf} \centering \caption{Evolution of the DC rate right after immersing a PMT in LN$_2$ and closing the vessel lid. DC measurements are taken every 10\,s and averaged every 13\,min in the plot (error bars are the rms).} \label{fig:dcvstime} \end{figure} The DC as a function of the HV is measured in 100\,V steps from 1000\,V to 1900\,V at RT and CT. Additionally, as the PMT manufacturer provided a reference value of the DC rate for every PMT at a gain of 10$^9$ at RT, a similar measurement of the DC at that gain is also performed. The conditions defined by Hamamatsu to measure the DC rate differ from the ones explained above, as the trigger is set at the SPE level for a 10$^9$ gain, which is a higher threshold than the one used at CIEMAT. \subsection{Gain and fatigue effect} \label{sec4.2} To characterize the gain of each PMT, a small number of photons is sent to the PMT to obtain the SPE spectrum. The SPE spectrum is fitted to a convolution of a Poisson distribution, which models the number of p.e. generated in the photocathode, and a binomial distribution considering two possible amplification paths: through the first dynode, or starting directly at the second dynode~\cite{bellamy}. An example of the fit is shown in Figure~\ref{fig:spe}. \begin{figure}[ht!] \includegraphics[width=0.49\textwidth]{fig8.pdf} \centering \caption{SPE spectrum (red) of a PMT at CT at 1500\,V, and fit results (black), where $\mu$ is the mean number of p.e.,
$Ped$ is the position of the pedestal, $G1$ is the amplification of a p.e. from the first dynode (expressed as the mean pulse area), $G2$ that from the second dynode, and $\alpha$ is the proportion of p.e. amplified by the second dynode.} \label{fig:spe} \end{figure} The gain vs HV dependence, known as the gain-voltage curve, is measured from 1100\,V to 1900\,V in 100\,V steps. Then, a fit is done following the power law $G= A V^B$, where $A$ and $B$ are constants that depend on the number, structure, and material of the dynodes~\cite{ham2}. The gain evolution with time right after immersing the PMTs in LN$_2$ is shown in Figure~\ref{fig:gvstime}. During the first $\sim$5 hours the gain increases as the PMT vacuum improves; then it drops by a factor of 4 over $\sim$10 hours while the PMT cools down. Finally, a small drift takes place and the gain reaches a stable value after $\sim$3~days. As a result, tests are always taken at least 3 days after immersion. A very similar behavior is reported in~\cite{microboone1}. \begin{figure}[ht!] \includegraphics[width=0.49\textwidth]{G_vs_time_afterimmersion_1.pdf} \centering \caption{Evolution of the gain right after immersing a PMT in LN$_2$ and closing the vessel lid. The gain is calculated every 30 minutes. The dotted line represents the nominal gain obtained 3 days after immersing the PMTs in LN$_2$.} \label{fig:gvstime} \end{figure} A variation of the gain is expected when the PMT output current increases, due to high gain, high light intensity, or a high light rate (or a combination of these). This variation of the gain is called fatigue~\cite{fatigue} and can distort the results of the measurements if the PMT cannot recover from this effect between measurements. At RT, no variation of the PMT gain is observed during the tests. The PMT fatigue recovery is much slower at CT, allowing a gain reduction to be observed. This gain reduction depends on the PMT output current generated during the previous test. In addition, once the cause of the gain reduction is removed, the PMT requires a long time to recover the initial gain, and this time also depends on the amount of current generated in the PMT. Both factors, gain reduction and recovery time, vary from one PMT to another. The PMT gain is monitored over time after the tests that produce a high PMT output current. The gain monitoring is performed by triggering the PMTs with a low light level (SPE) at a rate of 100\,Hz. First, the recovery time is studied at 1400\,V after the gain vs HV measurements, see Fig.~\ref{fig:fatiguea}. Although the average number of p.e. is lower than 1, at the maximum voltage (1900\,V) the output current is high enough to reduce the effective PMT gain by 30\% with respect to the initial measured gain at 1400\,V, and the gain recovery requires about a day at CT. Second, after the linearity measurements, where PMTs are exposed to light levels of $\sim$1000\,p.e. and gains up to 10$^8$, a similar behavior is observed, see Fig.~\ref{fig:fatigueb}. Finally, a more persistent effect on the gain is caused by high-rate signals of the order of MHz, as can be observed in Fig.~\ref{fig:fatiguec}. In this case, the gain decreases by a factor of 2. For some PMTs, a week after illuminating them with high-frequency signals, the gain is still 40\% lower than the nominal gain. However, other PMTs recover faster (in 3 days). In any case, the recovery time varies considerably from one PMT to another.
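Since the fatigue effect scales with the PMT output current, it helps to keep the orders of magnitude in mind. The sketch below evaluates the simple current estimate $I_{out} = e^- \cdot G \cdot npe \cdot f$ that is made precise in the next paragraph; the parameter values are illustrative, not the exact test conditions.
\begin{verbatim}
E_CHARGE = 1.602e-19  # C

def i_out(gain, npe, rate_hz):
    """Average anode current estimate: I = e * G * npe * f."""
    return E_CHARGE * gain * npe * rate_hz

# Illustrative regimes (assumed values):
print(i_out(gain=1e7, npe=1, rate_hz=100))   # SPE monitoring: ~0.16 nA
print(i_out(gain=1e7, npe=50, rate_hz=1e6))  # high-rate test: ~80 uA
\end{verbatim}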
The theoretical value of the maximum PMT output current during the previous test is calculated to verify that the gain recovery time depends on it. The maximum PMT output current is calculated as $I_{out} = e^- \cdot G \cdot npe \cdot f$, where $e^-$ is the electron charge, $G$ is the PMT gain, $npe$ is the average number of p.e. during the test, and $f$ is the light pulse rate in Hz. The results are a few $\mu$A for the gain vs HV and linearity tests, and hundreds of $\mu$A for the light rate test, so the maximum output current appears to be correlated with the observed recovery time. \begin{figure}[ht!] \subfigure[]{\includegraphics[width=0.49\textwidth]{fig10a_ejebien_leyenda.pdf}\label{fig:fatiguea}} \subfigure[]{\includegraphics[width=0.49\textwidth]{fig10b_leyenda.pdf}\label{fig:fatigueb}} \subfigure[]{\includegraphics[width=0.49\textwidth]{fig10c_leyenda.pdf}\label{fig:fatiguec}} \centering \caption{(a) Gain evolution at 1400\,V after raising the voltage to 1900\,V. (b) Gain recovery time after linearity measurements. (c) Gain recovery time after high-rate measurements. The dotted lines represent the expected gain without the fatigue effect.} \label{fig:fatigue} \end{figure} \subsection{Linearity} \label{sec4.3} The PMT response is studied as a function of the light intensity. The amount of scintillation light arriving at the PMTs in ProtoDUNE-DP will vary from a few p.e. to thousands of p.e., depending on the particle track energy and the distance to the PMTs. Thus, the response of the PMTs should be linear over a wide dynamic range. The PMT base was designed to fulfill this goal, and deviations from a linear response are primarily caused by anode saturation. The PMT linearity is studied at gains up to $10^8$ using a set of 11 filters with different attenuation factors connected to the light source through optical fibers. To avoid the fatigue effect explained in section~\ref{sec4.2}, measurements are done from low light levels (SPE) to high light levels (up to 1000 p.e.). The expected amount of light is estimated relative to the measured one in the linear region, taking three reference filters and considering their transmission factors. As known from previous tests, the PMT response is linear up to at least 100 p.e. for a gain of $\sim$10$^7$ and up to 40 p.e. for a gain of $\sim$10$^8$. For low-intensity illumination (SPE regime) the average number of p.e. is obtained from the fit of the SPE spectrum, but it is not used as a reference due to its large uncertainty. The 405\,nm laser (<1\,ns pulse) and the 460\,nm LED (40\,ns pulse) are used to check the effect of the pulse profile on the anode saturation. For the same amount of detected light, the shorter the light pulse, the higher the output peak current, which drives the PMT into saturation more easily due to space-charge effects in the last dynodes. \subsection{Light rate} \label{sec4.4} In ProtoDUNE-DP, a continuous background of light pulses is expected due to the secondary scintillation light produced in the gaseous phase of the argon by the drifted electrons. To study how this can affect the PMT performance, dedicated tests are carried out. In these tests, the light intensity is set to different levels, from a few p.e. to 150 p.e., using the set of filters. For each light level, the light pulse emission frequency is swept from low (100\,Hz) to high frequencies (10\,MHz). The tests are carried out with the laser and the LED to check the effect of the pulse profile.
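Schematically, the light-rate test amounts to a nested scan over light levels and pulse frequencies; the grid values in the sketch below are illustrative, not the exact filter and frequency settings used.
\begin{verbatim}
# Illustrative scan grid for the light-rate test.
light_levels_pe = [10, 50, 150]                  # few p.e. to 150 p.e.
frequencies_hz = [1e2, 1e3, 1e4, 1e5, 1e6, 1e7]  # 100 Hz to 10 MHz

scan_points = [(npe, f) for npe in light_levels_pe
               for f in frequencies_hz]
for npe, f in scan_points:
    pass  # acquire a charge spectrum at this (light level, frequency)
\end{verbatim}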
For each combination of light level and pulsed frequency, the charge spectrum of the PMT is obtained to calculate the average amount of light observed by the PMT. Increasing the rate of the light pulses produces a proportional increase of the average output current. As the base design is purely resistive, the PMT inter-dynode voltages depend on the PMT output current, making the PMT response nonlinear when the light rate increases beyond a certain limit. The current through the base resistors has two components: the polarization current, which is constant and provided by the power supply, and the PMT output current, which flows through the resistors in the opposite direction. As the total voltage applied to the base is kept constant by the power supply, a decrease of the voltage across the last dynodes (caused by the increase of the output current) raises the voltage across the first dynodes, increasing the PMT gain. If the voltage across the last dynodes keeps decreasing (as the output current increases), at some point this voltage is no longer sufficient to maintain the current flow to the anode, and the current decreases following the typical I-V curve of a vacuum diode. The only difference is that in the vacuum diode the cloud of electrons is generated by a filament, while in the PMT it is generated by the incident light and the previous dynodes. Beyond this point, the PMT output current no longer depends on the light input intensity, but only on the light rate, which sets the voltage on the last dynodes. The expected PMT response vs. light rate can thus be divided into three zones: first, the linear response, starting from $<$1\,Hz; second, the over-linearity region, where the PMT gain increases; and third, the saturation region, where the voltage between the last dynode and the anode is close to zero and the PMT output decreases with the light rate. The PMT output never reaches zero because the initial velocity of the electrons is not zero, so some of them can still be collected by the anode even if the potential difference between the last dynode and the anode is zero; to reach zero output this potential would have to be negative. At CT, the fatigue effect observed on the PMT gain (explained in section~\ref{sec4.2}) compensates the onset of the gain increase with the light rate, leading to a small reduction of the over-linearity region. \section{PMT validation results} \label{sec5} Although these PMTs are designed to operate at CT, the manufacturer does not provide information about their CT behavior. Therefore, to validate the PMT model selection, different tests were carried out on several PMTs at RT and CT, and the results are presented in this section. \subsection{Dark Current} \label{sec5.1} The DC rate as a function of the HV is measured. Figure~\ref{fig:dc_vs_hv_rt_ct} shows a typical plot of the DC at RT and CT. The plot is also shown as a function of the gain, in order to compare the RT and CT responses at equal gain. In general, the DC at CT is higher than at RT for the same gain. For this particular PMT, the DC increases from 0.6\,kHz at RT to 1.9\,kHz at CT for a gain of 1.5$\cdot10^7$, the behavior of this PMT being representative of the other PMTs. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=0.49\textwidth]{DC_vs_HV}} \subfigure[]{\includegraphics[width=0.49\textwidth]{DC_vs_G}} \caption{DC vs HV (a) and vs gain (b) at RT and CT for one PMT.
For each HV, the DC is measured every 10\,s during 5\,min; the average value is plotted and the error bars represent the rms.} \label{fig:dc_vs_hv_rt_ct} \end{figure} \subsection{Gain} \label{sec5.2} Gain-voltage curves at RT and CT (at 77\,K) are measured, and the result for one PMT can be seen in Figure~\ref{fig:gain_vs_hv_rt_ct}. The slope of the curves follows a simple power law. For the same HV, the gain at CT is lower than at RT. For a gain of 10$^7$ at RT, the gain decreases by 76\% under cryogenic conditions. An increase of 170\,V is required for this PMT to compensate for the gain loss at CT. The gain is also measured at a temperature closer to the one expected in ProtoDUNE-DP, which will be 94\,K, taking into account the pressure at the bottom of the LAr cryostat. To achieve a higher temperature, an over-pressure of 1\,bar is applied to the vessel, which is the maximum allowed by the set-up. While the temperature with LN$_2$ at atmospheric pressure is 77\,K, during these tests 83\,K is reached. The gain variations found (see Figure~\ref{fig:gain_vs_hv_rt_ct}) are compatible with expectations~\cite{clean1}. For the same PMT, a 10$^7$ gain at RT decreases by 66\% at 83\,K. This means that an increase of 130\,V over the RT value is required to obtain the same gain. In total, 4 PMTs were measured at 83\,K, and the gain decreases on average by 60$\pm$9\%. \begin{figure}[ht!] \centering \includegraphics[width=0.49\textwidth]{fig12_leyenda.pdf} \caption{Gain vs HV at RT and at two CT values (77\,K and 83\,K) for the same PMT. The dots represent the measurements, and the lines the fit explained in section~\ref{sec4.2}. The vertical error bars correspond to the average variation in the gain obtained from repeated measurements: 7\% at RT and 21\% at CT.} \label{fig:gain_vs_hv_rt_ct} \end{figure} \subsection{Linearity} \label{sec5.3} The response of several PMTs as a function of the light intensity is measured at RT and CT. The response at RT for gains of $\sim$10$^7$ and $\sim$10$^8$, illuminating a PMT with the laser, is shown in Figure~\ref{fig:lin1}. For gains $<$10$^6$ the response has been observed to be linear in the tested range (up to $\sim$1000 p.e.). For gains close to 10$^7$, the PMT remains linear up to 200 p.e., while for larger gains ($>$10$^8$) the PMT response deviates from linearity at only 75 p.e., and a saturation regime is reached. The same response is observed for all tested PMTs; as an example, the perfect agreement of three PMTs is shown in Figure~\ref{fig:lin1}. The uncertainty in the measured number of p.e. is 4\% and comes from the gain variation obtained by fitting spectra at different light levels. On the other hand, the error in the expected number of p.e. is 9\% and corresponds to the propagation of the uncertainties in the gain estimation and in the measurement of the transmission of the filters. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{fig12.pdf} \centering \caption{Measured vs expected number of p.e. illuminating three PMTs at different gains at RT with the laser.} \label{fig:lin1} \end{figure} The PMTs are also illuminated with the LED, and the results are compared to the laser ones in Fig.~\ref{fig:lin2a}. The linearity range increases when the light pulse is wider, showing a clear dependence of the PMT saturation on the pulse shape. At CT, the PMT response is only slightly worse than at RT when the PMT is illuminated by the laser. For instance, for a 10$^8$ gain and 500 p.e., the measured number of p.e. is 60\% and 70\% lower than expected at RT and CT, respectively.
However, as can be seen in Fig.~\ref{fig:lin2b}, the linear range when the PMT is illuminated by the LED at CT is shorter than at RT, and similar to that of the laser at CT. Therefore, despite the very good linearity at RT, the PMT response saturates at 300 p.e. for a gain of 10$^7$ at CT. \begin{figure}[ht] \subfigure[]{\includegraphics[width=0.49\textwidth]{13a.pdf}\label{fig:lin2a}} \subfigure[]{\includegraphics[width=0.49\textwidth]{fig14b_todos.pdf}\label{fig:lin2b}} \centering \caption{Comparison of the measured vs expected number of p.e. illuminating the PMT with the LED and the laser at RT (a), and comparison of RT and CT for the LED (b).} \label{fig:lin2} \end{figure} \subsection{Light rate} \label{sec5.4} The PMT response is studied for pulsed frequencies from 100\,Hz to 10\,MHz, using the laser and the LED as light sources and different light intensities (from 10 to 50\,p.e.). At RT, see Figure~\ref{fig:freqa}, the three regions explained in section~\ref{sec4.4} are observed: the PMT response is flat up to a given frequency ($>$10\,kHz), which depends on the charge; then the over-linearity effect is observed; and finally the PMT saturates ($>$500\,kHz). At CT, the PMT response is expected to be the same as at RT, see Figure~\ref{fig:freqa}, as the saturation curve depends only on the base design. The expected PMT gain reduction as the average output current increases (see section~\ref{sec4.2}) compensates and reduces the over-linearity region. The same result is observed for different PMTs, see Figure~\ref{fig:freqb}. The frequency sweep is also done using the LED, and the results, shown in Figure~\ref{fig:freq3}, suggest that the response is very similar, but the over-linearity starts at slightly higher frequencies when the LED is used. As the LED light pulses are wider, the PMT output peak current is smaller (for the same charge), moving the over-linearity effect to higher frequencies. For low-intensity signals, as expected for the S2 light in ProtoDUNE-DP, the PMT response is linear up to $\sim$1\,MHz, which is sufficient for our purposes. \begin{figure}[ht] \subfigure[]{\includegraphics[width=0.49\textwidth]{fig14a_paper.pdf}\label{fig:freqa}} \subfigure[]{\includegraphics[width=0.49\textwidth]{fig14b_paper.pdf}\label{fig:freqb}} \centering \caption{(a) PMT LED frequency response at RT and CT for different amounts of light. (b) Comparison of the frequency response of two PMTs for the laser at RT. The vertical error bars are 4\% and are given by the variation of the charge obtained when repeating the measurements.} \label{fig:freq} \end{figure} \begin{figure}[ht] \includegraphics[width=0.49\textwidth]{fig15_paper.pdf} \centering \caption{Charge vs frequency for the LED and the laser with similar charge at CT. The vertical error bars are 4\% and are given by the variation of the charge obtained when repeating the measurements.} \label{fig:freq3} \end{figure} \subsection{PMT base validation} \label{sec5.5} In order to validate the design of the PMT base, the PB and NB are compared at RT and CT in terms of gain, dark rate, and linearity. As expected, the PMT gain with both configurations is similar. In order to compare the DC rate vs voltage dependence, both bases are tested on the same PMT and under the same darkness conditions. The results show that the behavior of both bases is similar at voltages up to 1600\,V; at higher HV the DC rate increases more for the NB, reaching a rate around 50\% higher than the PB at 1900\,V.
The base with the negative power supply shows a higher dark rate than the positive one because the photocathode is at high voltage, and spurious pulses can appear due to current leakage through the glass or due to electro-luminescence in the glass. For the linearity response, the positive base shows slightly better results, see Figure~\ref{fig:PBNBa}. On the positive bias base, the power supply filter capacitor is closer to the anode, which increases the charge reservoir for the PMT output, slightly extending the linearity range. For the study of the response with the light rate, the tests are done following the procedure described in section~\ref{sec4.4}. The positive circuit also shows better results, as shown in Figure~\ref{fig:PBNBb}. The difference between the two bases is also due to the different position of the filtering capacitor on the base. To verify this, a PB without this capacitor was also tested, and its behavior was slightly worse than that of the NB, see Figure~\ref{fig:PBNBb}, making it clear that the presence of this capacitor close to the anode improves the linearity of the PMT. Increasing the capacitance of this capacitor did not improve the response any further at the tested light levels. These tests confirm the better performance of the PB. \begin{figure}[ht] \subfigure[]{\includegraphics[width=0.49\textwidth]{linearity_bases_paperrr.pdf}\label{fig:PBNBa}} \subfigure[]{\includegraphics[width=0.49\textwidth]{fig16b_paperr.pdf}\label{fig:PBNBb}} \centering \caption{(a) Measured vs expected number of p.e. for the NB and PB at RT with the laser. (b) Response vs light rate for the NB, the PB, and the PB without the filtering capacitor, at RT with the laser.} \label{fig:PBNB} \end{figure} \section{ProtoDUNE-DP PMT characterization} \label{sec6} Finally, all the PMTs to be installed in ProtoDUNE-DP are characterized. Measurements with 40 (36 + 4 spares) PMTs are taken to verify their correct functioning and to create a database to be used during the detector commissioning and operation. The DC and gain as a function of the HV are measured at RT and CT. Also, typical waveforms for all the PMTs at 10$^7$ gain are recorded. \subsection{Dark Current} \label{sec6.1} The DC rate is measured for the 40 PMTs at several HVs, as explained in section~\ref{sec4.1}. In particular, the DC rate at $10^9$ gain is measured to compare with the value given by the manufacturer. Figure~\ref{fig:dca} shows the correlation between the rates measured at CIEMAT (at the HV specified by the manufacturer) and the ones provided by Hamamatsu at RT. In general, similar results are obtained. However, two PMTs had to be replaced: one had no signal, and the other showed a DC rate of around 100\,kHz (almost 60 times the rate expected from Hamamatsu). It is worth noting that the defective PMTs are not shown in the plot; the replacements are shown instead. On average, as shown in Figure~\ref{fig:dcb}, the DC rate is 0.4\,$\pm$\,0.2\,kHz when the PMTs operate at $\sim$10$^7$ gain at RT, and it is always below 1.4\,kHz. However, at CT, the DC rate increases to 1.7\,$\pm$\,0.3\,kHz on average, with some PMTs reaching up to 2.5\,kHz. No correlation between the DC at RT and CT is observed. The PMTs with very unstable DC rates in time (i.e., large error bars in Fig.~\ref{fig:dca}) were inspected by Hamamatsu; although no defects were found, they are designated as spares. The PMTs with the lowest DC rates are selected, and the rest are kept as spares. \begin{figure}[ht!]
\centering \subfigure[]{\includegraphics[width=0.49\textwidth]{fig18a.pdf}\label{fig:dca}} \subfigure[]{\includegraphics[width=0.49\textwidth]{histo_dc_rt_ct_1e7_v2}\label{fig:dcb}} \caption{(a) DC measured at CIEMAT vs Hamamatsu for $10^9$ gain for the 40 PMTs. The DC is measured every 10\,s during 5\,min, and the error bars represent the rms of the average value. The dotted line represents the identity line. (b) DC histograms for 10$^7$ gain at RT and CT measured at CIEMAT for the 40 PMTs. At RT, the DC rate is on average 0.4\,$\pm$\,0.2\,kHz, and at CT, 1.7\,$\pm$\,0.3\,kHz.} \label{fig:dc_ciemat_hamamatsu} \end{figure} \subsection{Gain} \label{sec6.2} The gain results from the characterization of the 40 PMTs are presented here. Figure~\ref{fig:hv_ciemat_hamamatsu} shows the correlation between the HV determined at CIEMAT for a $10^9$ gain and the HV required according to the manufacturer specifications (both values at RT). The CIEMAT HV is extrapolated from the gain-voltage curve of each PMT. A good correlation is observed, with only a 2.3\% deviation, possibly due to differences in the set-up or in the gain determination method. In Figure~\ref{fig:hv_ciemat}, the HV required for a $10^7$ gain at RT and CT is presented. A higher HV, 170$\pm$72\,V on average, needs to be applied at CT to reach the same gain, which is equivalent to a 71$\pm$14\% gain drop at CT. \begin{figure}[ht!] \centering \includegraphics[width=0.49\textwidth]{hv_ciemat_hamamatsu_bajados7_paper.pdf} \caption{HV applied at CIEMAT compared to the HV provided by Hamamatsu for $10^9$ gain for the 40 PMTs. The dotted line represents the identity line.} \label{fig:hv_ciemat_hamamatsu} \end{figure} \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=0.49\textwidth]{hv_ciemat_rt_ct_1e7_paperr.pdf}\label{fig:hv_ciemat_hamamatsub}} \subfigure[]{\includegraphics[width=0.49\textwidth]{hv_ciemat_rt_ct_1e7_v2.pdf}\label{fig:hv_ciemat_hamamatsuc}} \caption{(a) HV at RT vs CT for $10^7$ gain measured at CIEMAT for the 40 PMTs. The dotted line represents the identity line, and the solid line the identity line shifted by 170\,V. (b) Histogram of the HV required to obtain a $10^7$ gain at RT and CT. The average HV is 1154$\pm$87\,V at RT and 1324$\pm$103\,V at CT.} \label{fig:hv_ciemat} \end{figure} \section*{Conclusions} \addcontentsline{toc}{section}{\protect\numberline{}Conclusions}% The ProtoDUNE-DP experiment aims to build and operate a LAr TPC detector at CERN to fully demonstrate the dual-phase technology at large scale for DUNE, the next-generation long-baseline neutrino experiment. The photon detection system will add precise timing capabilities and will be formed by 8-inch cryogenic photomultipliers from Hamamatsu positioned at the bottom of the detector. The validation measurements of the PMT model (R5912-20Mod) and of the base design at cryogenic temperature are described. In addition, the 40 PMTs to be used in ProtoDUNE-DP are characterized to ensure their performance according to the specifications, and valuable results that can be used by other experiments are obtained. It is observed that the dark current rate at CT is higher than at RT due to non-thermal contributions: at a gain of 10$^7$ the DC rate is on average 1.7$\pm$0.3\,kHz at CT, while at RT it is 0.4$\pm$0.2\,kHz. This effect has been observed previously in different PMT models, and is proportional to the photocathode area~\cite{meyer2}. From RT to CT the gain decreases: a gain of $10^7$ at RT drops by 71$\pm$14\% when the PMT operates at 77\,K, and by 60$\pm$9\% at 83\,K.
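These averages are internally consistent with the gain-voltage power law $G = AV^B$ of section~\ref{sec4.2}, as the following rough check shows; it treats the prefactor $A$ as temperature independent, which is only an approximation, and uses the mean voltages quoted in section~\ref{sec6.2}.
\begin{verbatim}
import math

v_rt, v_ct = 1154.0, 1324.0  # mean HV for 1e7 gain at RT and CT
gain_drop = 0.71             # average gain loss at 77 K

# If G = A*V^B with A fixed, recovering the lost factor
# 1/(1 - gain_drop) via the ~170 V increase implies:
B = math.log(1.0 / (1.0 - gain_drop)) / math.log(v_ct / v_rt)
print("implied B ~ %.1f" % B)  # ~9, plausible for a 14-stage PMT
\end{verbatim}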
At CT, a fatigue effect is observed as the PMT output current increases, due to high gain, high light intensity, or a high light rate. The gain recovery time depends on the PMT excitation (output current) and varies from PMT to PMT. The largest effect is observed, as expected, when the excitation comes from high-frequency signals of the order of MHz. The linearity of the PMT response with the incident light depends on the PMT gain and on the width of the incident light pulses. For fast signals (<1\,ns pulses) the PMT remains linear up to at least 1000 p.e. for a 10$^6$ gain, but loses linearity at 75 p.e. in the case of a 10$^8$ gain. However, when the same amount of light is distributed over a wider (40\,ns) pulse, the linearity region of the PMT increases. A slightly worse behavior is observed at CT for the laser, and an earlier saturation is observed for the LED in comparison to RT. The PMT response as a function of the light rate is flat up to at least 10\,kHz for signals below 100\,p.e. At CT, the PMT response is expected to be the same as at RT, as the saturation curve depends only on the base design, but the gain reduction compensates and reduces the over-linearity region. For these reasons, characterization tests at CT before installing the PMTs are required, and a dedicated photon calibration system is recommended to monitor the PMT gain during the data-taking period of the experiment. It is concluded that these PMTs are validated and will be used in ProtoDUNE-DP. \acknowledgments This project has received funding from the European Union Horizon~2020 Research and Innovation programme under Grant Agreement no.~654168 and from the Spanish Ministerio de Econom\'ia y Competitividad (SEIDI-MINECO) under Grants no.~FPA2016-77347-C2-1-P, FPA2016-77347-C2-2-P, MdM-2015-0509, and SEV-2016-0588. \bibliographystyle{JHEP}
\section{Introduction} The planar dimer model is a classical statistical mechanics model, involving the study of the set of \emph{dimer covers} (perfect matchings) of a planar, edge-weighted graph. In the 1960s, Kasteleyn \cites{Kast61,Kast63} and Temperley and Fisher \cite{TF61} showed how to compute the (weighted) number of dimer covers of planar graphs using the determinant of a signed adjacency matrix now known as the \emph{Kasteleyn matrix}. In mathematics the dimer model was popularized with the papers \cites{EKLP1,EKLP2} on the ``Aztec diamond'' and later with results on the local statistics \cite{K97}, conformal invariance \cite{K00}, and limit shapes \cite{CKP}, and with connections to algebraic geometry \cites{KOS, KO}, cluster varieties and integrability \cite{GK12}, and string theory \cite{HK}. While the dimer model can be considered from a purely combinatorial point of view, it also has a rich integrable structure, first described in \cite{GK12}. The integrable structure on dimers on graphs on the torus was found to generalize many well-known integrable systems, see for example \cite{FM} and \cite{AGR}. What is especially important is that the related integrable system is of cluster nature, which allows one to quantize it immediately, obtaining a quantum integrable system. From the point of view of classical mechanics, associated to the dimer model on a bipartite graph on a torus (or equivalently a periodic bipartite planar graph) is a Poisson variety with a Hamiltonian integrable system. Underlying this system is an algebraic curve $C = \{P(z,w)=0\}$ (called the \emph{spectral curve}) and a divisor on this curve--essentially a set of $g$ distinct points $\{(p_1,q_1),\dots,(p_g,q_g)\}$ on $C$. This is the \emph{spectral data} associated to the model. It was shown in \cite{KO} that the map from the weighted graph to the spectral data is bijective, from the space of ``face weights'' (see below) to the moduli space of genus-$g$ curves and effective degree-$g$ divisors on the open spectral curve $C^\circ$. Subsequently Fock \cite{Fock} constructed the inverse spectral map (from the spectral data to the face weights), describing it in terms of theta functions over the spectral curve. The special case of genus $0$ was described earlier in \cites{Kenyon.isoradial, KO}, and an explicit construction in the case of genus $1$ was given more recently in \cite{BCdT}. Positivity of Fock's inverse map was studied in \cite{BCdT1}. In the current paper, we show that the inverse map can be given an explicit \emph{rational} expression in terms of the divisor points $(p_i,q_i) \in C^\circ$ and the points of $C$ at toric infinity. An exact statement is given in Theorem \ref{Th2.3} below. While Fock's construction is very natural and interacts nicely with positivity, it involves theta functions. Our construction gives the inverse map as ratios of certain determinants in the spectral data, and it can be computed explicitly using computer algebra. We briefly describe our construction now. The spectral data is defined via a matrix $K(z,w)$ called the Kasteleyn matrix, whose rows are indexed by white vertices, columns by black vertices, and whose entries are Laurent polynomials in $z,w$. Let us consider the adjugate matrix of $K$: \[ Q=K^{-1}\det K.
\] The matrix $Q$ is important when studying the probabilistic aspects of the dimer model (on the lift of the graph on the torus to the plane): the edge occupation variables form a determinantal process whose kernel is given by the Fourier coefficients of $Q/P$, as discussed in \cite{KOS}. In the present work, we have a different use for $Q$: finding (a column of) the matrix $Q$ from the spectral data allows us to reconstruct the face weights and thereby invert the spectral transform. The points $(p_i,q_i)\in C$ are defined to be the points where a column of $Q(z,w)$, corresponding to a {fixed} white vertex ${\bf w}$, vanishes. We show that entries in the ${\bf w}$-column of $Q$, which are Laurent polynomials, can be reconstructed from the spectral data by solving a linear system of equations. Some of the linear equations are easy to describe: for {any} black vertex ${\rm b}$, we have $Q_{{\rm b} {\bf w}}(p_i,q_i)=0$ for $i=1,\dots,g$, which are $g$ linear equations in the coefficients of $Q_{{\rm b} {\bf w}}$. However, these equations are usually not sufficient to determine the coefficients of $Q_{{\rm b} {\bf w}}$. We find additional equations from the vanishing of $Q_{{\rm b} {\bf w}}$ at certain points at infinity of the spectral curve $C$, and show that these equations determine $Q_{{\rm b} {\bf w}}$ uniquely, up to a non-zero constant. We then give a procedure to reconstruct the weights from the ${\bf w}$-column of $Q$. The article is organized as follows. In Section \ref{sec:background} we review the dimer cluster integrable system and the spectral transform. In Section \ref{sec2}, we state Theorem \ref{Vbwthm}, which is our main result, and describe the reconstruction procedure. We work out two detailed examples in Section \ref{sec:example}. Sections \ref{smallpolysection}, \ref{extensionsection} and \ref{laurentsection} contain proofs of our results. In Appendix A we review results from toric geometry. In Appendix B we provide explicit combinatorial descriptions for some of our constructions. These are useful for computations. \section{Background}\label{sec:background} For further information about the material in this section see \cite{GK12}. \subsection{Dimer models} Let $\Gamma$ be a bipartite graph on the torus ${\mathbb T}\cong S^1\times S^1$ such that the connected components of the complement of $\Gamma$---the faces---are contractible. We denote by $B(\Gamma)$ and $W(\Gamma)$ the black and white vertices of $\Gamma$, by $V(\Gamma)$ the vertices, and by $E(\Gamma)$ the edges of $\Gamma$. When the graph is clear from context, we will usually abbreviate these to $B,W, V$ and $E$. A \textit{dimer model on the torus} is a pair $(\Gamma, [{wt}])$, where $\Gamma$ is a bipartite graph on the torus as above and $[wt] \in H^1(\Gamma,{\mathbb C}^\times).$ (Here and throughout the paper ${\mathbb C}^\times$ is the group of nonzero complex numbers under multiplication.) For a loop $L$ and a cohomology class $[wt]$, we denote by $[wt]([L])$ the pairing between the cohomology and the homology. We orient edges from their black vertex to their white vertex. The cohomology class $[wt]$ can be represented by a cocycle $wt$, which using this orientation can be identified with a ${\mathbb C}^\times$-valued function on the edges of $\Gamma$, called the \emph{edge weight}. The edge weight is well-defined modulo multiplication by coboundaries, which are functions on edges $e={\rm b} {\rm w}$ given by $f({\rm w})f({\rm b})^{-1}$ for functions $f:V(\Gamma)\to{\mathbb C}^\times$.
Note that the weight of a loop is not the product of its edge weights, but the ``alternating product'' of its edge weights: edges oriented against the orientation of the loop are multiplied with exponent $-1$. A \textit{dimer cover} or \textit{perfect matching} ${\mathrm{m}}$ of $\Gamma$ is a subset of $E(\Gamma)$ such that each vertex of $\Gamma$ is incident to exactly one edge in ${\mathrm{m}}$. Let $\mathscr M$ denote the set of dimer covers of $\Gamma$. If we fix a reference dimer cover ${\mathrm{m}}_0$, we get a function \begin{align*} \pi_{{\mathrm{m}}_0}: \mathscr M &\rightarrow H_1({\mathbb T},{\mathbb Z})\\ {\mathrm{m}} &\mapsto [{\mathrm{m}}-{\mathrm{m}}_0]. \end{align*} Here ${\mathrm{m}}-{\mathrm{m}}_0$ is the $1$-chain which assigns $1$ to (oriented) edges of ${\mathrm{m}}$ and $-1$ to (oriented) edges of ${\mathrm{m}}_0$, so ${\mathrm{m}}-{\mathrm{m}}_0$ is a union of oriented cycles and doubled edges, whose homology class is $[{\mathrm{m}}-{\mathrm{m}}_0]$. The \emph{Newton polygon} of $\Gamma$ is defined to be \[ N(\Gamma) :=\text{Convex-hull}(\pi_{{\mathrm{m}}_0} (\mathscr M)) \subset H_1({\mathbb T},{\mathbb R}). \] Changing the reference dimer cover ${\mathrm{m}}_0$ results in a translation of $N(\Gamma)$. We assume that $\Gamma$ is such that $N(\Gamma)$ has interior. This is a nondegeneracy condition on $\Gamma$. {(When $N$ has empty interior, the graph $\Gamma$ is equivalent under certain elementary transformations to a graph whose lift to ${\mathbb R}^2$ is disconnected, that is, has noncontractible faces; such a graph breaks into essentially one-dimensional components, and there is no integrable system.)} \subsection{Zig-zag paths and the Newton polygon} \begin{figure} \centering \begin{tikzpicture}[scale=0.8,baseline={([yshift=-.7ex]current bounding box.center)}] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \draw[->,densely dotted] (3.8,0) -- (3.8,4); \draw[->,densely dotted] (0,3.75) -- (4,3.75); \node (no) at (-0.25,3.75) {$\gamma_z$}; \node (no) at (3.75,-0.25) {$\gamma_w$}; \end{tikzpicture} \caption{The fundamental rectangle $R$, along with the cycles $\gamma_z,\gamma_w$.}\label{figgzgw} \end{figure} A \emph{zig-zag path} in $\Gamma$ is a closed path that turns maximally right at each black vertex and maximally left at each white vertex. Let $\widetilde \Gamma$ be the biperiodic graph on the plane given by the lift of $\Gamma$ to the universal cover of ${\mathbb T}$. The bipartite graph $\Gamma$ is said to be \textit{minimal} if the lift of any zig-zag path does not self-intersect, and lifts of any two zig-zag paths do not have ``parallel bigons'', where by \emph{parallel bigon} we mean two consecutive intersections where both paths are oriented in the same direction from one to the next. For a minimal bipartite graph $\Gamma$ on the torus, the Newton polygon has an alternative description in terms of the zig-zag paths of $\Gamma$. Namely, since $\Gamma$ is embedded in ${\mathbb T}$, each zig-zag path $\alpha$ has a non-zero homology class $[\alpha] \in H_1({\mathbb T},{\mathbb Z})$.
The polygon $N(\Gamma)$ is the unique convex integral polygon defined modulo translation in $H_1({\mathbb T},{\mathbb Z})$ whose integral primitive edge vectors in counterclockwise order around $N$ are given by the vectors $[\alpha]$ for all zig-zag paths $\alpha$. \begin{figure}[h] \begin{center} \begin{tikzpicture}[baseline={([yshift=-12ex]current bounding box.center)}] \begin{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate},red,thick] (1,0)--(0,1); \draw[postaction={decorate},blue,thick] (0,1)--(-1,0); \draw[postaction={decorate},black!60!green,thick] (-1,0)--(0,-1); \draw[postaction={decorate},orange!70!yellow,thick] (0,-1)--(1,0); \end{scope} \draw[fill=black] (0,0) circle (2pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \end{scope} \begin{scope}[scale=0.5,shift={(3,2)}] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \draw[->,red,thick] (3,0)--(b2); \draw[->,red,thick](b2)--(w2); \draw[->,red,thick] (w2)--(b1); \draw[->,red,thick] (b1)--(0,3); \draw[->,red,thick] (4,3)--(w1); \draw[->,red,thick] (w1)--(3,4) ; \end{scope} \begin{scope}[scale=0.5,shift={(-7,2)}] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \draw[->,blue,thick] (3,4)--(w1); \draw[->,blue,thick](w1)--(b1); \draw[->,blue,thick] (b1)--(w2); \draw[->,blue,thick] (w2)--(0,1); \draw[->,blue,thick] (4,1)--(b2); \draw[->,blue,thick] (b2)--(3,0) ; \end{scope} \begin{scope}[scale=0.5,shift={(3,-6)}] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \draw[->,orange!70!yellow,thick] (1,0)--(w2); \draw[->,orange!70!yellow,thick](w2)--(b2); \draw[->,orange!70!yellow,thick] (b2)--(w1); \draw[->,orange!70!yellow,thick] (w1)--(4,3); \draw[->,orange!70!yellow,thick] (0,3)--(b1); \draw[->,orange!70!yellow,thick] (b1)--(1,4) ; \end{scope} \begin{scope}[scale=0.5,shift={(-7,-6)}] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \draw[->,black!60!green,thick] (1,4)--(b1); \draw[->,black!60!green,thick](b1)--(w1); \draw[->,black!60!green,thick] (w1)--(b2); \draw[->,black!60!green,thick] (b2)--(4,1); \draw[->,black!60!green,thick] (0,1)--(w2); \draw[->,black!60!green,thick] (w2)--(1,0) ; \end{scope} 
\end{tikzpicture} \end{center} \caption{Zig-zag paths and Newton polygon for a fundamental domain of the square lattice.} \label{fig:npzz} \end{figure} \begin{example}\label{eg:zz} Consider the fundamental domain for the square lattice shown in Figure \ref{figgzgw}, and let $\gamma_z,\gamma_w$ be cycles generating $H_1({\mathbb T},{\mathbb Z})$ as shown there. We will write homology classes in $H_1({\mathbb T},{\mathbb Z})$ in the basis $(\gamma_z,\gamma_w)$. There are four zig-zag paths $\alpha,\beta,\gamma,\delta$ with homology classes $(-1,1),(-1,-1),(1,-1)$ and $(1,1)$ respectively (Figure \ref{fig:npzz}), and therefore the Newton polygon is \[ \text{Convex-hull}\{(1,0),(0,1),(-1,0),(0,-1)\}. \] \end{example} \subsection{The cluster variety assigned to a Newton polygon } For a convex integral polygon $N \subset H_1({\mathbb T},{\mathbb R})$ defined modulo translation, consider the family of minimal bipartite graphs $\Gamma$ with Newton polygon $N(\Gamma)=N$. Any two graphs $\Gamma_1, \Gamma_2$ in the family are related by certain elementary transformations. An elementary transformation $\Gamma_1 \to \Gamma_2$ gives rise to a birational map $H^1(\Gamma_1, {\mathbb C}^\times) \dashrightarrow H^1(\Gamma_2, {\mathbb C}^\times)$. Gluing the tori $H^1(\Gamma, {\mathbb C}^\times)$ by these maps, we obtain a space $\mathcal X_N$, called the \textit{dimer cluster Poisson variety}. It carries a canonical Poisson structure. The Poisson center is generated by {the loop weights of the zig-zag paths}. The space $\mathcal X_N$ is the phase space of the cluster integrable system. See details in \cite{GK12}. \subsection{Some notation} Let $\Sigma$ denote the normal fan of $N$ {(see Section \ref{toricv} and Figures \ref{fig:hex} and \ref{fig:sqoct})} so that the set of rays $\Sigma(1)=\{\rho\}$ of $\Sigma$ is in bijection with the set of edges of $N$. We denote the edge of $N$ whose inward normal is directed along the ray $\rho$ by $E_\rho$, and the primitive vector along $\rho$ by $u_\rho$. Let $$ {\rm M}:=H_1({\mathbb T},{\mathbb Z}){\cong {\mathbb Z}^2}, \quad {\rm M}^\vee:=\rm{Hom}_{\mathbb Z}(\rm M,{\mathbb Z}){\cong {\mathbb Z}^2}. $$ Let us consider the algebraic torus with lattice of characters ${\rm M}$: $$ {\mathrm T}:= \mathrm{Hom}_{\mathbb Z}({\rm M},{\mathbb C}^\times) \cong ({\mathbb C}^\times)^2 $$ Let ${\rm M}_{\mathbb R}$ (resp. ${\rm M}^\vee_{\mathbb R}$) denote ${\rm M} \otimes_{\mathbb Z} {\mathbb R}$ (resp. ${\rm M}^\vee \otimes_{\mathbb Z} {\mathbb R}$), so that $N \subset {\rm M}_{\mathbb R}$ and $\Sigma \subset {\rm M}^\vee_{\mathbb R}$. An elementary transformation $\Gamma_1 \rightarrow \Gamma_2$ induces a canonical bijection between zig-zag paths in $\Gamma_1$ and zig-zag paths in $\Gamma_2$. Therefore, the set of zig-zag paths is canonically associated with $N$. We denote the set of zig-zag paths by $Z$, and for an edge $E_\rho$ of $N$, we denote by $Z_\rho$ the set of zig-zag paths $\alpha$ such that the primitive vector $[\alpha]$ is contained in $E_\rho$. \subsection{The Kasteleyn matrix} Let $R$ be a fundamental rectangle for ${\mathbb T}$, so that ${\mathbb T}$ is obtained by gluing together opposite sides of $R$. Let $\gamma_z,\gamma_w$ be the oriented sides of $R$ generating $H_1({\mathbb T},{\mathbb Z})$, as shown in Figure \ref{figgzgw}. Let $z$ (resp. $w$) denote the character $\chi^{\gamma_w}$ (resp. $\chi^{\gamma_z}$), so the coordinate ring of ${\mathrm T}$ is ${\mathbb C}[z^{\pm 1},w^{\pm 1}]$. 
Let $(*,*)_{\mathbb T}$ be the intersection pairing on $H_1({\mathbb T},{\mathbb Z})$. For $z,w\in{\mathbb C}^\times$ we multiply edge weights on edges crossing $\gamma_w$ by $z^{\pm1}$ and those crossing $\gamma_z$ by $w^{\pm1}$, with the sign determined by the orientation. Precisely, we multiply by \begin{equation} \label{edgeph} \phi(e):=z^{(e,\gamma_w)_{\mathbb T}} w^{(e,-\gamma_z)_{\mathbb T}}, \end{equation} where $(e,*)_{\mathbb T}:=(l_e,*)_{\mathbb T}$ is the intersection index with the oriented loop $l_e$ obtained by concatenating $e = {\rm b} {\rm w}$ with an oriented path contained in $R$ from ${\rm w}$ to ${\rm b}$. A \emph{Kasteleyn sign} is a cohomology class $[\kappa] \in H^1(\Gamma,{\mathbb C}^\times)$ such that for any loop $L$ in $\Gamma$, $[\kappa]([L])$ is $-1$ (respectively $1$) if the number of edges in $L$ is $0$ mod $4$ (respectively $2$ mod $4$). Given edge weights $wt$ and $\kappa$ representing $[wt]$ and $[\kappa]$ respectively, one defines the {\it Kasteleyn matrix} $K=K(z,w)$, whose columns and rows are parameterized by ${\rm b}\in B $ and ${\rm w} \in W$ respectively: \begin{align}\label{Kastdet} K(z,w)_{{\rm w},{\rm b}}&=\sum_{e \in E \text{ incident to } {\rm b},{\rm w}} wt(e) \kappa(e)\phi(e). \end{align} It describes a map of free ${\mathbb C}[z^{\pm 1}, w^{\pm 1}]$-modules, called the {\it Kasteleyn operator}: \begin{align}\label{Kastdet1} K(z,w):~&{\mathbb C}[z^{\pm 1}, w^{\pm 1}]^{B} \rightarrow {\mathbb C}[z^{\pm 1}, w^{\pm 1}]^{W},\\ &\delta_{{\rm b}} \longmapsto \sum_{{\rm w} \in W} K(z,w)_{{\rm w}, {\rm b}} \delta_{{\rm w}}. \end{align} \begin{theorem}[Kasteleyn 1963, \cite{Kast63}]\label{Kastthm} Fix a dimer cover ${\mathrm{m}}_0$, and let $\phi({\mathrm{m}}_0)=\prod_{e \in {\mathrm{m}}_0} \phi(e)$.
Then \[ \frac{1}{{ wt}({\mathrm{m}}_0) \kappa({\mathrm{m}}_0) \phi({\mathrm{m}}_0) } \det K(z,w)= \sum_{{\mathrm{m}} \in {\mathscr M}} \mathrm{sign}([{\mathrm{m}}-{\mathrm{m}}_0]) [{wt}]([{\mathrm{m}}-{\mathrm{m}}_0]) \chi^{[{\mathrm{m}}-{\mathrm{m}}_0]}, \] where $\mathrm{sign}([{\mathrm{m}}-{\mathrm{m}}_0]) \in \{\pm 1\}$ is a sign that depends only on the homology class $[{\mathrm{m}}-{\mathrm{m}}_0]$ and $[\kappa]$. \end{theorem} The \textit{characteristic polynomial} is the Laurent polynomial \[ P(z,w):=\frac{1}{{ wt}({\mathrm{m}}_0) \kappa({\mathrm{m}}_0) \phi({\mathrm{m}}_0) } \det K(z,w).\] Its vanishing locus $C^\circ:=\{P(z,w)=0\} \subset ({\mathbb C}^\times)^2$ is called the \textit{(open part of the) spectral curve}. Theorem \ref{Kastthm} implies that $N$ is the Newton polygon of $P(z,w)$. Although the definition of the Kasteleyn matrix uses cocycles representing the cohomology classes $[wt]$ and $[\kappa]$, the spectral curve does not depend on these choices. \begin{figure} \centering \begin{tikzpicture}[scale=0.8] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge (w1) edge (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge (w2) edge (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \node[](no) at (1.4,2.7){${\rm w}_1$}; \node[](no) at (3.3,0.7){${\rm w}_2$}; \node[](no) at (3.4,2.7){${\rm b}_1$}; \node[](no) at (1.3,0.7){${\rm b}_2$}; \node[blue](no) at (2,2){$f_1$}; \node[blue](no) at (0,2){$f_2$}; \node[blue](no) at (2,0){$f_3$}; \node[blue](no) at (0,0){$f_4$}; \draw[->,green,thick] (3,0)--(b2); \draw[->,green,thick](b2)--(w1); \draw[->,green,thick] (w1)--(3,4); \draw[->,red,thick] (0,3)--(b1); \draw[->,red,thick] (b1)--(w1); \draw[->,red,thick] (w1)--(4,3); \end{tikzpicture} \hspace{10mm} \begin{tikzpicture}[scale=0.8] \draw[dashed, gray] (0,0) rectangle (4,4); \coordinate[bvert] (w2) at (1,1); \coordinate[bvert] (w1) at (3,3); \coordinate[wvert] (b1) at (1,3); \coordinate[wvert] (b2) at (3,1); \draw[-] (b1) edge node[above] {$1$} (w1) edge node[left] {$1$} (w2) edge (1,4) edge (0,3) ; \draw[-] (b2) edge node[above] {$X_1$} (w2) edge node[right] {$-1$} (w1) edge (4,1) edge (3,0) ; \draw[-] (w1) edge (3,4) edge (4,3); \draw[-] (w2) edge (1,0) edge (0,1); \node (no) at (1,4.5) {$-\frac{X_1 X_3}{Bw}$}; \node (no) at (3,4.5) {$Bw$}; \node (no) at (5,1) {$-\frac{1}{ A X_2 z}$}; \node (no) at (4.5,3) {$-{A z}$}; \end{tikzpicture} \caption{Shown on the {left} is a labeling of vertices and faces of $\Gamma$, and two cycles $a$ (red) and $b$ (green) in $\Gamma$ that generate $H_1({\mathbb T},{\mathbb Z})$. Shown on the {right} is a cocycle representing $[wt]$, along with $\kappa$ and $\phi$.} \label{figds} \end{figure} \begin{example} Let $a$ and $b$ be the two cycles in $\Gamma$ shown on the {left} of Figure \ref{figds} whose projections to ${\mathbb T}$ generate $H_1({\mathbb T},{\mathbb Z})$. Let $[wt] \in H^1(\Gamma,{\mathbb C}^\times)$ and let $A:=[wt]([a]), B:=[wt]([b])$. For $i=1,2,3$, let $X_i$ denote $[wt]([\partial f_i])$, where $\partial f_i$ denotes the boundary of the face $f_i$ (the weight of the fourth face is determined by the fact that the product of all face weights is $1$). Then $(X_1,X_2,X_3,A,B)$ generate the coordinate ring of $H^1(\Gamma,{\mathbb C}^\times)$. A cocycle representing $[wt]$ is shown on the {right} of Figure \ref{figds}, along with $\kappa$ and $\phi$.
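For instance, the entry of $K$ corresponding to ${\rm w}_1$ and ${\rm b}_1$ can be read off Figure \ref{figds} directly: these two vertices are joined by two edges, the edge inside the fundamental rectangle contributing $wt(e)\kappa(e)\phi(e)=1$ and the edge crossing the boundary contributing $-Az$, so that
\[
K(z,w)_{{\rm w}_1,{\rm b}_1}=1-Az,
\]
in agreement with the matrix displayed below.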
The Kasteleyn matrix and the spectral curve are: \begin{align} K(z,w)&=\begin{blockarray}{ccc} {{\rm b}_1}& {\rm b}_2 \\ \begin{block}{(cc)c} 1-A z & 1-\frac{X_1 X_3}{B w} & {\rm w}_1\\ -1+Bw & X_1-\frac{1}{A X_2 z} & {\rm w}_2\\ \end{block} \end{blockarray},\nonumber \\ P(z,w)&=\left(1 + X_1 + \frac{1}{X_2} + X_1 X_3\right)- B w - \frac{X_1 X_3}{B w} - \frac{1}{A X_2 z} - A X_1 z. \label{spectralcurve2} \end{align} \end{example} \subsection{The toric surface assigned to a Newton polygon} \label{sec:toricvariety} In this section, we collect some notation regarding toric varieties, and refer the reader to Appendix \ref{toricv} for more details. A convex integral polygon $N \subset {\rm M}_{\mathbb R}$ determines a compactification $X_N$ of the complex torus ${\rm T}$ called a {\it toric surface}, and a divisor $D_N$ supported on the boundary $X_N- {\rm T}$, so that Laurent polynomials with Newton polygon $N$ extend naturally to sections of the coherent sheaf $\mathcal O_{X_N}(D_N)$. Denote by $|D_N|$ the projective space of non-zero global sections of the coherent sheaf $\mathcal O_{X_N}(D_N)$, considered modulo a multiplicative constant. Assigning to a section its vanishing locus, we identify points of $|D_N|$ with curves in $X_N$ whose restrictions to ${\rm T}$ are defined by Laurent polynomials with Newton polygon contained in $N$. The genus $g$ of the generic curve in $|D_N|$ is equal to the number of interior lattice points in $N$. Each edge $E_\rho$ of $N$ determines a projective line $D_\rho$ at infinity of $X_N$, and $$ X_N - {\rm T} = \cup_{\rho \in \Sigma(1)}D_\rho. $$ The divisor $D_N$ is given by \begin{equation} \label{DN} D_N = \sum_{\rho \in \Sigma(1)} a_\rho D_\rho, \end{equation} where $a_\rho \in {\mathbb Z}$ are such that \begin{equation} \label{eq:Npoly} N=\bigcap_{\rho \in \Sigma(1)}\{m \in {\rm M}_{\mathbb R}: \langle m , u_\rho \rangle \geq -a_\rho\}. \end{equation} The lines $D_\rho$ intersect according to the combinatorics of $N$. The intersection index of a generic curve in $|D_N|$ with the line $D_\rho$ is equal to the number $|E_\rho|$ of primitive integral vectors in the edge $E_\rho$. The points of intersection are called \textit{points at infinity}. Let $C \in |D_N|$ denote the compactification of the open spectral curve $C^\circ$, i.e. $C$ is the closure of $C^\circ$ in $X_N$. \subsection{Casimirs}\label{sec:cas} Let $\alpha$ be a zig-zag path $\alpha= {\rm b}_1 \rightarrow {\rm w}_1 \rightarrow {\rm b}_2 \rightarrow \cdots \rightarrow {\rm w}_d \rightarrow {\rm b}_1$ in $Z_\rho$. We define the \emph{Casimir} $C_\alpha$ by \[ C_\alpha:=(-1)^d [\kappa]([\alpha]) [wt]([\alpha]). \] The Casimirs determine points at infinity of $C$ as follows: since $[\alpha]$ is primitive and $\langle u_\rho,[\alpha] \rangle=0$, we can extend it to a basis $(x_1,x_2)$ of $\rm M$ with $[\alpha]=x_1$ and $\langle x_2,u_\rho \rangle =1$. The affine open variety in $X_N$ corresponding to the cone $\rho$ is \[ U_\rho=\text{Spec }{\mathbb C}[x_1^{\pm 1 },x_2] \cong {\mathbb C}^\times \times {\mathbb C}, \] and $D_\rho \cap U_\rho$ is defined by $x_2=0$, and so the character $x_1^{-1}=\chi^{-[\alpha]}$ is a coordinate on the dense open torus ${\mathbb C}^\times=D_\rho \cap U_\rho$ in $D_\rho$. Therefore, the equation \begin{equation} \label{eq:casca} \chi^{-[\alpha]} (\nu_\rho(\alpha))= C_\alpha, \end{equation} defines a point $\nu_\rho(\alpha)$ in $D_{\rho}$.
In other words, the point is defined as the unique point on the line at infinity such that the monomial $z^iw^j$, where $-[\alpha]=(i,j)$, evaluates to $C_\alpha$. We will prove later (see (\ref{caspoint})) that these are precisely the points at infinity of $C$. \begin{example} Consider the fundamental domain of the square lattice, whose zig-zag paths were listed in Example \ref{eg:zz} and Figure \ref{fig:npzz}. The Casimirs are \begin{align}\label{cassq} C_\alpha = -\frac B {A X_1}, \ \ \ C_\beta = -\frac{1}{A B X_2 }, \ \ \ C_\gamma = -\frac{A X_1X_2X_3}{B}, \ \ \ C_\delta = -\frac{AB}{X_3}.\end{align} Let us denote the normal ray in $\Sigma$ of a zig-zag path $\omega$ by $\rho(\omega)$, so $u_{\rho(\alpha)}=(-1,-1)$ etc. We choose $x_2=\chi^{(0,-1)}$ so that $\langle u_{\rho(\alpha)},(0,-1) \rangle = 1$. Then we have $U_{\rho(\alpha)}=\text{Spec}~{\mathbb C}[x_1=z^{-1}w,x_2=w^{-1}]$ and $D_{\rho(\alpha)} \subset U_{\rho(\alpha)}$ is given by $x_2=0$. In this case, $D_N=D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)}$ by (\ref{eq:Npoly}), and $P(z,w)$ is a global section of $\mathcal O_{X_N}(D_N)$. We trivialize $\mathcal O_{X_N}(D_N)$ over $U_{\rho(\alpha)}$ as follows: \begin{align*} \restr{\mathcal O_{X_N}(D_N)}{U_{\rho(\alpha)}}=\{t \in {\mathbb C}[z^{\pm 1},w^{\pm 1}]: \restr{\text{div}~t}{U_{\rho(\alpha)}}+D_{\rho(\alpha)} \geq 0\} &\cong \mathcal O_{U_{\rho(\alpha)}}\\ t &\mapsto {tx_2} \end{align*} Then making the change of variables $z=\frac{1}{x_1x_2}$ and $w=\frac{1}{x_2}$, and multiplying by $x_2$, the spectral curve $C$ in $U_{\rho(\alpha)}$ is cut out by \[ \left(1 + X_1 + \frac{1}{X_2} + X_1 X_3\right)x_2- {B} - \frac{X_1 X_3}{B }x_2^2 - \frac{x_1x_2^2}{A X_2 } - \frac{A X_1}{x_1}, \] so that $C \cap D_{\rho(\alpha)}$ is given by \[ -B- \frac{A X_1}{x_1}=0. \] Therefore $\nu(\alpha)$ is given by $\frac{z}{w}=\frac{1}{x_1}=C_\alpha$, which agrees with (\ref{eq:casca}). The table below lists the points at infinity for each of the zig-zag paths: \begin{equation} \centering \begin{tabular}{||c c c c||} \hline Zig-zag path & Homology class & Basis $x_1,x_2$ & Point at infinity\\ [0.5ex] \hline\hline $\alpha$ & $(-1,1)$ & $(-1,1),(0,-1)$ &$x_1=\frac{1}{C_\alpha},x_2=0$ \\ \hline $\beta$ & $(-1,-1)$ & $(-1,-1),(0,-1)$ &$x_1=\frac{1}{C_\beta},x_2=0$ \\ \hline $\gamma$ & $(1,-1)$& $(1,-1),(0,1)$&$x_1=\frac{1}{C_\gamma},x_2=0$ \\ \hline $\delta$ & $(1,1)$ & $(1,1),(0,1)$&$x_1=\frac{1}{C_\delta},x_2=0$ \\ \hline \end{tabular} \label{zzpathtable} \end{equation} \end{example} \subsection{The spectral transform} Our next goal is to define the spectral transform, which plays a key role in this paper. We present two equivalent definitions of the spectral transform. The first is more invariant, and does not require choosing a distinguished white vertex ${\bf w}$. The second is the original definition of Kenyon and Okounkov \cite{KO}, and it is the one which we use in computations. However, it depends on the choice of the distinguished white vertex ${\bf w}$. Recall that {for each edge $E_\rho$ of $N$, we have} $\# Z_\rho = \# (C \cap D_\rho)$, but there is no canonical bijection between these sets. We define a \textit{parameterization of the points at infinity by zig-zag paths} to be a {choice} of bijections $\nu=\{\nu_\rho\}_{\rho \in \Sigma(1)}$, where \begin{equation} \label{ZA} \nu_\rho : Z_\rho \xrightarrow[]{\sim} C\cap D_\rho.
\end{equation} For a curve $C \in |D_N|$, we denote by ${\rm Div}_\infty(C)$ the abelian group of divisors on $C$ supported at \textit{infinity}, that is, at $C \cap D_N$. Compactifications of the Kasteleyn operator will play an important role in this paper. The main ingredient in the construction of these compactifications is a combinatorial object called the \textit{discrete Abel map}, introduced by Fock \cite{Fock}, that encodes intersections with zig-zag paths. Let $\Gamma$ be a minimal bipartite graph in ${\mathbb T}$ with Newton polygon $N$ and spectral curve $C$. The {discrete Abel map} \begin{align*} {\bf d}:B \cup W \cup F &\rightarrow {\rm Div}_\infty(C) \end{align*} assigns to each vertex and face of $\Gamma$ a divisor at infinity. It is defined uniquely up to a constant by the requirement that for a path $\gamma$ from $x$ to $y$, contained in the fundamental domain $R$, where $x$ and $y$ are either vertices or faces of $\Gamma$, we have \begin{align*} {\bf d}(y)-{\bf d}(x)&=\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho} (\alpha,\gamma)_R \nu_\rho(\alpha). \end{align*} Here $(\alpha,\gamma)_R$ is the intersection index in $R$, i.e. the signed number of intersections of $\alpha$ with $\gamma$. Since we require $\gamma$ to be contained in $R$, this is well-defined, independent of the choice of path $\gamma$. Locally, the rule is as follows: \begin{enumerate} \item If ${\rm b}$ is a black vertex incident to a face $f$, and ${\rm b}$ and $f$ are separated by $\alpha \in Z_\rho$, then ${\bf d}({\rm b})={\bf d}(f)+\nu_\rho(\alpha)$. \item If ${\rm w}$ is a white vertex incident to a face $f$, and ${\rm w}$ and $f$ are separated by $\alpha \in Z_\rho$, then ${\bf d}({\rm w})={\bf d}(f)-\nu_\rho(\alpha)$. \end{enumerate} We normalize ${\bf d}$, setting the value of ${\bf d}$ at a certain face $f_0$ of $\Gamma$ to be $0$. Then for any black vertex ${\rm b}$, face $f$, and white vertex ${\rm w}$ of $\widetilde \Gamma$ we have: \begin{equation} {\rm deg} ~{\bf d}({\rm b}) =1, \ \ {\rm deg}~ {\bf d}({f}) =0, \ \ {\rm deg}~ {\bf d}({\rm w}) =-1. \end{equation} \begin{remark} Only differences of the form ${\bf d}(y)-{\bf d}(x)$ will appear in our constructions later, so the choice of normalization does not play a role. \end{remark} \begin{example}\label{eg:dam} Let us compute the discrete Abel map ${\bf d}$ for the square lattice in Figure \ref{figds}. We normalize ${\bf d}(f_1)=0$. Then we have \[ {\bf d}({\rm b}_1)=\nu_{\rho(\gamma)}(\gamma), \quad {\bf d}({\rm b}_2)=\nu_{\rho(\alpha)}(\alpha),\quad {\bf d}({\rm w}_1)=-\nu_{\rho(\beta)}(\beta),\quad {\bf d}({\rm w}_2)=-\nu_{\rho(\delta)}(\delta), \] where $\nu$ is shown in table (\ref{zzpathtable}). \end{example} \vskip 2mm {\bf Definition 1.} A \textit{line bundle spectral data} related to a Newton polygon $N$ is a triple $(C,{\cal L},\nu)$ where $C\in |D_N|$ is a genus $g$ curve on the toric surface $X_N$, ${\cal L}$ is a degree $g-1$ line bundle on $C$, and $\nu$ is a parameterization of points at infinity by zig-zag paths. Denote by $\mathcal S'_N$ the moduli space parameterizing the line bundle spectral data on $N$. The spectral transform is a rational map (here $\dashrightarrow$ means a rational map) \begin{align*} \kappa_{\Gamma, {\bf d}}: \mathcal X_N & \dashrightarrow \mathcal S'_N \end{align*} defined on the dense open subset $H^1(\Gamma,{\mathbb C}^\times)$ of $\mathcal X_N$ by $[wt] \mapsto (C,\mathcal L,\nu)$, where: \begin{enumerate} \item $C$ is the spectral curve.
\item Let $\restr{K}{C^\circ}$ denote the restriction of the Kasteleyn matrix to $C^\circ$. The discrete Abel map ${\bf d}$ determines an extension $\overline K$ of $\restr{K}{C^\circ}$ to a morphism of locally free sheaves on $C$, see Section \ref{extensionsection}. The coherent sheaf $\mathcal L$ is defined as the cokernel of $\overline K$. When $C$ is a smooth curve, which happens for generic $[wt]$, $\mathcal L$ is a line bundle. The convention ${\rm deg}~{\bf d}({\rm w})=-1$ implies that $\text{deg}~\mathcal L=g-1$, see Proposition \ref{degl}. \item The parameterizations of the divisors $D_\rho \cap C$ are defined by associating to a zig-zag path $\alpha$ the point at infinity $\nu_\rho(\alpha)$. \end{enumerate} \vskip 2mm {\bf Definition 2.} A \textit{divisor spectral data} related to a Newton polygon $N$ is a triple $(C,S,\nu)$ where $C \in |D_N|$ is a genus $g$ curve on the toric surface $X_N$, $S$ is a degree $g$ effective divisor in $C^\circ$, and $\nu=\{\nu_\rho\}$ are parameterizations of the divisors $D_\rho \cap C$, see (\ref{ZA}). Denote by $\mathcal S_N$ the moduli space parameterizing the divisor spectral data on $N$. Let us fix a {\it distinguished white vertex} ${\bf w}$ of $\Gamma$. Then there is a rational map, called the \textit{spectral transform}, defined by Kenyon and Okounkov \cite{KO}, \begin{align} \kappa_{\Gamma, {\bf w}}: \mathcal X_N \dashrightarrow \mathcal S_N \label{SM} \end{align} defined on the dense open subset $H^1(\Gamma,{\mathbb C}^\times)$ of $\mathcal X_N$ by $[wt] \mapsto (C,S,\nu)$ as follows: \begin{enumerate} \item $C$ is the spectral curve. \item For generic $[wt]$, $C$ is a smooth curve and $\text{coker } K$ is the pushforward of a line bundle on $C^\circ$. Let $s_{\bf w}$ be the section of $\text{coker } K$ given by the ${\bf w}$-entry of the cokernel map. $S$ is defined to be the divisor of this section. In Corollary \ref{cordeg}, we show that $S$ has degree $g$. Then $S$ is the set of $g$ points in $C^\circ$ where the ${\bf w}$-column of the adjugate matrix $Q=K^{-1}\det K$ vanishes. \item The parameterization of points at infinity by zig-zag paths $\nu$ is defined as follows: $\nu_\rho(\alpha)$ is the point in $C\cap D_\rho$ satisfying $\chi^{-[\alpha]}= C_\alpha$ (see Section \ref{sec:cas}). We call $\nu_\rho(\alpha)$ the \textit{point at infinity associated to} $\alpha$. \end{enumerate} Since $\rho$ is determined by $\alpha$, we will use the simpler notation $\nu(\alpha){:=\nu_{\rho}(\alpha)}$ hereafter. The two types of spectral data are equivalent. Given {a degree $g$ effective divisor} $S$, we have (Proposition \ref{degl}) \begin{equation} \mathcal L \cong \mathcal O_{C}\left( S+{\bf d}({\bf w}) \right). \end{equation} On the other hand, given a line bundle $\mathcal L$ and a white vertex ${\bf w}$, we can recover $S$ as follows. Consider the Abel-Jacobi map \begin{align*} A^g:\text{Sym}^g C &\rightarrow \text{Jac}(C),\\ E &\mapsto \mathcal L \otimes \mathcal O_C(-E - {\bf d}({\bf w})). \end{align*} Then $A^g$ is birational by the Abel-Jacobi theorem \cite{Beau}*{Corollary 4.6}. We obtain $S=(A^g)^{-1}(\mathcal O_C)$. \begin{example} We compute the spectral transform for our running example of the square lattice. Let us take the distinguished white vertex to be ${\bf w}={\rm w}_1$.
\begin{align} Q(z,w)&=\begin{blockarray}{ccc} {\rm w}_1& {\rm w}_2 \\ \begin{block}{(cc)c} X_1-\frac{1}{A X_2 z} & -1+\frac{X_1 X_3}{B w} & {\rm b}_1\\ 1-Bw & 1-A z & {\rm b}_2\\ \end{block} \end{blockarray}.\label{qbwds} \end{align} Solving $Q_{{{\rm b}_1} {\bf w}}(z,w)=Q_{{{\rm b}_2} {\bf w}}(z,w)=0,$ we get \begin{equation} \label{pqds} p=\frac{1}{A X_1 X_2}, \quad q=\frac 1 B. \end{equation} Therefore the spectral transform is: \begin{align*} \kappa_{\Gamma,{\bf w}}:H^1(\Gamma,{\mathbb C}^\times) &\dashrightarrow \mathcal S_N\\ (X_1,X_2,X_3,A,B) &\mapsto (C,(p,q),\nu), \end{align*} where $C=\{P(z,w)=0\}$ with $P$ as in (\ref{spectralcurve2}), and $\nu$ is shown in table (\ref{zzpathtable}). \end{example} \section{The main theorem} \label{sec2} Below we introduce functions ${\rm V}_{{\rm b} {\rm w}}$ on the moduli space $ {\cal S}_{N}$ of spectral data, relying on results in Sections \ref{smallpolysection}, \ref{extensionsection}, \ref{laurentsection}. They are defined for any pair ${\rm b} \in B$ and ${\rm w} \in W$ of black and white vertices, as the solution to a system of linear equations ${\mathbb V}_{{\rm b} {\rm w}}$. The main result of the paper is the following. \begin{theorem}\label{Vbwthm} For the distinguished white vertex ${\bf w}$, the pull-back of the function ${\rm V}_{{{\rm b}}{\bf w}}$ under the spectral map coincides, up to a multiplicative constant, with the ${\rm b} {\bf w}$ matrix element ${Q}_{{\rm b} {\bf w}}$ of the adjugate matrix ${Q} :=K^{-1}\det K $ of the Kasteleyn matrix $K$. That is, \begin{equation} {Q}_{{\rm b} {\bf w}} = c \cdot \kappa_{\Gamma,{\bf w}}^*({\rm V}_{{\rm b} {\bf w}}). \end{equation} \end{theorem} As an application of this result, we get an explicit description of the inverse to the spectral map (\ref{SM}); see Section \ref{Sec2.3}. The next few sections discuss the structure of the system of linear equations $\mathbb V_{{\rm b} {\rm w}}$, described by a matrix, also denoted by $\mathbb V_{{\rm b} {\rm w}}$. Detailed examples are given in Section \ref{sec:example}. \subsection{The matrix \texorpdfstring{$\mathbb V_{{\rm b} {\rm w}}$}{Vbw}} The linear equations $\mathbb V_{{\rm b} {\rm w}}$ are in the variables $(a_m)_{m \in N_{{\rm b} {\rm w} } \cap {\rm M}}$, where $N_{{\rm b} {\rm w}} \subset {\rm M}_{\mathbb R}$ is a convex polygon, introduced in Section \ref{S2.1.1}, and called the \textit{small Newton polygon}. Therefore the columns of the matrix $\mathbb{V}_{{\rm b} {\rm w}}$ are indexed by the lattice points $N_{{\rm b} {\rm w} } \cap {\rm M}$. By Corollary \ref{Cor3.5}, the polygon $N_{{\rm b} {\rm w}}$ is the Newton polygon of the Laurent polynomial ${Q}_{{\rm b} {\rm w}}$. The equations in $\mathbb V_{{\rm b} {\rm w}}$, i.e. the rows of the matrix $\mathbb V_{{\rm b} {\rm w}}$, are defined in Section \ref{S2.1.2}. There are two types: \begin{enumerate} \item There is a row for each of the points $(p_1,q_1),\dots, (p_g,q_g)$ of the divisor $S$ on the spectral curve. The entry of the row in column $m \in N_{{\rm b} {\rm w} } \cap {\rm M}$ is $\chi^m(p_i,q_i)$. \item The remaining rows correspond to certain zig-zag paths $\alpha$. The entries in the row corresponding to $\alpha$ are certain monomials in $C_\alpha$. \end{enumerate} The number of rows is at least the number of columns minus one, but the two need not be equal. However, Proposition \ref{prop:uniq} shows that there is a unique solution to $\mathbb V_{{\rm b} {\rm w}}$ up to a multiplicative constant.
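For orientation, in the running square-lattice example the system is as small as possible: the Newton polygon has a single interior lattice point, so $g=1$ and $S$ consists of the single point $(p,q)$ of (\ref{pqds}); each small Newton polygon turns out to contain exactly two lattice points, so each $\mathbb V_{{\rm b} {\bf w}}$ is a single row in two variables, and ${\rm V}_{{\rm b} {\bf w}}$ is a $2\times 2$ determinant (see (\ref{v:eg}) below).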
We define \begin{equation} \label{DV} {\rm V}_{{\rm b} {\rm w}}:=\sum_{m \in N_{{\rm b} {\rm w} } \cap {\rm M}} a_m \chi^m, \end{equation} where $(a_m)_{m \in N_{{\rm b} {\rm w} } \cap {\rm M}}$ is the unique solution to $\mathbb V_{{\rm b} {\rm w}}$. Let $\mathbb V_{{\rm b} {\rm w}}^\chi$ be the matrix obtained from $\mathbb V_{{\rm b} {\rm w}}$ by appending a row of characters, whose entry in column $m\in N_{{\rm b} {\rm w} } \cap {\rm M}$ is $\chi^m$. If the matrix $\mathbb V_{{\rm b} {\rm w}}^\chi$ is square, then (cf. Remark \ref{remind}) we have: \[ {\rm V}_{{\rm b} {\rm w}}=\det \mathbb V_{{\rm b} {\rm w}}^\chi. \] Note that ${\rm V}_{{\rm b} {\rm w}}$ is only defined up to a multiplicative constant. However, only {ratios of the values of} these functions appear in the inverse map; see Section \ref{Sec2.3}. Let us proceed to the precise definition of the matrix $\mathbb V_{{\rm b}{\rm w}}$. \subsubsection{Columns of the matrix \texorpdfstring{$\mathbb V_{{\rm b} {\rm w}}$}{Vbw}} We now describe the small Newton polygons, whose lattice points correspond to columns of $\mathbb V_{{\rm b} {\rm w}}$. \paragraph{Rational Abel map \texorpdfstring{${\bf D}$}{dd}.} Recall the set $\{D_{\rho}\}$ of $\rm T$-invariant divisors of the toric surface $X_N$. We call them the projective lines at infinity of $X_N$. Consider the ${\mathbb Q}$-vector space $\text{Div}_{\rm T}^{{\mathbb Q}}(X_N)$ of ${\mathbb Q}$-divisors at infinity, defined as the ${\mathbb Q}$-vector space with a basis given by the divisors $D_{\rho}$: $$ \text{Div}_{\rm T}^{{\mathbb Q}}(X_N) := \bigoplus_{\rho \in \Sigma(1)}{\mathbb Q}{D_\rho}. $$ We define a \textit{rational Abel map} \[ {{\bf D}}:V \rightarrow \text{Div}_{\rm T}^{{\mathbb Q}}(X_N) \] which assigns to each vertex ${\rm v}$ of the graph $\Gamma$ a ${\mathbb Q}$-divisor at infinity ${\bf D}({\rm v})$ as follows: \begin{enumerate} \item Normalize ${{\bf D}}({\bf w})=0$. As in the case of ${\bf d}$, the choice of normalization plays no role, and we can replace $0$ with any ${\mathbb Q}$-divisor. \item For any path $\gamma$ contained in $R$ from ${\rm v}_1$ to ${\rm v}_2$, \begin{align*} {{\bf D}}({\rm v}_2)-{{\bf D}}({\rm v}_1)&=\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho} (\alpha,\gamma)_R \frac{D_\rho}{|E_\rho|} \end{align*} where $(\cdot,\cdot)_R$ is the intersection index in $R$, i.e. the signed number of intersections of $\alpha$ with $\gamma$. \end{enumerate} The following lemma follows from the definitions. \begin{lemma}\label{lem:DAM} Let $\alpha, \beta$ be the zig-zag paths through ${e={\rm b} {\rm w}}$, with $\alpha \in Z_\sigma, \beta \in Z_\rho$. Then we have \begin{equation} \label{DAM} {{\bf D}}({\rm w})-{\bf D}({\rm b})=-\frac{1}{|E_{{\sigma}}|}D_{{\sigma}}-\frac{1}{|E_{{\rho}}|}D_{{\rho}}-\mathrm{div } \phi(e) \end{equation} {where $\phi(e)$ is defined in (\ref{edgeph}).} \end{lemma} \paragraph{Small Newton polygons.}\label{S2.1.1} There is a canonical bijection between divisors $D$ in $\text{Div}_{\rm T}^{{\mathbb Q}}(X_N)$ and convex polygons {$P$} with rational intercepts (see Proposition \ref{pro:globalsec} for its importance in toric geometry): \begin{equation}\label{equivalence} D=\sum_{\rho \in \Sigma(1) }a_\rho D_\rho ~~ \xleftrightarrow[]{} ~~ P=\bigcap_{\rho \in \Sigma(1)}\{m \in {\rm M}_{\mathbb R}: \langle m , u_\rho \rangle \geq -a_\rho\}, \qquad a_\rho \in {\mathbb Q}. \end{equation} Recall the divisor $D_N$ at infinity of $X_N$; see (\ref{DN}).
Given an edge $e={\rm b} {\rm w}$, we define a ${\mathbb Q}$-divisor at infinity \begin{equation} \label{DE} E_{{\rm b} {\rm w}}:=D_N-{\bf D}({\rm w})+{\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho}. \end{equation} Here the double sum is over all zig-zag paths $\alpha$ passing through ${\rm b}$. We define $b_\rho \in {\mathbb Q}$ as the multiplicities of the projective lines at infinity $D_\rho$ in the divisor $E_{{\rm b} {\rm w}} $: \begin{equation} \label{br} E_{{\rm b} {\rm w}} = \sum_{\rho \in \Sigma(1)} b_\rho D_\rho. \end{equation} \begin{definition} \label{SNP1} The {\it small Newton polygon $N_{{\rm b} {\rm w}}$} is the polygon defined by the formula \begin{equation}\label{nbwdefinition} N_{{\rm b} {\rm w}}= \bigcap_{\rho \in \Sigma(1)} \{m \in {\rm M}_{\mathbb R} : \langle m, u_\rho \rangle \geq -b_\rho\}. \end{equation} Equivalently, $N_{{\rm b} {\rm w}}$ is the polygon associated to the divisor $E_{{\rm b} {\rm w}}$ in (\ref{equivalence}). \end{definition} The polygon $N_{{\rm b} {\rm w}}$ may not be integral. We will consider only integral points in it. The convex hull of the integral points in $N_{{\rm b} {\rm w}}$ contains the Newton polygon of $Q_{{\rm b} {\rm w}}$ (Corollary \ref{Cor3.5}). \begin{figure} \centering \begin{tikzpicture} \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (-1,0) circle (2pt); \draw[] (0,0) -- (-0.5,0.5)--(-1,0) --(-0.5,-0.5)--(0,0); \end{tikzpicture} \hspace{20mm} \begin{tikzpicture} \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[] (0,0) -- (0.5,0.5)--(0,1) --(-0.5,0.5)--(0,0); \end{tikzpicture} \caption{The two small polygons in Example \ref{eg:smallply1}. The big black dot denotes the origin, while the other black dots are integral points.} \label{fig:smallpoly1} \end{figure} \begin{example}\label{eg:smallply1} We compute the small polygons for the square lattice in Figure \ref{figds}. Recall that we chose ${\bf w}={\rm w}_1$. Since there is only one zig-zag path in each homology direction, the rational Abel map ${\bf D}$ is obtained from ${\bf d}$ by replacing the point at infinity with the corresponding line at infinity, so from Example \ref{eg:dam}, we have \[ {\bf D}({\rm b}_1)=D_{\rho(\gamma)}, \quad {\bf D}({\rm b}_2)=D_{\rho(\alpha)},\quad {\bf D}({\rm w}_1)=-D_{\rho(\beta)},\quad {\bf D}({\rm w}_2)=-D_{\rho(\delta)}. \] We have $D_N=D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)}$, using which we compute \begin{align*} E_{{\rm b}_1 {\rm w}_1}&= (D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)}) -(-D_{\rho(\beta)})+D_{\rho(\gamma)}-(D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)})\\ &= D_{\rho(\beta)}+D_{\rho(\gamma)},\\ E_{{\rm b}_2 {\rm w}_1}&= (D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)}) -(-D_{\rho(\beta)})+D_{\rho(\alpha)}-(D_{\rho(\alpha)}+D_{\rho(\beta)}+D_{\rho(\gamma)}+D_{\rho(\delta)})\\ &= D_{\rho(\alpha)}+D_{\rho(\beta)}, \end{align*} so that \begin{align*} N_{{\rm b}_1 {\rm w}_1}&= \{-i-j \geq 0\} \cap \{i-j\geq -1\} \cap \{i+j \geq -1\} \cap \{-i+j \geq 0\},\\ N_{{\rm b}_2 {\rm w}_1}&= \{-i-j \geq -1\} \cap \{i-j\geq -1\} \cap \{i+j \geq 0\} \cap \{-i+j \geq 0\}, \end{align*} see Figure \ref{fig:smallpoly1}. Note that the convex hulls of the lattice points are the Newton polygons of $Q_{{\rm b}_1 {\rm w}_1}$ and $Q_{{\rm b}_2{\rm w}_1}$ in (\ref{qbwds}). 
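Concretely, enumerating directly, these inequalities cut out exactly the integral points
\[
N_{{\rm b}_1 {\rm w}_1} \cap {\rm M}=\{(0,0),(-1,0)\},\qquad N_{{\rm b}_2 {\rm w}_1} \cap {\rm M}=\{(0,0),(0,1)\},
\]
in agreement with the monomials $1, z^{-1}$ of $Q_{{\rm b}_1 {\rm w}_1}$ and $1, w$ of $Q_{{\rm b}_2 {\rm w}_1}$.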
\end{example} \subsubsection{Rows of the matrix \texorpdfstring{$\mathbb V_{{\rm b} {\rm w}}$}{Vbw}} \label{S2.1.2} Recall that the variables in $\mathbb V_{{\rm b} {\rm w}}$ are $(a_m)_{m \in N_{{\rm b} {\rm w}} \cap {\rm M}}$. The equations in $\mathbb V_{{\rm b} {\rm w}}$ are of two types: \begin{enumerate} \item For each $1 \leq i \leq g$, we have the linear equations \[ \sum_{m \in N_{{\rm b} {{\rm w}} } \cap {\rm M}} a_m \chi^m(p_i,q_i) =0, \] so the entry of the corresponding row of $\mathbb V_{{\rm b} {\rm w}}$ in column $m$ is $\chi^m(p_i,q_i)$. \item Recall the notation $[x]$ for the largest integer $n$ such that $n \leq x$. Given a ${\mathbb Q}$-divisor $D = \sum_{\rho \in \Sigma(1)} b_\rho D_\rho$, we define a divisor with integral coefficients \[ [{D}]:=\sum_{\rho \in \Sigma(1)} [{b_\rho}] D_\rho. \] Recall the divisor $E_{{{\rm b}}{{\rm w}}}$ in (\ref{DE}). We have a linear equation for every zig-zag path $\alpha$ such that $\nu(\alpha)$ appears in \begin{equation} \label{contpointsatinf} -\restr{D_N}{C}+{\bf d}({{\rm w}})-{\bf d}({\rm b})+\sum_{\alpha \in Z} \nu(\alpha) + \restr{[{E_{{{\rm b} }{{\rm w}}}}]}{C}. \end{equation} Suppose $\alpha \in Z_\rho$ is a zig-zag path that contributes an equation. We extend $[\alpha]$ to a basis $(x_1,x_2)$ of $\rm M$, where $x_1:= [\alpha]$ and $\langle x_2,u_\rho \rangle =1$, so that for any $m \in {\rm M}$, we can write \[ \chi^m = x_1^{b_m} x_2^{c_m}, ~~~~b_m,c_m \in {\mathbb Z}. \] Let $N_{{\rm b} {{\rm w}}}^\rho$ be the set of lattice points in $N_{{\rm b} {{\rm w}}}$ closest to the edge $E_{\rho}$ of $N$, i.e. the set of points in $N_{{\rm b} {{\rm w}}}$ that minimize the functional $\langle *,u_\rho \rangle$. Then the equation associated with $\alpha$ is \begin{equation} \label{Cas} \sum_{m \in N^\rho_{{\rm b} {{\rm w}} } \cap {\rm M}}a_m C_\alpha^{-b_m}=0. \end{equation} So the entry in column ${m \in N^\rho_{{\rm b} {{\rm w}} } \cap {\rm M}}$ is the monomial $C_\alpha^{-b_m}$, and the entries in the other columns are $0$. Choosing a different basis vector $x_2$ leads to the same equation multiplied by a monomial in $C_\alpha$. \end{enumerate} \begin{remark} \label{remind} When the equations in $\mathbb V_{{\rm b} {\rm w}}$ are linearly independent (so there is exactly one less equation than the number of variables), we can append to $\mathbb V_{{\rm b} {\rm w}}$ the row of characters $(\chi^m)_{m \in N_{{\rm b} {\rm w}} \cap {\rm M}}$ to get a square matrix, which we denote by $\mathbb V_{{\rm b} {\rm w}}^\chi$. Then the function ${\rm V}_{{\rm b} {\rm w}}$ is the determinant: $$ {\rm V}_{{\rm b} {\rm w}} = \mathrm{det}~\mathbb V_{{\rm b} {\rm w}}^\chi. $$ Indeed, given an $(n-1)\times n$ matrix $A=(a_{ij})$, the system of linear equations $\sum^n_{j=1}a_{ij}x_j=0$ has a solution given by the signed maximal minors $A_j$ of the matrix $A$: \[ x_j = (-1)^jA_j. \] Here $A_j$ is the determinant of the matrix obtained by deleting the $j$-th column of $A$. Therefore the determinant of the augmented matrix $\mathbb V_{{\rm b} {\rm w}}^\chi$ recovers the expression ${\rm V}_{{\rm b} {\rm w}} $ in (\ref{DV}). \end{remark} \begin{remark}\label{rem::prim} When all the sides of the Newton polygon are primitive, we call the Newton polygon \textit{simple}. In this case, we have $[E_{{\rm b} {\bf w}}]=E_{{\rm b} {\bf w}}$ and ${\bf d}({\bf w})-{\bf d}({\rm b}) = \restr{({\bf D}({\bf w})-{\bf D}({\rm b}))}{C}$.
Then the formula (\ref{contpointsatinf}) simplifies considerably to \begin{equation} \label{eq:simplepoly} \sum_{\alpha \in Z: {\rm b} \notin \alpha} \nu(\alpha). \end{equation} So for a simple Newton polygon the Casimir rows of the matrix $\mathbb V_{{\rm b} {\rm w}}$, that is, the rows providing equations (\ref{Cas}), are parameterized by the zig-zag paths $\alpha$ which do not contain the vertex ${\rm b}$. \end{remark} \begin{example} We compute the linear system of equations $\mathbb V_{{\rm b} {\bf w}}$ for the square lattice in Figure \ref{figds} with ${\bf w}={\rm w}_1$. Since both black vertices are contained in every zig-zag path, the divisor (\ref{eq:simplepoly}) is $0$, so there are no equations of type 2 in $\mathbb V_{{\rm b} {\bf w}}$ for ${\rm b} \in B$. Therefore, \[ {\mathbb V}_{{\rm b}_1 {\bf w}}=\begin{pmatrix} 1& p^{-1} \end{pmatrix}, \quad {\mathbb V}_{{\rm b}_2 {\bf w}}=\begin{pmatrix} 1& q \end{pmatrix}. \] By Remark \ref{remind}, we get \begin{equation} \label{v:eg} {\rm V}_{{\rm b}_1 {\bf w}}=\begin{vmatrix} 1 & z^{-1}\\ 1& p^{-1} \end{vmatrix}, \quad {\rm V}_{{\rm b}_2 {\bf w}}=\begin{vmatrix} 1 & w\\ 1& q \end{vmatrix}. \end{equation} Using (\ref{pqds}), we have \begin{align*} \kappa_{\Gamma,{\bf w}}^*({\rm V}_{{\rm b}_1 {\bf w}})&=A X_1 X_2-\frac{1}{z}=A X_2 Q_{{\rm b}_1 {\bf w}},\\ \kappa_{\Gamma,{\bf w}}^*({\rm V}_{{\rm b}_2 {\bf w}})&=\frac 1 B-w=\frac 1 B Q_{{\rm b}_2 {\bf w}}, \end{align*} verifying the conclusion of Theorem \ref{Vbwthm}. \end{example} \subsection{Reconstructing weights via functions \texorpdfstring{${\rm V}_{{\rm b} {\bf w}}$}{vbw}.} \label{Sec2.3} Take a white vertex ${\rm w}$ and a zig-zag path $\alpha$ containing ${\rm w}$. The pair $({\rm w}, \alpha)$ determines a {\it wedge} $W:={\rm b} \xrightarrow[]{e} {\rm w} \xrightarrow[]{e'} {\rm b}'$, where ${\rm b}, {\rm b}'$ are the black vertices adjacent to ${\rm w}$ such that ${\rm b} {\rm w} {\rm b}'$ is a part of $\alpha$. Recall $\phi(e)$ from (\ref{edgeph}), and the Kasteleyn sign $\kappa(e)$. We assign to this wedge the ratio \begin{equation} \label{RV} r_W:=-\frac{\kappa(e')\phi(e'){\rm V}_{{\rm b}' {\bf w}}}{\kappa(e)\phi(e){\rm V}_{{\rm b} {\bf w}}}(\nu(\alpha)). \end{equation} Note that we use the distinguished white vertex ${\bf w}$ in the expression rather than ${\rm w}$. The expression is in fact independent of ${\bf w}$, as discussed in Section \ref{S5.2}. The ratio on the right is a rational function on the curve. We evaluate the ratio at the point at infinity $\nu(\alpha)$ of the spectral curve corresponding to the zig-zag path $\alpha$; see (\ref{ZA}). This expression is well-defined, that is, the numerator and denominator have equal order of vanishing at $\nu(\alpha)$, by Corollary \ref{divQbw} below. Let $L={\rm b}_1 \to {\rm w}_1 \to {\rm b}_2 \to \dots \to {\rm w}_\ell \to {\rm b}_{\ell+1}={\rm b}_1$ be an oriented loop on $\Gamma$. It is a concatenation of the wedges $W_i:={\rm b}_{i} {\rm w}_i {\rm b}_{i+1}$, $i=1,\dots,\ell$ (with indices taken cyclically modulo $\ell$), provided by the white vertices. Denote by $\alpha_i$ the zig-zag path assigned to the wedge $W_i$. We define a cohomology class $[\omega]$ by \begin{equation}\label{mon} \begin{split} [\omega]([L]):=& \prod_{i =1}^\ell r_{W_i}. \end{split} \end{equation} \begin{lemma} The product (\ref{mon}) does not depend on the multiplicative ambiguities of the involved functions ${\rm V}_{{\rm b} {\bf w}}$.
\end{lemma} \begin{proof} For each black vertex ${\rm b}_i$ of $L$, the function ${\rm V}_{{\rm b}_i {\bf w}}$ appears twice in (\ref{mon}), once in a numerator and once in a denominator, and so the multiplicative constants cancel out. \end{proof} \begin{theorem} \label{Th2.3} The cohomology classes $[wt]$ and $\kappa_{\Gamma,{\bf w}}^*[\omega]$ are equal. \end{theorem} We prove Theorem \ref{Th2.3} in Section \ref{S5.2}. \begin{remark}\label{remark:factor} To evaluate the expression (\ref{RV}), as explained in Section \ref{sec:cas}, we first extend $[\alpha]$ to a basis $(x_1,x_2)$ of $\rm M$ with $[\alpha]=x_1$ and $\langle x_2,u_\rho \rangle =1$. Then $\nu(\alpha)$ is given by $\frac{1}{x_1}=C_\alpha, x_2=0$. The numerator and denominator in (\ref{RV}) vanish to the same order in $x_2$, so after factoring out and canceling the highest power of $x_2$ in the numerator and denominator, we can evaluate at ${x_1}=\frac{1}{C_\alpha}, x_2=0$ to get a well-defined number. \end{remark} \begin{example} Consider the cycle $a$ in Figure \ref{figds}, {the red horizontal path}. We write it as the concatenation of the two wedges \[ W_1=({\rm w}_1,\delta), \quad W_2=({\rm w}_1,\gamma). \] From table (\ref{zzpathtable}), we know that in the basis $x_1=z w,x_2=w$, the point $\nu(\delta)$ is given by $x_1=\frac{1}{C_\delta},x_2=0$. Using (\ref{v:eg}), and making the substitution $z=\frac{x_1}{x_2},w=x_2$, we get \begin{align*} r_{W_1}&=- \frac{-1\cdot w^{-1}\cdot V_{{\rm b}_2 {\bf w}}}{-1 \cdot z \cdot V_{{\rm b}_1 {\bf w}}}(\nu(\delta))\\ &=-\frac{1}{zw} \frac{q-w}{p^{-1}-z^{-1}}(\nu(\delta))\\ &=\frac{-(q-x_2)}{x_1 p^{-1}-x_2}\left(\frac{1}{C_\delta},0\right)\\ &=-{pq}{C_\delta}. \end{align*} Similarly, from table (\ref{zzpathtable}) we know that in the basis $x_1=\frac z w,x_2=w$, the point $\nu(\gamma)$ is given by $x_1=\frac{1}{C_\gamma},x_2=0$. Using (\ref{v:eg}), and making the substitution $z={x_1}{x_2},w=x_2$, we get \begin{align*} r_{W_2}&=- \frac{1\cdot 1\cdot V_{{\rm b}_1 {\bf w}}}{-1 \cdot w^{-1} \cdot V_{{\rm b}_2 {\bf w}}}(\nu(\gamma))\\ &=w \frac{p^{-1}-z^{-1}}{q-w}(\nu(\gamma))\\ &=x_2 \frac{p^{-1}-\frac{1}{x_1 x_2}}{q-x_2}\left(\frac{1}{C_\gamma},0\right)\\ &=-\frac{C_\gamma}{q}. \end{align*} Therefore, $[\omega]([a])={p C_\gamma}{C_\delta}$, and using (\ref{cassq}) and (\ref{pqds}), we have \begin{align*} \kappa_{\Gamma,{\bf w}}^*[\omega]([a])&=\left(\frac{1}{AX_1X_2} \right)\cdot \left(-\frac{AX_1 X_2 X_3}{B} \right)\cdot \left({-\frac{AB}{X_3}}\right)\\ &=A. \end{align*} \end{example} \section{Examples}\label{sec:example} In this section, we work out two detailed examples.
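Both examples follow the same sequence of steps, which we summarize here for the reader's convenience (the ingredients are those of Section \ref{sec2}):
\begin{enumerate}
\item compute the zig-zag paths, the Casimirs $C_\alpha$, and the points at infinity $\nu(\alpha)$;
\item compute the rational Abel map ${\bf D}$, the divisors $E_{{\rm b} {\bf w}}$, and the small Newton polygons $N_{{\rm b} {\bf w}}$;
\item assemble the linear systems $\mathbb V_{{\rm b} {\bf w}}$ from the divisor $S$ and the Casimirs, and solve them to obtain the functions ${\rm V}_{{\rm b} {\bf w}}$;
\item evaluate the wedge ratios (\ref{RV}) at the points $\nu(\alpha)$ and multiply them along loops, recovering $[wt]$ by Theorem \ref{Th2.3}.
\end{enumerate}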
\subsection{Primitive genus \texorpdfstring{$2$}{2} example} \begin{figure} \centering \begin{tikzpicture} \begin{scope}[scale=1.5,shift={(0,3)}] \draw[dashed, gray] (0,0) --(1,1.732)--(5,1.732)--(6,0)--(5,-1.732)--(1,-1.732)--(0,0); \coordinate[bvert] (b11) at (2*1+1-1,1.732*1-0.5773); \coordinate[wvert] (w11) at (2*1+1-2,1.732*1-1.732+0.5773); \coordinate[bvert] (b21) at (2*2+1-1,1.732*1-0.5773); \coordinate[wvert] (w21) at (2*2+1-2,1.732*1-1.732+0.5773); \coordinate[wvert] (w31) at (2*3+1-2,1.732*1-1.732+0.5773); \coordinate[wvert] (w'11) at (2*1+1-1,-1.732*1+0.5773); \coordinate[bvert] (b'11) at (2*1+1-2,-0.5773); \coordinate[wvert] (w'21) at (2*2+1-1,-1.732*1+0.5773); \coordinate[bvert] (b'21) at (2*2+1-2,-1.732*1+1.732-0.5773); \coordinate[bvert] (b'31) at (2*3+1-2,-1.732*1+1.732-0.5773); \coordinate[] (b01) at (1-1+0.5,1.732*1-1.732+0.866); \coordinate[] (b41) at (6-0.5,1.732*1-1.732+0.866); \coordinate[] (w'01) at (1-1+0.5,-1.732*1+1.732-0.866); \coordinate[] (w'41) at (6-0.5,-1.732*1+1.732-0.866); \draw[](b01)--(w11)--(b11)--(w21)--(b21)--(w31)--(b41); \draw[](w'01)--(b'11)--(w'11)--(b'21)--(w'21)--(b'31)--(w'41) (b'11)--(w11) (b'21)--(w21) (b'31)--(w31) (b11)--(2,1.732) (b21)--(4,1.732) (w'11)--(2,-1.732) (w'21)--(4,-1.732) ; \draw[red,thick] (b01)--(b41); \draw[red,thick] (w'01)--(w'41); \draw[black!60!green,thick] (b01)--(2,-1.732) (2,1.732)--(4,-1.732) (4,1.732)--(w'41) ; \draw[orange!70!yellow,thick] (2,1.732)--(w'01) (2,-1.732)--(4,1.732) (4,-1.732)--(b41) ; \end{scope} \begin{scope}[shift={(12,6)}] \draw[red,thick] (-1,-1)--(1,0); \draw[black!60!green,thick] (1,0)--(0,2); \draw[orange!70!yellow,thick] (0,2)--(-1,-1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,2) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \end{scope} \begin{scope}[shift={(12,2)}] \draw[red,thick,->] (0,0)--(-1,2); \draw[black!60!green,thick,->] (0,0)--(-2,-1); \draw[orange!70!yellow,thick,->] (0,0)--(3,-1); \draw[fill=black] (0,0) circle (2pt); \node[](no) at (1.5,0) {$\beta$}; \node[](no) at (-1.5,-0.3) {$\alpha$}; \node[](no) at (-0.3,1.5) {$\gamma$}; \end{scope} \end{tikzpicture} \caption{A hexagonal graph, its Newton polygon and normal fan, with zig-zag paths and rays labeled.}\label{fig:hex} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale = 1.5] \draw[dashed, gray] (0,0) --(1,1.732)--(5,1.732)--(6,0)--(5,-1.732)--(1,-1.732)--(0,0); \coordinate[bvert,label=below:${{\rm b}_1}$] (b11) at (2*1+1-1,1.732*1-0.5773); \coordinate[wvert,label=right:${\bf w}$] (w11) at (2*1+1-2,1.732*1-1.732+0.5773); \coordinate[bvert,label=below:${{\rm b}_2}$] (b21) at (2*2+1-1,1.732*1-0.5773); \coordinate[wvert,label=above:${{\rm w}_2}$] (w21) at (2*2+1-2,1.732*1-1.732+0.5773); \coordinate[wvert,label=left:${{\rm w}_3}$] (w31) at (2*3+1-2,1.732*1-1.732+0.5773); \coordinate[wvert,label=above:${{\rm w}_4}$] (w'11) at (2*1+1-1,-1.732*1+0.5773); \coordinate[bvert,label=below:${{\rm b}_3}$] (b'11) at (2*1+1-2,-0.5773); \coordinate[wvert,label=above:${{\rm w}_5}$] (w'21) at (2*2+1-1,-1.732*1+0.5773); \coordinate[bvert,label=below:${{\rm b}_4}$] (b'21) at (2*2+1-2,-1.732*1+1.732-0.5773); \coordinate[bvert,label=below:${{\rm b}_5}$] (b'31) at (2*3+1-2,-1.732*1+1.732-0.5773); \coordinate[] (b01) at (1-1+0.5,1.732*1-1.732+0.866); \coordinate[] (b41) at (6-0.5,1.732*1-1.732+0.866); \coordinate[] (w'01) at (1-1+0.5,-1.732*1+1.732-0.866); \coordinate[] (w'41) at (6-0.5,-1.732*1+1.732-0.866); 
\draw[](b01)--(w11)--(b11)--node[below]{$\frac{1}{X_2}$}(w21)--node[below]{$X_3$}(b21)--(w31)--node[above]{$\frac{1}{AB X_1 zw}$}(b41); \draw[](w'01)--(b'11)--(w'11)--(b'21)--(w'21)--(b'31)--node[below]{$Az$}(w'41) (b'11)--(w11) (b'21)--(w21) (b'31)--(w31) (b11)--node[left]{$X_1 X_4 B w$}(2,1.732) (b21)--node[left]{$B w$}(4,1.732) (w'11)--(2,-1.732) (w'21)--(4,-1.732) ; \draw[red,thick,->] (b01)--(w11); \draw[red,thick,->] (w11)--(b'11); \draw[red,thick,->](b'11)--(w'11); \draw[red,thick,->](w'11)--(b'21); \draw[red,thick,->](b'21)--(w'21); \draw[red,thick,->](w'21)--(b'31); \draw[red,thick,->](b'31)--(w'41); \draw[green,thick,->] (4,-1.732)--(w'21); \draw[green,thick,->] (w'21)--(b'31); \draw[green,thick,->] (b'31)--(w31); \draw[green,thick,->] (w31)--(b21); \draw[green,thick,->] (b21)--(4,1.732); \node[](no) at (4,-1.732-0.5){$b$}; \node[](no) at (1-1+0.5-0.5,1.732*1-1.732+0.866){$a$}; \node[blue](no) at (0,0) {$f_1$}; \node[blue](no) at (2,0) {$f_2$}; \node[blue](no) at (4,0) {$f_3$}; \node[blue](no) at (6,0) {$f_4$}; \node[blue](no) at (3,1.732) {$f_5$}; \end{tikzpicture} \caption{Labeling of the vertices and faces of $\Gamma$, and a cocycle and Kasteleyn sign, where $X_i=[wt]([\partial f_i]),A=[wt]([a]),B=[wt]([b])$, and $a$ and $b$ are the red and green cycles respectively. The edges with no weight indicated have weight $1$.}\label{fig:damhex} \end{figure} Consider the hexagonal graph $\Gamma$ with Newton polygon $N$ and {normal} fan $\Sigma$ as shown in Figure \ref{fig:hex}. We label the zig-zag paths by $\alpha,\beta,\gamma$, and denote the ray of $\Sigma$ dual to $\tau \in \{\alpha,\beta,\gamma\}$ by $\sigma_\tau$; we abbreviate the corresponding line at infinity $D_{\sigma_\tau}$ to $D_\tau$. We can take $X_i=[wt]([\partial f_i]), i=1,\dots,4$, and $A=[wt]([a]), B=[wt]([b])$ as coordinates on $H^1(\Gamma,{\mathbb C}^\times)$ (see Figure \ref{fig:damhex}). The Casimirs are \begin{align}\label{cashexx} C_\alpha= -\frac{B^2 X_1 X_2 X_4}{A}, \quad C_\beta=- \frac{X_3}{A B^3 X_1^2 X_4},\quad C_\gamma = \frac{A^2 B X_1}{X_2 X_3}. \end{align} The Kasteleyn matrix is \[ K(z,w)=\begin{blockarray}{cccccc} {{\rm b}_1}& {\rm b}_2&{\rm b}_3&{\rm b}_4&{\rm b}_5& \\ \begin{block}{(ccccc)c} 1&0&1&0&Az & {\rm w}_1\\ \frac{1}{X_2}&X_3&0&1&0 & {\rm w}_2\\ 0&1&\frac{1}{A B X_1 z w}&0&1&{\rm w}_3\\ X_1 X_4 B w &0&1&1&0&{\rm w}_4\\ 0&Bw&0&1&1&{\rm w}_5\\ \end{block} \end{blockarray} \] Let $P(z,w)=\det K(z,w)$ and $C=\overline{\{P(z,w)=0\}}$. The spectral transform is $\kappa_{\Gamma,{\bf w}}([wt])=(C,S,\nu) \in \mathcal S_N$, where $S=(p_1,q_1)+(p_2,q_2)$ with \begin{align} p_1&=-\frac{\sqrt{(-B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}-B)^2-4 B^2 {X_1} {X_2} {X_4}}+B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}+B}{2 A B {X_1}}, \nonumber\\ q_1&=\frac{-\sqrt{(-B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}-B)^2-4 B^2 {X_1} {X_2} {X_4}}+B {X_1} {X_2} {X_3} {X_4}+B {X_1} {X_2} {X_4}+B}{2 B^2 {X_1} {X_2} {X_4}}, \nonumber\\ p_2&=-\frac{-\sqrt{(-B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}-B)^2-4 B^2 {X_1} {X_2} {X_4}}+B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}+B}{2 A B {X_1}}, \nonumber\\ q_2&=\frac{\sqrt{(-B {X_1} {X_2} {X_3} {X_4}-B {X_1} {X_2} {X_4}-B)^2-4 B^2 {X_1} {X_2} {X_4}}+B {X_1} {X_2} {X_3} {X_4}+B {X_1} {X_2} {X_4}+B}{2 B^2 {X_1} {X_2} {X_4}}.
\label{eq:pqpq} \end{align} The points at infinity are given by the following table: \begin{equation} \centering \begin{tabular}{||c c c c||} \hline Zig-zag path & Homology class & Basis $x_1,x_2$ & Point at infinity\\ [0.5ex] \hline\hline $\alpha$ & $(-1,2)$ & $(-1,2),(0,-1)$ &$x_1=\frac{1}{C_\alpha},x_2=0$ \\ \hline $\beta$ & $(-1,-3)$ & $(-1,-3),(0,-1)$ &$x_1=\frac{1}{C_\beta},x_2=0$ \\ \hline $\gamma$ & $(2,1)$& $(2,1),(-1,0)$&$x_1=\frac{1}{C_\gamma},x_2=0$ \\ \hline \end{tabular} \label{zzpathtable3} \end{equation} We label the vertices of $\Gamma$ as in Figure \ref{fig:damhex}. The rational Abel map ${\bf D}$ is given by \begin{align*} {\bf D}({\bf w})&=0, \quad {\bf D}({\rm b}_1)=D_\beta+D_\gamma,\quad {\bf D}({\rm b}_2)=-D_\alpha+2D_\beta+D_\gamma,\\ {\bf D}({\rm b}_3)&=D_\alpha+ D_\beta,\quad {\bf D}({\rm b}_4)=2D_\beta,\quad {\bf D}({\rm b}_5)=-D_\alpha+3D_\beta, \end{align*} and $D_N=2D_\alpha+2 D_\beta+D_\gamma$. Since ${\bf D}({\bf w})=0$ and every black vertex is contained in every zig-zag path, we have \begin{align*} E_{{\rm b} {\bf w}}&=2 D_\alpha + 2 D_\beta + D_\gamma +{\bf D}({\rm b})-D_\alpha-D_\beta-D_\gamma \\ &={\bf D}({\rm b})+D_\alpha+D_\beta. \end{align*} Using this, we compute \begin{align*} E_{{\rm b}_1 {\bf w}}&=D_\alpha+2 D_\beta+D_\gamma,\quad E_{{\rm b}_2 {\bf w}}=3D_\beta+D_\gamma,\quad E_{{\rm b}_3 {\bf w}}=2D_\alpha+2D_\beta,\\ E_{{\rm b}_4 {\bf w}}&=D_\alpha+3D_\beta,\quad E_{{\rm b}_5 {\bf w}}=4D_\beta. \end{align*} \begin{figure} \centering \begin{tikzpicture} \begin{scope}[shift={(10,6)}] \draw[](-1/5,7/5)--(-1,-1)--(3/5,-1/5)--(-1/5,7/5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_1,{\bf w}}$}; \end{scope} \begin{scope}[shift={(14,6)}] \draw[](1/5,-2/5)--(-3/5,6/5)--(-7/5,-6/5)--(1/5,-2/5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_2,{\bf w}}$}; \end{scope} \begin{scope}[shift={(18,5)}] \draw[](0,2)--(4/5,2/5)--(-4/5,-2/5)--(0,2 ); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,2) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,2) circle (2pt); \draw[fill=black] (-1,2) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1){$N_{{\rm b}_3,{\bf w}}$}; \end{scope} \begin{scope}[shift={(12,0)}] \draw[](2/5,1/5)--(-2/5,9/5)--(-6/5,-3/5)--(2/5,1/5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,2) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,2) circle (2pt); \draw[fill=black] (-1,2) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1){$N_{{\rm b}_4,{\bf w}}$}; \end{scope} \begin{scope}[shift={(16,0)}]
\draw[](0,0)--(-4/5,8/5)--(-8/5,-4/5)--(0,0); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,2) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,2) circle (2pt); \draw[fill=black] (-1,2) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1){$N_{{\rm b}_5,{\bf w}}$}; \end{scope} \end{tikzpicture} \caption{The small polygons for the hexagonal graph.}\label{fig:hexsp} \end{figure} The small polygons are shown in Figure \ref{fig:hexsp}. Since the Newton polygon $N$ is primitive, we are in the setting of Remark \ref{rem::prim}. Every zig-zag path contains every black vertex, so the expression (\ref{eq:simplepoly}) is $0$. Therefore, there are no equations of type 2 in the linear system $\mathbb V_{{\rm b} {\bf w}}$ for any black vertex ${\rm b}$. Since $g=2$, we have two equations of type $1$ for every black vertex ${\rm b}$. Moreover, we note that each of the small polygons in Figure \ref{fig:hexsp} contains exactly three lattice points, so by Remark \ref{remind}, we get \begin{align*} {\rm V}_{{\rm b}_1 {\bf w}}&=\begin{vmatrix} 1 & w & z^{-1} w^{-1}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}, \quad {\rm V}_{{\rm b}_2 {\bf w}}=\begin{vmatrix} 1 & z^{-1} & z^{-1} w^{-1}\\ 1& p_1^{-1} & p_1^{-1} q_1^{-1}\\ 1& p_2^{-1} & p_2^{-1} q_2^{-1} \end{vmatrix},\quad {\rm V}_{{\rm b}_3 {\bf w}}=\begin{vmatrix} 1 & w & w^{2}\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix},\\ {\rm V}_{{\rm b}_4 {\bf w}}&=\begin{vmatrix} 1 & w & z^{-1}\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}, \quad {\rm V}_{{\rm b}_5 {\bf w}}=\begin{vmatrix} 1 & z^{-1}w & z^{-1} \\ 1& p_1^{-1}q_1 & p_1^{-1} \\ 1& p_2^{-1}q_2 & p_2^{-1} \end{vmatrix}. \end{align*} The boundary of the face $f_2$ is the concatenation of the three wedges \[ W_1=({\rm w}_2,\alpha), \quad W_2=({\bf w},\beta),\quad W_3=({\rm w}_4,\gamma). \] We compute \begin{align*} r_{W_1}=-\frac{V_{{\rm b}_1 {\rm w}_2}}{V_{{\rm b}_4 {\rm w}_2}}(\nu(\alpha))=-\frac{\begin{vmatrix} 1 & w & z^{-1} w^{-1}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & w & z^{-1}\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}(\nu(\alpha)). \end{align*} To evaluate at $\nu(\alpha)$, as explained in Remark \ref{remark:factor}, we extend $[\alpha]=(-1,2)$ to the basis $(x_1,x_2)$ of ${\rm M}$, where $x_1=[\alpha]=(-1,2)$ and $x_2= (0,-1)$. Then $\nu(\alpha)$ is given by $x_1=\frac{1}{C_\alpha},x_2=0$. 
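Explicitly, regarding $x_1,x_2$ as the corresponding monomials, we have $x_1=\chi^{(-1,2)}=z^{-1}w^{2}$ and $x_2=\chi^{(0,-1)}=w^{-1}$, so inverting gives
\[
w=\frac{1}{x_2}, \qquad z=\frac{w^2}{x_1}=\frac{1}{x_1 x_2^2}.
\]
(The changes of variables used for $W_2$ and $W_3$ below are obtained in the same way.)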
Expressing $z,w$ in the basis $(x_1,x_2)$ as $z=\frac{1}{x_1x_2^2} ,w=\frac{1}{x_2}$, we get \begin{align*} r_{W_1}&=-\frac{\begin{vmatrix} 1 & \frac{1}{x_2} & x_1 x_2^3\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & \frac{1}{x_2} & x_1 x_2^2\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}\left( \frac{1}{C_\alpha},0\right)= -\frac{\begin{vmatrix} x_2 & {1} & {x_1}x_2^4\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}{\begin{vmatrix} x_2 & 1 & {x_1}{x_2}^3\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}\left( \frac{1}{C_\alpha},0\right)= -\frac{\begin{vmatrix} 0 & {1} & 0\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}{\begin{vmatrix} 0 & 1 & 0\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}, \end{align*} where we factored out $x_2$ from the numerator and denominator and then evaluated at $(x_1,x_2)=( \frac{1}{C_\alpha},0)$. For $W_2$, letting $(x_1,x_2)=((-1,-3),(0,-1))$ we have $z=\frac{x_2^3}{x_1 },w=\frac{1}{x_2}$, and $\nu(\beta)$ is given by $x_1=\frac{1}{C_\beta},x_2=0$. Therefore, we get \begin{align*} r_{W_2}=-\frac{V_{{\rm b}_3 {\bf w}}}{V_{{\rm b}_1 {\bf w}}}(\nu(\beta))=-\frac{\begin{vmatrix} 1 & w & w^{2}\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}{\begin{vmatrix} 1 & w & z^{-1} w^{-1}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}(\nu(\beta))=-\frac{\begin{vmatrix} 1 & \frac{1}{x_2} & \frac{1}{x_2^2}\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}{\begin{vmatrix} 1 & \frac{1}{x_2} & \frac{x_1}{ x_2^2}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}\left(\frac{1}{C_\beta},0\right)=-\frac{\begin{vmatrix} 0 & 0& 1\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}{\begin{vmatrix} 0 & 0 & \frac{1}{C_\beta}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}. \end{align*} Finally, for $W_3$, letting $(x_1,x_2)=((2,1),(-1,0))$ we have $z=\frac{1}{x_2 },w=x_1{x_2^2}$, and $\nu(\gamma)$ is given by $x_1=\frac{1}{C_\gamma},x_2=0$. Therefore we get \begin{align*} r_{W_3}=-\frac{V_{{\rm b}_4 {\bf w}}}{V_{{\rm b}_3 {\bf w}}}(\nu(\gamma))=-\frac{\begin{vmatrix} 1 & w & z^{-1}\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & w & w^{2}\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}(\nu(\gamma))=-\frac{\begin{vmatrix} 1 & x_1x_2^2 & x_2\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & x_1x_2^2 & x_1^2 x_2^4\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}\left(\frac{1}{C_\gamma},0\right)=-\frac{\begin{vmatrix} 1 & 0 & 0\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & 0 & 0\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}. \end{align*} Putting everything together, we get \[ X_2=-\frac{\begin{vmatrix} 0 & {1} & 0\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}{\begin{vmatrix} 0 & 1 & 0\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}\frac{\begin{vmatrix} 0 & 0& 1\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}{\begin{vmatrix} 0 & 0 & \frac{1}{C_\beta}\\ 1& q_1 & p_1^{-1} q_1^{-1}\\ 1& q_2 & p_2^{-1} q_2^{-1} \end{vmatrix}}\frac{\begin{vmatrix} 1 & 0 & 0\\ 1& q_1 & p_1^{-1} \\ 1& q_2 & p_2^{-1} \end{vmatrix}}{\begin{vmatrix} 1 & 0 & 0\\ 1& q_1 & q_1^{2}\\ 1& q_2 & q_2^{2} \end{vmatrix}}, \] {with similar formulas for $X_1,X_3,X_4,A,B$. 
It may be easily verified that these invert the spectral transform by} plugging in the formulas (\ref{cashexx}) and (\ref{eq:pqpq}) into the right-hand side and simplifying using computer algebra. \subsection{Non-primitive example} \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[dashed, gray] (0,0) rectangle (8,8); \coordinate[bvert] (b1) at (2,7); \coordinate[bvert] (b2) at (2,5); \coordinate[bvert] (b3) at (5,6); \coordinate[bvert] (b4) at (7,6); \coordinate[bvert] (b5) at (1,2); \coordinate[bvert] (b6) at (3,2); \coordinate[bvert] (b7) at (6,3); \coordinate[bvert] (b8) at (6,1); \coordinate[wvert] (w1) at (1,6); \coordinate[wvert] (w2) at (3,6); \coordinate[wvert] (w3) at (6,7); \coordinate[wvert] (w4) at (6,5); \coordinate[wvert] (w5) at (2,3); \coordinate[wvert] (w6) at (2,1); \coordinate[wvert] (w7) at (5,2); \coordinate[wvert] (w8) at (7,2); \draw[] (0,2)--(b5)--(w5)--(b6)--(w7)--(b7)--(w8)--(8,2) (b5)--(w6)--(b6) (w7)--(b8)--(w8) (w6)--(2,0) (b8)--(6,0) (w5)--(b2)--(w2)--(b3)--(w4)--(b7) (w4)--(b4)--(8,6) (b4)--(w3)--(b3) (0,6)--(w1)--(b2) (w1)--(b1)--(w2) (b1)--(2,8) (w3)--(6,8) ; \draw[orange!70!yellow,thick] (0,6)to[out=-27,in=207](4,6)to[out=27,in=180-27](8,6); \draw[black!60!green,thick] (2,8)to[out=-90-27,in=117](2,4)to[out=-90+27,in=90-27](2,0); \draw[blue,thick] (2,8)to[out=-90+27,in=90-27](2,4)to[out=-90-27,in=90+27](2,0); \draw[red,thick] (0,6)to[out=27,in=180-27](4,6)to[out=-27,in=180+27](8,6); \draw[red,thick] (0,2)to[out=-27,in=207](4,2)to[out=27,in=180-27](8,2); \draw[blue,thick] (6,8)to[out=-90-27,in=117](6,4)to[out=-90+27,in=90-27](6,0); \draw[black!60!green,thick] (6,8)to[out=-90+27,in=90-27](6,4)to[out=-90-27,in=90+27](6,0); \draw[orange!70!yellow,thick] (0,2)to[out=27,in=180-27](4,2)to[out=-27,in=180+27](8,2); \node[](no) at (2.5,0.5) {$\gamma_1$}; \node[](no) at (1.5,0.5) {$\alpha_1$}; \node[](no) at (0.5,2.5) {$\beta_1$}; \node[](no) at (0.5,1.5) {$\delta_1$}; \node[](no) at (6.5,0.5) {$\alpha_2$}; \node[](no) at (5.5,0.5) {$\gamma_2$}; \node[](no) at (0.5,5.5) {$\beta_2$}; \node[](no) at (0.5,6.5) {$\delta_2$}; \end{scope} \begin{scope}[shift={(11,6)}] \draw[red,thick] (-1,-1)--(1,-1); \draw[black!60!green,thick] (-1,1)--(-1,-1); \draw[blue,thick] (1,-1)--(1,1); \draw[orange!70!yellow,thick] (1,1)--(-1,1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \end{scope} \begin{scope}[shift={(11,2)}] \draw[red,thick,->] (0,0)--(0,1); \draw[black!60!green,thick,->] (0,0)--(1,0); \draw[blue,thick,->] (0,0)--(-1,0); \draw[orange!70!yellow,thick,->] (0,0)--(0,-1); \draw[fill=black] (0,0) circle (2pt); \node[](no) at (1.5,0) {$\gamma$}; \node[](no) at (-1.5,0) {$\alpha$}; \node[](no) at (0,-1.5) {$\beta$}; \node[](no) at (0,1.5) {$\delta$}; \end{scope} \end{tikzpicture} \caption{A square-octagon graph, its Newton polygon and normal fan, with zig-zag paths and rays labeled.}\label{fig:sqoct} \end{figure} Consider the square-octagon graph $\Gamma$ with Newton polygon $N$ and normal fan $\Sigma$ as shown in Figure \ref{fig:sqoct}. We label the rays of $\Sigma$ by $\alpha,\beta,\gamma,\delta$ and the two zig-zag paths dual to ray $\tau$ by $\{\tau_1,\tau_2\}$, for $\tau \in \{\alpha,\beta,\gamma,\delta\}$.
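(As an aside on the computer-algebra verifications invoked for these examples: the individual steps amount to routine manipulations of determinant ratios. A minimal \texttt{sympy} sketch of the first such step for the hexagonal example above, namely computing the first determinant ratio in the formula for $X_2$, might look as follows; this is an illustrative sketch, not part of the original computation. The remaining ratios, their product, and the substitution of (\ref{cashexx}) and (\ref{eq:pqpq}) proceed analogously.)
\begin{verbatim}
# Sketch (not part of the original computation): first determinant
# ratio in the formula for X_2, as a rational function of p_i, q_i.
import sympy as sp

p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2')

# Numerator and denominator determinants, already evaluated at
# (x_1, x_2) = (1/C_alpha, 0) as in the text.
num = sp.Matrix([[0, 1, 0],
                 [1, q1, 1/(p1*q1)],
                 [1, q2, 1/(p2*q2)]]).det()
den = sp.Matrix([[0, 1, 0],
                 [1, q1, 1/p1],
                 [1, q2, 1/p2]]).det()

r_W1 = sp.simplify(-num/den)
print(r_W1)
\end{verbatim}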
We can take $X_i=[wt]([\partial f_i]), i=1,\dots,7$, and $A=[wt]([a]), B=[wt]([b])$ as coordinates on $H^1(\Gamma,{\mathbb C})$ (see Figure \ref{fig:damsqoct}). The Casimirs are \begin{align*} C_{\alpha_1}&=X_1 X_3 X_7 B, \quad C_{\alpha_2}=\frac{B {X_2} {X_3} {X_4} {X_6} {X_7}}{{X_1} {X_5}}, \quad C_{\beta_1}= \frac{X_2}{A X_1 X_5},\quad C_{\beta_2}= \frac{1}{A X_7},\\ C_{\gamma_1}&= \frac{X_5}{B X_1 X_3},\quad C_{\gamma_2}= \frac{X_6}{B},\quad C_{\delta_1}=\frac{A X_1}{X_2 X_6},\quad C_{\delta_2}= \frac{A X_1 X_5}{X_2 X_3 X_4 X_6 X_7}. \end{align*} Since the Newton polygon $N$ has only one interior lattice point, the divisor $S=(p,q)$ consists of a single point. The Kasteleyn matrix is \[ K(z,w)=\begin{pmatrix} 1 & 1 & 0 & A z & 0 & 0 & 0 & 0 \\ 1 & -{X_7} & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & \frac{1}{B w} \\ 0 & 0 & \frac{{X_1} {X_5}}{{X_2} {X_3} {X_4} {X_6} {X_7}} & -1 & 0 & 0 & 1 & 0 \\ 0 & \frac{1}{{X_3}} & 0 & 0 & 1 & \frac{1}{{X_5}} & 0 & 0 \\ B w {X_1} & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{{X_1}}{{X_2}} & {X_6} & 1 \\ 0 & 0 & 0 & 0 & \frac{1}{A z} & 0 & 1 & -1 \\ \end{pmatrix} .\] Let $P(z,w)=\det K(z,w)$ and $C=\overline{\{P(z,w)=0\}}$. The spectral transform is $\kappa_{\Gamma,{\bf w}}=(C,S,\nu) \in \mathcal S_N$, where \begin{align*} p&=-\frac{{X_2} {X_4} {X_6} \left({X_3} {X_5} {X_6} {X_7} \left({X_1}^2 ({X_4}+1)+{X_2} {X_4}\right)+{X_1} {X_2} {X_3}^2 {X_4} {X_6}^2 {X_7}^2+{X_1} {X_5}^2\right)}{A ({X_1} {X_5}+{X_2} {X_3} {X_4} {X_6} {X_7}) }\\ &~~~~~\times \frac{1}{\left({X_3} {X_4} {X_6} {X_7} \left({X_1}^2 {X_5}+{X_2} ({X_5}+1) ({X_6}+1)\right)+{X_1} {X_5} ({X_6} ({X_4}+{X_5}+1)+{X_5}+1)\right)},\\ q&=\frac{{X_5} \left(-{X_3} {X_4} {X_6} {X_7} \left({X_1}^2+{X_2} {X_6}+{X_2}\right)-{X_1} {X_5} ({X_6}+1)\right)}{B {X_1} {X_3} {X_7} ({X_1} {X_5} ({X_4} {X_6}+{X_6}+1)+{X_2} {X_3} {X_4} {X_6} ({X_6}+1) {X_7})}. 
\end{align*} The table below lists the points at infinity for each of the zig-zag paths: \begin{equation} \centering \begin{tabular}{||c c c c||} \hline Zig-zag path & Homology class & Basis $x_1,x_2$& Point at infinity \\ [0.5ex] \hline\hline $\alpha_1$ & $(0,1)$ & $(0,1),(-1,0)$&$x_1=\frac{1}{C_{\alpha_1}},x_2=0$ \\ $\alpha_2$ & $(0,1)$ &$(0,1),(-1,0)$& $x_1=\frac{1}{C_{\alpha_2}},x_2=0$ \\ \hline $\beta_1$ & $(-1,0)$ & $ (-1,0),(0,-1)$&$x_1=\frac{1}{C_{\beta_1}},x_2=0$ \\ $\beta_2$ & $(-1,0)$ & $ (-1,0),(0,-1)$&$x_1=\frac{1}{C_{\beta_2}},x_2=0$ \\ \hline $\gamma_1$ & $(0,-1)$& $ (0,-1),(1,0)$&$x_1=\frac{1}{C_{\gamma_1}},x_2=0$ \\ $\gamma_2$ & $(0,-1)$& $ (0,-1),(1,0)$&$x_1=\frac{1}{C_{\gamma_2}},x_2=0$ \\ \hline $\delta_1$ & $(1,0)$ & $ (1,0),(0,1)$&$x_1=\frac{1}{C_{\delta_1}},x_2=0$ \\ $\delta_2$ & $(1,0)$ & $ (1,0),(0,1)$&$x_1=\frac{1}{C_{\delta_2}},x_2=0$ \\ \hline \end{tabular} \label{zzpathtable2} \end{equation} \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[dashed, gray] (0,0) rectangle (8,8); \coordinate[bvert,label=below:${{\rm b}_1}$] (b1) at (2,7); \coordinate[bvert,label=above:${{\rm b}_2}$] (b2) at (2,5); \coordinate[bvert,label=right:${{\rm b}_3}$] (b3) at (5,6); \coordinate[bvert,label=left:${{\rm b}_4}$] (b4) at (7,6); \coordinate[bvert,label=right:${{\rm b}_5}$] (b5) at (1,2); \coordinate[bvert,label=left:${{\rm b}_6}$] (b6) at (3,2); \coordinate[bvert,label=below:${{\rm b}_7}$] (b7) at (6,3); \coordinate[bvert,label=above:${{\rm b}_8}$] (b8) at (6,1); \coordinate[wvert,label=below:${\bf w}$] (w1) at (1,6); \coordinate[wvert,label=above:${{\rm w}_2}$] (w2) at (3,6); \coordinate[wvert] (w3) at (6,7); \coordinate[wvert] (w4) at (6,5); \coordinate[wvert] (w5) at (2,3); \coordinate[wvert] (w6) at (2,1); \coordinate[wvert] (w7) at (5,2); \coordinate[wvert] (w8) at (7,2); \draw[] (0,2)--(b5)--(w5)--node[right]{$\frac{1 }{X_5}$}(b6)--node[below]{$\frac{X_1 }{X_2}$}(w7)--node[left]{$X_6$}(b7)--(w8)--node[above]{$\frac{1}{Az}$}(8,2) (b5)--(w6)--node[right]{$-1$}(b6) (w7)--(b8)--node[right]{$-1$}(w8) (w6)--(2,0) (b8)--(6,0) (w5)--node[left]{$\frac{1}{X_3}$}(b2)--node[right]{$-X_7$}(w2)--(b3)--node[left]{$U$}(w4)--(b7) (w4)--node[right]{$-1$}(b4)--node[above]{$Az$}(8,6) (b4)--(w3)--(b3) (0,6)--(w1)--(b2) (w1)--(b1)--(w2) (b1)--node[left]{$X_1 B w$}(2,8) (w3)--node[left]{$\frac{1}{Bw}$}(6,8) ; \draw[red,thick,->] (0,6)--(w1); \draw[red,thick,->] (w1)--(b1); \draw[red,thick,->] (b1)--(w2); \draw[red,thick,->] (w2)--(b3); \draw[red,thick,->] (b3)--(w3); \draw[red,thick,->] (w3)--(b4); \draw[red,thick,->] (b4)--(8,6); \draw[green,thick,->] (6,0)--(b8); \draw[green,thick,->] (b8)--(w8); \draw[green,thick,->] (w8)--(b7); \draw[green,thick,->] (b7)--(w4); \draw[green,thick,->] (w4)--(b4); \draw[green,thick,->] (b4)--(w3); \draw[green,thick,->] (w3)--(6,8); \node[](no) at (6,-0.5){$b$}; \node[](no) at (-0.5,6){$a$}; \node[blue](no) at (0,0) {$f_1$}; \node[blue](no) at (4,0) {$f_2$}; \node[blue](no) at (0,4) {$f_3$}; \node[blue](no) at (4,4) {$f_4$}; \node[blue](no) at (2,2) {$f_5$}; \node[blue](no) at (6,2) {$f_6$}; \node[blue](no) at (2,6) {$f_7$}; \node[blue](no) at (6,6) {$f_8$}; \end{scope} \end{tikzpicture} \caption{Labeling of the vertices and faces of $\Gamma$, and a cocycle and Kasteleyn sign, where $X_i=[wt]([\partial f_i]),A=[wt]([a]),B=[wt](b)$ and $U=\frac{X_1 X_5}{X_2X_3X_4X_6 X_7}$. 
The edges with no weight indicated have weight $1$.}\label{fig:damsqoct} \end{figure} \begin{figure} \centering \begin{tikzpicture} \begin{scope}[shift={(10,6)}] \draw[](0.5,-1)--(0.5,1)--(-1,1)--(-1,-1)--(0.5,-1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_1,{\bf w}}$}; \end{scope} \begin{scope}[shift={(14,6)}] \draw[](0.5,-1)--(0.5,1)--(-1,1)--(-1,-1)--(0.5,-1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_2,{\bf w}}$}; \end{scope} \begin{scope}[shift={(18,6)}] \draw[](0.5-0.5,-1)--(0.5-0.5,1)--(-1-0.5,1)--(-1-0.5,-1)--(0.5-0.5,-1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_3,{\bf w}}$}; \end{scope} \begin{scope}[shift={(22,6)}] \draw[](0.5-0.5,-1)--(0.5-0.5,1)--(-1-0.5,1)--(-1-0.5,-1)--(0.5-0.5,-1); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_4,{\bf w}}$}; \end{scope} \begin{scope}[shift={(10 ,2)}] \draw[](.5,-0.5)--(.5,1.5)--(-1,1.5)--(-1,-0.5)--(.5,-0.5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_5,{\bf w}}$}; \end{scope} \begin{scope}[shift={(14 ,2)}] \draw[](.5,-0.5)--(.5,1.5)--(-1,1.5)--(-1,-0.5)--(.5,-0.5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_6,{\bf w}}$}; \end{scope} \begin{scope}[shift={(18 ,2)}] \draw[](0,-0.5)--(0,1.5)--(-1.5,1.5)--(-1.5,-0.5)--(0,-0.5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); 
\draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_7,{\bf w}}$}; \end{scope} \begin{scope}[shift={(22 ,2)}] \draw[](0,-0.5)--(0,1.5)--(-1.5,1.5)--(-1.5,-0.5)--(0,-0.5); \draw[fill=black] (0,0) circle (4pt); \draw[fill=black] (0,1) circle (2pt); \draw[fill=black] (1,0) circle (2pt); \draw[fill=black] (0,-1) circle (2pt); \draw[fill=black] (-1,0) circle (2pt); \draw[fill=black] (1,1) circle (2pt); \draw[fill=black] (1,-1) circle (2pt); \draw[fill=black] (-1,-1) circle (2pt); \draw[fill=black] (-1,1) circle (2pt); \node[](no) at (0,-1.5){$N_{{\rm b}_8,{\bf w}}$}; \end{scope} \end{tikzpicture} \caption{The small polygons for the square-octagon graph.}\label{fig:sqoctsp} \end{figure} We label the vertices of $\Gamma$ as in Figure \ref{fig:damsqoct}. The discrete Abel map ${\bf D}$ is: \begin{align*} {\bf D}({\bf w})&=0, \quad {\bf D}({\rm b}_1)=\frac 1 2 D_\gamma+\frac 1 2 D_\delta,\quad {\bf D}({\rm b}_2)=\frac 1 2 D_\beta+\frac 1 2 D_\gamma,\\ {\bf D}({\rm b}_3)&=\frac 1 2(- D_\alpha+D_\beta+D_\gamma+D_\delta),\quad {\bf D}({\rm b}_4)=-D_\alpha+\frac 1 2 D_\beta+D_\gamma+\frac 1 2 D_\delta \\ {\bf D}({\rm b}_5)&=D_\beta,\quad {\bf D}({\rm b}_6)=-\frac 1 2 D_\alpha+ D_\beta+\frac 1 2 D_\gamma \\ {\bf D}({\rm b}_7)&=-\frac 1 2 D_\alpha+\frac 1 2 D_\beta+D_\gamma,\quad {\bf D}({\rm b}_8)=-\frac 1 2 D_\alpha+ D_\beta+D_\gamma-\frac 1 2 D_\delta. \end{align*} We have $D_N=D_\alpha+D_\beta+D_\gamma+D_\delta$, using which we compute \begin{align*} E_{{\rm b}_1 {\bf w}}&=\frac 1 2 D_\alpha+ D_\beta+D_\gamma+D_\delta, \quad E_{{\rm b}_2 {\bf w}}=\frac 1 2 D_\alpha+ D_\beta+D_\gamma+D_\delta,\\ E_{{\rm b}_3 {\bf w}}&=D_\beta+\frac 3 2 D_\gamma+ D_\delta, \quad E_{{\rm b}_4 {\bf w}}=D_\beta+\frac 3 2 D_\gamma+ D_\delta,\\ E_{{\rm b}_5 {\bf w}}&=\frac 1 2 D_\alpha+\frac 3 2 D_\beta+ D_\gamma+\frac 1 2 D_\delta, \quad E_{{\rm b}_6 {\bf w}}=\frac 1 2 D_\alpha+\frac 3 2 D_\beta+ D_\gamma+\frac 1 2 D_\delta,\\ E_{{\rm b}_7 {\bf w}}&=\frac 3 2 D_\beta+ \frac 3 2 D_\gamma+\frac 1 2 D_\delta,\quad E_{{\rm b}_8 {\bf w}}=\frac 3 2 D_\beta+ \frac 3 2 D_\gamma+\frac 1 2 D_\delta. \end{align*} The corresponding small polygons are shown in Figure \ref{fig:sqoctsp}. Therefore, we have \[ {\rm V}_{{\rm b}_1 {\bf w}}= a_{(-1,-1)}z^{-1}w^{-1}+a_{(0,-1)} w^{-1}+a_{(1,-1)}zw^{-1}+a_{(-1,0)}z^{-1}+a_{(0,0)}+a_{(1,0)} z, \] where the $a_{m}$ satisfy the system of equations $\mathbb V_{{\rm b}_1 {\bf w}}$ that we now determine. We have the equation of type 1: \[ a_{(-1,-1)}p^{-1}q^{-1}+a_{(0,-1)} q^{-1}+a_{(1,-1)}pq^{-1}+a_{(-1,0)}p^{-1}+a_{(0,0)}+a_{(1,0)} p =0. \] By (\ref{contpointsatinf}), we have an equation of type 2 for every zig-zag path in \[ \nu(\alpha_1)+\nu(\alpha_2)+\nu(\gamma_2)+\nu(\delta_1). \] Therefore we have $5$ equations and $6$ variables, so we are in the setting of Remark \ref{remind} where ${\rm V}_{{\rm b}_1 {\bf w}}=\det\mathbb {V}^\chi_{{\rm b}_1 {\bf w}}$. Computing the equations of type 2, we get \[ {\rm V}_{{\rm b}_1 {\bf w}}=\begin{vmatrix} z^{-1}w^{-1}&w^{-1}&zw^{-1}&z^{-1}&1&z\\ p^{-1}q^{-1}&q^{-1}&pq^{-1}&p^{-1}&1&p\\ 1&C_{\beta_1}&0&0&0&0\\ 1&C_{\beta_2}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1 \end{vmatrix}. \] In like fashion, we compute \[ {\rm V}_{{\rm b}_2 {\bf w}}=\begin{vmatrix} z^{-1}w&w&z^{-1}&1&z^{-1}w^{-1}&w^{-1}\\ p^{-1}q&q&p^{-1}&1&p^{-1}q^{-1}&q^{-1}\\ 1&C_{\beta_1}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1\\ 0&0&0&0&C_{\delta_2}&1 \end{vmatrix}. 
\] We write the boundary of the face $f_7$ as the concatenation of the two wedges \[ {W_1}=({\bf w},\gamma_1),\quad {W_2}=({\rm b}_2,\alpha_1). \] We have \begin{align*} r_{W_1}&=\frac{{\rm V}_{{\rm b}_2 {\bf w}}}{{\rm V}_{{\rm b}_1 {\bf w}}}(\nu(\gamma_1))=\frac{\begin{vmatrix} C_{\gamma_1}&0&1&0&C_{\gamma_1}^{-1}&0\\ p^{-1}q&q&p^{-1}&1&p^{-1}q^{-1}&q^{-1}\\ 1&C_{\beta_1}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1\\ 0&0&0&0&C_{\delta_2}&1 \end{vmatrix}}{\begin{vmatrix} C_{\gamma_1}&0&1&0&C_{\gamma_1}^{-1}&0\\ p^{-1}q^{-1}&q^{-1}&pq^{-1}&p^{-1}&1&p\\ 1&C_{\beta_1}&0&0&0&0\\ 1&C_{\beta_2}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1 \end{vmatrix}}, \end{align*} where to evaluate at $\nu(\gamma_1)$, we use the basis $x_1,x_2$ from table (\ref{zzpathtable2}). Similarly, we compute \begin{align*} r_{W_2}&=-\frac{{\rm V}_{{\rm b}_1 {\bf w}}}{{\rm V}_{{\rm b}_2 {\bf w}}}(\nu(\alpha_1))=-\frac{\begin{vmatrix} 0&C_{\alpha_1}^{-1}&0&1&0&C_{\alpha_1}\\ p^{-1}q^{-1}&q^{-1}&pq^{-1}&p^{-1}&1&p\\ 1&C_{\beta_1}&0&0&0&0\\ 1&C_{\beta_2}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1 \end{vmatrix}}{\begin{vmatrix} 0&C_{\alpha_1}^{-1}&0&1&0&C_{\alpha_1}\\ p^{-1}q&q&p^{-1}&1&p^{-1}q^{-1}&q^{-1}\\ 1&C_{\beta_1}&0&0&0&0\\ C_{\gamma_2}&0&1&0&C_{\gamma_2}^{-1}&0\\ 0&0&0&0&C_{\delta_1}&1\\ 0&0&0&0&C_{\delta_2}&1 \end{vmatrix}}. \end{align*} It can be verified using computer algebra that $X_7=r_{W_1} r_{W_2}$. \section{The small polygons}\label{smallpolysection} In the remaining sections, we prove the results stated in Section \ref{sec2}. In order to invert the spectral transform, we want to first reconstruct the $Q_{{{\rm b}} {\bf w}}$, the entries in the adjugate matrix, from the spectral data. To do this, we need to first find the Newton polygon of the $Q_{{{\rm b}} {\bf w}}$, which we call the small polygons and denote by $N_{{{\rm b}} {\bf w}}$. Explicitly, $N_{{{\rm b}} {\bf w}}$ is the convex hull of homology classes of dimer covers of $\Gamma-\{{{\rm b},{\bf w}}\}$. However, it appears difficult to describe $N_{{{\rm b}} {\bf w}}$ in a direct combinatorial way. Instead, we will re-express the problem in terms of toric geometry. The key to doing this is an extension of the Kasteleyn matrix, which is a map of trivial sheaves on $\rm T$, to a map of locally free sheaves on a compactification of $\rm T$. We are led to consider a stacky toric surface $\mathscr X_N$ instead of the toric surface $X_N$, because such an extension does not exist on $X_N$ unless the polygon has only primitive sides. The basics of stacky toric surfaces are recalled in detail in the appendix, Section \ref{A}. For the convenience of the reader we reproduce some notation. Let $\Sigma$ be the normal fan of $N$. There is a \textit{stacky fan} $\boldsymbol \Sigma=(\Sigma,\beta)$ where \begin{align*} \beta:\mathbb Z^{\Sigma(1)} &\rightarrow {\rm M}^\vee,\\ \delta_\rho &\mapsto |E_\rho| u_\rho, \end{align*} where $u_\rho$ is the primitive normal to $E_\rho$. We identify the set of rays $\Sigma(1)$ of the fan $\Sigma$ with the components $D_\rho$ of the divisor at infinity $$ \rho \leftrightarrow \tau_{\rho}={\mathbb R}_{\geq 0}u_\rho. $$ We assign to $\boldsymbol{\Sigma}$ a smooth {\it toric DM stack $\mathscr X_{N}$}, which contains the torus ${\rm T}$ as a dense open subset. Toric stacks appeared in the context of the spectral transform in \cite{TWZ18}. 
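For instance, in the square-octagon example above every side of $N$ has lattice length $|E_\rho|=2$ and carries two zig-zag paths, so $\beta(\delta_\rho)=2u_\rho$ for every ray $\rho$, whereas in the primitive hexagonal example $\beta(\delta_\rho)=u_\rho$.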
We consider the stack rather than the toric surface since we construct an extension of the Kasteleyn operator to a compactification of the torus ${\rm T}$ in Lemma \ref{main::lem}. There is no such extension on the toric surface when the Newton polygon is not simple, but there is one on the stack. Define the map \begin{align*} \beta^*:{\rm M} &\rightarrow {\mathbb Z}^{\Sigma(1)}\\ m & \mapsto (|E_\rho| \langle m, u_\rho \rangle)_\rho. \end{align*} The Picard group of $\mathscr X_N$ is generated by the divisors $D_\rho$. \begin{lemma}[Borisov and Hua, 2009 \cite{BH09}*{Proposition 3.3}] There is an isomorphism \begin{align*} {\mathbb Z}^{\Sigma(1)}/ \beta^* {\rm M} &\cong {\rm{Pic}}~\mathscr X_N, \\ (b_\rho)_\rho &\mapsto \mathcal O_{\mathscr X_N} \Bigl( \sum_{\rho \in \Sigma(1)} \frac{b_\rho}{|E_\rho|} D_\rho \Bigr). \end{align*} \end{lemma} Let $D=\sum_\rho \frac{b_\rho}{|E_\rho|} D_\rho$ be a divisor at infinity on $\mathscr X_N$. Associated to $D$ is a polygon $P_D$ in $\rm M_{\mathbb R}$, see (\ref{equivalence}). Global sections of toric line bundles are identified with integral points in the associated polygons: \begin{proposition}[Borisov and Hua \cite{BH09}*{Proposition 4.1}] We have \[ H^0(\mathscr X_N, \mathcal O_{\mathscr X_N}(D))\cong \bigoplus_{m \in P_D \cap {\rm M}} {\mathbb C} \cdot \chi^m.\] \end{proposition} \subsection{Extension of the Kasteleyn operator} Define for each black vertex ${\rm b}$ the line bundle \[ \mathcal E_{{\rm b}}:=\mathcal O_{\mathscr X_N}\Bigl({{\bf D}}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho} \Bigr), \] and for each white vertex ${\rm w}$, the line bundle \[ \mathcal F_{{\rm w}}:=\mathcal O_{\mathscr X_N}({{\bf D}}({\rm w})). \] Let $$ \mathcal E:=\bigoplus_{{\rm b} \in B} \mathcal E_{\rm b}, \quad \quad \mathcal F:=\bigoplus_{{\rm w} \in W} \mathcal F_{\rm w}. $$ They are locally free sheaves of the same rank $\# B=\#W$ on $\mathscr X_N$. \begin{proposition}\label{main::lem} The Kasteleyn operator $K$ extends to a map of locally free sheaves on $\mathscr X_N$: \begin{equation} \label{MK} \widetilde{K}: \mathcal E \rightarrow \mathcal F. \end{equation} \end{proposition} \begin{proof} By definition, $$ K(z,w)_{{\rm w} {\rm b}}=\sum_{e \in E(\Gamma)\text{ incident to } {\rm b},{\rm w}} wt(e) \kappa(e)\phi(e). $$ We need to show that for any edge $e$ with vertices ${\rm b},{\rm w}$, the character $\phi(e)$ is a global section of \[ \mathcal H om_{\mathscr X_N}(\mathcal E_{\rm b},\mathcal F_{\rm w}) \cong \mathcal O_{\mathscr X_N}\Bigl({{\bf D}}({\rm w})-{{\bf D}}({\rm b})+\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho} \Bigr), \] which by (\ref{divisorpolygonbijection}) and Proposition \ref{pro:globalsec} is equivalent to showing that for every edge $e={\rm b} {\rm w}$, we have \[ \text{div }\phi(e)+{{\bf D}}({\rm w})-{{\bf D}}({\rm b})+\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho} \geq 0. \] Let $\alpha, \beta$ be the zig-zag paths through $e$, with $\alpha \in Z_\sigma, \beta \in Z_\rho$. Then by Lemma \ref{lem:DAM}, we have \[ {{\bf D}}({\rm w})-{\bf D}({\rm b})=-\frac{1}{|E_{{\sigma}}|}D_{{\sigma}}-\frac{1}{|E_{{\rho}}|}D_{{\rho}}-\text{div } \phi(e). 
\] This implies \begin{equation} \label{eq:tau} \text{div } \phi(e)+{\bf D}({\rm w}) - {\bf D}({\rm b})+\sum_{\tau \in \Sigma(1)}\sum_{\gamma \in Z_\tau:{\rm b} \in \gamma } \frac{1}{|E_{\tau}|}D_{\tau} = \sum_{\tau \in \Sigma(1)} \sum_{\substack{\gamma \in Z_\tau : {\rm b} \in \gamma \\ \gamma \neq \alpha,\beta}}^{}\frac{1}{|E_{\tau}|}D_{\tau} \geq 0. \end{equation} \end{proof} We can now take exterior powers of $\widetilde K$ to find all the polygons. Taking the determinant of the map (\ref{MK}), we see that $ \mathrm{det} \ \widetilde{K}$ is a global section of the line bundle \begin{equation} \label{SH1} \mathcal H om_{\mathscr X_N}\Bigl(\bigwedge_{{\rm b} \in B} \mathcal E_{\rm b},\bigwedge_{{\rm w} \in W} \mathcal F_{\rm w} \Bigr) \cong \mathcal O_{\mathscr X_N}\Bigl( \sum_{{\rm w} \in W } {\bf D}({\rm w})-\sum_{{\rm b} \in B }\Bigl({\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho:b \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho} \Bigr)\Bigr). \end{equation} \begin{lemma}\label{detpoly} Let $D_N$ be the divisor associated to $N$ by the correspondence (\ref{divisorpolygonbijection}) between divisors and polygons. Then one has \begin{equation}\label{tt} \sum_{{\rm w} \in W } {\bf D}({\rm w})-\sum_{{\rm b} \in B }\Bigl({\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho:b \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho} \Bigr)=D_N. \end{equation} Therefore we have \[ \mathrm{det }\widetilde{K} \in H^0(\mathscr X_N, \mathcal O_{\mathscr X_N}(D_N)). \] \end{lemma} \begin{proof} Let $a_\rho$ be the coefficient of $D_\rho$ in $D_N$. Let $(i_1,i_2)$ be a vertex of $N$ on the edge $E_\rho$, and let ${\mathrm{m}}$ be the associated extremal dimer cover. We pair up black and white vertices in the sum according to ${\mathrm{m}}$: $$ \sum_{e={\rm b} {\rm w} \in {\mathrm{m}}}\Bigl({\bf D}({\rm w})-{\bf D}({\rm b})+\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha} \frac{1}{|E_{\rho}|}D_{\rho}\Bigr). $$ Now we observe that if $e$ is not contained in any zig-zag path in $Z_\rho$, then $D_\rho$ does not appear in the summand, and if $e$ is contained in a zig-zag path associated to $E_\rho$, then $D_\rho$ appears twice but with opposite signs, modulo contributions from intersections of edges with $\gamma_z,\gamma_w$. Therefore, there is no net contribution to the coefficient of $D_\rho$ except for the intersections of edges in ${\mathrm{m}}$ with $\gamma_z,\gamma_w$, which is the same as in $$ -\sum_{e \in {\mathrm{m}}}\text{div } \phi(e)=-\text{div }z^{i_1} w^{i_2}, $$ which is $a_\rho$. Comparing with (\ref{SH1}), we see that (\ref{tt}) implies the second statement. \end{proof} As a consequence, we see that $\text{det} \widetilde K$ cuts out the compactification of $C^\circ$ in $\mathscr X_N$. Now we consider the codimension 1 exterior power, where we remove $\{{ {\rm b}},{\bf w}\}$. Let $\widetilde {Q}$ be the adjugate matrix of $\widetilde K$. Set \[ E_{{\rm b} {\rm w}}:=D_N-{\bf D}({\rm w})+{\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho : {\rm b} \in \alpha } \frac{1}{|E_{\rho}|}D_{\rho}. \] \begin{corollary} \label{Cor3.5} $\widetilde { Q}_{{\rm b} {\rm w}} \in H^0(\mathscr X_N, \mathcal O_{\mathscr X_N}(E_{{\rm b} {\rm w}}))$. \end{corollary} We therefore arrive at the definition of the small polygon $N_{{\rm b} {\rm w}}$ given in Definition \ref{SNP1}. \subsection{Points at infinity} In this section, we prove that the points at infinity of $C$ are as described in Section \ref{sec:cas}.
From (\ref{eq:tau}) and Proposition \ref{pro:globalsec}, we see that $\phi(e)$ is the $G$-invariant section of $\mathcal Hom_{\mathscr X_N}(\mathcal E_{\rm b},\mathcal F_{\rm w})$ given by \[ \phi(e) = \prod_{\tau \in \Sigma(1)} \prod_{\substack{\gamma \in Z_\tau : {\rm b} \in \gamma \\ \gamma \neq \alpha,\beta}}^{}z_\tau, \] and therefore for $e={\rm b} {\rm w}$, $\phi(e)$ vanishes on $D_\rho$ precisely when there is a zig-zag path $\alpha \in Z_\rho$ such that ${\rm b}$ is contained in $\alpha$ but $e$ is not contained in $\alpha$. Therefore on $D_\rho$, the extended Kasteleyn operator $\widetilde K$ takes a block-upper-triangular form $$ \begin{pmatrix} \restr{\widetilde K}{\alpha_1} & & & &*\\ & \restr{\widetilde K}{\alpha_2} & & &*\\ & & \ddots & & \vdots \\ & & &\restr{\widetilde K}{\alpha_n} & \ast \\ && & &\restr{\widetilde K}{\Gamma-\alpha_1-\dots -\alpha_n} \end{pmatrix}, $$ where $Z_\rho=\{\alpha_1,\dots,\alpha_n\}$, $\restr{\widetilde K}{\alpha_i}$ is the restriction of $\widetilde K$ to the black and white vertices in $\alpha_i$, and the $*$'s denote some possibly nonzero blocks. If $\alpha \in Z_\rho$ is ${\rm b}_1 \rightarrow {\rm w}_1 \rightarrow {\rm b}_2 \rightarrow \cdots \rightarrow {\rm w}_d \rightarrow {\rm b}_1$, the determinant of the block $\restr{\widetilde K}{\alpha}$ is \begin{align*} \restr{\text{det }\widetilde K}{\alpha}&= \det\begin{pmatrix} K_{{\rm b}_1 {\rm w}_1} &&&& K_{{\rm b}_1 {\rm w}_d}\\ K_{{\rm b}_2 {\rm w}_1} & K_{{\rm b}_2 {\rm w}_2} & & \\ & K_{{\rm b}_3 {\rm w}_2} \\ &&\ddots&\ddots \\ & & & K_{{\rm b}_d {\rm w}_{d-1}} & K_{{\rm b}_d {\rm w}_d} \end{pmatrix}\\ &= \prod_{i=1}^d K_{{\rm b}_i {\rm w}_i} - (-1)^{d} \prod_{i=1}^d K_{{\rm b}_i {\rm w}_{i-1}}\\ &=-\prod_{i=1}^d\left(wt({\rm b}_i {\rm w}_{i-1})\kappa({\rm b}_i {\rm w}_{i-1})\phi({\rm b}_i {\rm w}_i) \right) \left(\chi^{-[\alpha]}-C_\alpha \right), \end{align*} where we have used the definition of the Kasteleyn matrix, see (\ref{edgeph}), (\ref{Kastdet}). Therefore, the matrix $\restr{\widetilde K}{\alpha}$ is singular when \begin{equation} \label{caspoint} \chi^{-[\alpha]}= C_\alpha. \end{equation} These are the points at infinity of $C$. \section{Behaviour of the Laurent polynomial \texorpdfstring{${ Q}_{{\rm b} {\rm w}}(z,w)$}{Qbw} at infinity}\label{extensionsection} We proved in Corollary \ref{Cor3.5} that the Laurent polynomial ${ Q}_{{\rm b} {\rm w}}(z,w)$ lies in the finite dimensional vector space $H^0(\mathscr X_N, \mathcal O_{\mathscr X_N}(E_{{\rm b} {\rm w}}))$. We need some additional constraints on ${Q}_{{\rm b} {\rm w}}(z,w)$ to determine it. Corollary \ref{cordeg} provides $g$ linear equations for the coefficients of ${ Q}_{{\rm b} {\rm w}}(z,w)$ coming from the vanishing of ${Q}_{{\rm b} {\rm w}}(z,w)$ at the $g$ points of the divisor $S_{\rm w}$. We obtain additional equations from the behaviour of ${ Q}_{{\rm b} {\rm w}}(z,w)$ at the points at infinity of the spectral curve, which we study in this section. Recall that $X_N$ is the toric surface associated to $N$ compactifying $T$. The restriction of the Kasteleyn operator to the open spectral curve $C^\circ$ is a map of trivial sheaves: \[ \restr{K}{C^\circ}: \bigoplus_{ {\rm b} \in B} {\mathcal O}_{C^\circ} \longrightarrow \bigoplus_{ w \in W} \mathcal O_{C^\circ}. 
\] Let us extend it to a morphism $\overline K$ of locally free sheaves on $C$, providing an exact sequence $$ 0 \rightarrow \mathcal M \rightarrow \bigoplus_{ {\rm b} \in B} \mathcal O_C \Bigl({\bf d}( {\rm b})-\sum_{ \alpha \in Z:{\rm b} \in \alpha} \nu(\alpha)\Bigr) \xrightarrow[]{\overline K} \bigoplus_{ {\rm w} \in W} \mathcal O_C({\bf d}({\rm w})) \rightarrow \mathcal L \rightarrow 0, $$ where $\mathcal M$ and $\mathcal L$ are the kernel and cokernel of the map $\overline K$ respectively. For generic dimer weights, $C$ is smooth, and $\mathcal M$ and $\mathcal L$ are line bundles. Let $\overline s_{ {\rm b}}$ and $\overline s_{ {\rm w}}$ be sections of $$\mathcal M^\vee \otimes \mathcal O_C\Bigl({\bf d}( {\rm b})-\sum_{\alpha \in Z: {\rm b} \in \alpha} \nu(\alpha)\Bigr)$$ and $\mathcal L \otimes \mathcal O_C({\bf d} ({\rm w}))^\vee$ respectively, given by the ${\rm b}$-entry of the kernel map and the ${\rm w}$-entry of the cokernel map respectively. Denote by $S_{\rm b}$ and $S_{\rm w}$ the degree-$g$ effective divisors on the open spectral curve $C^\circ$ given by the vanishing of the ${\rm b}$-row and ${\rm w}$-column of $Q$ respectively. \begin{lemma}\label{divbw} \begin{align*} \rm{div}_{C}\overline s_{\rm b} &= S_ {\rm b}+\sum_{\alpha \in Z: {\rm b} \notin \alpha} \nu(\alpha),\\ \rm{div}_{C}\overline s_{\rm w} &= S_{\rm w}. \end{align*} \end{lemma} \begin{proof} By definition, $\text{div}_{C}\overline s_ {\rm b}|_C=S_{\rm b}$ and $\text{div}_{C}\overline s_ {\rm w}|_C=S_{\rm w}$, so it only remains to find their orders of vanishing at infinity. Let $u$ be a local parameter near $\nu(\alpha)$ that vanishes to order $1$ at $\nu(\alpha)$ and nowhere else. Let us order the black and white vertices so that the vertices on $\alpha$ come first. Then near $\nu(\alpha)$, we have $$ \overline{K}= \begin{pmatrix} K_1 & B \\ u A & K_2 \end{pmatrix}+O(u), $$ where $K_1, K_2$ are the restrictions of $\overline{K}$ to $\alpha$ and $\Gamma - \alpha$ respectively. Since corank $\overline{K}=1$, we have corank $K_1=1$ and $K_2$ is invertible. Let $v \in \text{Ker }K_1$. Then $$ \text{Ker }\overline K= (v,-u K_2^{-1} A v)+O(u), $$ so $\overline s_{\rm b}$ has a simple zero at $\nu(\alpha)$ for all ${\rm b} \not\in \alpha$ and has no zeros or poles for ${\rm b} \in \alpha$. Similarly, let $v' \in \text{Ker }K_1^*$. We have $$ \text{Ker }\overline K^*= (v',-(K_2^*)^{-1} B v')+O(u). $$ For generic dimer weights, none of the entries of $(K_2^*)^{-1} B v'$ can vanish, so $\overline s_{\rm w}$ has no zeros or poles at $\nu(\alpha)$. \end{proof} Let $\overline Q$ denote the adjugate matrix of $\overline K$, with entries $\overline Q_{{\rm b}{\rm w}}$. \begin{corollary}\label{divQbw} $\text{\rm div}_{C} Q_{{\rm b}{\rm w}}=S_{\rm b}+S_{\rm w}-\restr{D_N}{C}+{\bf d}({\rm w})-{\bf d}({\rm b})+\sum_{\alpha}\nu(\alpha).$ \end{corollary} \begin{proof} Since $\overline {Q}$ has rank $1$, we have $\overline{Q}_{{\rm b} {\rm w}}=\overline s_{\rm b} \overline s_{\rm w}$, so that $$ \text{div}_{C}\overline Q_{{\rm b} {\rm w}}=S_{\rm b}+S_{\rm w}+\sum_{\alpha \in Z: {\rm b} \notin \alpha} \nu(\alpha). $$ Therefore \begin{align*} \text{div}_{C}Q_{{\rm b}{\rm w}}&=\text{div}_{C} \overline{Q}_{{\rm b} {\rm w}}-\restr{D_N}{C}+{\bf d}({\rm w})-{\bf d}({\rm b})+\sum_{\alpha \in Z:{\rm b} \in \alpha}\nu(\alpha)\\ &=S_{\rm b}+S_{\rm w}-\restr{D_N}{C}+{\bf d}({\rm w})-{\bf d}({\rm b})+\sum_{\alpha \in Z}\nu(\alpha).
\end{align*} \end{proof} \begin{corollary}\label{cordeg} We have, for all ${\rm b} \in B$ and ${\rm w} \in W$, $\rm{deg }~S_{\rm b}=\rm{deg }~S_{\rm w}=g$, where $g$ is the genus of $C$. \end{corollary} \begin{proof} We have $K_{\mathscr X_N} =-\sum_{\rho \in \Sigma(1)} D_\rho$. By the adjunction formula, we get $K_{C} = \restr{D_N}{C} -\sum_{\alpha \in Z} \nu(\alpha)$. Since $Q_{{\rm b}{\rm w}}$ is a rational function on $C$, we have $\text{deg div}_{C} Q_{{\rm b}{\rm w}} =0$. Since $\text{deg } ({\bf d}({\rm w})-{\bf d}({\rm b}))=-2$ and $\text{deg }K_{C}=2g-2$, we get $\text{deg }(S_{\rm b}+S_{\rm w})=2g$. By symmetry under interchanging $B$ and $W$, we get $\text{deg }S_{\rm b}=g$. \end{proof} The number $g$ is the number of interior lattice points in $N$ for generic $C \in |D_N|$. \begin{proposition}\label{degl} The line bundle $\mathcal L $ is isomorphic to $\mathcal O_{C}(S_{\rm w}+{\bf d}({\rm w}))$ for any ${\rm w} \in W$. It has degree $g-1$. \end{proposition} \begin{proof} By Lemma \ref{divbw}, $\overline s_{\rm w}$ is a section of $\mathcal L \otimes \mathcal O_C({\bf d}({\rm w} ))^\vee$ with divisor $S_{\rm w}$. Therefore, we must have \[ \mathcal L \otimes \mathcal O_C({\bf d}({\rm w} ))^\vee \cong \mathcal O_C\left(S_{\rm w}\right), \] which implies $\mathcal L \cong \mathcal O_{C}(S_{\rm w}+{\bf d}({\rm w}))$. Since ${\rm{deg }}~S_{\rm w}=g$ and ${\rm{deg}}~{\bf d}({\rm w})=-1$, we get ${\rm{deg}}~\mathcal L=g-1$. \end{proof} \section{Equations for the Laurent polynomial \texorpdfstring{$Q_{{{\rm b}}{\bf w}}$}{Qbw}}\label{laurentsection} Since $Q_{{{\rm b}}{\bf w}}$ has Newton polygon $N_{{\rm b} {\bf w}}$, we have \[ Q_{{{\rm b}}{\bf w}}=\sum_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}a_m \chi^m, \] for some $a_m \in {\mathbb C}$. We know that $Q_{{{\rm b}}{\bf w}}$ vanishes on $S_{\bf w}$, which gives $g$ linear equations among the $(a_m)_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}$. However, these $g$ linear equations are usually not sufficient to determine $Q_{{{\rm b}}{\bf w}}$, so we need to find some additional equations. These additional equations will come from the vanishing of $Q_{{{\rm b}}{\bf w}}$ at the points at infinity. The fact that the Newton polygon of $Q_{{{\rm b}}{\bf w}}$ is the small polygon $N_{{{\rm b}}{\bf w}}$ imposes certain inequalities on the order of vanishing of $Q_{{{\rm b}}{\bf w}}$ at points at infinity of $C$. Corollary \ref{divQbw} imposes additional constraints that are linear equations in the coefficients of $Q_{{{\rm b}}{\bf w}}$. Solving this linear system gives $(a_m)_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}$ and therefore $Q_{{{\rm b}}{\bf w}}$. We now give the precise statement. For a ${\mathbb Q}$-divisor $D = \sum_{\rho \in \Sigma(1)} b_\rho D_\rho$, we define a (${\mathbb Z}$-) divisor $[{D}]:=\sum_{\rho \in \Sigma(1)} [{b_\rho}] D_\rho$, where $[x]$ is the largest integer $n$ such that $n \leq x$. It is the pushforward of $D$ by the canonical projection $\mathscr X_N \rightarrow X_N$. \begin{proposition}\label{lemextraeq} The extra linear equations for $(a_m)_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}$ from the vanishing of $Q_{{{\rm b}}{\bf w}}$ at points at infinity correspond to the points in \begin{equation} \label{extraeq} -\restr{D_N}{C}+{\bf d}({\bf w})-{\bf d}({\rm b})+\sum_{\alpha \in Z} \nu(\alpha) + \restr{[{E_{{{\rm b} }{\bf w}}}]}{C}.
\end{equation} \end{proposition} \begin{proof} A generic Laurent polynomial $F$ of the form $\sum_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}a_m \chi^m$ has order of vanishing $$ \text{div}_{C} \restr{F}{C} \geq -\restr{[E_{{{\rm b} }{\bf w}}]}{C} $$ at the points at infinity of $C$. From Corollary \ref{divQbw}, we have that $\text{div}_{C} Q_{{\rm b} {\bf w}}=S_{\rm b}+S_{\bf w}-\restr{D_N}{C}+{\bf d}({\bf w})-{\bf d}({\rm b})+\sum_{\alpha \in Z} \nu(\alpha).$ The discrepancy provides the extra equations. \end{proof} Now we describe these extra linear equations explicitly. Suppose $\alpha \in Z_\rho$ is a zig-zag path that contributes a linear equation. We extend $[\alpha]$ to a basis $(x_1,x_2)$ of $\rm M$ with $x_1=[\alpha]$, such that $\langle x_2,u_\rho \rangle =1$, so that for any $m \in {\rm M}$, we can write \[ \chi^m = x_1^{b_m} x_2^{c_m}, ~~~~b_m,c_m \in {\mathbb Z}. \] Let $N_{{\rm b} {\bf w}}^\rho$ be the set of lattice points in $N_{{\rm b} {\bf w}}$ closest to the edge $E_{\rho}$ of $N$, i.e., the set of points in $N_{{\rm b} {\bf w}}$ that minimize the functional $\langle *,u_\rho \rangle$. \begin{proposition}\label{caseqn} Suppose $Q_{{{\rm b}}{\bf w}}=\sum_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}a_m \chi^m$. The linear equation given by $\alpha$ is: \[ \sum_{m \in N^\rho_{{\rm b} {\bf w} } \cap {\rm M}}a_m C_\alpha^{-b_m}=0. \] \end{proposition} \begin{proof} The affine open variety in $X_N$ corresponding to the cone $\rho$ is \[ U_\rho=\text{Spec }{\mathbb C}[x_1^{\pm 1 },x_2] \cong {\mathbb C}^\times \times {\mathbb C}, \] and $D_\rho \cap U_\rho$ is defined by $x_2=0$. Generically the curve $C$ meets $D_\rho$ transversely at $\nu(\alpha)$, and therefore we may take $x_2$ as a uniformizer of the local ring $\mathcal O_{C, \nu(\alpha)}$ at $\nu(\alpha)$. For each $m \in N^\rho_{{\rm b} {\bf w} } \cap {\rm M}$, we have $$ \chi^m = x_1^{b_m} x_2^p, ~~~~b_m,p \in {\mathbb Z}, $$ where $p$ is the same for all of them, and is the coefficient of $\nu(\alpha)$ in $-\restr{[{E_{{\rm b} {\bf w}}}]}{C}$. Then using $x_1^{-1}=C_\alpha$ at $\nu(\alpha)$, we have \begin{equation} \label{qbweqn} Q_{{{\rm b}}{\bf w}}= \sum_{m \in N^\rho_{{\rm b} {\bf w} } \cap {\rm M}} a_m C_\alpha^{-b_m}x_2^p+O(x_2^{p+1}). \end{equation} Since $\alpha$ contributes a linear equation, the coefficient of $x_2^p$ in (\ref{qbweqn}) must vanish. \end{proof} \subsection{The system of linear equations \texorpdfstring{$\mathbb V_{{{\rm b}}{\bf w}}$}{Vbw}} We now define the system of linear equations $\mathbb V_{{{\rm b}}{\bf w}}$ from Section \ref{sec2}. These are linear equations in the variables $(a_m)_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}}$ of the following two types: \begin{enumerate} \item For each $1 \leq i \leq g$, we have the linear equations \begin{equation} \label{eq1} \sum_{m \in N_{{\rm b} {\bf w} } \cap {\rm M}} a_m \chi^m(p_i,q_i) =0. \end{equation} \item For every zig-zag path $\alpha \in Z_\rho$ in (\ref{extraeq}), we have the equation \begin{equation} \label{eq2} \sum_{m \in N^\rho_{{\rm b} {\bf w} } \cap {\rm M}}a_m C_\alpha^{-b_m}=0. \end{equation} \end{enumerate} The matrix $\mathbb V_{{{\rm b}}{\bf w}}$ is defined such that these equations are given by $$\mathbb V_{{{\rm b}}{\bf w}}(a_m)=0.$$ It is not necessarily a square matrix. However, we have: \begin{proposition} \label{prop:uniq} For generic spectral data, $Q_{{{\rm b}}{\bf w}}$ is the unique solution of the linear system of equations $\mathbb V_{{{\rm b}}{\bf w}}$ modulo scaling.
\end{proposition} \begin{remark} \begin{enumerate} \item Here and elsewhere, we identify a Laurent polynomial $F = \sum_{m \in {\rm M}} b_m \chi^m$ with its vector of coefficients $(b_m)_{m \in {\rm M}}$. \item While $\mathbb V_{{{\rm b}}{\bf w}}$ is defined for all ${\rm w} \in W$, Proposition \ref{prop:uniq} only holds when ${\rm w} = {\bf w}$. \item For generic spectral data, the equations (\ref{eq1}) are linearly independent, but the equations (\ref{eq2}) may not be. \end{enumerate} \end{remark} The rest of this section is devoted to the proof of Proposition \ref{prop:uniq}. Consider the following exact sequence on $X_N$, obtained by tensoring the exact sequence of the closed embedding $i:C \hookrightarrow X_N$ with $\mathcal O_{X_N}([E_{{{\rm b}}{\bf w}}])$: $$ 0 \rightarrow \mathcal O_{X_N}([E_{{{\rm b}}{\bf w}}]-D_N) \rightarrow \mathcal O_{X_N}([E_{{{\rm b}}{\bf w}}]) \rightarrow i_* \mathcal O_C(\restr{[E_{{{\rm b}}{\bf w}}]}{C}) \rightarrow 0. $$ The following is a portion of the long exact sequence of cohomology. \begin{equation} \label{coh} 0 \rightarrow H^0(X_N,[E_{{{\rm b}}{\bf w}}]-D_N) \rightarrow H^0(X_N,[E_{{{\rm b}}{\bf w}}]) \rightarrow H^0(C,\restr{[E_{{{\rm b}}{\bf w}}]}{C}). \end{equation} We need the following technical lemma. \begin{lemma}\label{cohlem} The restriction map $H^0(X_N,[E_{{{\rm b}}{\bf w}}]) \rightarrow H^0(C,\restr{[E_{{{\rm b}}{\bf w}}]}{C})$ is injective. \end{lemma} \begin{proof} If $\chi^m \in H^0( X_N,[E_{{{\rm b}}{\bf w}}]-D_N)$, then $\text{div}~\chi^m +[E_{{{\rm b}}{\bf w}}]-D_N \geq 0$. This implies that \begin{align}\label{inter} \text{div}~\chi^m +E_{{{\rm b}}{\bf w}}-D_N=\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho} \langle m,u_\rho \rangle \frac{D_{\rho}}{|E_{\rho}|}- {\bf D}({\bf w})+{\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho:{{\rm b}} \in \alpha }\frac{1}{|E_\rho|}D_\rho \geq 0. \end{align} Let $\gamma$ be a cycle in ${\mathbb T}$ with homology class $m$. The total number of signed intersections of $\gamma$ with all zig-zag paths is zero. Let ${\rm w}'$ be any white vertex adjacent to ${\rm b}$. Then we have $$ -{\bf D}({\bf w})+{\bf D}({\rm b})-\sum_{\rho \in \Sigma(1)}\sum_{\alpha \in Z_\rho:{{\rm b}} \in \alpha }\frac{1}{|E_\rho|}D_\rho = ({\bf D}({\rm w}')-{\bf D}({\bf w}))-\sum_{\rho \in \Sigma(1)}\sum_{\substack{\alpha \in Z_\rho: { {\rm b}} \in \alpha\\ {\rm b} {\rm w}' \notin \alpha }}\frac{1}{|E_\rho|}D_\rho. $$ ${\bf D}({\rm w}')-{\bf D}({\bf w})$ records the signed number of intersections with zig-zag paths of any path in $R$ from ${\bf w}$ to ${\rm w}'$, the total number of which is also $0$. Since the last term $-\sum_{\rho \in \Sigma(1)}\sum_{\substack{\alpha \in Z_\rho: { {\rm b}} \in \alpha\\ {\rm b} {\rm w}' \notin \alpha }}\frac{1}{|E_\rho|}D_\rho$ is strictly negative, the sum of the coefficients in (\ref{inter}) is negative, so (\ref{inter}) cannot be non-negative. Therefore $H^0(X_N,[E_{{{\rm b}}{\bf w}}]-D_N)=0$, which by (\ref{coh}) means that the map $H^0(X_N,[E_{{{\rm b}}{\bf w}}]) \rightarrow H^0(C,\restr{[E_{{{\rm b}}{\bf w}}]}{C})$ is injective. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:uniq}] \begin{enumerate} \item Existence: By Theorem 7.3 of \cite{GK12}, the map $\kappa_{\Gamma,{\bf w}}$ is dominant, so generic spectral data lies in the image of $\kappa_{\Gamma,{\bf w}}$.
For such spectral data, $Q_{{{\rm b}}{\bf w}}$ satisfies: \begin{enumerate} \item The system of equations (\ref{eq1}) because, by definition of the spectral transform, $Q_{{{\rm b}}{\bf w}}$ vanishes at the points of the divisor $S=\sum_{i=1}^g (p_i,q_i)$. \item The equations (\ref{eq2}) by Proposition \ref{caseqn}. \end{enumerate} \item Uniqueness: Suppose $V_{{{\rm b}}{\bf w}}$ is a solution of $\mathbb V_{{{\rm b}}{\bf w}}$. Therefore we have $$ \text{div}_{C}V_{{{\rm b}}{\bf w}} \geq S+E, $$ where $E=-\restr{D_N}{C}+{\bf d}({\bf w})-{\bf d}({\rm b})+\sum_{\alpha \in Z} \nu(\alpha)$ satisfies $\text{deg}~E = -2g$. Therefore, $\restr{V_{{{\rm b}}{\bf w}}}{C}$ can be identified with a section of $\mathcal O_{C}(-E)$ vanishing at the points of $S$. Riemann-Roch gives $$ h^0(C,\mathcal O_{C}(-E))-h^1(C,\mathcal O_{C}(-E))=\text{deg }(-E)-g+1=g+1. $$ By Serre duality, $h^1(C,\mathcal O_{C}(-E))=h^0(C, \omega_{C}(E))$, which equals $0$ since $\omega_{C}(E)$ has negative degree $-2$. For generic $S$ that avoids the base locus of $\mathcal O_{C}(-E)$, the requirement that the section of $\mathcal O_{C}(-E)$ corresponding to $V_{{{\rm b}}{\bf w}}$ vanishes at each of the $g$ points of $S$ imposes $g$ independent conditions, and therefore determines $\restr{V_{{{\rm b}}{\bf w}}}{C}$ uniquely up to multiplication by a nonzero complex number. By Lemma \ref{cohlem}, $V_{{{\rm b}}{\bf w}}$ is unique up to multiplication by a nonzero complex number. \end{enumerate} \end{proof} \begin{remark} It is easy to see using Riemann-Roch that the number of equations in $\mathbb V_{{{\rm b}}{\bf w}}$ is equal to $h^0(C,\restr{[E_{{{\rm b}}{\bf w}}]}{C})-1$. On the other hand, the number of variables is $h^0(X_N,[E_{{{\rm b}}{\bf w}}])$. However, the map in Lemma \ref{cohlem} is not necessarily an isomorphism (there may be sections on the curve that are not restrictions of sections on the surface), so we only have the inequality \[ \#~\text{equations in}~\mathbb V_{{{\rm b}}{\bf w}} \geq \#~\text{variables}-1. \] \end{remark} \subsection{Face weights from \texorpdfstring{$Q_{{{\rm b}}{\bf w}}$}{Qbw}} \label{S5.2} \begin{proof}[Proof of Theorem \ref{Th2.3}] Let ${\rm b} \xrightarrow[]{e}{\rm w} \xrightarrow[]{e'} {\rm b}'$ be a wedge with zig-zag path $\alpha \in Z_\rho$. The restriction of the characteristic polynomial $\restr{P(z,w)}{D_\rho}$ is the partition function of those dimer covers whose homology class in $N$ lies on $E_\rho$. From the explicit construction of external dimers in \cite{GK12} (that is, dimer covers whose homology classes are in $\partial N$), we have that each dimer cover with homology class in $E_\rho$ uses exactly one of the edges $e$ or $e'$. Since $Q_{{\rm b} {\rm w}}(z,w)$ is the partition function of dimer covers with the vertices ${\rm b},{\rm w}$ removed, we have \[ \restr{P}{D_\rho}=wt (e) \kappa{(e)} \phi(e) \restr{Q_{{\rm b} {\rm w}}}{D_\rho}+wt(e') \kappa{(e')} \phi(e') \restr{Q_{{\rm b}' {\rm w}}}{D_\rho}. \] Since $\nu(\alpha)$ is on the spectral curve, $P(\nu(\alpha))=0$, from which we get \begin{equation} \label{eq:altprod} \frac{wt(e) }{wt(e') } =-\frac{\kappa(e' )\phi(e')Q_{{\rm b}' {\rm w}}}{\kappa(e)\phi(e) Q_{{\rm b} {\rm w}}}(\nu(\alpha)). \end{equation} We have $\text{corank}(K)=1$ at smooth points of $C$. Note that $\restr{KQ}{C}=0$. Therefore for generic $wt$, since $C$ is smooth, $Q$ is a rank $1$ matrix given by $$ Q=\text{ker }K^* \otimes \text{coker }K.
$$ This implies that $$ \frac{Q_{{\rm b} {\rm w}}}{Q_{{\rm b}' {\rm w}}}(\nu(\alpha))=\frac{Q_{{{\rm b}}{\bf w}}}{Q_{{{\rm b}}'{\bf w}}}(\nu(\alpha)). $$ \end{proof}
\section{Introduction} Over the last decade, optical metasurfaces, representing nm-thin planar arrays of resonant subwavelength elements, have been extensively investigated, demonstrating diverse and multiple functionalities that make use of the available complete control over the transmitted and reflected fields \cite{Yu2011,Hsiao2017,Chen2016,Ding2017}. This progress led to the realization of numerous flat optical components in concert with the current trend of miniaturization in photonics. Large flexibility in the design of optical metasurfaces enabled numerous demonstrations of various functionalities, including beam-steering \cite{Pors2013_BS,Li2015_BS,DamgaardCarstensen2020_BS}, optical holograms \cite{Chen2013_OH,Wen2015_OH,Huang2015_OH}, and planar lenses \cite{Ding2019_PL,Yi2017_PL,Boroviks2017_PL}. Most of the developed optical metasurfaces are, however, static, featuring well-defined optical responses determined by the configuration of material and geometrical parameters that are chosen by design and set in the process of fabrication. Realization of dynamic metasurfaces faces formidable challenges associated with the circumstance that metasurfaces are fundamentally very thin, i.e., of subwavelength thickness, thereby severely limiting the available interaction length. Efficient tunability can be achieved through material property (phase) transitions or structural reconfigurations that result in very large refractive index changes, but these effects are inherently slow \cite{Che2020,vandeGroep2020,Park2020,Shirmanesh2020}. The speed limitations jeopardize the application prospects in emerging technologies, such as light detection and ranging (LIDAR) and computational imaging and sensing \cite{Schwarz2010,Jung2018}. The electro-optic Pockels effect enables fast electrically controlled modulation of material properties in several active media, e.g., lithium niobate (LN), electro-optic polymers, or aluminum nitride \cite{Thomaschewski2020,Zhang2018,Smolyaninov2019}. LN in particular offers an attractive platform, owing to its large electro-optic coefficients ($r_{33} = \SI{31.45}{\pico\meter/\volt}$), superb chemical and mechanical stability resulting in long-term reliability, and wide optical transparency range (0.35 - \SI{4.5}{\micro\meter}) \cite{Weis1985}. However, the aforementioned limitations in the available interaction length make it problematic to exploit comparatively weak electro-optic material effects, resulting in rather weak tunability and modulation efficiency \cite{Zhang2018,Gao2021}. In this work, we introduce an approach to realize electrically tunable optical metasurfaces by utilizing the electro-optic effect in a thin LN layer sandwiched between a continuous thick bottom gold film and a nanostructured top gold film, which serve as electrodes. Our approach is based on electrically tuning the light reflectivity near a high-fidelity Fabry-Perot resonance. This concept is implemented in dynamic (electrically controlled) Fresnel lens focusing. By conducting detailed numerical simulations and experiments for a 300-nm-thick LN layer, we demonstrate that the active Fresnel lens (AFL) exhibits tunable focusing and modulation in reflection at near-infrared wavelengths. The fabricated AFL is found to exhibit focusing of 800-\SI{900}{\nano\meter} radiation at a distance of \SI{40}{\micro\meter} with a focusing efficiency of \SI{15}{\percent} and a modulation depth of \SI{1.5}{\percent} (for the driving voltage of \SI{\pm10}{\volt}) within the bandwidth of $\sim\!\SI{4}{\mega\hertz}$.
We believe that the introduced electro-optic metasurface concept is useful for designing dynamic flat optics components. \section{Results and Discussion} \begin{figure}[!tb] \centering \includegraphics{Fig1_SKETCHES.pdf} \caption{Schematics of the designed active Fresnel lens (AFL). (a) Three-dimensional rendering of the zone plate showing focusing of the incident light under an applied voltage. (b) Cross-section sketch displaying a semi-transparent gold ring deposited on a lithium niobate thin film, adhered to a gold back-reflector by a thin chromium adhesive layer.} \label{fig1:Schematic} \end{figure} \begin{figure*}[!tb] \centering \includegraphics{Fig2_SIMULATIONS.pdf} \caption{Calculated performance of the AFL. (a) Calculated focusing efficiency (left axis) as a function of wavelength without an applied modulation voltage, and variation in focusing efficiency (right axis) when applying a DC modulation voltage of \SI{\pm10}{\volt}. (b) Calculated modulation of the focusing efficiency as a function of wavelength, when applying a DC modulation voltage of \SI{\pm10}{\volt}. (c) Calculated \textit{x}-component of the scattered field above an AFL with a designed focal length of \SI{40}{\micro\meter} at an incident wavelength of \SI{815}{\nano\meter} for a DC modulation voltage of \SI{-10}{\volt}. } \label{fig2:Simulations} \end{figure*} Figure \ref{fig1:Schematic} shows schematics of the proposed structure consisting of semi-transparent gold rings deposited on a continuous z-cut LN thin film of thickness $t_{LN} \simeq \SI{300}{\nano\meter}$, adhered to an optically thick (\SI{300}{\nano\meter}) gold back-reflector by a \SI{10}{\nano\meter} chromium adhesive layer. The areas covered by semi-transparent gold constitute Fabry-Perot resonators, whose resonances determine the operation wavelength of the device. The two-dimensional (2D) Fresnel lenses allow polarization-independent focusing, due to their radial symmetry. For simplicity, the polarization is set to be along the \textit{x}-direction, denoted TM polarization. In the design of a Fresnel lens, the relation between wavelength, focal length, and lens dimension is given by $r_m = \sqrt{m\lambda f + \frac{1}{4}m^2 \lambda^2}$, where $\lambda$ is the wavelength of light to be focused, $f$ is the focal length, $m$ is an integer indexing the zones, and $r_m$ is the radius of the $m^{th}$ zone. The focal length is a key parameter in the design of a zone plate, and to realize a tight focal spot, a focal length of \SI{40}{\micro\meter} is selected in combination with a wavelength range of 800-\SI{900}{\nano\meter} and a total of $m_{Tot}=19$ zones. This results in a total zone plate radius of $r_{19} \simeq \SI{26}{\micro\meter}$ and a minimum zone width of $\Delta r_{19} \simeq \SI{0.75}{\micro\meter}$. The concentric gold rings and the gold back-reflector can serve as integrated metal electrodes for electro-optic tuning of the Fresnel lens (Figure \ref{fig1:Schematic}). The concentric rings of the Fresnel lens are electrically connected by a \SI{2}{\micro\meter} wide wire (Figure \ref{fig1:Schematic}a). Applying a voltage across the gold rings and the bottom back-reflector electrode generates an electric field in the sandwiched LN thin film, which induces a change of the refractive index due to the Pockels effect. This shifts the resonance position of the Fabry-Perot resonators, thus giving rise to electrical tunability of the light reflectivity.
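As a quick arithmetic sanity check of these zone-plate numbers (a sketch, not part of the original text; it evaluates the zone-radius formula above at the design wavelength $\lambda_0 = \SI{815}{\nano\meter}$ chosen below):
\begin{verbatim}
# Evaluate r_m = sqrt(m*lam*f + m^2*lam^2/4) for the quoted design
# values; this reproduces r_19 ~ 26 um and Delta r_19 ~ 0.75 um.
import math

lam = 0.815  # um, design wavelength (chosen below in the text)
f = 40.0     # um, focal length

def r(m):
    return math.sqrt(m * lam * f + 0.25 * m**2 * lam**2)

print(f"r_19  = {r(19):.1f} um")          # ~26.1 um
print(f"dr_19 = {r(19) - r(18):.2f} um")  # ~0.75 um
\end{verbatim}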
With the Fabry-Perot optical mode propagating along the $z$-direction, the optical electric field component effectively influencing the Fabry-Perot resonance is in the $x$-direction, thus the relevant electro-optic Pockels coefficient is $r_{13} = \SI{10.12}{\pico\meter/\volt}$ \cite{Jazbinek2002}. The induced change in refractive index is given by $|\Delta n| \simeq \frac{r}{2} n_0^3 \frac{V}{d}$, where $r$ is the relevant Pockels coefficient, $n_0$ is the refractive index, $V$ is the applied voltage, and $d$ is the distance across which the voltage is applied \cite{Pedrotti}. To realize an effective AFL, the modulation and reflective properties of the Fabry-Perot resonators are investigated to determine the optimal thickness of the top concentric gold rings, leading to the choice of a thickness of $t_g=\SI{15}{\nano\meter}$ (see Supporting Information, Section 1). \begin{figure*}[!tb] \centering \includegraphics{Fig3_2DLENS.pdf} \caption{Experimental characterization of the focusing effect of the AFL. (a) Scanning electron microscopy image of the fabricated AFL. (b,c) Verification of the (b) electrical and (c) spectral tunability of the focal spot intensity as a function of DC modulation voltage for a wavelength of \SI{865}{\nano\meter} and of wavelength for a DC modulation voltage of \SI{-10}{\volt}, respectively. The shaded error region represents the linearly interpolated standard deviation of the mean deduced from repeated measurements. (d,e) Optical images in planes A and B (Supporting Information, Figure S4) when incident light of wavelength \SI{865}{\nano\meter} illuminates (d) flat unstructured gold and (e) the fabricated AFL, respectively. } \label{fig3:2DLens} \end{figure*} An important characteristic of a focusing element is the focusing efficiency, describing the amount of incident light that is directed to the designed focal spot. Another equally important characteristic when discussing active optical components is the modulation efficiency (calculated as $1-(|I_{min}(\lambda)|/|I_{max}(\lambda)|)$, where $|I_{min}(\lambda)|$ and $|I_{max}(\lambda)|$ are the minimum and maximum achievable intensity at the focal spot for a given wavelength, respectively \cite{Yao2014}), namely the ability to modulate the device performance by applying an external voltage. In this work, we optimize the design to achieve the highest possible modulation efficiency. Given the previously mentioned design parameters, the only parameter left to optimize is the design wavelength, which for optimal modulation is determined by calculating the focusing and modulation efficiency as a function of design wavelength, meaning that the zone plate design is adjusted at each iteration of the wavelength (Supporting Information, Figure S3). Simulations show that the focusing efficiency increases significantly from $\sim\! 3$ to $\sim\! \SI{20}{\percent}$ in the investigated wavelength range (see Supporting Information, Section 2). The design wavelength is chosen to be at the point of maximum modulation efficiency, thus $\lambda_0=\SI{815}{\nano\meter}$. Similar simulations are performed for varying incident wavelength but with a constant lens design (Figure \ref{fig2:Simulations}). The performance is equivalent in the vicinity of the design wavelength, and the most significant difference occurs in the focusing efficiency at longer wavelengths, where the performance loss due to the mismatch between incident and design wavelengths outweighs the otherwise increasing focusing efficiency.
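To attach a number to the index change given by the formula above, consider the following minimal sketch; note that the refractive index $n_0 \approx 2.25$ for LN in the near-infrared is an assumed value for this illustration, not one quoted in the text:
\begin{verbatim}
# Minimal estimate of the Pockels-induced index change |dn| = (r/2)*n0^3*V/d.
# ASSUMPTION: n0 ~ 2.25 for LN in the near-infrared (not quoted in the text).
r13 = 10.12e-12   # relevant Pockels coefficient [m/V]
n0 = 2.25         # assumed refractive index of LN
V = 10.0          # applied voltage [V]
d = 300e-9        # LN film thickness [m]

dn = 0.5 * r13 * n0**3 * V / d
print(f"|dn| ~ {dn:.1e}")  # ~1.9e-3 at 10 V across 300 nm
\end{verbatim}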
The largest voltage-induced differences in focusing efficiency are observed where the focusing-efficiency spectrum has its steepest slope. The shape of the modulation-efficiency curve resembles that of the difference in focusing efficiencies, only slightly blue-shifted, because the modulation efficiency is calculated from the difference relative to the unmodulated signal. Figure \ref{fig2:Simulations}c shows a scattered field simulation of the investigated AFL at the design wavelength, illustrating the focusing ability. \begin{figure*}[!tb] \centering \includegraphics{Fig4_2DMODULATION.pdf} \caption{Experimental characterization of the modulation performance of the AFL. (a) Intensity in the focal spot, measured with the photodetector (PD), as a function of time for a wavelength of \SI{865}{\nano\meter}, while the modulation voltage is cycled between \SI{-5}{\volt} and \SI{5}{\volt}, indicated by grey and white backgrounds, respectively. (b) Measured modulation efficiency of the intensity in the focal spot as a function of wavelength for a modulation voltage of \SI{\pm10}{\volt} at a frequency of \SI{3}{\kilo\hertz}. The shaded error region represents the linearly interpolated estimated standard deviation of the mean. (c) Measured modulation efficiency as a function of modulation voltage for a wavelength of \SI{865}{\nano\meter} at a frequency of \SI{3}{\kilo\hertz}. Indicated voltages represent amplitudes of the applied signal. The shaded error region represents the linearly interpolated estimated standard deviation of the mean. (d) Measured frequency response as a function of applied RF signal frequency, normalized to the lowest applied frequency, at a wavelength of \SI{865}{\nano\meter}. The dashed line marks \SI{-3}{\decibel}, and the blue line represents the response of a first-order low-pass filter with a cutoff frequency of \SI{2.3}{\mega\hertz}, which is calculated as the cutoff of the macroscopic electrodes. Error bars are on the order of the data point sizes. } \label{fig4:2DModulation} \end{figure*} After thorough numerical investigations of the AFL performance, we move on to experimental characterization. An AFL with the chosen design parameters was fabricated using the standard technological procedure based on electron-beam lithography (see Methods). A scanning electron microscopy image of the AFL is shown in Figure \ref{fig3:2DLens}a, showing regular concentric circles without significant fabrication defects. Due to the expected short focal length of the AFL and the relatively low focusing efficiency, it proved difficult to characterize the focusing effect using a conventional imaging setup with near-parallel illumination \cite{Ding2019_PL}, because the focal spot is difficult to distinguish from the interference pattern between the incident light and the light normally reflected from the sample. However, it is possible to verify the focusing effect and determine the focal length by shifting the sample away from the objective, from the plane that yields a tight focal spot under illumination of flat unstructured gold (plane A in Supporting Information Figure S4) to the plane where the light reflected from the AFL is tightly focused (plane B in Supporting Information Figure S4) \cite{Pors2013,Boroviks2017_PL}. It follows from geometrical optics that the distance between these planes equals twice the focal length.
This approach of positioning the sample in planes A and B for radiation incident on flat unstructured gold and on the fabricated AFL produces the optical images shown in Figure \ref{fig3:2DLens}d,e, respectively, clearly demonstrating the focusing ability. Experimental characterization of the Fabry-Perot modulator shows a measured thickness of the LN thin film of \SI{323}{\nano\meter} (see Supporting Information, Section 1). This corresponds to a deviation of $\sim\!\SI{7.5}{\percent}$ from the nominal thickness, which results in a shift in resonant wavelength of approximately \SI{50}{\nano\meter}. As a result, the wavelength of highest modulation shifts to \SI{865}{\nano\meter} (Figure \ref{fig4:2DModulation}b), which is used as the central wavelength for the experimental characterization. This is expected to result in a decrease in performance, as the lens is designed for a wavelength of \SI{815}{\nano\meter}. The measured focal length is $f=\SI{40(2)}{\micro\meter}$. Focusing efficiency is investigated as a function of modulation voltage and wavelength (Figure \ref{fig3:2DLens}b,c). The measured and calculated values are not directly comparable, as the simulations are for a 2D model. However, the simulations provide trends in the performance for varying voltage and wavelength, which are comparable to the experiments. As is shown by simulations (Figure \ref{fig2:Simulations}a), applying a negative (positive) bias results in an increase (decrease) in focusing efficiency, which is verified by experiments (Figure \ref{fig3:2DLens}b). Similarly, the evolution of focusing efficiency with wavelength (Figure \ref{fig3:2DLens}c) follows that shown by simulations (Figure \ref{fig2:Simulations}a). So far, we have characterized the focusing abilities of the AFL, and now we move on to characterize the modulation properties of the intensity in the focal spot (see Methods). The ability to modulate the focal point intensity is visualized by applying an electrical square signal alternating between \SI{\pm5}{\volt}. The measured response of the AFL at an electrical frequency of \SI{3}{\kilo\hertz} shows the dynamic modulation of focusing versus time and demonstrates, as stated above, that a negative bias leads to an increase in focusing efficiency (Figure \ref{fig4:2DModulation}a). Modulation efficiency is measured at a driving voltage of \SI{\pm10}{\volt} for the wavelength range of 800-\SI{910}{\nano\meter} (Figure \ref{fig4:2DModulation}b). The maximum modulation efficiency of \SI{1.5}{\percent} is measured at a wavelength of \SI{865}{\nano\meter}, and the measured dispersion of the modulation efficiency is in agreement with the simulated wavelength dependence (Figure \ref{fig2:Simulations}b). A linear relation is expected between modulation efficiency and voltage, due to the previously stated formula for the induced refractive index change and the resulting shift of the Fabry-Perot resonance wavelength. This relation is verified by experimental characterization (Figure \ref{fig4:2DModulation}c). The electro-optic frequency response is characterized from \SI{10}{\kilo\hertz} to \SI{4.5}{\mega\hertz} (Figure \ref{fig4:2DModulation}d). The device frequency response exhibits an increase in performance for larger signal frequency before abruptly dropping, resulting in a \SI{-3}{\decibel} cutoff frequency of \SI{4}{\mega\hertz}.
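The measured cutoff can be compared with the simple first-order estimate $f_c = 1/(2\pi R C)$ employed in the next paragraph; a minimal sketch using the capacitance values given there (the \SI{50}{\ohm} resistive load is the assumption stated in the text):
\begin{verbatim}
import math

# First-order RC cutoff estimate f_c = 1/(2*pi*R*C), assuming a 50-ohm load,
# with the capacitance values quoted in the text.
R = 50.0  # assumed resistive load [ohm]
for label, C in [("device + macroscopic electrodes", 1.36e-9),
                 ("device only", 0.83e-12)]:
    fc = 1.0 / (2.0 * math.pi * R * C)
    print(f"{label}: f_c ~ {fc:.2e} Hz")  # ~2.3 MHz and ~3.8 GHz
\end{verbatim}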
Frequency response fluctuations might be attributed to piezoelectric resonances in LN, and the accompanying variations of the permittivity and the electro-optic activity in LN when the crystal strain becomes unable to follow the external electric field (clamped crystal response) \cite{Takeda2012,Jazbinek2002,Thomaschewski2020}. Using a simple parallel-plate capacitor formula, the capacitance of the device and electrodes is calculated to be $C_c = \SI{1.36}{\nano\farad}$, which corresponds well with the measured capacitance of $C_m \simeq \SI{1.5}{\nano\farad}$. Assuming a \SI{50}{\ohm} resistive load ($f=1/[2\pi RC]$), the calculated \SI{-3}{\decibel} cutoff frequency is \SI{2.3}{\mega\hertz}, which is indicated by a first-order low-pass filter response (blue line of Figure \ref{fig4:2DModulation}d), intersecting the measured data just below the \SI{-3}{\decibel} line. Disregarding the macroscopic electrodes and electrical wiring, the capacitance of the device is calculated to be \SI{0.83}{\pico\farad}, resulting in a cutoff frequency of \SI{3.8}{\giga\hertz}, which is easily supported by the fast electro-optic Pockels effect. Thus the electrical bandwidth can be considerably improved by optimizing the macroscopic electrodes and electrical wiring. \section{Conclusion} In summary, we have presented and experimentally investigated an approach to realize a flat electrically tunable Fresnel lens by utilizing the electro-optic effect in a thin lithium niobate layer sandwiched between a continuous thick bottom gold film and a nanostructured top gold film serving as electrodes. We have designed, fabricated and characterized the active Fresnel lens that exhibits focusing of 800-\SI{900}{\nano\meter} radiation at a distance of \SI{40}{\micro\meter} with a focusing efficiency of \SI{15}{\percent} and a modulation depth of \SI{1.5}{\percent} for a driving voltage of \SI{\pm10}{\volt} within a bandwidth of \SI{4}{\mega\hertz}. It should be noted that the modulation efficiency can be significantly improved by using a high-quality top gold film with the optimal thickness of \SI{12}{\nano\meter} (see Supporting Information, Section 1), as the currently used \SI{15}{\nano\meter}-thin gold film is likely to be inhomogeneous (island-like). Furthermore, redesigning the macroscopic electrodes and electrical wiring can considerably improve the electrical bandwidth, reaching the GHz range as discussed above. In comparison with other electrically tunable thin lenses \cite{Park2020,Shirmanesh2020,vandeGroep2020}, the configuration presented here is attractive due to its simplicity in design and fabrication and inherently fast electro-optic response (see Supporting Information, Section 4). Overall, we believe the introduced electro-optic metasurface concept is useful for designing dynamic, electrically tunable flat optics components. \section{Methods} \textit{Modeling.} Simulations are performed in the commercially available finite element software \textit{COMSOL Multiphysics}, ver. 5.5. Fabry-Perot modulators and Fresnel lenses are modeled to determine reflectivity and focusing properties. All simulations are performed for 2D models, due to computational constraints. In all setups, the incident wave is a plane wave traveling downward, normal to the sample. Interpolated experimental values are used for the permittivity of gold \cite{Johnson1972}, LN \cite{Zelmon1997}, and chromium \cite{Johnson1974}, and the medium above the sample is air.
For simulation of the Fabry-Perot modulators, periodic boundary conditions are applied on both sides of the cell, while the top and bottom boundaries are truncated by ports to minimize reflections. The top port, positioned a distance of one wavelength from the top electrode, handles wave excitation and measures the complex reflection coefficient. For simulation of the AFL, periodic boundary conditions are applied on one side, so it is only necessary to model half the zone plate. All other boundaries are truncated by scattering boundary conditions, also to eliminate reflections. Focusing efficiency is determined by integrating the reflected power over an area corresponding to twice the beam waist of a Gaussian beam focused at the focal point and dividing by the incident optical power. \textit{Fabrication.} Fabrication of the AFL is done using a combination of nanostenciling, electron-beam lithography, and lift-off. A substrate with the following layered structure is obtained commercially: bulk LN substrate, \SI{3}{\micro\meter} SiO\textsubscript{2}, \SI{30}{\nano\meter} of chromium, \SI{300}{\nano\meter} of gold, \SI{10}{\nano\meter} of chromium, and lastly a \SI{300}{\nano\meter} thin film of LN (NANOLN). Initially, macroscopic electrodes are deposited by thermal evaporation of \SI{3}{\nano\meter} titanium and \SI{50}{\nano\meter} gold through a shadow mask. Subsequently, $\sim \! \SI{200}{\nano\meter}$ of PMMA 950K A4 is spin-coated, and the Fresnel zones and modulator squares are exposed at \SI{30}{\kilo\volt} using electron beam lithography. Alignment between the macroscopic electrodes and optical devices is performed manually. After development, the devices are formed by thermal evaporation of \SI{1}{\nano\meter} titanium and \SI{15}{\nano\meter} gold followed by lift-off in acetone. The fabricated modulator squares are \SI{100x100}{\micro\meter}, and the AFL has a radius of \SI{26.1}{\micro\meter} and consists of 19 zones, with even zones formed by gold deposition. \textit{Electro-optical characterization.} During fabrication, the concentric rings are interfaced to macroscopic electrodes. For electro-optical characterization, the sample is mounted on a home-made sample holder that connects to the macroscopic top electrode, and electrical connection to the bottom electrode is obtained by applying a conductive paste on the edge of the sample. The incident light is a low-power, continuous-wave laser beam from a tunable laser, which is focused by a 50X objective to form a tightly focused spot in plane A (Supporting Information, Figure S4) on flat unstructured gold. The reflected light is collected by the same objective, separated from the incident light by a beam splitter and viewed on a camera. Focusing efficiency is determined as the ratio of focused light from the device viewed in plane B to the amount of reflected light on flat unstructured gold viewed in plane A. For characterization of the modulation properties, the focal spot is manually isolated with an iris and the camera is replaced with a photodetector connected to an oscilloscope. RF modulation signals are supplied by a function generator, and modulation of the focal spot intensity is observed on the oscilloscope. \begin{acknowledgement} The authors acknowledge financial support from Villum Fonden (Award in Technical and Natural Sciences 2019 and Grant Nos. 00022988 and 37372). \\ C.D.-C. acknowledges advice from Chao Meng on the optical characterization of focusing devices.
\end{acknowledgement} \begin{suppinfo} The following files are available free of charge \\ \\ \textbf{Supporting Information.} Investigation of the Fabry-Perot modulator, determination of the design wavelength, setup for electro-optical characterization, and comparison of electrically tunable thin lenses \end{suppinfo} \section{Author contributions} S.I.B. conceived the idea. C.D.-C. designed the sample and performed the numerical simulations with F.D. C.D.-C. and M.T. fabricated the structures and conducted the electro-optical characterization. C.D.-C. analyzed the results, which were discussed by all authors. C.D.-C. and S.I.B. wrote the manuscript with revisions by all authors. S.I.B. supervised the project.
\section{Motivation and outline of this article} The concept of "stochastic parallel transport" in a vector bundle $E$ over a Riemannian manifold $M$ is usually presented as a byproduct of the concept of "stochastic differential equation"; this is the approach taken in most texts, for instance in \cite{IW89} and in \cite{Meyer82}. Nevertheless, K. It\^{o} had originally conceived it differently (\cite{Ito63}, \cite{Ito75a}, \cite{Ito75b}): for every continuous curve $c : [0,t] \to M$, consider the unique geodesic segment joining the consecutive "dyadic" points $c (\frac {jt} {2^k})$ and $c (\frac {(j+1)t} {2^k})$, join these $2^k$ geodesic segments into a single zig-zag piecewise-geodesic line, and parallel-transport the vector $v \in E_{c(0)}$ along this line to $E_{c(t)}$; for Wiener-almost all continuous curves $c$, the limit when $k \to \infty$ will exist and will be called "the stochastic parallel transport of $v$ along $c$". Both approaches are equivalent, as shown in \cite{Meyer82} and \cite{Emery90}, and both are constructed within the framework of probability theory, therefore being accessible mostly to probabilists. The aim of this article is to reconstruct the concept of "stochastic parallel transport" using only functional-analytic tools and concepts, thus opening it up to a much larger class of mathematicians. Since the constructions in this text will be fairly technical, let us sketch the intuition underpinning them. Let $D_t = \{ \frac {jt} {2^k} \mid k \in \mathbb N, \ j \in \mathbb N \cap [0,2^k] \}$ - the "dyadic" numbers between $0$ and $t$. Following It\^{o}'s idea, the parallel transport of $v \in E_{c(0)}$ along the zig-zag line determined by the points $\{ c(0), c(\frac t {2^k}), \dots, c (\frac {(2^k-1)t} {2^k}), c(t) \}$ is the parallel transport $T_{k,0}$ from $c(0)$ to $c(\frac t {2^k})$, followed by the parallel transport $T_{k,1}$ from $c(\frac t {2^k})$ to $c(\frac {2t} {2^k})$ and so on, ending with the parallel transport $T_{k, 2^k-1}$ from $c(\frac {(2^k-1)t} {2^k})$ to $c(t)$; symbolically, it is $T_{k, 2^k-1} \dots T_{k,0} v$. Now comes the remark that is the backbone of the present work: $T_{k, 2^k-1} \dots T_{k,0} v$ can be viewed as the "contraction" of all the tensor products in \begin{align*} T_{k, 2^k-1} \otimes \dots \otimes T_{k,0} \otimes v & \in \left( E_{c(t)} \otimes E_{c(\frac {(2^k-1)t} {2^k})} ^* \right) \otimes \dots \otimes \left( E_{c(\frac t {2^k})} \otimes E_{c(0)} ^* \right) \otimes E_{c(0)} \simeq \\ & \simeq E_{c(t)} \otimes \left( E_{c(\frac {(2^k-1)t} {2^k})} ^* \otimes E_{c(\frac {(2^k-1)t} {2^k})} \right) \otimes \dots \otimes \left( E_{c(0)} ^* \otimes E_{c(0)} \right) \simeq \\ & \simeq E_{c(t)} \otimes \operatorname{End} E_{c(\frac {(2^k-1)t} {2^k})} ^* \otimes \dots \otimes \operatorname{End} E_{c(0)} ^* \ . \end{align*} Let us see now what "contraction" means.
If $U_1, \dots, U_N$ are finite-dimensional vector spaces, if $u \in U_N$ and $\omega \in U_1 ^*$, and $A_j : U_{j+1} \to U_j$ is a linear operator for all $1 \le j \le N-1$, then \[ \omega \otimes A_1 \otimes \dots \otimes A_{N-1} \otimes u \in U_1 ^* \otimes (U_1 \otimes U_2 ^*) \otimes \dots \otimes (U_{N-1} \otimes U_N ^*) \otimes U_N \simeq \operatorname{End} U_1^* \otimes \dots \otimes \operatorname{End} U_N ^* \ ; \] if $\operatorname{Id} _{U_j}$ is the identity operator on $U_j$, then $\operatorname{Id} _{U_1} \otimes \dots \otimes \operatorname{Id} _{U_N} \in \operatorname{End} U_1 \otimes \dots \otimes \operatorname{End} U_N$, therefore it makes sense to apply $\omega \otimes A_1 \otimes \dots \otimes A_{N-1} \otimes u$ on $\operatorname{Id} _{U_1} \otimes \dots \otimes \operatorname{Id} _{U_N}$, the result being $\omega (A_1 \dots A_{N-1} u)$. We see that, in order to perform this contraction on the product of parallel transports considered above, we need to add a supplementary factor $E_{c(t)} ^*$ with which to pair the factor $E_{c(t)}$, obtaining $\operatorname{End} E_{c(t)} ^*$. This means that if $\eta_{c(t)} \in E_{c(t)}$, then \[ \eta_{c(t)} \otimes T_{k, 2^k-1} \otimes \dots \otimes T_{k,0} \otimes v \in \operatorname{End} E_{c(t)} ^* \otimes \dots \otimes \operatorname{End} E_{c(0)} ^* \] and \[ \eta_{c(t)} (T_{k, 2^k-1} \dots T_{k,0} v) = (\eta_{c(t)} \otimes T_{k, 2^k-1} \otimes \dots \otimes T_{k,0} \otimes v) (\operatorname{Id} _{E_{c(t)}} \otimes \dots \otimes \operatorname{Id} _{E_{c(0)}}) \ .\] Following now in the footsteps of It\^{o}, we let $k \to \infty$; what we get, then, will be a contraction between tensor products with infinitely many factors; the rigorous construction of these tensor products will be our first task, but we can say that these tensor product spaces will be $\mathcal E _c = \otimes _{s \in D_t} \operatorname{End} E_{c(s)}$ and its dual. If we denote the space of continuous curves by $\mathcal C_t$, the fact that $\mathcal E_c$ depends on $c \in \mathcal C_t$ suggests that the disjoint union $\coprod _{c \in \mathcal C_t} \mathcal E _c$ will be a (topological) vector bundle of infinite rank over $\mathcal C_t$. Since $\eta_{c(t)} \otimes T_{k, 2^k-1} \otimes \dots \otimes T_{k,0} \otimes v$ takes values in the fiber $\mathcal E_c ^*$ for all $k \in \mathbb N$ and all $c \in \mathcal C_t$, we deduce that these tensor products will all be some kind of sections in $\mathcal E ^*$, whence it is reasonable to assume that their limit for $k \to \infty$ (the stochastic parallel transport, once we get rid of $\eta$) will be a section of the same kind. Indeed, this will turn out to be the case, and in order to obtain this we shall resort to Chernoff's theorem about the approximation of contraction semigroups. An unexpected byproduct of the construction in this article is a new version of the Feynman-Kac formula in vector bundles: not only will its proof be completely new, but its hypotheses seem to be the most general considered so far in the literature, to the author's best knowledge; more precisely, the potential will be taken to be only locally-integrable and lower-bounded, while no restrictions will be imposed upon the manifold.
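To make the finite-dimensional contraction described above concrete, here is a minimal numerical sketch in Python (with arbitrary, hypothetical dimensions and random data), checking that pairing $\omega \otimes A_1 \otimes A_2 \otimes u$ with $\operatorname{Id} _{U_1} \otimes \operatorname{Id} _{U_2} \otimes \operatorname{Id} _{U_3}$ indeed yields $\omega (A_1 A_2 u)$:
\begin{verbatim}
import numpy as np

# Minimal sketch (arbitrary dimensions and random data) of the contraction
# above: pairing omega (x) A_1 (x) A_2 (x) u with Id (x) Id (x) Id amounts
# to contracting adjacent indices, reproducing omega(A_1 A_2 u).
rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 4                  # dim U_1, dim U_2, dim U_3 (N = 3)
omega = rng.standard_normal(d1)       # omega in U_1^*
A1 = rng.standard_normal((d1, d2))    # A_1 : U_2 -> U_1
A2 = rng.standard_normal((d2, d3))    # A_2 : U_3 -> U_2
u = rng.standard_normal(d3)           # u in U_3

direct = omega @ A1 @ A2 @ u          # omega(A_1 A_2 u)
contracted = np.einsum("a,ab,bc,c->", omega, A1, A2, u)
assert np.isclose(direct, contracted)
\end{verbatim}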
The plan of the article is the following, the notations being explained as soon as they become necessary: \begin{itemize}[wide] \item we shall construct a Hermitian vector bundle $\mathcal E$ over $\mathcal C_t$, the fibers of which will be infinite-dimensional Hilbert spaces; \item we shall consider spaces of square-integrable sections in $\mathcal E$ and $\mathcal E ^*$ and, in particular, we shall obtain by an abstract argument a specific essentially bounded section $\rho_{t, \omega, \eta}$, which will be the limit of a sequence of sections $(P_{t, \omega, \eta, k}) _{k \in \mathbb N}$ given by explicit formulae; \item we shall exhibit a conjugate-linear continuous map $\mathcal P _{t,v} ^2 : \Gamma^2 (\mathcal E) \to \Gamma^2 (p_t ^* E)$, which we shall see encodes a great deal of information about both the geometry of the bundle $E \to M$ and the Wiener measure $w_t$ on $\mathcal C_t$; \item using the map $\mathcal P _{t,v} ^2$ we shall be able to give meaning to the concept of stochastic parallel transport from a functional-analytic point of view; \item finally, using the same map $\mathcal P _{t,v} ^2$, we shall study an extension of the Feynman-Kac formula in the bundle $E$. \end{itemize} It is a pleasure to thank Mr. Radu Purice of the "Simion Stoilow" Institute of Mathematics of the Romanian Academy for his constant moral and mathematical support offered during the elaboration of the present work. His seemingly infinite patience in reading the successive draft versions of this article has helped find and eliminate many errors, and the discussions with him helped me understand the correct mathematical setting for the problem discussed here, in which its solution emerges naturally. \section{A Hermitian vector bundle of infinite rank} In the following, $M$ will be a separable connected Riemannian manifold of dimension $n$, and $x_0 \in M$ some fixed arbitrary point. We shall denote by $d : M \times M \to [0, \infty)$ the distance induced on $M$ by the Riemannian structure. If $t>0$, we shall repeatedly make use of the space \[ \mathcal C _t = \{ c:[0,t] \to M \mid c \text{ is continuous, with } c(0) = x_0 \} \ , \] which we shall endow with the topology given by the distance $D(c_1, c_2) = \max _{s \in [0,t]} d(c_1(s), c_2(s))$ and with the natural Wiener measure $w_t$ (a non-probabilistic, functional-analytic and geometric construction of the latter may be found in \cite{BP11}). It is known that $\mathcal C_t$ endowed with this topology is separable (see \cite{Michael61}). Since we shall be working with various Banach or Hilbert spaces, the norm and the Hermitian product on each of them will be displayed as a subscript: if $v, w \in X$, then $\| v \| _X$ will be the norm of $v$ and $\langle v, w \rangle _X$ will be the Hermitian product of $v$ and $w$. For bounded linear operators between normed spaces, $\| \cdot \|_{op}$ will denote the operator norm, without us specifying the spaces when they are clear from the context. Let $E \to M$ be a Hermitian vector bundle of complex rank $r \in \mathbb N$, endowed with a Hermitian connection $\nabla$. The fiber of $E$ over $x \in M$ will be denoted by $E_x$, and the Hermitian product on it will be $\langle \cdot, \cdot \rangle _{E_x}$ (all the Hermitian products used in this text will be linear in the first argument). Let $D_t = \{ \frac {jt} {2^k} \mid k \in \mathbb N, \ j \in \mathbb N \cap [0,2^k] \}$ - the "dyadic" numbers between $0$ and $t$.
Our purpose in this section is to give meaning to the Hermitian vector bundle described intuitively by $\mathcal E = \bigboxtimes _{s \in D_t} \operatorname{End} E \to \mathcal C_t$. If $c \in \mathcal C_t$ we let the fiber $\mathcal E _c$ of $\mathcal E$ over $c$ be $\bigotimes _{s \in D_t} \operatorname{End} E_{c(s)}$. This is a tensor product of countably many factors, the definition of which is not trivial and deserves some clarifications. To this end, we endow the space $\operatorname{End} E_x$ with the Hermitian product given by $\langle A, B \rangle _{\operatorname{End} E_x} = \frac 1 r \operatorname{Trace} (A B^*)$ for $A, B \in \operatorname{End} E_x$, for every $x \in M$. Notice that $\langle \cdot , \cdot \rangle _{\operatorname{End} E_x} = \frac 1 r \langle \cdot , \cdot \rangle _{E_x \otimes E_x ^*}$, the Hermitian product on the right-hand side being the natural one on $E_x \otimes E_x ^*$. If $\operatorname{Id} _{E_x} \in \operatorname{End} E_x$ is the identity operator, then $\| \operatorname{Id} _{E_x} \| _{\operatorname{End} E_x} = 1$. This allows us to construct the tensor product $\mathcal E _c$ rigorously as follows. If $D_{t,k} = \{ \frac {jt} {2^k} \mid j \in \mathbb N \cap [0,2^k] \}$ for each $k \in \mathbb N$, then for every $k \le k'$ we identify the tensor monomial $\otimes _{s \in D_{t,k}} e_{c(s)} \in \bigotimes _{s \in D_{t,k}} \operatorname{End} E_{c(s)}$ with the monomial $\otimes _{s \in D_{t,k'}} e'_{c(s)} \in \bigotimes _{s \in D_{t,k'}} \operatorname{End} E_{c(s)}$ in which $e'_{c(s)} = e_{c(s)}$ for $s \in D_{t,k}$ and $e'_{c(s)} = \operatorname{Id} _{E_{c(s)}}$ for $s \in D_{t,k'} \setminus D_{t,k}$. This procedure identifies the space $\bigotimes _{s \in D_{t,k}} \operatorname{End} E_{c(s)}$ with a subspace of $\bigotimes _{s \in D_{t,k'}} \operatorname{End} E_{c(s)}$, which allows us to consider the algebraic inductive limit $\varinjlim _{k \in \mathbb N} \bigotimes _{s \in D_{t,k}} \operatorname{End} E_{c(s)}$. This being the algebraic inductive limit of finite tensor products of finite-dimensional Hilbert spaces, it will carry a natural Hermitian product; one then considers the Hilbert space completion of the algebraic inductive limit with respect to this Hermitian product, the resulting Hilbert space being denoted $\mathcal E _c$. It is important to notice that $\mathcal E _c$ is separable because the index set in the inductive limit is $\mathbb N$ and each space in the inductive limit is finite-dimensional. We define now the total space of the putative Hermitian vector bundle as $\mathcal E = \bigcup _{c \in \mathcal C_t} \{ c \} \times \mathcal E _c$. The natural projection of $\mathcal E$ onto $\mathcal C_t$ will be $\operatorname{pr}_{\mathcal E} : \mathcal E \to \mathcal C_t$, given by $\operatorname{pr}_{\mathcal E} ((c,e)) = c$. So far, $\mathcal E$ has been constructed fiberwise only as a set; in what follows, we shall endow it with a topology, but in order to do this we shall need an auxiliary result. \begin{lemma} \label{approximation of continuous curves} For every continuous curve $c \in \mathcal C_t$ and every $\varepsilon > 0$ there exists a piecewise-smooth curve $c' : [0,t] \to M$ such that $D(c, c') < \varepsilon$. \end{lemma} \begin{proof} Let $c \in \mathcal C_t$.
The idea of the proof is the following: if $k \in \mathbb N$ is large enough, then the points $c(0), c(\frac t {2^k}), \dots, c(\frac {(2^k-1)t} {2^k}), c(t)$ will be close enough to each other so that any two consecutive of them may be joined by a unique minimizing geodesic; by joining these geodesic segments together, we shall obtain a piecewise-smooth curve $c'$ (a geodesic interpolation of the $2^k+1$ points above) which, for sufficiently large $k$, will be at distance at most $\varepsilon$ from $c$. The rest of the proof makes this idea rigorous. Being defined on a compact interval, $c$ will be uniformly continuous; let $\delta$ be an increasing modulus of continuity for it. Since \[ d \left( c \left( \frac {jt} {2^k} \right), c \left( \frac {(j+1)t} {2^k} \right) \right) \le \delta \left( \frac t {2^k} \right) \to 0 \ , \] for every $0 \le j \le 2^k - 1$, we deduce that for large enough $k \in \mathbb N$ the points $c(\frac {jt} {2^k})$ and $c(\frac {(j+1)t} {2^k})$ may be joined by a unique minimizing geodesic $\gamma_{k,j} : [0,1] \to M$ with $\gamma_{k,j} (0) = c(\frac {jt} {2^k})$ and $\gamma_{k,j} (1) = c(\frac {(j+1)t} {2^k})$, for all $0 \le j \le 2^k - 1$. Let us consider the piecewise-geodesic curve $c' : [0,t] \to M$ obtained by gluing these geodesic segments together: on every interval $[\frac {jt} {2^k}, \frac {(j+1)t} {2^k}]$ it will be given by $c' (s) = \gamma_{k,j} (\frac {2^k} t s - j)$, for all $0 \le j \le 2^k - 1$. Using the triangle inequality in the triangle of vertices $c(\frac {jt} {2^k})$, $c(s)$ and $c'(s)$ for $s \in [\frac {jt} {2^k}, \frac {(j+1)t} {2^k}]$, let us notice that \begin{align*} D(c, c') & = \max_{0 \le j \le 2^k - 1} \max_{\frac {jt} {2^k} \le s \le \frac {(j+1)t} {2^k}} d \left( c(s), \gamma_{k,j} \left( \frac {2^k} t s - j \right) \right) \le \\ & \le \max_{0 \le j \le 2^k - 1} \max_{\frac {jt} {2^k} \le s \le \frac {(j+1)t} {2^k}} d \left( c(s), c \left( \frac {jt} {2^k} \right) \right) + d \left( c \left( \frac {jt} {2^k} \right), \gamma_{k,j} \left( \frac {2^k} t s - j \right) \right) \le \\ & \le \max_{0 \le j \le 2^k - 1} \max_{\frac {jt} {2^k} \le s \le \frac {(j+1)t} {2^k}} d \left( c(s), c \left( \frac {jt} {2^k} \right) \right) + d \left( c \left( \frac {jt} {2^k} \right), c \left( \frac {(j+1)t} {2^k} \right) \right) \le \\ & \le \max_{0 \le j \le 2^k - 1} \max_{\frac {jt} {2^k} \le s \le \frac {(j+1)t} {2^k}} \delta \left( s - \frac {jt} {2^k} \right) + \delta \left( \frac t {2^k} \right) \le 2 \delta \left( \frac t {2^k} \right) \ , \end{align*} where the third inequality holds because every point on the minimizing geodesic $\gamma_{k,j}$ lies within distance $d ( c(\frac {jt} {2^k}), c(\frac {(j+1)t} {2^k}) )$ of its starting point; it follows that $D(c, c') < \varepsilon$ for large enough $k$. \end{proof} We shall construct the topology on $\mathcal E$ first locally, on the restrictions of $\mathcal E$ to open balls $B(c,r)$ centered at each curve $c \in \mathcal C_t$, and then we shall show that all these local topologies are compatible with each other, which will allow us to glue them together into a global topology on $\mathcal E$. For every $r \in (0, \min_{s \in [0,t]} \operatorname{injrad} (c(s)))$ consider the open metric ball $B(c, r) = \{ \gamma \in \mathcal C_t \mid D(c, \gamma) < r \}$ and a piecewise-smooth curve $c' \in B(c, r)$, the existence of which is guaranteed by lemma \ref{approximation of continuous curves}. If $\gamma \in B(c,r)$ and $s \in [0,t]$ then \[ d(\gamma(s), c(s)) \le D(\gamma, c) < r < \min_{u \in [0,t]} \operatorname{injrad} (c(u)) \le \operatorname{injrad} (c(s)) \ , \] so there exists a unique minimizing geodesic defined on $[0,1]$ from $\gamma(s)$ to $c(s)$.
Next, using the same argument, there exists a unique minimizing geodesic defined on $[0,1]$ from $c(s)$ to $c'(s)$. We may then parallel-transport the vector $e \in E_{\gamma(s)}$ to $c(s)$, and then to $c'(s)$, each time along the geodesics found above; we finally parallel-transport the vector obtained so far from $c'(s)$ to $c'(0) = x_0$ along $c'$, which is piecewise-smooth, thus obtaining a vector in $E_{x_0}$. The procedure just described gives a linear isometry from $E_{\gamma(s)}$ to $E_{x_0}$; it is clear that it may be inverted (by traversing the same curves in the opposite direction and in the inverse order), so this procedure is an isometric isomorphism. We may extend it in the natural way to tensor monomials of the form $e_{\gamma(s_1)} \otimes \dots \otimes e_{\gamma(s_N)} \in \mathcal E_\gamma$ with $N \in \mathbb N \setminus \{0\}$ and $s_1, \dots, s_N \in D_t$, thus obtaining tensor monomials in $(\operatorname{End} E_{x_0}) ^{\otimes \{ s_1, \dots, s_N \}} \subset (\operatorname{End} E_{x_0}) ^{\otimes D_t}$. This extension will still be a surjective isometry between monomials. Let us now introduce two helpful auxiliary notations: if $x,y \in M$, if $\beta$ is a piecewise-smooth curve from $x$ to $y$, and if $e \in E_x$, then we shall denote by $PT_{x \to y, \beta} (e) \in E_y$ the parallel transport of $e$ from $x$ to $y$ along $\beta$. The unique minimizing geodesic defined on $[0,1]$ from $x$ to $y$ will be denoted by $\gamma_{x,y}$, whenever it exists. We may now define a local trivialization $\varphi : \operatorname{pr}_{\mathcal E}^{-1} (B(c,r)) \to B(c,r) \times (\operatorname{End} E_{x_0}) ^{\otimes D_t}$ as follows: \begin{itemize}[wide] \item if $\alpha \in B(c,r)$ and $e_{\alpha(s_1)} \otimes \dots \otimes e_{\alpha(s_N)} \in \mathcal E_\alpha$, then \begin{align*} \varphi ((\alpha, \, & e_{\alpha(s_1)} \otimes \dots \otimes e_{\alpha(s_N)})) = \\ & = (\alpha, PT_{c'(s_1) \to x_0, c'} \, PT_{c(s_1) \to c'(s_1), \gamma_{c(s_1), c'(s_1)}} \, PT_{\alpha(s_1) \to c(s_1), \gamma_{\alpha(s_1), c(s_1)}} e_{\alpha(s_1)} \otimes \dots \\ & \dots \otimes PT_{c'(s_N) \to x_0, c'} \, PT_{c(s_N) \to c'(s_N), \gamma_{c(s_N), c'(s_N)}} \, PT_{\alpha(s_N) \to c(s_N), \gamma_{\alpha(s_N), c(s_N)}} e_{\alpha(s_N)}) \ , \end{align*} as explained above; \item on linear combinations of such tensor monomials we extend $\varphi$ by linearity, thus obtaining a linear isometric isomorphism; \item since $\mathcal E_\alpha$ is the Hilbert completion of an algebraic inductive limit, we define $\varphi$ on limits of elements from the algebraic inductive limit by continuity. \end{itemize} The map $\varphi : \mathcal E | _{B(c,r)} \to B(c,r) \times (\operatorname{End} E_{x_0}) ^{\otimes D_t}$ allows us now to define a topology on $\mathcal E | _{B(c,r)}$ by transporting the topology from $B(c,r) \times (\operatorname{End} E_{x_0}) ^{\otimes D_t}$ back under $\varphi ^{-1}$. In particular, since $B(c,r)$ is a metric space and $(\operatorname{End} E_{x_0}) ^{\otimes D_t}$ is a Hilbert space, the topology so constructed on $\mathcal E | _{B(c,r)}$ will be first-countable. It remains to show that these topologies defined only locally are compatible with each other.
More precisely, let us show that if the curves $c_1, c_2 \in \mathcal C_t$ and the numbers $r_1, r_2 > 0$ are such that $B(c_1, r_1) \cap B(c_2, r_2) \ne \emptyset$, and if $\varphi_1, \varphi_2$ are two local trivializations above these two balls constructed as above, then the local topologies induced by $\varphi_1$ and $\varphi_2$ coincide on $\mathcal E | _{B(c_1, r_1) \cap B(c_2, r_2)}$. But this is easy, since the map $\varphi_1 \circ \varphi_2 ^{-1} : (B(c_1, r_1) \cap B(c_2, r_2)) \times (\operatorname{End} E_{x_0}) ^{\otimes D_t} \to (B(c_1, r_1) \cap B(c_2, r_2)) \times (\operatorname{End} E_{x_0}) ^{\otimes D_t}$ is the identity on the first factor, while on the second factor it acts through a continuous map defined on $B(c_1, r_1) \cap B(c_2, r_2)$ with values in the group of isometries of $(\operatorname{End} E_{x_0}) ^{\otimes D_t}$, whence the conclusion is clear. Since the local topologies constructed above have turned out to be compatible with each other, they may be glued together into a unique (first-countable) global topology on $\mathcal E$. In this topology, the maps $\varphi$ constructed above become continuous local trivializations. \begin{remark} Since $\mathcal C_t$ is separable, it follows from the above considerations that $\mathcal C_t$ may be covered by a countable family of trivialization domains, a fact which will be useful later on. \end{remark} Let $\pi_k : \mathcal C_t \to M^{2^k + 1}$ denote the projection given by \[ \pi_k (c) = \left( c(0), c \left( \frac t {2^k} \right), \dots, c\left( \frac {(2^k-1)t} {2^k} \right), c(t) \right) \ . \] \begin{proposition} The projections $\pi_k : \mathcal C_t \to M^{2^k+1}$ and the projection $\operatorname{pr}_{\mathcal E} : \mathcal E \to \mathcal C_t$ are continuous, for all $k \in \mathbb N$. \end{proposition} \begin{proof} If we denote by $d_k$ the distance induced by the Riemannian tensor on $M^{2^k+1}$ for every $k \in \mathbb N$, then \begin{align*} d_k (\pi_k(c), \pi_k(c')) & = \sqrt{ \sum_{j=0} ^{2^k} d \left( c\left( \frac {jt} {2^k} \right), c'\left( \frac {jt} {2^k} \right) \right) ^2 } \le \sqrt{2^k + 1} \, \sup_{0 \le j \le 2^k} d \left( c\left( \frac {jt} {2^k} \right), c'\left( \frac {jt} {2^k} \right) \right) \le \\ & \le \sqrt{2^k + 1} \, \sup_{0 \le s \le t} d(c(s), c'(s)) = \sqrt{2^k + 1} \, D(c, c') \ , \end{align*} so $\pi_k$ is continuous. Since the local trivializations constructed above are continuous, and since the restriction of the projection $\operatorname{pr}_{\mathcal E}$ to such trivializations has the form $(c, \dots) \mapsto c$, the continuity of this projection is clear. \end{proof} \section{Integrable sections in bundles of infinite rank} If $s$ is a section of $E$, the notation $\| s \|$ (without any indices) will denote the function $M \ni x \mapsto \| s(x) \|_{E_x} \in [0, \infty)$. The space $\Gamma_0 (E)$ will be the space of compactly-supported smooth sections in $E$, the space $\Gamma_c (E)$ will be the space of compactly-supported continuous sections in $E$, and $\Gamma_{cb} (E)$ will be the space of bounded continuous sections in $E$ (i.e. those continuous sections $\eta$ such that $\sup _{x \in M} \| \eta(x) \| _{E_x} < \infty$). For each $1 \le p \le \infty$ the space $\Gamma^p (E)$ will be the space of equivalence classes, under almost-everywhere equality, of measurable sections $s$ with the property that $\| s \| \in L^p(M)$.
It is known that $\Gamma_0 (E)$ is dense in $\Gamma^p (E)$ in the norm topology if $p \ne \infty$, and in the weak-$*$ topology if $p = \infty$. The corresponding spaces of \textit{locally} $p$-integrable sections will be $\Gamma^p _{loc} (E)$. The quadratic form $Q_{E, \nabla} : \Gamma_0 (E) \subset \Gamma^2 (E) \to \mathbb R$ defined by $Q_{E, \nabla} (\eta) = \int _M \| (\nabla \eta) _x \| _{T_x ^* M \otimes E_x} ^2 \, \mathrm d x$ gives rise to a self-adjoint, positive, densely-defined operator $H_\nabla$ in $\Gamma^2 (E)$ (the Friedrichs extension of the connection Laplacean); by functional calculus one may next define the contraction semigroup $(\mathrm e ^{-s H_\nabla}) _{s \ge 0}$ acting in $\Gamma^2 (E)$, which we shall call "the heat semigroup in $E$ corresponding to $\nabla$" (full details can be found in \cite{Davies80}). It is then shown in chapter XI of \cite{Guneysu17} that this semigroup admits a unique integral kernel ("the heat kernel in $E$ corresponding to $\nabla$"), that is a jointly measurable map $(0, \infty) \times M \times M \ni (s,x,y) \mapsto h_\nabla (s,x,y) \in E_x \otimes E_y ^* \subset E \boxtimes E^*$ such that $h_\nabla (s, x, \cdot) \in \Gamma^2 (E^*)$, $h_\nabla (s, \cdot, y) \in \Gamma^2 (E)$, and $(\mathrm e ^{-s H_\nabla} \eta) (x) = \int _M h_\nabla (s, x, y) \, \eta (y) \, \mathrm d y$ for almost all $x \in M$, all $s>0$ and all $\eta \in \Gamma^2 (E)$. It is proved in the same chapter that $h_\nabla (s,x,y) ^* = h_\nabla (s,y,x)$ for all $s>0$ and almost all $x,y \in M$, where the star denotes the adjoint with respect to the Hermitian products on the fibers $E_x$ and $E_y$. One then shows that $h_\nabla$ satisfies locally the partial differential equation $(2 \partial_s + H_{\nabla,x} + H_{\nabla,y}) u = 0$ in the distributional sense (where $H_{\nabla,x}$ means the operator $H_\nabla$ acting on the argument $x$), whence, using theorem 1 in \cite{Mizohata57}, it follows that $h_\nabla$ is smooth. The same conclusions hold if instead of working on $M$ we work on some relatively compact open subset of it with smooth boundary. If $M = \bigcup _{i \in \mathbb N} U_i$ is an exhaustion of $M$ with such subsets, we shall use the notation $H_\nabla ^{(i)}$ for the Friedrichs extension of the connection Laplacean acting in $\Gamma^2 (E | _{U_i})$, and the corresponding heat kernel will be $h _\nabla ^{(i)}$. In the special case when the vector bundle is $M \times \mathbb C$ endowed with the usual Hermitian product and with the trivial connection, the Friedrichs extension of the connection Laplacean will be denoted simply by $H$, and the corresponding heat kernel simply by $h$; when working on a domain $U_i$ as above these will be $H^{(i)}$ and, respectively, $h^{(i)}$. It is known that $h^{(i)} \to h$ pointwise and monotonically (theorem 4 in chapter VIII of \cite{Chavel84}). It is shown in subchapter VII.3 of \cite{Guneysu17} that $\| h_\nabla (t,x,y) \| _{op} \le h(t,x,y)$ for all $t>0$ and almost all $x,y \in M$; since both these heat kernels have been seen to be smooth, and since co-null subsets are dense in $M$, it follows that the inequality is in fact true for all $x,y \in M$. A similar inequality holds on domains $U_i$ as above. This result is known as the \textbf{"diamagnetic inequality"} and will turn out to be crucial in our construction below.
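As a purely illustrative aside, the diamagnetic inequality admits a simple discrete analogue that can be checked numerically: on a cycle graph, the heat kernel of a "magnetic" Laplacean with arbitrary U(1) phases on the edges is dominated entrywise, in absolute value, by the free heat kernel. The following sketch (a toy model, not the manifold setting of this text) verifies this domination:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Toy discrete analogue of the diamagnetic inequality on a cycle graph:
# |exp(-t H_theta)_{xy}| <= exp(-t H_0)_{xy} entrywise, where H_theta is
# the graph Laplacean with U(1) phases exp(i*theta) on the edges.
n, t = 8, 0.7
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, n)  # one phase per edge (x, x+1 mod n)

H0 = np.zeros((n, n))
Hm = np.zeros((n, n), dtype=complex)
for x in range(n):
    y = (x + 1) % n
    for H in (H0, Hm):
        H[x, x] += 1
        H[y, y] += 1
    H0[x, y] -= 1; H0[y, x] -= 1
    Hm[x, y] -= np.exp(1j * theta[x]); Hm[y, x] -= np.exp(-1j * theta[x])

assert np.all(np.abs(expm(-t * Hm)) <= expm(-t * H0) + 1e-12)
\end{verbatim}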
\begin{definition} We shall say that the section $\sigma : \mathcal C_t \to \mathcal E$ is a \textbf{cylindrical section} if and only if there exists a section $s \in \Gamma^\infty \left( (\operatorname{End} E)^{\boxtimes (2^k + 1)} \right)$ such that $\sigma = s \circ \pi_k$. \end{definition} \begin{definition} We define the Lebesgue space $\Gamma^2 (\mathcal E)$ of square-integrable sections as the space of measurable sections $\sigma : \mathcal C_t \to \mathcal E$ identified under equality almost everywhere, with the property that the function $\mathcal C_t \ni c \mapsto \| \sigma(c) \| _{\mathcal E_c} \in [0, \infty)$ is in $L^2 (\mathcal C_t, w_t)$. \end{definition} \begin{theorem} The space $\Gamma^2 (\mathcal E)$ endowed with the scalar product \[ \langle \sigma_1, \sigma_2 \rangle _{\Gamma^2 (\mathcal E)} = \int _{\mathcal C_t} \langle \sigma_1 (c), \sigma_2 (c) \rangle _{\mathcal E_c} \, \mathrm d w_t (c) \] is a Hilbert space. Its dual is $\Gamma^2 (\mathcal E ^*)$, where $\mathcal E^*$ is the dual bundle of $\mathcal E$ in which the fiber $\mathcal E_c ^*$ is the dual space of $\mathcal E_c$ for all $c \in \mathcal C_t$. \end{theorem} \begin{proof} That $\Gamma^2 (\mathcal E)$ is an inner product space is easy. The proof of its metric completeness closely follows the usual proof of the completeness of the space $L^2$. The main ingredients are the fact that each fiber is, in turn, complete (being a Hilbert space), and the fact that $\mathcal C_t$ may be covered by a countable family of trivialization domains (a consequence of its separability). More specifically, assume that $(\sigma_k) _{k \in \mathbb N} \subset \Gamma^2 (\mathcal E)$ is a Cauchy sequence. There exists a subsequence $(\sigma_{k_m}) _{m \in \mathbb N}$ such that \[ \| \sigma_{k_{m+1}} - \sigma_{k_m} \| _{\Gamma^2 (\mathcal E)} \le \frac 1 {2^{m+1}} \] for each $m \in \mathbb N$. Define the function \[ f_{m+1} (c) = \sum _{l=0} ^m \| \sigma_{k_{l+1}} (c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \] for every $m \in \mathbb N$ and notice that $\| f_{m+1} \| _{L^2 (\mathcal C_t)} \le \sum _{l=0} ^m \frac 1 {2^{l+1}} \le 1$. As a consequence of the monotone convergence theorem, $(f_m) _{m \ge 1}$ has a limit $f \in L^2 (\mathcal C_t)$, finite almost everywhere. If $m \ge l \ge 0$, then for almost all $c \in \mathcal C_t$ we have \[ \| \sigma_{k_m} (c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \le \| \sigma_{k_m} (c) - \sigma_{k_{m-1}} (c) \| _{\mathcal E _c} + \dots + \| \sigma_{k_{l+1}} (c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \le f(c) - f_l (c) \to 0 \ , \] therefore for almost all $c \in \mathcal C_t$ the sequence $(\sigma_{k_m} (c)) _{m \in \mathbb N} \subset \mathcal E _c$ is Cauchy. Since the space $\mathcal E _c$ is, by construction, a Hilbert space, hence complete, it follows that for almost all $c \in \mathcal C_t$ there exists a unique element $\sigma(c) \in \mathcal E _c$ such that $\sigma_{k_m} (c) \to \sigma(c)$. We have already noticed that $\mathcal C_t$ may be covered by a countable family of trivialization domains (open balls) of $\mathcal E$; on each of them, $\sigma_{k_m} \to \sigma$ almost everywhere, therefore the restriction of $\sigma$ to each such trivialization domain is measurable. Since this family of trivialization domains is countable, it follows that $\sigma$ is measurable.
Furthermore, passing to the limit in the inequality \[ \| \sigma_{k_m} (c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \le f(c) - f_l (c) \le f(c) \] we obtain $\| \sigma(c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \le f(c)$, whence \[ | \|\sigma\| (c) - \|\sigma_{k_l}\| (c) | \le \| \sigma(c) - \sigma_{k_l} (c) \| _{\mathcal E _c} \le f(c) \ , \] hence $\| \sigma \| \in L^2(\mathcal C_t)$, which means that $\sigma \in \Gamma^2 (\mathcal E)$. Finally, applying the dominated convergence theorem to the sequence $c \mapsto \| \sigma(c) - \sigma_{k_l} (c) \| _{\mathcal E _c} ^2$ (which is dominated by $f^2$), we conclude that $\sigma_{k_m} \to \sigma$ in $\Gamma^2 (\mathcal E)$, so $\sigma_k \to \sigma$ in $\Gamma^2 (\mathcal E)$ (a Cauchy sequence with a convergent subsequence is convergent). That the dual of $\Gamma^2 (\mathcal E)$ is $\Gamma^2 (\mathcal E^*)$ is now easy, using the same techniques. \end{proof} More generally, and along the same lines of thought, one may introduce the space $\Gamma^p (\mathcal E)$ for every $p \in [1, \infty]$, which will be a Banach space. In particular, $\Gamma^q (\mathcal E) \subseteq \Gamma^p (\mathcal E)$ if $p \le q$, because the Wiener measure is finite. Also, $\Gamma^p (\mathcal E ^*)$ is the dual of $\Gamma^{\frac p {p-1}} (\mathcal E)$ for every $p \in (1, \infty]$. The proofs are analogous to those for the spaces $L^p$, the latter being found, for instance, in chapter 4 of \cite{Brezis11}. \begin{theorem} The space $\operatorname {Cyl} _t (\mathcal E)$ of continuous and bounded cylindrical sections is dense in $\Gamma^2 (\mathcal E)$. \end{theorem} \begin{proof} The inclusion $\operatorname {Cyl} _t (\mathcal E) \subset \Gamma^2 (\mathcal E)$ is trivial: if $s : M^{2^k + 1} \to (\operatorname{End} E)^{\boxtimes (2^k + 1)}$ is essentially bounded, then \[ \int _{\mathcal C_t} \| (s \circ \pi_k) (c) \| _{\mathcal E_c} ^2 \, \mathrm d w_t (c) \le \sup_{c \in \mathcal C_t} \| (s \circ \pi_k) (c) \| _{\mathcal E_c} ^2 \, w_t (\mathcal C_t) < \infty \ . \] Let now $\sigma' \in \operatorname {Cyl} _t (\mathcal E) ^\perp$; we shall show that $\sigma' = 0$. If $f \in \operatorname{Cyl} (\mathcal C_t)$ is a cylindrical function (the definition of which is given in \cite{Mustatea22}) and $\sigma \in \operatorname{Cyl}_t (\mathcal E)$ is a cylindrical section, then it is easy to show that $f \sigma \in \operatorname{Cyl}_t (\mathcal E)$ and, since $\sigma' \in \operatorname {Cyl} _t (\mathcal E) ^\perp$, we shall have in particular that \[ 0 = \langle f \sigma, \sigma' \rangle _{\Gamma^2 (\mathcal E)} = \int _{\mathcal C_t} f(c) \, \langle \sigma(c), \sigma'(c) \rangle _{\mathcal E_c} \, \mathrm d w_t (c) \ . \] Using theorem 2.1 in \cite{Mustatea22}, the cylindrical functions are dense in $L^2 (\mathcal C_t)$, so \[ \int _{\mathcal C_t} f(c) \, \langle \sigma(c), \sigma'(c) \rangle _{\mathcal E_c} \, \mathrm d w_t (c) = 0 \] for all $f \in L^2 (\mathcal C_t)$, whence we deduce that $\langle \sigma(c), \sigma'(c) \rangle _{\mathcal E_c} = 0$ for all $c$ in some co-null subset $C_\sigma \subseteq \mathcal C_t$. Let $M = \bigcup _{i \in \mathbb N} V_i '$ be a cover of $M$ with open trivialization domains for $E$. Let $V_0 = V_0 '$ and $V_i = V_i ' \setminus (V_0 \cup \dots \cup V_{i-1})$ for $i \ge 1$; these subsets will be measurable, pairwise disjoint, trivialization domains. Let $\{ \eta _i ^1, \dots, \eta _i ^{r^2} \}$ be a measurable orthonormal frame in $\operatorname{End} E | _{V_i}$ in which $\eta _i ^1 (x) = \operatorname{Id} _{E_x}$ for all $x \in V_i$.
Defining $\eta ^l$ by $\eta ^l | _{V_i} = \eta _i ^l$ for all $1 \le l \le r^2$ and $i \in \mathbb N$, we obtain a global measurable orthonormal frame $\{ \eta ^1, \dots, \eta ^{r^2} \}$ in $\operatorname{End} E$ made of sections from $\Gamma^\infty (\operatorname {End} E)$, in which $\eta ^1 (x) = \operatorname{Id} _{E_x}$ for all $x \in M$. For every $k \in \mathbb N$ and $1 \le j_0, \dots, j_{2^k} \le r^2$ define \[ \sigma _{j_0 \dots j_{2^k}} (c) = \eta ^{j_0} (c(0)) \otimes \eta ^{j_1} (c(\frac t {2^k})) \otimes \dots \otimes \eta ^{j_{2^k}} (c(t)) \] and notice that $\sigma _{j_0 \dots j_{2^k}} \in \operatorname{Cyl}_t (\mathcal E)$ and that the subset $\{ \sigma _{j_0 \dots j_{2^k}} (c) \mid k \in \mathbb N, \ 1 \le j_0, \dots, j_{2^k} \le r^2 \}$ is a countable orthonormal basis in the fiber $\mathcal E_c$ for all $c \in \mathcal C_t$. We then deduce that there exists a co-null subset $C_{j_0 \dots j_{2^k}} \subseteq \mathcal C_t$ such that \[ \langle \sigma _{j_0 \dots j_{2^k}} (c), \sigma' (c) \rangle _{\mathcal E_c} = 0 \] for all $c \in C_{j_0 \dots j_{2^k}}$, all $k \in \mathbb N$ and $1 \le j_0, \dots, j_{2^k} \le r^2$. If \[ C = \bigcap _{k \in \mathbb N} \bigcap _{1 \le j_0, \dots, j_{2^k} \le r^2} C_{j_0 \dots j_{2^k}} \] then $C$ will be co-null and \[ \langle \sigma _{j_0 \dots j_{2^k}} (c), \sigma' (c) \rangle _{\mathcal E_c} = 0 \] for all $c \in C$, for all $k \in \mathbb N$ and $1 \le j_0, \dots, j_{2^k} \le r^2$, whence $\langle u, \sigma' (c) \rangle _{\mathcal E_c} = 0$ for all $u \in \mathcal E_c$, hence $\sigma' (c) = 0$ for all $c \in C$, so $\sigma'=0$ in $\Gamma^2 (\mathcal E)$, so $\operatorname {Cyl} _t (\mathcal E) ^\perp = 0$, meaning that $\operatorname {Cyl} _t (\mathcal E)$ is dense in $\Gamma^2 (\mathcal E)$. \end{proof} In what follows, the main technical result (theorem \ref{application of Chernoff's theorem}) will be based upon the use of Chernoff's approximation theorem for $1$-parameter semigroups. This, in turn, will require us to work on compact subsets of $M$ in order to be able to guarantee the boundedness of certain complicated continuous functions. For this reason we shall consider an exhaustion $M = \bigcup _{i \in \mathbb N} U_i$ of $M$ with relatively compact connected domains with smooth boundary, such that $x_0 \in U_0$. In particular, these domains will be Riemannian manifolds, therefore all the above considerations will apply to them, too. All the mathematical objects on $U_i$ obtained as restrictions of some extrinsic objects will be represented visually by the restriction symbol (such as in, for instance, the bundle $E | _{U_i}$), and all the objects intrinsically associated to $U_i$ will carry the index $(i)$ (for instance: the heat kernel associated to the connection $\nabla$ in $E | _{U_i}$ will be $h_\nabla ^{(i)}$, the Laplacean understood as the generator of the heat semigroup acting in $C (\overline {U_i})$ will be $L^{(i)}$ etc.). For each $i \in \mathbb N$ we shall consider the space \[ \mathcal C_t (\overline{U_i}) = \{ c \in \mathcal C_t \mid c([0,t]) \subseteq \overline {U_i} \} \] endowed with the restriction of the distance $D$ introduced on $\mathcal C_t$. The natural measure on $\mathcal C_t (\overline{U_i})$ will \textit{not} be the restriction of the Wiener measure $w_t$, but rather the intrinsic Wiener measure $w_t ^{(i)}$ obtained from the intrinsic heat kernel $h ^{(i)}$ on $\overline {U_i}$.
It is elementary that $\mathcal C_t (\overline{U_i})$ is closed (and therefore Borel) in $\mathcal C_t$: the evaluation map $\operatorname{ev} : [0,t] \times \mathcal C_t \to M$ defined by $\operatorname{ev} (s, \gamma) = \gamma(s)$ is clearly continuous, whence \[ \mathcal C_t (\overline {U_i}) = \{ \gamma \in \mathcal C_t \mid \gamma(s) \in \overline {U_i} \ \forall s \in [0,t] \} = \bigcap _{s \in [0,t]} \operatorname{ev} (s, \cdot) ^{-1} (\overline {U_i}) \] is closed. One shows similarly that, if $i \le j$, then $\mathcal C_t (\overline {U_i})$ is closed in $\mathcal C_t (\overline {U_j})$. It is also known that $w_t ^{(i)} \le w_t | _{\mathcal C_t (\overline{U_i})}$. For details about the Wiener measure, the article \cite{BP11} contains all the necessary constructions and explanations; note that the constructions therein are not probabilistic, but functional-analytic, therefore our project of a purely functional-analytic construction of the stochastic parallel transport is not compromised. In the following, we shall define a continuous linear functional on $\Gamma^2 (\mathcal E | _{\mathcal C_t (\overline {U_i})})$ to which, by Riesz's representation theorem, there will correspond a section from $\Gamma^2 (\mathcal E^* | _{\mathcal C_t (\overline {U_i})})$ which will be seen to be intimately linked to the stochastic parallel transport. Fix $\omega \in E_{x_0} ^*$ and $\eta \in \Gamma_{cb} (E)$, and define the functional $W_{t, \omega, \eta} ^{(i)}$ on continuous and bounded cylindrical sections as follows: if $s : \overline {U_i} ^{2^k + 1} \to (\operatorname{End} E)^{\boxtimes (2^k + 1)} | _{\overline {U_i} ^{2^k + 1}}$ is a continuous and bounded section, define \begin{align*} W_{t, \omega, \eta} ^{(i)} (s \circ \pi_k) = & \int _{U_i} \mathrm d x_1 \dots \int _{U_i} \mathrm d x_{2^k} \left[ \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \right. \\ & \left. \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) \right] \cdot s(x_0, x_1, \dots, x_{2^k}) \ . \end{align*} The dot inside the integral denotes not a scalar product but a tensor contraction which, in order to be understood, requires a brief discussion. The term $\omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k})$ belongs to the space $E_{x_0} ^* \otimes (E_{x_0} \otimes E_{x_1} ^*) \otimes \dots \otimes (E_{x_{2^k - 1}} \otimes E_{x_{2^k}} ^*) \otimes E_{x_{2^k}}$ which is naturally isomorphic to $(E_{x_0} ^* \otimes E_{x_0}) \otimes \dots \otimes (E_{x_{2^k}} ^* \otimes E_{x_{2^k}})$, which in turn is isomorphic to $(\operatorname{End} E^*) _{x_0} \otimes \dots \otimes (\operatorname{End} E^*) _{x_{2^k}}$ (notice that the latter isomorphism is not the natural one, but the natural one multiplied by a normalization factor, because the scalar product of two endomorphisms has been defined such that the identity has norm $1$). In turn, $s(x_0, \dots, x_{2^k})$ belongs to the space $(\operatorname{End} E) _{x_0} \otimes \dots \otimes (\operatorname{End} E) _{x_{2^k}}$, therefore the term on the left of the dot may be naturally applied to the one on the right of the dot, this being the meaning of the tensor contraction inside the integral. Let us show that, indeed, the functional is well defined.
First, if $l > k$ then there exists a projection $\pi_{kl} : M^{2^l+1} \to M^{2^k+1}$ given by $\pi_{kl} (x_0, \dots, x_{2^l}) = (x_{j 2^{l-k}})_{0 \le j \le 2^k}$, so that $s \circ \pi_k = (s \circ \pi_{kl}) \circ \pi_l$. This shows that a cylindrical section may have several writings of the form $s \circ \pi_k$. This fact is fortunately compensated inside the integral by the convolution property of the kernel $h_\nabla ^{(i)}$, which ensures that the defining formula of $W_{t, \omega, \eta} ^{(i)}$ does not depend on the writing of the cylindrical sections. In order to show that the integral in the definition of $W_{t, \omega, \eta} ^{(i)}$ exists, let us notice that \begin{gather*} \left| \left[ \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) \right] \cdot s(x_0, x_1, \dots, x_{2^k}) \right| \le \\ \left\| \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) \right\| \ \| s(x_0, x_1, \dots, x_{2^k}) \| \ , \end{gather*} each of the two norms being considered in the appropriate space. We shall leave the second as it is, but we shall work on the first one. Let us consider the orthonormal basis $\{ e^j _1, \dots, e^j _r \}$ in each fiber $E_{x_j}$, and the dual basis $\{ f_j ^1, \dots, f_j ^r \}$ in each fiber $E_{x_j} ^*$, for $0 \le j \le 2^k$. In these bases we have (using Einstein's summation convention) $\omega = \omega_{i'_0} \, f_0 ^{i'_0}$, $h_\nabla ^{(i)} \left( \frac t {2^k}, x_{j-1}, x_j \right) = h^{(i), i_{j-1}} _{i'_j} \, e^{j-1} _{i_{j-1}} \otimes f_j ^{i'_j}$ (for $1 \le j \le 2^k$) and $\eta(x_{2^k}) = \eta^{i_{2^k}} \, e^{2^k} _{i_{2^k}}$, hence \begin{gather*} \left\| \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) \right\| ^2 _{(\operatorname {End} E^*) _{x_0} \otimes \dots \otimes (\operatorname {End} E^*) _{x_{2^k}}} = \\ = \omega_{i'_0} \, h^{(i), i_0} _{i'_1} \dots h^{(i), i_{2^k-1}} _{i'_{2^k}} \, \eta^{i_{2^k}} \, \overline {\omega_{j'_0} \, h^{(i), j_0} _{j'_1} \dots h^{(i), j_{2^k-1}} _{j'_{2^k}} \, \eta^{j_{2^k}}} \cdot \\ \cdot \langle f_0 ^{i'_0} \otimes e^0 _{i_0} \otimes \dots \otimes f_{2^k} ^{i'_{2^k}} \otimes e^{2^k} _{i_{2^k}}, f_0 ^{j'_0} \otimes e^0 _{j_0} \otimes \dots \otimes f_{2^k} ^{j'_{2^k}} \otimes e^{2^k} _{j_{2^k}} \rangle _{(\operatorname {End} E^*) _{x_0} \otimes \dots \otimes (\operatorname {End} E^*) _{x_{2^k}}} = \\ = \omega_{i'_0} \, h^{(i), i_0} _{i'_1} \dots h^{(i), i_{2^k-1}} _{i'_{2^k}} \, \eta^{i_{2^k}} \, \overline {\omega_{j'_0} \, h^{(i), j_0} _{j'_1} \dots h^{(i), j_{2^k-1}} _{j'_{2^k}} \, \eta^{j_{2^k}}} \cdot \\ \cdot \langle f_0 ^{i'_0} \otimes e^0 _{i_0}, f_0 ^{j'_0} \otimes e^0 _{j_0} \rangle _{(\operatorname{End} E^*) _{x_0}} \dots \langle f_{2^k} ^{i'_{2^k}} \otimes e^{2^k} _{i_{2^k}}, f_{2^k} ^{j'_{2^k}} \otimes e^{2^k} _{j_{2^k}} \rangle _{(\operatorname{End} E^*) _{x_{2^k}}} = \\ = \left( \frac 1 r \right) ^{2^k + 1} \omega_{i'_0} \, h^{(i), i_0} _{i'_1} \dots h^{(i), i_{2^k-1}} _{i'_{2^k}} \, \eta^{i_{2^k}} \, \overline {\omega_{j'_0} \, h^{(i), j_0} _{j'_1} \dots h^{(i), j_{2^k-1}} _{j'_{2^k}} \, \eta^{j_{2^k}}} \cdot \\ \cdot \langle f_0 ^{i'_0} \otimes e^0 _{i_0}, f_0 ^{j'_0} \otimes e^0 _{j_0} \rangle _{E_{x_0} ^* \otimes E_{x_0}} \dots
\langle f_{2^k} ^{i'_{2^k}} \otimes e^{2^k} _{i_{2^k}}, f_{2^k} ^{j'_{2^k}} \otimes e^{2^k} _{j_{2^k}} \rangle _{E_{x_{2^k}} ^* \otimes E_{x_{2^k}}} = \\ = \left( \frac 1 r \right) ^{2^k + 1} \sum_{i'_0} |\omega_{i'_0}|^2 \sum_{i_0, i'_1} |h^{(i), i_0} _{i'_1}|^2 \dots \sum_{i_{2^k-1}, i'_{2^k}} |h^{(i), i_{2^k-1}} _{i'_{2^k}}|^2 \sum_{i_{2^k}} |\eta^{i_{2^k}}|^2 = \\ = \left( \frac 1 r \right) ^{2^k + 1} \| \omega \| ^2 _{E_{x_0} ^*} \left\| h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \right\| ^2 _{E_{x_0} \otimes E_{x_1}^*} \dots \\ \dots \left\| h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \right\| ^2 _{E_{x_{2^k-1}} \otimes E_{x_{2^k}}^*} \| \eta (x_{2^k}) \| ^2 _{E_{x_{2^k}}} \le \\ \le \frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} \left\| h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \right\| ^2 _{op} \dots \left\| h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \right\| ^2 _{op} \| \eta (x_{2^k}) \| ^2 _{E_{x_{2^k}}} \le \\ \le \frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} h ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) ^2 \dots h ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) ^2 \| \eta (x_{2^k}) \| ^2 _{E_{x_{2^k}}} \ , \end{gather*} where we have used that $\| A \| _{V \otimes U^*} \le \sqrt r \| A \| _{op}$ for any linear map $A : U \to V$ between Hermitian vector spaces of dimension $r$. We have also used the diamagnetic inequality $\| h_\nabla ^{(i)} (s, x, y) \| _{op} \le h^{(i)} (s,x,y)$. We conclude that \begin{gather*} \left| \left[ \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) \right] \cdot s(x_0, x_1, \dots, x_{2^k}) \right| \le \\ \le \frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} h ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \dots h ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \| \eta (x_{2^k}) \| _{E_{x_{2^k}}} \| s(x_0, x_1, \dots, x_{2^k}) \| \ , \end{gather*} hence that $W_{t, \omega, \eta} ^{(i)}$ is well defined, trivially linear, and that (using the Cauchy-Schwarz inequality in the second step) \begin{align*} |W_{t, \omega, \eta} ^{(i)} (s \circ \pi_k)| & \le \frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} \int _{\mathcal C_t (\overline {U_i})} \| \eta (c(t)) \| \, \| (s \circ \pi_k) (c) \| \, \mathrm d w_t ^{(i)} (c) \le \\ & \le \frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} [(\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0)] ^{\frac 1 2} \, \| s \circ \pi_k \| _{\Gamma^2 (\mathcal E | _{\mathcal C_t (\overline {U_i})})} \ , \end{align*} where $\| \eta \|$ denotes the function $M \ni x \mapsto \| \eta(x) \| _{E_x} \in [0, \infty)$. Since the continuous and bounded cylindrical sections are dense in $\Gamma^2 (\mathcal E | _{\mathcal C_t (\overline {U_i})})$, it follows that $W_{t, \omega, \eta} ^{(i)}$ extends uniquely to a continuous linear functional on this space, therefore there exists a unique $\rho_{t, \omega, \eta} ^{(i)} \in \Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})$ such that \[ W_{t, \omega, \eta} ^{(i)} (\sigma) = \int _{\mathcal C_t (\overline {U_i})} \rho_{t, \omega, \eta} ^{(i)} (c) (\sigma (c)) \, \mathrm d w_t ^{(i)} (c) \] for every $\sigma \in \Gamma^2 (\mathcal E | _{\mathcal C_t (\overline {U_i})})$. Furthermore, $\| \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} \le \frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} [(\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0)] ^{\frac 1 2}$.
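\begin{remark} As a sanity check for the definition of $W_{t, \omega, \eta} ^{(i)}$, consider the trivial line bundle $E = M \times \mathbb C$ with the flat connection: then $r = 1$, $h_\nabla ^{(i)} = h^{(i)}$, $\omega$ is a scalar, and for the constant cylindrical section $s = 1$ (written at any level $k$) the convolution property of $h^{(i)}$ collapses the iterated integral to \[ W_{t, \omega, \eta} ^{(i)} (1) = \omega \int _{U_i} h^{(i)} (t, x_0, y) \, \eta(y) \, \mathrm d y = \omega \, (\mathrm e ^{-t L^{(i)}} \eta) (x_0) \ , \] independently of the level $k$, as announced. \end{remark}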
In the following we shall try to uncover some of the geometrical properties of $\rho_{t, \omega, \eta} ^{(i)}$; more precisely, we shall investigate its connection with the parallel transport in $E$. To this end, let us define the "cut-off parallel transport" $P(x,y) : E_y \to E_x$ for every $(x,y) \in M \times M$ by: \begin{itemize}[wide] \item $P(x,y) = $ the parallel transport in $E$ from $y$ to $x$ along the geodesic, whenever there exists a unique minimizing geodesic in $M$ defined on $[0,1]$ between $x$ and $y$, \item $P(x,y) = 0$ otherwise. \end{itemize} Let us notice that $P$ so defined is a section in the external tensor product bundle $E \boxtimes E^* \to M \times M$. Since the subset \[ \{ (x,y) \in M \times M \mid \text{there exists a unique minimizing geodesic between } x \text{ and } y \text{ defined on } [0,1] \} \] is open in $M \times M$, it will be Borel measurable. Since the section $(x,y) \mapsto P(x,y)$ in $E \boxtimes E^*$ is continuous on this subset, $P$ will be a measurable section in this bundle. With $P$ so defined, define \[ P_{t, \omega, \eta, k} (c) = \omega \otimes P \left( c(0), c \left( \frac t {2^k} \right) \right) \otimes \dots \otimes P \left( c \left( \frac {(2^k-1) t} {2^k} \right), c(t) \right) \otimes \eta (c(t)) \] for every curve $c \in \mathcal C_t$ and every $k \in \mathbb N$. Since $P$ is measurable, with operator norm bounded by $1$ at every point of $M \times M$, we conclude that $P_{t, \omega, \eta, k}$ is a measurable and bounded cylindrical section in the bundle $\mathcal E ^*$. We shall show (theorem \ref{approximation of rho}) that $\rho_{t, \omega, \eta}$ is the limit of the sequence $(P_{t, \omega, \eta, k}) _{k \in \mathbb N}$ in the norm topology of $\Gamma^2 (\mathcal E ^*)$. We shall need to use $P$ (which is not smooth) in contexts requiring differential calculus methods; in order to do this, we shall now introduce some smooth cut-off functions. In what follows, let $U$ denote any one of the domains $U_i$. Let $\kappa : [0, \infty) \to [0,1]$ be a smooth function such that $\kappa | _{[0, \frac 1 3]} = 1$ and $\kappa | _{[\frac 1 2, \infty)} = 0$. Let $\operatorname{injrad}_U : U \to (0, \infty)$ be the injectivity radius function on $U$; we emphasize that this is not the restriction of $\operatorname{injrad}_M$ to $U$, but rather it is computed intrinsically, using the restriction of the Riemannian structure to $U$ (for basic details about the injectivity radius, see p.118 of \cite{Chavel06}). Being continuous and strictly positive, it dominates some smooth function $\operatorname{rad} : U \to (0, \infty)$ with $\operatorname{rad}(x) < \operatorname{injrad}_U (x)$. In particular, $\operatorname{rad}(x) \le d_U (x, \partial U)$ (the distance to the boundary of $U$, computed using the intrinsic distance $d_U$ of $U$, not the distance $d$ restricted to $U$). We may now finally define the desired cut-off function $\chi : U \times U \to [0,1]$ by $\chi (x,y) = \kappa \left( \frac {d_U (x,y)^2} {\operatorname{rad}(x) ^2} \right)$. Notice that $\chi$ is smooth (the square is necessary in order to guarantee the smoothness close to the points with $y=x$). We shall also define the cut-off function $\chi_k : \mathcal C_t (\overline U) \to [0,1]$ by \[ \chi_k (c) = \chi \left( c(0), c\left( \frac t {2^k} \right) \right) \chi \left( c \left( \frac t {2^k} \right), c \left( \frac {2t} {2^k} \right) \right) \dots \chi \left( c \left( \frac {(2^k - 1)t} {2^k} \right), c(t) \right) \] for every $k \ge 0$.
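\begin{remark} For instance, if $M = \mathbb R^n$ and $E = \mathbb R^n \times \mathbb C^r$ with the flat connection, then every pair of points is joined by a unique minimizing geodesic (the line segment), so $P(x,y) = \operatorname{Id} _{\mathbb C^r}$ for all $(x,y) \in M \times M$ and \[ P_{t, \omega, \eta, k} (c) = \omega \otimes \operatorname{Id} \otimes \dots \otimes \operatorname{Id} \otimes \eta(c(t)) \] for every curve $c \in \mathcal C_t$; in this flat situation only the cut-off $\chi$ retains some geometric content, through the choice of the function $\operatorname{rad}$. More generally, on a Cartan-Hadamard manifold (complete, simply connected, with non-positive sectional curvature) $P(x,y)$ is the genuine parallel transport for every pair of points. \end{remark}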
If $h^{(i)}$ is the intrinsic heat kernel of $\overline {U_i}$, the operators defined by \[ C(\overline {U_i}) \ni f \mapsto \int _{U_i} h^{(i)} (t, \cdot ,y) \, f(y) \, \mathrm d y \in C(\overline {U_i}) \] together with the identity operator form a strongly continuous one-parameter semigroup in $C(\overline {U_i})$. This will have a generator (closed operator) that we shall denote by $L^{(i)}$, densely defined, with the domain given by (see \cite{Davies80}, chap. 1) \[ \operatorname{Dom} (L^{(i)}) = \left\{ f \in C(\overline {U_i}) ; \lim _{t \to 0} \frac 1 t \left( \int _{U_i} h^{(i)} (t, \cdot ,y) \, f(y) \, \mathrm d y - f \right) \in C(\overline {U_i}) \right\} \ . \] We shall denote this semigroup by $(\mathrm e ^{-s L^{(i)}}) _{s \ge 0}$. Integrating twice by parts, one sees that $C_0 ^\infty (U_i) \subset \operatorname{Dom} (L^{(i)})$. An essential domain for $L^{(i)}$ is \[ \mathcal D ^{(i)} = \bigcup _{s > 0} \mathrm e ^{-s L^{(i)}} (C(\overline {U_i})) \] (we avoid the otherwise customary notation $\mathcal E$, already in use for the bundle above $\mathcal C_t$). Since the heat semigroup is smoothing (again, one may use \cite{Mizohata57}, or one's favourite Sobolev space techniques, to see this), the functions in $\mathcal D ^{(i)}$ will be smooth. Since $h^{(i)}$ vanishes on the boundary $\partial {U_i}$, the functions in $\mathcal D ^{(i)}$ will also vanish on $\partial {U_i}$. With exactly the same arguments, but using now the integral kernel $h_\nabla ^{(i)}$ instead of $h ^{(i)}$, we shall obtain a semigroup acting on $\Gamma_c (E | _{\overline {U_i}})$, the generator of which will be denoted by $L_\nabla ^{(i)}$, and the domain of which will contain $\Gamma_0 ^\infty (E | _{U_i})$. This semigroup will be denoted $(\mathrm e ^{-s L^{(i)} _\nabla}) _{s \ge 0}$. The crucial tool to be used in the following will be Chernoff's theorem (lemma 3.28 in \cite{Davies80}). For the reader's convenience, we shall give its statement here. \begin{theorem}[Chernoff] Assume that $(R_t) _{t \ge 0}$ is a family of contractions in a Banach space $X$, with $R_0 = \operatorname{Id}_X$. Let $\mathcal D \subseteq X$ be an essential domain for the generator $L$ of a strongly continuous one-parameter semigroup $(\mathrm e ^{-t L}) _{t \ge 0}$ on $X$. If $\lim _{t \to 0} \frac 1 t (R_t f - f) = -L f$ for every $f \in \mathcal D$, then $\mathrm e ^{-t L} = \lim _{k \to \infty} \big( R_{\frac t k} \big) ^k$ strongly for every $t \ge 0$. Furthermore, the convergence is uniform with respect to $t$ on bounded subsets of $[0, \infty)$. \end{theorem} With all these preparations, we are ready now for the main technical result of this work, from which all the developments announced in the introduction will unravel. \begin{theorem} \label{application of Chernoff's theorem} If $R_t ^{(i)} : C(\overline {U_i}) \to C(\overline {U_i})$, with $t \ge 0$, is the family of operators given by $R_0 ^{(i)} f = f$ and \[ (R_t ^{(i)} f) (x) = \frac 1 r \int _{U_i} \langle h_\nabla ^{(i)} (t, x, y), \chi(x,y) P(x,y) \rangle _{E_x \otimes E_y ^*} f(y) \, \mathrm d y \] for $f \in C(\overline {U_i})$, then $\lim _{k \to \infty} \left( R_ {\frac t k} ^{(i)} \right) ^k f = \mathrm e^{-t L^{(i)}} f$ for every $f \in C(\overline {U_i})$, uniformly with respect to $t$ from compact subsets of $[0, \infty)$. \end{theorem} \begin{proof} The proof reduces to the verification of the hypotheses in Chernoff's theorem.
To begin with, notice that for $t>0$ \begin{align*} |(R_t ^{(i)} f) (x)| & \le \int _{U_i} \chi(x,y) \, \frac 1 {\sqrt r} \| h_\nabla ^{(i)} (t,x,y) \| _{E_x \otimes E_y ^*} \, \frac 1 {\sqrt r} \| P(x,y) \| _{E_x \otimes E_y ^*} \, |f(y)| \, \mathrm d y \le \\ & \le \int _{U_i} \chi(x,y) \, \| h_\nabla ^{(i)} (t,x,y) \| _{op} \, \| P(x,y) \| _{op} \, |f(y)| \, \mathrm d y \le \\ & \le \int _{U_i} h ^{(i)} (t,x,y) |f(y)| \, \mathrm d y \le \| f \| _{C(\overline{U_i})} \ , \end{align*} so $R_t ^{(i)}$ is a contraction for every $t \ge 0$ (we have used again the fact that $\| A \| _{V \otimes U^*} \le \sqrt r \| A \| _{op}$, the diamagnetic inequality and the obvious inequalities $\chi \le 1$ and $\| P(x,y) \| _{op} \le 1$). It remains to show that $\lim _{t \to 0} \| \frac 1 t (R_t ^{(i)} f - f) + L ^{(i)} f \| _{C(\overline{U_i})} = 0$ for every $f \in \mathcal D ^{(i)}$; to this end, let us show first that $(R_t ^{(i)} f) (x)$ is smooth with respect to $t$ for every $x \in \overline{U_i}$. If $\operatorname{Trace}$ denotes the trace in $\operatorname{End} E_x$, notice that \begin{align*} (R_t ^{(i)} f) (x) & = \frac 1 r \int _{U_i} \langle h_\nabla ^{(i)} (t, x, y), \chi(x,y) P(x,y) \rangle _{E_x \otimes E_y ^*} f(y) \, \mathrm d y = \\ & = \frac 1 r \int _{U_i} \operatorname{Trace} [h_\nabla ^{(i)} (t, x, y) \chi(x,y) P(x,y) ^* ] f(y) \, \mathrm d y = \\ & = \frac 1 r \int _{U_i} \operatorname{Trace} [h_\nabla ^{(i)} (t, x, y) \chi(x,y) P(y,x) ] f(y) \, \mathrm d y = \\ & = \frac 1 r \operatorname{Trace} \{ \mathrm e ^{-t L^{(i)} _\nabla} [\chi(x, \cdot) \, P(\cdot, x) \, f] \} (x) \ . \end{align*} Examining the construction of $\chi$, it is clear that $\chi(x, \cdot) \, P(\cdot, x)$ is a smooth section with compact support (more precisely, a smooth section in $E | _{\overline {U_i}} \otimes E_x ^*$, which we treat componentwise with respect to the second, fixed, factor), the possible singularities of $P(\cdot, x)$ being away from the support of $\chi(x, \cdot)$; since $f$ is smooth, being from $\mathcal D ^{(i)}$, their product is again a smooth section with compact support, therefore in the domain of every power of $L^{(i)} _\nabla$. Under these conditions, we know from the general theory of $1$-parameter $C_0$-semigroups in Banach spaces that the map \[ [0, \infty) \ni t \mapsto \mathrm e ^{-t L^{(i)} _\nabla} [\chi(x, \cdot) \, P(\cdot, x) \, f] \in \Gamma_c (E | _{\overline{U_i}}) \] is smooth. If $\{e_1, \dots, e_r\}$ is an orthonormal basis in $E_x$, and if $\delta_x$ is the Dirac measure concentrated at $x$, then $\delta_x \otimes e_m$, acting by $\sigma \mapsto \langle \sigma(x), e_m \rangle _{E_x}$, is easily seen to be a continuous linear functional on $\Gamma_c (E | _{\overline{U_i}})$ for each $1 \le m \le r$; since \[ \{ \mathrm e ^{-t L^{(i)} _\nabla} [\chi(x, \cdot) \, P(\cdot, x) \, f] \} (x) = \sum _{m=1} ^r (\delta_x \otimes e_m) \left( \mathrm e ^{-t L^{(i)} _\nabla} [\chi(x, \cdot) \, P(\cdot, x) \, f] \right) e_m \ , \] the smoothness of the map $[0, \infty) \ni t \mapsto \{ \mathrm e ^{-t L^{(i)} _\nabla} [\chi(x, \cdot) \, P(\cdot, x) \, f] \} (x) \in E_x$ is clear, whence the smoothness of the function $[0, \infty) \ni t \mapsto (R_t ^{(i)} f) (x) \in \mathbb C$ follows immediately. Expanding with respect to $t$ we have, for every $x \in \overline{U_i}$, \begin{equation} (R_t ^{(i)} f) (x) = f(x) + \partial_t |_{t=0} (R_t ^{(i)} f) (x) \, t + \int _0 ^t (t-s) \partial _s ^2 (R_s ^{(i)} f) (x) \, \mathrm d s \ .
\label{Taylor expansion} \end{equation} For the calculation of the first derivative of $(R_t ^{(i)} f) (x)$ at $t=0$ we have \begin{align*} \partial_t |_{t=0} (R_t ^{(i)} f) (x) & = \partial_t |_{t=0} \int _{U_i} \frac 1 r \, \operatorname{Trace} [h_\nabla ^{(i)} (t,x,y) \, \chi(x,y) \, P(x,y)^* ] \, f(y) \, \mathrm d y = \\ & = \frac 1 r \lim _{t \to 0} \int _{U_i} \operatorname{Trace} \{[-L_{\nabla, (y)} ^{(i)} h_\nabla ^{(i)} (t,x,y)] \, \chi(x,y) \, P(y,x) \, f(y) \} \, \mathrm d y = \\ & = \frac 1 r \operatorname{Trace} \left\{ \lim _{t \to 0} \int _{U_i} h_\nabla ^{(i)} (t,x,y) \{ -L_{\nabla, (y)} ^{(i)} [\chi(x,y) \, P(y,x) \, f(y)] \} \, \mathrm d y \right\} = \\ & = \frac 1 r \operatorname{Trace} \left\{ \{ -L_{\nabla, (y)} ^{(i)} [\chi(x,y) \, P(y,x) \, f(y)] \} _{y=x} \right\} . \end{align*} Some clarifications about the above calculations are in order. First, the notation $L_{\nabla, (y)} ^{(i)}$ means that the Laplacian acts with respect to $y \in \overline{U_i}$. Second, we have been able to move the Laplacian from acting on $h_{\nabla} ^{(i)} (t, x, \cdot)$ over to acting on the product $\chi(x, \cdot) \, P(\cdot, x) \, f$ because $\chi (x, \cdot)$ is smooth with compact support, and the other two factors are also smooth inside this support; this is, in fact, the only reason for which the introduction of $\chi$ in our reasoning was necessary. Since $L_\nabla ^{(i)}$ is a local operator, since $\chi(x, \cdot) = 1$ near $x$, and since $\chi(x, \cdot) \, P(\cdot, x) \, f$ is smooth near $x$, we may replace $L_\nabla ^{(i)}$ with $\nabla^* \nabla$ and we may also drop $\chi$ in order to obtain the simpler formula \[ \partial_t |_{t=0} (R_t ^{(i)} f) (x) = - \frac 1 r \operatorname{Trace} \{ \nabla^* \nabla \, [P(\cdot, x) \, f] \} (x) . \] Choosing an orthonormal basis $\{e_1, \dots, e_r\}$ in $E_x$, the above formula becomes \[ \partial_t |_{t=0} (R_t ^{(i)} f) (x) = - \frac 1 r \sum _{k=1} ^r \langle \nabla^* \nabla \, [P(\cdot, x) e_k \, f] (x), e_k \rangle _{E_x} . \] Since $P(\cdot, x)$ is obtained by parallel transport along the minimizing geodesics emanating from $x$, both $\nabla [P(\cdot, x) e_k]$ and $\nabla^* \nabla [P(\cdot, x) e_k]$ vanish at the point $x$, so \[ \partial_t |_{t=0} (R_t ^{(i)} f) (x) = - \frac 1 r \sum _{k=1} ^r \langle \nabla^* \, [P(\cdot, x) e_k \otimes \mathrm d f] (x), e_k \rangle _{E_x} . \] It is a known result in Riemannian geometry that $\nabla ^* (\eta \otimes \alpha) = - \nabla _{\alpha^\sharp} \eta + (\mathrm d ^* \alpha) \eta$ for every real smooth $1$-form $\alpha$ and every smooth section $\eta$ in $E$, where $\alpha^\sharp$ is the tangent field dual to the $1$-form $\alpha$ under the usual "musical" isomorphisms. In particular, if $f$ is real then, since $\mathrm d ^* \mathrm d f = - \Delta f$, \[ \nabla ^* (\eta \otimes \mathrm d f) = - \nabla _{\operatorname{grad} f} \eta - (\Delta f) \eta \ , \] whence it follows that \begin{align*} \partial_t |_{t=0} (R_t ^{(i)} f) (x) & = - \frac 1 r \sum _{k=1} ^r \langle [-\nabla_{\operatorname{grad} f} P(\cdot, x) e_k] (x) - (\Delta f) (x) \, [P(\cdot, x) e_k] (x), e_k \rangle _{E_x} = \\ & = \frac 1 r \sum _{k=1} ^r \langle (\Delta f) (x) \, [P(x, x) e_k], e_k \rangle _{E_x} = (\Delta f) (x) = -(L^{(i)} f) (x) \ , \end{align*} where we have used again that $[\nabla_{\operatorname{grad} f} P(\cdot, x) e_k] (x) = 0$ for the very same geometrical reasons as above. The result, obtained for real $f$, extends now trivially to complex $f$.
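As a quick consistency check, assume for a moment that $E = \overline{U_i} \times \mathbb C$ is the trivial line bundle with the flat connection, so that $r = 1$ and $P = 1$ wherever $\chi$ does not vanish: the computation above then reduces to \[ \partial_t |_{t=0} \int _{U_i} h^{(i)} (t, x, y) \, \chi(x,y) \, f(y) \, \mathrm d y = \Delta [\chi(x, \cdot) \, f] (x) = (\Delta f) (x) \ , \] the last equality holding because $\chi(x, \cdot) = 1$ near $x$, in agreement with the formula just obtained.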
Returning to formula (\ref{Taylor expansion}), we have \begin{align*} \| (R_t ^{(i)} f - f) & + t \, L^{(i)} f \| _{C (\overline{U_i})} \le \sup _{x \in \overline{U_i}} \left| \int _0 ^t (t-s) \partial _s ^2 (R_s ^{(i)} f) (x) \, \mathrm d s \right| \le \frac {t^2} 2 \sup _{x \in \overline{U_i}} \sup_{s \in [0,t]} |\partial_s ^2 (R_s ^{(i)} f) (x)| \le \\ & \le \frac {t^2} 2 \sup _{x \in \overline{U_i}} \sup_{s \in [0,t]} \left| \frac 1 r \int _{U_i} \langle \partial_s ^2 h_\nabla ^{(i)} (s, x, y), \chi(x,y) P(x,y) \rangle _{E_x \otimes E_y ^*} f(y) \, \mathrm d y \right| \le \\ & \le \frac 1 r \frac {t^2} 2 \sup _{x \in \overline{U_i}} \sup_{s \in [0,t]} \left| \int _{U_i} \langle h_\nabla ^{(i)} (s, x, y), (L _{\nabla, (y)} ^{(i)})^2 [\chi(x,y) P(x,y) \overline{f(y)}] \rangle _{E_x \otimes E_y ^*} \, \mathrm d y \right| \le \\ & \le \frac 1 r \frac {t^2} 2 \sup _{x \in \overline{U_i}} \sup_{y \in \overline{U_i}} |(L _{\nabla, (y)} ^{(i)})^2 [\chi(x,y) P(x,y) \overline{f(y)}]| \ , \end{align*} where in the last inequality we have used the diamagnetic inequality and the sub-Markov property of $h^{(i)}$. Since the function \[ \overline{U_i} \times \overline{U_i} \ni (x,y) \mapsto |(L _{\nabla, (y)} ^{(i)})^2 [\chi(x,y) P(x,y) \overline{f(y)}]| \in [0, \infty) \] is smooth, the double supremum obtained in the last inequality will have a finite value $C \in [0, \infty)$, so \[ \left\| \frac 1 t (R_t ^{(i)} f - f) + \, L^{(i)} f \right\| _{C (\overline{U_i})} \le \frac C {2r} \, t \to 0 \ , \] which verifies the last hypothesis in Chernoff's theorem; we may now apply it in order to obtain the conclusion of our theorem. \end{proof} The following corollary is essentially the above theorem in the trivial bundle $U_i \times \mathbb C$ endowed with the trivial connection given by differentiation (a situation in which the cut-off parallel transport $P$ may be replaced by the constant function $1$). The proof is essentially the same, but in an even simpler context, so we shall omit it. \begin{corollary} \label{second application of Chernoff's theorem} If $S_t ^{(i)} : C(\overline {U_i}) \to C(\overline {U_i})$, with $t \ge 0$, is the family of operators given by $S_0 ^{(i)} f = f$ and \[ (S_t ^{(i)} f) (x) = \int _{U_i} h^{(i)} (t, x, y) \chi(x,y) f(y) \, \mathrm d y \] for $f \in C(\overline {U_i})$, then $\lim _{k \to \infty} \left( S_ {\frac t k} ^{(i)} \right) ^k f = \mathrm e^{-t L^{(i)}} f$ for every $f \in C(\overline {U_i})$, uniformly with respect to $t$ from compact subsets of $[0, \infty)$. \end{corollary} \begin{remark} Before going any further, let us pause for a moment and examine where in the above proof we have used the compactness of $\overline {U_i}$, and whether this compactness assumption is essential or not. It turns out that the only step in the proof where this assumption was used was in the bounding of the function \[ \overline{U_i} \times \overline{U_i} \ni (x,y) \mapsto |(L _{\nabla, (y)} ^{(i)})^2 [\chi(x,y) P(x,y) \overline{f(y)}]| \in [0, \infty) \ . \] If instead of working on $\overline {U_i}$ we had worked on $M$, we would have needed to choose $f$ with the properties that: \begin{itemize}[wide] \item $f$ should be in the domain of $L^2$, where $L$ is the Friedrichs extension of the Laplace-Beltrami operator $-\Delta$ of $M$; \item the product of $f$ with any compactly-supported smooth function should again be in the domain of $L^2$; \item $f$ should have compact essential support, in order to guarantee the desired boundedness.
\end{itemize} If $M$ had been metrically complete, then an essential domain for $L$ made of such functions would have been the space of compactly-supported smooth functions (see theorem 11.5 in \cite{Grigor'yan09}). For arbitrary Riemannian manifolds, though, no essential domain satisfying the above three conditions is known to the author, hence the need to treat the problem on relatively compact domains, a technical restriction that we shall remove later on. \end{remark} The above results allow us to finally approach the statement that we were after, namely to prove that the sequence $(P_{t, \omega, \eta, k}) _{k \in \mathbb N}$ approximates $\rho_{t, \omega, \eta} ^{(i)}$. \begin{theorem} \label{approximation on regular domains} The sequence $(P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline {U_i})}) _{k \in \mathbb N}$ converges to $\rho_{t, \omega, \eta} ^{(i)}$ in $\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})$, uniformly with respect to $t$ from bounded subsets of $(0, \infty)$. \end{theorem} \begin{proof} In order to simplify the notations, we shall no longer explicitly indicate the restriction of functions or sections to $\mathcal C_t (\overline {U_i})$ where this is obvious. In the equality \begin{align*} \| \rho_{t, \omega, \eta} ^{(i)} - P_{t, \omega, \eta, k} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} ^2 & = \| \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} ^2 - \langle \rho_{t, \omega, \eta} ^{(i)}, P_{t, \omega, \eta, k} \rangle _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} - \\ & - \langle P_{t, \omega, \eta, k}, \rho_{t, \omega, \eta} ^{(i)} \rangle _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} + \| P_{t, \omega, \eta, k} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} ^2 \end{align*} the first term is less than or equal to $\frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 (\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0)$. Performing majorizations similar to the ones made when we showed that $W_{t, \omega, \eta} ^{(i)}$ is well defined, in which we only replace $h_\nabla ^{(i)} \left( \frac t {2^k}, x_{j-1}, x_j \right)$ with $P(x_{j-1}, x_j)$ (the operator norm of which is at most $1$), we obtain that \[ \| P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c} ^2 \le \frac 1 r \| \omega \| _{E_{x_0}^*} ^2 \| \eta (c(t)) \| _{E_{c(t)}} ^2 \ , \] whence we may bound the last term in the above right-hand side by \begin{align*} \| P & _{t, \omega, \eta, k} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} ^2 = \int _{\mathcal C_t (\overline {U_i})} \left\| \omega \otimes P \left( c(0), c \left( \frac t {2^k} \right) \right) \otimes \dots \right. \\ & \left. \dots \otimes P \left( c \left( \frac {(2^k-1) t} {2^k} \right), c(t) \right) \otimes \eta (c(t)) \right\| _{\mathcal E ^* _c} ^2 \, \mathrm d w_t ^{(i)} (c) \le \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 (\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0) \ . \end{align*} We shall now show that $\lim _{k \to \infty} \langle \rho_{t, \omega, \eta} ^{(i)}, P_{t, \omega, \eta, k} \rangle _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} = \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 (\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0)$.
If $P_{t, \omega, \eta, k} ^* \in \Gamma^2 (\mathcal E ^*)^* \simeq \Gamma^2 (\mathcal E)$ denotes the element dual to $P_{t, \omega, \eta, k}$ with respect to the Hermitian product on $\Gamma^2 (\mathcal E ^*)$, we have that \begin{align*} \langle \rho_{t, \omega, \eta} ^{(i)}, & P_{t, \omega, \eta, k} \rangle _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} = \int _{\mathcal C_t (\overline {U_i})} \rho_{t, \omega, \eta} ^{(i)} (c) [P_{t, \omega, \eta, k} ^* (c)] \, \mathrm d w_t ^{(i)} (c) = W_{t, \omega, \eta} ^{(i)} (P_{t, \omega, \eta, k} ^*) = \\ & = W_{t, \omega, \eta} ^{(i)} (\chi_k P_{t, \omega, \eta, k} ^*) + W_{t, \omega, \eta} ^{(i)} ((1 - \chi_k) P_{t, \omega, \eta, k} ^*) \ . \end{align*} The first term in the right-hand side is \begin{gather*} \int _{U_i} \mathrm d x_1 \dots \int _{U_i} \mathrm d x_{2^k} \Bigg\langle \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k-1}, x_{2^k} \right) \otimes \eta(x_{2^k}) , \\ \omega \otimes \chi(x_0, x_1) P(x_0, x_1) \otimes \dots \otimes \chi(x_{2^k-1}, x_{2^k}) P(x_{2^k-1}, x_{2^k}) \otimes \eta (x_{2^k}) \Bigg\rangle = \\ = \left( \frac 1 r \right) ^{2^k + 1} \| \omega \| ^2 _{E_{x_0} ^*} \int _{U_i} \mathrm d x_1 \dots \int _{U_i} \mathrm d x_{2^k} \left\langle h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right), \chi(x_0, x_1) P(x_0, x_1) \right\rangle _{E_{x_0} \otimes E_{x_1}^*} \dots \\ \dots \left\langle h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right), \chi(x_{2^k - 1}, x_{2^k}) P(x_{2^k - 1}, x_{2^k}) \right\rangle _{E_{x_{2^k-1}} \otimes E_{x_{2^k}}^*} \| \eta (x_{2^k}) \| ^2 _{E_{x_{2^k}}} = \\ = \frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} \left[ \left( R_{\frac t {2^k}} ^{(i)} \right) ^{2^k} \| \eta \| ^2 \right] (x_0) \ , \end{gather*} which converges to $\frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} (\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0)$ uniformly with respect to $t$ from bounded subsets of $(0, \infty)$ according to theorem \ref{application of Chernoff's theorem}. Using first the diamagnetic inequality, then the Cauchy-Schwarz inequality in the fiber $\mathcal E_c ^*$, the second term in the above right-hand side may be bounded as follows: \begin{gather*} |W_{t, \omega, \eta} ^{(i)} ((1 - \chi_k) P_{t, \omega, \eta, k} ^*)| = \left| \int _{\mathcal C_t (\overline {U_i})} \rho_{t, \omega, \eta} ^{(i)} (c) [(1 - \chi_k (c)) \, P_{t, \omega, \eta, k} ^* (c)] \, \mathrm d w_t ^{(i)} (c) \right| = \\ = \left| \int _{U_i} \mathrm d x_1 \dots \int _{U_i} \mathrm d x _{2^k} \, [1 - \chi (x_0, x_1) \dots \chi (x_{2^k - 1}, x_{2^k})] \left\langle \omega \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \otimes \dots \right. \right. \\ \left. \left.
\dots \otimes h_\nabla ^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \otimes \eta(x_{2^k}), \, \omega \otimes P(x_0, x_1) \otimes \dots \otimes P(x_{2^k - 1}, x_{2^k}) \otimes \eta(x_{2^k}) \right\rangle \right| \le \\ \le \frac 1 r \int _{U_i} \mathrm d x_1 \, h^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \dots \int _{U_i} \mathrm d x _{2^k} \, h^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \\ [1 - \chi (x_0, x_1) \dots \chi (x_{2^k - 1}, x_{2^k})] \| \omega \| _{E_{x_0} ^*} ^2 \| \eta(x_{2^k}) \| _{E_{x_{2^k}}} ^2 = \\ = \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 \int _{\mathcal C_t (\overline {U_i})} (1 - \chi_k (c)) \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t ^{(i)} (c) = \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 (\mathrm e ^{-tL^{(i)}} \| \eta \| ^2) (x_0) - \\ - \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 \int _{U_i} \mathrm d x_1 \, h^{(i)} \left( \frac t {2^k}, x_0, x_1 \right) \chi(x_0, x_1) \dots \\ \dots \int _{U_i} \mathrm d x_{2^k} \, h^{(i)} \left( \frac t {2^k}, x_{2^k - 1}, x_{2^k} \right) \chi(x_{2^k - 1}, x_{2^k}) \| \eta (x_{2^k}) \| ^2 _{E_{x_{2^k}}} = \\ = \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 (\mathrm e ^{-tL^{(i)}} \| \eta \| ^2) (x_0) - \frac 1 r \| \omega \| _{E_{x_0} ^*} ^2 \left[ \left( S_{\frac t {2^k}} ^{(i)} \right) ^{2^k} \| \eta \| ^2 \right] (x_0) \ , \end{gather*} which converges to $0$ uniformly with respect to $t$ from bounded subsets of $(0, \infty)$ according to corollary \ref{second application of Chernoff's theorem}. Passing to the limit, we obtain that \[ \lim _{k \to \infty} \langle \rho_{t, \omega, \eta} ^{(i)}, P_{t, \omega, \eta, k} \rangle _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} = \frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} \lim _{k \to \infty} \left\langle \delta_{x_0}, \left( R_{\frac t {2^k}} ^{(i)} \right) ^{2^k} \| \eta \| ^2 \right\rangle = \frac 1 r \| \omega \| ^2 _{E_{x_0} ^*} (\mathrm e ^{-t L^{(i)}} \| \eta \| ^2) (x_0) \] uniformly with respect to $t$ from bounded subsets of $(0, \infty)$, whence it follows that \[ \varlimsup _{k \to \infty} \| \rho_{t, \omega, \eta} ^{(i)} - P_{t, \omega, \eta, k} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})} ^2 \le 0 \] uniformly with respect to $t$ from bounded subsets of $(0, \infty)$, the conclusion being now immediate. \end{proof} We already knew that $\rho_{t, \omega, \eta} ^{(i)} \in \Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})$, but the convergence that we have just proved allows us to obtain an even stronger conclusion, which will be useful later on, in particular in proving the Feynman-Kac formula in fiber bundles. \begin{corollary} \label{pointwise norm above regular domains} $\rho_{t, \omega, \eta} ^{(i)} \in \Gamma^\infty (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})$ and $\| \rho_{t, \omega, \eta} ^{(i)} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0}^*} \, \| \eta (c(t)) \| _{E_{c(t)}}$ for almost every $c \in \mathcal C_t (\overline {U_i})$ (with respect to the measure $w_t ^{(i)}$). \end{corollary} \begin{proof} We know that $P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline {U_i})} \to \rho_{t, \omega, \eta} ^{(i)}$ in $\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_i})})$.
After choosing a measurable representative of $\rho_{t, \omega, \eta} ^{(i)}$, which we shall denote by $\rho_{t, \omega, \eta} ^{(i)}$ again, for simplicity, there exists a subsequence $(k_l) _{l \in \mathbb N} \subseteq \mathbb N$ and a co-null subset $C \subseteq \mathcal C_t (\overline{U_i})$ such that $P_{t, \omega, \eta, k_l} (c) \to \rho_{t, \omega, \eta} ^{(i)} (c)$ for every $c \in C$ (the proof of this fact is almost identical to the proof of the completeness of $\Gamma^2 (\mathcal E)$). Reusing the argument in lemma \ref{approximation of continuous curves}, for each curve $c \in C$ there exists $k_c \in \mathbb N$ such that $P(c(\frac {jt} {2^k}), c(\frac {(j+1)t} {2^k})) \ne 0$ (hence it is precisely the parallel transport between the two points on $c$) for all $k \ge k_c$ and $0 \le j \le 2^k-1$. It follows that \[ \| P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0}^*} \, \| \eta (c(t)) \| _{E_{c(t)}} \] for $c \in C$ and $k \ge k_c$, whence it follows that \[ \| \rho_{t, \omega, \eta} ^{(i)} (c) \| _{\mathcal E ^* _c} = \lim _{l \to \infty} \| P_{t, \omega, \eta, k_l} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0}^*} \, \| \eta (c(t)) \| _{E_{c(t)}} \le \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0}^*} \, \| \eta \| _{\Gamma_{cb} (E)} \ , \] which also establishes the essential boundedness claimed in the statement. \end{proof} So far we have worked on the spaces of curves $\mathcal C_t (\overline{U_i})$ associated to the relatively compact domains $U_i$ that exhaust $M$. We have done this purely for technical reasons, the compactness having to do with the need to bound certain continuous functions in the proof of theorem \ref{application of Chernoff's theorem} that were difficult to control otherwise. It is now the appropriate moment to remove this exhaustion and obtain global geometrical objects and global relationships among them. \begin{theorem} If $i \le j$ then $\rho_{t, \omega, \eta} ^{(j)} | _{\mathcal C_t (\overline{U_i})} = \rho_{t, \omega, \eta} ^{(i)}$.
\end{theorem} \begin{proof} For all $k \in \mathbb N$ we have \begin{align*} & \| \rho_{t, \omega, \eta} ^{(j)} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} \le \\ & \le \| \rho_{t, \omega, \eta} ^{(j)} | _{\mathcal C_t (\overline{U_i})} - P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} + \| P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} = \\ & = \sqrt{ \int _{\mathcal C_t (\overline{U_i})} \| \rho_{t, \omega, \eta} ^{(j)} (c) - P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c} ^2 \, \mathrm d w_t ^{(i)} (c) } + \| P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} \le \\ & \le \sqrt{ \int _{\mathcal C_t (\overline{U_i})} \| \rho_{t, \omega, \eta} ^{(j)} (c) - P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c} ^2 \, \mathrm d w_t ^{(j)} | _{\mathcal C_t (\overline{U_i})} (c) } + \| P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} \le \\ & \le \sqrt{ \int _{\mathcal C_t (\overline{U_j})} \| \rho_{t, \omega, \eta} ^{(j)} (c) - P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c} ^2 \, \mathrm d w_t ^{(j)} (c) } + \| P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} = \\ & = \| \rho_{t, \omega, \eta} ^{(j)} - P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_j})} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_j})})} + \| P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline{U_i})} - \rho_{t, \omega, \eta} ^{(i)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline{U_i})})} \ , \end{align*} where we have used the inequality $w_t ^{(i)} \le w_t ^{(j)} | _{\mathcal C_t (\overline{U_i})}$, a consequence of the monotonicity $h^{(i)} \le h^{(j)}$ of the intrinsic (Dirichlet) heat kernels; the conclusion is now clear with the aid of theorem \ref{approximation on regular domains}. \end{proof} This compatibility relationship among the sections $(\rho _{t, \omega, \eta} ^{(j)}) _{j \in \mathbb N}$ ensures that the global section defined by $\rho _{t, \omega, \eta} = \lim _{j \to \infty} \rho _{t, \omega, \eta} ^{(j)}$ is well defined, and that $\rho _{t, \omega, \eta} | _{\mathcal C_t (\overline {U_j})} = \rho _{t, \omega, \eta} ^{(j)}$. In defining $\rho _{t, \omega, \eta}$ as we have done it is understood that we work with measurable representatives of the equivalence classes $\rho _{t, \omega, \eta} ^{(j)} \in \Gamma^\infty (\mathcal E ^* | _{\mathcal C_t (\overline{U_j})}) \subseteq \Gamma^\infty (\mathcal E ^*)$, and that changing these representatives changes the limit only on a null subset, so its equivalence class stays the same. \begin{theorem} \label{pointwise norm} The section $\rho _{t, \omega, \eta}$ so defined is measurable and essentially bounded. Furthermore, $\| \rho _{t, \omega, \eta} (c) \| _{\mathcal E^* _c} = \frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} \| \eta_{c(t)} \| _{E_{c(t)}}$ for almost all $c \in \mathcal C_t$. \end{theorem} \begin{proof} Since $M = \bigcup _{j \in \mathbb N} U_j$, it follows that $\mathcal C_t = \bigcup _{j \in \mathbb N} \mathcal C_t (\overline {U_j})$.
Since $\rho _{t, \omega, \eta} | _{\mathcal C_t (\overline {U_j})} = \rho _{t, \omega, \eta} ^{(j)}$, it follows that if $S \subseteq \mathcal E^*$ is a measurable subset then \begin{align*} \rho _{t, \omega, \eta} ^{-1} (S) = \bigcup _{j \in \mathbb N} [\rho _{t, \omega, \eta} ^{-1} (S) \cap \mathcal C_t (\overline {U_j})] = \bigcup _{j \in \mathbb N} [\rho _{t, \omega, \eta} ^{(j)}]^{-1} (S) \end{align*} is measurable because each $\rho _{t, \omega, \eta} ^{(j)}$ is measurable. The value of the pointwise norm of $\rho_{t, \omega, \eta}$ is a consequence of corollary \ref{pointwise norm above regular domains}. \end{proof} We have seen in theorem \ref{approximation on regular domains} that $P_{t, \omega, \eta, k} | _{\mathcal C_t (\overline {U_j})} \to \rho _{t, \omega, \eta} | _{\mathcal C_t (\overline {U_j})}$ in $\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j})})$, for all $j \in \mathbb N$. We shall now prove that it is possible to remove the restriction to $\mathcal C_t (\overline {U_j})$ and obtain the convergence globally, on the whole $\mathcal C_t$. \begin{theorem} \label{approximation of rho} The sequence $(P_{t, \omega, \eta, k}) _{k \in \mathbb N}$ converges to $\rho_{t, \omega, \eta}$ in $\Gamma^2 (\mathcal E ^*)$, uniformly with respect to $t \in (0,T]$ for all $T>0$. \end{theorem} \begin{proof} If $\omega = 0$ the result is trivially true; we shall therefore assume that $\omega \ne 0$. Let $\varepsilon > 0$. Using the fact that $\rho _{t, \omega, \eta} | _{\mathcal C_t (\overline {U_j})} = \rho _{t, \omega, \eta} ^{(j)}$ for all $j \in \mathbb N$, we may write that \begin{align*} \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^*)} ^2 & = \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j})})} ^2 + \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t \setminus \mathcal C_t (\overline {U_j})})} ^2 \le \\ & \le \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j})})} ^2 + \frac 4 r \| \omega \| _{E_{x_0} ^*} ^2 \int \limits _{\mathcal C_t \setminus \mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) \end{align*} (for the second term we have used that $\| \rho_{t, \omega, \eta} (c) \| _{\mathcal E ^* _c}$ and $\| P_{t, \omega, \eta, k} (c) \| _{\mathcal E ^* _c}$ are both bounded by $\frac 1 {\sqrt r} \| \omega \| _{E_{x_0} ^*} \| \eta(c(t)) \| _{E_{c(t)}}$, by theorem \ref{pointwise norm} and the estimates above), and the integral on the right-hand side is \begin{align*} \int _{\mathcal C_t \setminus \mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) & = \int _{\mathcal C_t} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) - \int _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) \le \\ & \le \int _{\mathcal C_t} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) - \int _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t ^{(j)} (c) \ .
\end{align*} On the other hand, \begin{align*} \| P_{t, \omega, \eta, k} & - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j})})} ^2 = \\ & = \int _{\mathcal C_t (\overline {U_j})} \| P_{t, \omega, \eta, k} (c) - \rho_{t, \omega, \eta} (c) \| _{\mathcal E ^* _c} ^2 \, \mathrm d w_t ^{(j)} (c) + \\ & + \int _{\mathcal C_t (\overline {U_j})} \| P_{t, \omega, \eta, k} (c) - \rho_{t, \omega, \eta} (c) \| _{\mathcal E ^* _c} ^2 \, \mathrm d (w_t - w_t ^{(j)}) (c) \le \\ & \le \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j}), w_t ^{(j)}})} ^2 + \frac 4 r \| \omega \| _{E_{x_0} ^*} ^2 \int _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d (w_t - w_t ^{(j)}) (c) \le \\ & \le \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} ^{(j)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j}), w_t ^{(j)}})} ^2 + \\ & + \frac 4 r \| \omega \| _{E_{x_0} ^*} ^2 \int _{\mathcal C_t} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) - \frac 4 r \| \omega \| _{E_{x_0} ^*} ^2 \int _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t ^{(j)} (c) \ . \end{align*} We conclude that \begin{align*} \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^*)} ^2 & \le \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} ^{(j)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_j}), w_t ^{(j)}})} ^2 + \\ & + \frac 8 r \| \omega \| _{E_{x_0} ^*} ^2 \left( \int \limits _{\mathcal C_t} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) - \int \limits _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t ^{(j)} (c) \right) . \end{align*} We shall choose $j$ by a careful examination of the difference of these last two integrals: \begin{gather*} \int _{\mathcal C_t} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t (c) - \int _{\mathcal C_t (\overline {U_j})} \| \eta(c(t)) \| _{E_{c(t)}} ^2 \, \mathrm d w_t ^{(j)} (c) = \\ = \int _M h(t, x_0, x) \, \| \eta(x) \| _{E_x} ^2 \, \mathrm d x - \int _{\overline{U_j}} h^{(j)} (t, x_0, x) \, \| \eta(x) \| _{E_x} ^2 \, \mathrm d x \ , \end{gather*} and since $h^{(j)} \to h$ pointwise and monotonically, for each $t>0$ there exists a $j_{\varepsilon, t} \in \mathbb N$ such that \begin{gather*} \left| \int _M h(t, x_0, x) \, \| \eta(x) \| _{E_x} ^2 \, \mathrm d x - \int _{U_j} h^{(j)} (t, x_0, x) \, \| \eta(x) \| _{E_x} ^2 \, \mathrm d x \right| = \\ = \left| \langle \delta_{x_0}, \mathrm e ^{-t L} \| \eta \| ^2 \rangle - \langle \delta_{x_0}, \mathrm e ^{-t L^{(j)}} (\| \eta \| ^2 | _{\overline {U_j}}) \rangle \right| < \frac {r \varepsilon} {16 \| \omega \| _{E_{x_0} ^*} ^2} \end{gather*} for every $j \ge j_{\varepsilon, t}$, where $\langle \cdot, \cdot \rangle$ denotes the duality pairing between the space of bounded continuous functions and its dual (to which $\delta_{x_0}$ belongs).
Since the two heat semigroups seen above are strongly continuous, the above expression that contains them is continuous with respect to $t \in [0,T]$, therefore every $t$ from $[0,T]$ admits an open neighbourhood $V _{\varepsilon, t} \subseteq [0,T]$ such that \[ \left| \langle \delta_{x_0}, \mathrm e ^{-s L} \| \eta \| ^2 \rangle - \langle \delta_{x_0}, \mathrm e ^{-s L^{(j)}} (\| \eta \| ^2 | _{\overline {U_j}}) \rangle \right| < \frac {r \varepsilon} {16 \| \omega \| _{E_{x_0} ^*} ^2} \] for every $j \ge j_{\varepsilon, t}$, uniformly with respect to $s \in V _{\varepsilon, t}$. Since $[0,T]$ is compact, we may cover it with a finite number of such neighbourhoods, $[0,T] = \bigcup _{i=1} ^{N_{\varepsilon}} V_{\varepsilon, t_i}$. Choosing $j_\varepsilon = \max \{ j_{\varepsilon, t_1}, \dots, j_{\varepsilon, t_{N_\varepsilon}} \}$ we have \[ \left| \langle \delta_{x_0}, \mathrm e ^{-t L} \| \eta \| ^2 \rangle - \langle \delta_{x_0}, \mathrm e ^{-t L^{(j_\varepsilon)}} (\| \eta \| ^2 | _{\overline {U_{j_\varepsilon}}}) \rangle \right| < \frac {r \varepsilon} {16 \| \omega \| _{E_{x_0} ^*} ^2} \] for all $t \in (0,T]$. Using now theorem \ref {approximation on regular domains} on $U_{j_\varepsilon}$, we may find a $k_\varepsilon \in \mathbb N$ such that \[ \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} ^{(j_\varepsilon)} \| _{\Gamma^2 (\mathcal E ^* | _{\mathcal C_t (\overline {U_{j_\varepsilon}}), w_t ^{(j_\varepsilon)}})} ^2 < \frac \varepsilon 2 \] for all $k \ge k_\varepsilon$, uniformly with respect to $t \in (0,T]$. Combining all these upper bounds we obtain that \[ \| P_{t, \omega, \eta, k} - \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E ^*)} ^2 < \varepsilon \] for all $k \ge k_\varepsilon$, uniformly with respect to $t \in (0,T]$, whence the conclusion is clear. \end{proof} \begin{remark} The section $\rho_{t, \omega, \eta}$ does not depend on the exhaustion with regular domains used to construct it, because it is the limit of the sequence of sections $(P_{t, \omega, \eta, k}) _{k \in \mathbb N}$, which does not depend on any exhaustion. \end{remark} \section{Getting rid of the auxiliary section} Let us recall now that one of the aims of this article is to give a new construction of the stochastic parallel transport. Whatever this object may be, it is clear that the stochastic parallel transport of a vector $v \in E_{x_0}$ should not depend on any section $\eta \in \Gamma_{cb} (E)$. Indeed, if one looks back at the proof of theorem \ref{approximation on regular domains}, one sees that $\eta$ was needed only for technical reasons, in order for us to be able to use theorem \ref{application of Chernoff's theorem} and corollary \ref{second application of Chernoff's theorem}, which in turn were based upon Chernoff's theorem. Since $\eta$ plays an exclusively auxiliary role, in the following we shall concentrate our efforts on eliminating it from our results. In order to achieve this, we shall need a number of useful auxiliary results. Let us begin by showing that $\rho_{t, \omega, \eta}$, which so far has been constructed under the hypothesis that $\eta \in \Gamma_{cb} (E)$, can be extended to a significantly larger class of sections $\eta$. More precisely, let \[ \Gamma _t ' = \left\{ \eta : M \to E \text{ measurable section } \mid \int _M h(t, x_0, x) \, \| \eta_x \| _{E_x} ^2 \, \mathrm d x < \infty \right\} \] and let $\Gamma_t$ be the quotient of $\Gamma_t '$ under equality almost everywhere.
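\begin{remark} For example, when $M = \mathbb R^n$ and $E = \mathbb R^n \times \mathbb C^r$ is trivial, $h(t, x_0, x) = (4 \pi t)^{-\frac n 2} \mathrm e ^{-\frac {|x - x_0|^2} {4t}}$, so $\Gamma_t$ is simply the space of (classes of) measurable maps $M \to \mathbb C^r$ that are square-integrable with respect to this Gaussian weight; in particular, $\Gamma_t$ contains all the measurable sections of polynomial growth, and is therefore considerably larger than both $\Gamma_{cb} (E)$ and $\Gamma^2 (E)$. \end{remark}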
It is easy to show that $\Gamma_t$ is a Hilbert space, the Hermitian product being \[ \langle \eta, \eta' \rangle _{\Gamma_t} = \int _M h(t, x_0, x) \, \langle \eta_x, \eta'_x \rangle _{E_x} \, \mathrm d x = \int _{\mathcal C_t} \langle \eta_{c(t)}, \eta'_{c(t)} \rangle _{E_{c(t)}} \, \mathrm d w_t (c) \ . \] It is also clear that $\Gamma^\infty (E) \subseteq \Gamma_t$. \begin{theorem} The space $\Gamma_{cb} (E)$ is dense in $\Gamma_t$. \end{theorem} \begin{proof} It is obvious that $\Gamma_{cb} (E) \subseteq \Gamma_t$. Let $\eta \in \Gamma_{cb} (E) ^\perp$; we shall show that $\eta=0$. If $f \in C_b (M)$ and $\eta' \in \Gamma_{cb} (E)$, then $f \eta' \in \Gamma_{cb} (E) \subseteq \Gamma_t$ and \[ 0 = \langle f \eta', \eta \rangle _{\Gamma_t} = \int _M h(t, x_0, x) \, f(x) \, \langle \eta'_x, \eta_x \rangle _{E_x} \, \mathrm d x \ . \] Since $f$ is arbitrary and $h(t, x_0, \cdot)$ is strictly positive, we conclude that there exists a co-null subset $C_{\eta'} \subseteq M$ such that $\langle \eta'_x, \eta_x \rangle _{E_x} = 0$ for every $x \in C_{\eta'}$ (it is understood that we work with some measurable representative of $\eta$). Since $M$ is separable, we may cover it with a countable family of trivialization open domains $(V_i)_{i \in \mathbb N}$; by possibly shrinking them we may assume that each $\overline {V_i}$ is a (closed) domain of trivialization. Choose an orthonormal frame in $E | _{\overline {V_i}}$ and use Tietze's theorem to extend its sections continuously and boundedly to the whole $M$; let $\{\eta_i ^1, \dots, \eta_i ^r\}$ be the resulting continuous global sections in $E$, which restrict to an orthonormal frame over $\overline {V_i}$ and belong to $\Gamma_{cb} (E)$. Fix $i \in \mathbb N$. For each $1 \le j \le r$ there exists a co-null $C_{i,j} \subseteq M$ such that $\langle \eta_i ^j (x), \eta_x \rangle _{E_x} = 0$ for all $x \in C_{i,j}$. Letting $C_i = \bigcap _{j=1} ^r C_{i,j} \cap V_i$, we immediately obtain that $\langle u, \eta_x \rangle _{E_x} = 0$ for all $x \in C_i$ and $u \in E_x$, whence $\eta |_{V_i} = 0$ almost everywhere and therefore $\eta = 0$ in $\Gamma_t$. \end{proof} If we integrate the result of theorem \ref{pointwise norm} with respect to $c \in \mathcal C_t$, we obtain that \[ \| \rho_{t, \omega, \eta} \| _{\Gamma^2 (\mathcal E^*)} \le \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0} ^*} \| \eta \| _{\Gamma _t} \] for $\eta \in \Gamma_{cb} (E)$; since we have just shown that $\Gamma_{cb} (E)$ is dense in $\Gamma_t$, the map $\Gamma_{cb} (E) \ni \eta \mapsto \rho_{t, \omega, \eta} \in \Gamma^2 (\mathcal E ^*)$ extends to a continuous linear map $\Gamma_t \ni \eta \mapsto \rho_{t, \omega, \eta} \in \Gamma^2 (\mathcal E ^*)$. \begin{lemma} If $\eta \in \Gamma_t$ then $\| \rho _{t, \omega, \eta} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0} ^*} \, \| \eta _{c(t)} \| _{E_{c(t)}}$ for almost all $c \in \mathcal C_t$.
\end{lemma} \begin{proof} If $\eta \in \Gamma_t$, let $(\eta_k)_{k \in \mathbb N} \subset \Gamma_{cb} (E)$ be a sequence that converges to $\eta$ in $\Gamma_t$; it follows that $\rho_{t, \omega, \eta_k} \to \rho_{t, \omega, \eta}$ in $\Gamma^2 (\mathcal E^*)$, so there exists a subsequence $(k_l) _{l \in \mathbb N} \subseteq \mathbb N$ such that $\eta_{k_l} \to \eta$ almost everywhere on $M$ and $\rho_{t, \omega, \eta_{k_l}} \to \rho_{t, \omega, \eta}$ almost everywhere on $\mathcal C_t$, whence \[ \| \rho _{t, \omega, \eta} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| \omega \| _{E_{x_0} ^*} \, \| \eta _{c(t)} \| _{E_{c(t)}} \] for almost all $c \in \mathcal C_t$ (again, we have tacitly worked with some arbitrary measurable representative of $\eta$; if we choose another one, this will coincide with the former on a co-null subset of $M$, which does not change the conclusion of the lemma). \end{proof} Let $p_t : \mathcal C_t \to M$ be the projection given by $p_t (c) = c(t)$. For every $v \in E_{x_0}$ we shall denote by $v^\flat \in E_{x_0} ^*$ the linear form given by $v^\flat = \sqrt r \, \langle \cdot, v \rangle _{E_{x_0}}$. We shall denote by $p_t ^* E$ the fiber bundle above $\mathcal C_t$ obtained as the pull-back of $E \to M$ under $p_t$. Its fiber $(p_t ^* E) _c$ over the curve $c \in \mathcal C_t$ will be, by definition, $E_{c(t)}$, and we shall use the latter notation for its simplicity. In the following we shall construct, for every $p \in (1, \infty]$, a continuous conjugate-linear map $\Gamma^p (\mathcal E) \ni \xi \mapsto \mathcal P_{t,v} ^p (\xi) \in \Gamma^p (p_t ^* E)$ such that \[ [\rho_{t, v^\flat, \eta} (c)] \, [\xi(c)] = \langle \eta(c(t)), \, \mathcal P_{t,v} ^p (\xi) (c) \rangle _{E_{c(t)}} \] for every $\eta \in \Gamma^\infty (E)$. Let $M = \bigcup _{i \in \mathbb N} V_i '$ be a cover of $M$ with open trivialization domains for $E$. Let $V_0 = V_0 '$ and $V_i = V_i ' \setminus (V_0 \cup \dots \cup V_{i-1})$ for $i \ge 1$; these subsets will be measurable, pairwise disjoint, trivialization domains. Let $\{ \eta _i ^1, \dots, \eta _i ^r \}$ be a measurable orthonormal frame in $E | _{V_i}$. Defining $\eta ^l$ by $\eta ^l | _{V_i} = \eta _i ^l$ for all $1 \le l \le r$ and $i \in \mathbb N$, we obtain a global measurable orthonormal frame $\{ \eta ^1, \dots, \eta ^r \}$ in $E$ made of sections from $\Gamma^\infty (E) \subseteq \Gamma_t$, that is of sections $\eta ^l$ for which $\rho_{t, v^\flat, \eta ^l}$ is a well-defined object as we have seen above. Let $\{ \eta _1, \dots, \eta _r \}$ be the dual frame in $E^*$ defined by $\eta _k (\eta ^l) = \delta _k ^l$ (Kronecker's symbol). If $\sigma \in \Gamma^{\frac p {p-1}} (p_t ^* E^*)$, then \[ \sigma(c) = \sum _{l=1} ^r \{[\sigma(c)] \, [\eta ^l (c(t))]\} \, \eta _l (c(t)) \in E^* _{c(t)} \] for every $c \in \mathcal C_t$. We then define the functional $\mathcal F_{t,v,\xi} ^p : \Gamma^{\frac p {p-1}} (p_t ^* E^*) \to \mathbb C$ by \[ \mathcal F _{t,v,\xi} ^p (\sigma) = \sum _{l=1} ^r \int _{\mathcal C_t} \{[\sigma(c)] \, [\eta ^l (c(t))]\} \, \overline{ \{[\rho_{t, v^\flat, \eta ^l} (c)] \, [\xi (c)]\} } \, \mathrm d w_t (c) \ , \] which is obviously linear.
We have seen above that \[ \| \rho_{t, v^\flat, \eta ^l} (c) \| _{\mathcal E ^* _c} = \frac 1 {\sqrt r} \, \| v^\flat \| _{E_{x_0}^*} \, \| \eta ^l _{c(t)} \| _{E _{c(t)}} = \| v \| _{E_{x_0}} \ , \] whence we obtain that \begin{align*} |\mathcal F _{t,v,\xi} ^p (\sigma)| & \le \sum _{l=1} ^r \int _{\mathcal C_t} \| \sigma(c) \| _{E _{c(t)} ^*} \, \| \eta ^l (c(t)) \| _{E_{c(t)}} \, \| \xi(c) \| _{\mathcal E _c} \, \| \rho_{t, v^\flat, \eta ^l} (c) \| _{\mathcal E^*_c} \, \mathrm d w_t (c) \le \\ & \le r \, \| v \| _{E_{x_0}} \int _{\mathcal C_t} \| \sigma(c) \| _{E _{c(t)} ^*} \, \| \xi(c) \| _{\mathcal E _c} \, \mathrm d w_t (c) \le \\ & \le r \, \| v \| _{E_{x_0}} \, \| \xi \| _{\Gamma^p (\mathcal E)} \, \| \sigma \| _{\Gamma^{\frac p {p-1}} (p_t ^* E^*)} \ . \end{align*} We conclude that there exists a unique section $\mathcal P_{t,v} ^p (\xi) \in \Gamma^p (p_t ^* E)$ such that \[ \mathcal F _{t,v,\xi} ^p (\sigma) = \int _{\mathcal C_t} [\sigma (c)] \, [\mathcal P _{t,v} ^p (\xi) (c)] \, \mathrm d w_t (c) \] for all $\sigma \in \Gamma^{\frac p {p-1}} (p_t ^* E^*)$, and that \[ \| \mathcal P_{t,v} ^p (\xi) \| _{\Gamma^p (p_t ^* E)} \le r \, \| v \| _{E_{x_0}} \, \| \xi \| _{\Gamma^p (\mathcal E)} \ . \] The continuity and conjugate-linearity of $\xi \mapsto \mathcal P_{t,v} ^p (\xi)$ are obvious. \begin{corollary} With the notations above, $\langle \eta_{c(t)}, \, \mathcal P _{t,v} ^p (\xi) (c) \rangle _{E_{c(t)}} = [\rho_{t, v^\flat, \eta} (c)] \, [\xi (c)]$ for all $\eta \in \Gamma^\infty (E)$ and almost all $c \in \mathcal C_t$. \end{corollary} \begin{proof} Let $\eta \in \Gamma^\infty (E)$ and $f \in L^{\frac p {p-1}} (\mathcal C_t)$. Denote by $\eta^\flat \in \Gamma^\infty (E^*)$ the dual section, defined pointwise by $\eta^\flat _x (u) = \langle u, \eta_x \rangle _{E_x}$ for almost all $x \in M$ and all $u \in E_x$. With this notation, $f \, p_t ^* \eta^\flat \in \Gamma^{\frac p {p-1}} (p_t ^* E^*)$. Using the definitions of $\mathcal P_{t,v} ^p$ and of $\mathcal F _{t,v,\xi} ^p$ given above, \begin{gather*} \int _{\mathcal C_t} f(c) \, \langle \mathcal P _{t,v} ^p (\xi) (c), \, \eta_{c(t)} \rangle _{E_{c(t)}} \, \mathrm d w_t (c) = \int _{\mathcal C_t} [(f \, p_t ^* \eta^\flat) (c)] \, [\mathcal P _{t,v} ^p (\xi) (c)] \, \mathrm d w_t (c) = \mathcal F _{t,v,\xi} ^p (f \, p_t ^* \eta^\flat) = \\ = \sum _{l=1} ^r \int _{\mathcal C_t} \{ [(f \, p_t ^* \eta^\flat) (c)] \, [\eta ^l _{c(t)}] \} \, \overline{ \{[\rho_{t, v^\flat, \eta ^l} (c)] \, [\xi (c)]\} } \, \mathrm d w_t (c) = \\ = \int _{\mathcal C_t} f(c) \sum _{l=1} ^r \langle \eta ^l _{c(t)}, \eta_{c(t)} \rangle _{E_{c(t)}} \overline{ \{[\rho_{t, v^\flat, \eta ^l} (c)] \, [\xi (c)]\} } \, \mathrm d w_t (c) = \\ = \int _{\mathcal C_t} f(c) \, \overline{ \{[\rho_{t, v^\flat, \eta} (c)] \, [\xi (c)]\} } \, \mathrm d w_t (c) \ , \end{gather*} where for the last equality we have used the linearity of the map $\eta \mapsto \rho_{t, v^\flat, \eta}$ and the fact that \[ \eta_{c(t)} = \sum _{l=1} ^r \langle \eta_{c(t)}, \eta ^l (c(t)) \rangle _{E_{c(t)}} \, \eta ^l (c(t)) \] for all $c \in \mathcal C_t$. Since $f$ is arbitrary, we conclude that \[ \langle \mathcal P _{t,v} ^p (\xi) (c), \, \eta_{c(t)} \rangle _{E_{c(t)}} = \overline{ \{[\rho_{t, v^\flat, \eta} (c)] \, [\xi (c)]\} } \ , \] that is $\langle \eta_{c(t)}, \, \mathcal P _{t,v} ^p (\xi) (c) \rangle _{E_{c(t)}} = [\rho_{t, v^\flat, \eta} (c)] \, [\xi (c)]$, for almost all $c \in \mathcal C_t$.
\end{proof} The linearity of the map $\mathcal P_{t,v} ^p (\xi)$ with respect to $v \in E_{x_0}$ allows us to define a section $\mathcal P _t ^p (\xi) \in \Gamma^p (p_t ^* E) \otimes E_{x_0} ^*$ by requiring that $\mathcal P _t ^p (\xi) (c) (v) = \mathcal P _{t,v} ^p (\xi) (c)$ for all $v \in E_{x_0}$ and almost all $c \in \mathcal C_t$. Furthermore, \[ \| \mathcal P_t ^p (\xi) (c) \| _{E_{c(t)} \otimes E_{x_0} ^*} = \sup _{\| v \| _{E_{x_0}} = 1} \| \mathcal P_{t,v} ^p (\xi) (c) \| _{E_{c(t)}} \le r \, \| v \| _{E_{x_0}} \, \| \xi \| _{\Gamma^p (\mathcal E)} = r \, \| \xi \| _{\Gamma^p (\mathcal E)} \ . \] The map $\mathcal P_t ^p$ encodes a great deal of information regarding the differential geometry and the stochastic calculus associated to the bundle $E$. In the rest of this article we shall see just two of its uses, hopefully enough to convince the reader of its usefulness: the stochastic parallel transport and the Feynman-Kac formula. \section{The stochastic parallel transport} Let us begin by defining the sections $\mathcal P_{t,v,k}$ by the explicit formula \[ \mathcal P_{t,v,k} (c) = P \left( c(t), c \left( \frac {(2^k-1) t} {2^k} \right) \right) \dots P \left( c \left( \frac t {2^k} \right), c(0) \right) v \] where $v \in E_{x_0}$ is arbitrary, $c \in \mathcal C_t$ and $k \in \mathbb N$. Notice that $\mathcal P_{t,v,k} (c)$ belongs to the fiber $E_{c(t)} = (p_t ^* E) _c$. Since $P$ has been shown to be a measurable map, $\mathcal P_{t,v,k}$ will be a measurable section in $p_t ^* E$. Furthermore, since $\| \mathcal P_{t,v,k} (c) \| _{E_{c(t)}} \le \| v \| _{E_{x_0}}$, we deduce that $\mathcal P_{t,v,k} \in \Gamma^\infty (p_t ^* E) \subseteq \Gamma^2 (p_t ^* E)$. Let us define the section $\operatorname{Id} : \mathcal C_t \to \mathcal E$ by $\operatorname{Id} (c) = \otimes _{s \in D_t} \operatorname{Id} _{E_{c(s)}} \in \mathcal E _c$; more precisely, $\operatorname{Id} (c)$ is the equivalence class (in the sense of the construction of the algebraic inductive limit as a space of equivalence classes), for instance, of the element $\operatorname{Id} _{E_{x_0}}$, and the map $\mathcal C_t \ni c \mapsto \operatorname{Id} _{E_{x_0}} \in \mathcal E _c$ is obviously continuous. Furthermore, it is obvious that $\| \operatorname{Id} (c) \| _{\mathcal E _c} = \| \operatorname{Id} _{E_{x_0}} \| _{\operatorname{End} E_{x_0}} = 1$, so $\operatorname{Id} \in \Gamma^\infty (\mathcal E) \subseteq \Gamma^2 (\mathcal E)$. We notice then that \[ [P_{t, v^\flat, \eta, k} (c)] \, [\operatorname{Id} (c)] = [P_{t, v^\flat, \eta, k} (c)] \, [ \operatorname{Id}_{E_{c(0)}} \otimes \dots \otimes \operatorname{Id}_{E_{c(t)}}] = \langle \eta(c(t)), \, \mathcal P_{t,v,k} (c) \rangle _{E_{c(t)}} \ . \] \begin{theorem} $\mathcal P_{t, v, k} \to \mathcal P_{t, v} ^2 (\operatorname{Id})$ in $\Gamma^2 (p_t ^* E)$ for all $v \in E_{x_0}$, uniformly with respect to $t \in (0, T]$ for all $T>0$.
\end{theorem} \begin{proof} Using again the global measurable orthonormal frame $\{ \eta^1, \dots, \eta^r \}$ in $E$, \begin{align*} \sup _{t \in (0,T]} \| \mathcal P_{t,v} ^2 (\operatorname{Id}) & - \mathcal P_{t,v,k} \| _{\Gamma^2 (p_t ^* E)} ^2 = \\ & = \sup _{t \in (0,T]} \int _{\mathcal C_t} \| \mathcal P_{t,v} ^2 (\operatorname{Id}) (c) - \mathcal P_{t,v,k} (c) \| _{E _{c(t)}} ^2 \, \mathrm d w_t (c) = \\ & = \sup _{t \in (0,T]} \int _{\mathcal C_t} \sum _{l=1} ^r | \langle \eta ^l (c(t)), \, \mathcal P_{t,v} ^2 (\operatorname{Id}) (c) - \mathcal P_{t,v,k} (c) \rangle _{E_{c(t)}} | ^2 \, \mathrm d w_t (c) = \\ & = \sup _{t \in (0,T]} \sum _{l=1} ^r \int _{\mathcal C_t} | [\rho_{t, v^\flat, \eta ^l} (c) - P_{t, v^\flat, \eta ^l, k} (c)] \, [\operatorname{Id} (c)] | ^2 \, \mathrm d w_t (c) \le \\ & \le \sum _{l=1} ^r \sup _{t \in (0,T]} \int _{\mathcal C_t} \| \rho_{t, v^\flat, \eta ^l} (c) - P_{t, v^\flat, \eta ^l, k} (c) \| _{\mathcal E^* _c} ^2 \, \mathrm d w_t (c) \le \\ & \le \sum _{l=1} ^r \sup _{t \in (0,T]} \| \rho_{t, v^\flat, \eta ^l} - P_{t, v^\flat, \eta ^l, k} \| _{\Gamma^2 (\mathcal E^*)} ^2 \to 0 \ , \end{align*} which together with theorem \ref{approximation of rho} shows the desired convergence. \end{proof} Comparing this result with the one obtained by probabilistic techniques (\cite{Ito63}, \cite{Ito75a}, \cite{Ito75b}), we conclude that $\mathcal P_{t, v} ^2 (\operatorname{Id})$ is the stochastic parallel transport in $E$ of the vector $v \in E_{x_0}$. In particular, $\mathcal P _{t,v} ^2 (\operatorname{Id})$ does not depend on the choices made in its construction (the domains of trivialization, the orthonormal frames above them, etc.), being the limit of a sequence of sections that do not depend on these choices. \begin{corollary} $\| \mathcal P _{t,v} ^2 (\operatorname{Id}) (c) \| _{E_{c(t)}} = \| v \| _{E_{x_0}}$ for almost every curve $c \in \mathcal C_t$. \end{corollary} \begin{proof} Since $\mathcal P_{t, v, k} \to \mathcal P_{t,v} ^2 (\operatorname{Id})$ in $\Gamma^2 (p_t ^* E)$, there exists a subsequence $(k_j) _{j \in \mathbb N} \subseteq \mathbb N$ such that $\mathcal P_{t, v, k_j} (c) \to \mathcal P_{t, v} ^2 (\operatorname{Id}) (c)$ in $E_{c(t)}$ for almost all $c \in \mathcal C_t$. Let $c$ be such a curve; using again the argument in lemma \ref{pointwise norm}, there exists an $l_c \in \mathbb N$ such that $\mathcal P_{t, v, k} (c)$ is the parallel transport of $v$ along a zig-zag line made of $2^k$ minimizing geodesic segments, for all $k \ge l_c$, so \[ \| \mathcal P_{t, v} ^2 (\operatorname{Id}) (c) \| _{E_{c(t)}} = \lim _{j \to \infty } \| \mathcal P_{t, v, k_j} (c) \| _{E_{c(t)}} = \| v \| _{E_{x_0}} \ . \] \end{proof} Since $\| \mathcal P _t ^2 (\operatorname{Id}) (v) (c) \| _{E_{c(t)}} = \| v \| _{E_{x_0}}$ for almost every curve $c \in \mathcal C_t$, it makes sense to talk about $\mathcal P _t ^2 (\operatorname{Id}) ^{-1}$. One sees immediately that this object will be a section above $\mathcal C_t$ with values in $\operatorname{Hom} (E_{c(t)}, E_{x_0})$, or formally $\mathcal P _t ^2 (\operatorname{Id}) ^{-1} \in \Gamma^2 (p_t ^* E^*) \otimes E_{x_0}$. \section{The Feynman-Kac formula in vector bundles} In the following we shall state and prove an extension of the Feynman-Kac formula to Hermitian bundles. Consider a ``potential'' $V \in \Gamma^1 _{loc} (\operatorname{End} E)$ with the property $\operatorname{ess \, inf} _{x \in M} \min \operatorname{spec} V(x) = \beta > -\infty$ (for short: $V \ge \beta$), and with $V(x)$ self-adjoint for almost all $x \in M$.
The quadratic form $\Gamma_0 (E) \ni \eta \mapsto \int _M \langle V(x) \eta_x, \eta_x \rangle _{E_x} \, \mathrm d x \in \mathbb R$ will give rise to a densely-defined self-adjoint operator in $\Gamma^2 (E)$, which we shall denote again by $V$, for simplicity. Indeed, the quadratic form is well-defined because \[ \left| \int _M \langle V(x) \eta_x, \eta_x \rangle _{E_x} \, \mathrm d x \right| \le \sup _{x \in M} \| \eta_x \| ^2 \int _{\operatorname{supp} \eta} \| V(x) \| _{op} \, \mathrm d x < \infty \ . \] It is also lower-bounded by $\beta$ because, if $\{ e_{1,x}, \dots, e_{r,x} \}$ is an orthonormal basis in $E_x$ made of eigenvectors of $V(x)$ with corresponding eigenvalues $\lambda_{1,x} \le \dots \le \lambda_{r,x}$ in $[\beta, \infty)$ for every $x \in M$, and if $\eta_x = \sum _{i=1} ^r \alpha_{i,x} \, e_{i,x}$, we have that \begin{align*} \langle V(x) \eta_x, \eta_x \rangle _{E_x} & = \left< \sum _{i=1} ^r \alpha_{i,x} \lambda_{i,x} e_{i,x}, \sum _{j=1} ^r \alpha_{j,x} e_{j,x} \right> _{E_x} = \\ & = \sum _{i=1} ^r \lambda_{i,x} | \alpha_{i,x} | ^2 \ge \sum _{i=1} ^r \lambda_{1,x} | \alpha_{i,x} | ^2 = \lambda_{1,x} \| \eta_x \| ^2 _{E_x} \ge \beta \| \eta_x \| ^2 _{E_x} \ . \end{align*} One may construct the self-adjoint, densely-defined operator corresponding to the sum of $H_\nabla$ and $V$ in the same way, using quadratic forms. Of course, the same construction may be performed not only on $M$, but also on any relatively compact open subset with smooth boundary. When the starting point of the continuous curves is no longer the fixed point $x_0 \in M$, as until now, but some variable $x \in M$, all the objects that depend on it will gain it as a supplementary lower index; this means that the space of continuous curves starting at $x$ will be $\mathcal C _{t,x}$, on which we shall have the Wiener measure $w_{t,x}$, and all the objects constructed in this article so far will also gain a supplementary lower index $x$, meaning that we shall have the sections $\rho_{t, \omega, \eta, x}$, $\mathcal P _{t,v,x}$ and $\mathcal P_{t,x}$ etc. For each $k \in \mathbb N$ denote by $\operatorname V_{t,x,k} \in \Gamma^\infty (\mathcal E)$ the section given by \[ \operatorname V_{t,x,k} (c) = \mathrm e ^{- \frac t {2^k} V \left( c \left( \frac {t} {2^k} \right) \right)} \otimes \dots \otimes \mathrm e ^{- \frac t {2^k} V(c (t))} \ . \] Since $V \ge \beta$ and $t \ge 0$, it is immediate that $\| \operatorname V_{t,x,k} (c) \| _{\mathcal E _c} \le \mathrm e^{-t \beta}$ for almost all $c \in \mathcal C_{t,x}$, whence we conclude with the Banach-Alaoglu theorem that there exists a subsequence $(k_l) _{l \in \mathbb N} \subseteq \mathbb N$ such that $(\operatorname V_{t,x,k_l}) _{l \in \mathbb N}$ has a weak limit denoted $\operatorname V_{t,x} \in \Gamma^2 (\mathcal E)$. In particular, we conclude that the section $\mathcal P _{t,x} ^2 (\operatorname V_{t,x})$ exists in $\Gamma^2 (p_t ^* E) \otimes E_x ^*$. \begin{theorem}[The Feynman-Kac formula] If $\eta \in \Gamma^2(E)$ then \[ (\mathrm e ^{-t H_\nabla - t V} \eta) (x) = \int _{\mathcal C_{t,x}} [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, \eta_{c(t)} \, \mathrm d w_{t,x} (c) \] for every $t>0$ and almost all $x \in M$. \end{theorem} \begin{proof} Let us consider an exhaustion $M = \bigcup _{j \ge 0} U_j$ with relatively compact connected open subsets with smooth boundary, as we have already done in this article, the notations being the ones already encountered.
For every $x \in M$ there exists a $j_x \in \mathbb N$ such that $x \in U_j$ for all $j \ge j_x$. This means that for every $x \in M$ it makes sense to consider the spaces $\mathcal C_{t,x} (\overline{U_j})$ for large enough $j$, and this is enough because in the following we shall let $j \to \infty$. From theorem 4.2 in \cite{Simon78} applied to the exponential function we have that \[ \mathrm e ^{-t H_\nabla -t V} = \lim _{j \to \infty} \mathrm e ^{-t H_\nabla ^{(j)} -t V} \] strongly in $\Gamma^2(E)$, while from the Trotter-Kato formula (see \cite{Kato74}) we have that \[ \mathrm e ^{-t H_\nabla ^{(j)} -t V} = \lim _{l \to \infty} \left( \mathrm e ^{-\frac t {2^{k_l}} H_\nabla ^{(j)}} \, \mathrm e ^{-\frac t {2^{k_l}} V} \right) ^{2^{k_l}} \] strongly in $\Gamma^2(E |_{U_j})$, where $(k_l) _{l \in \mathbb N} \subseteq \mathbb N$ is the subsequence found right above this theorem. It follows that there exists a sub-subsequence $(k_{l_m}) _{m \in \mathbb N} \subseteq \mathbb N$ such that \[ [\mathrm e ^{-t H_\nabla ^{(j)} - t V} \, \eta] (x) = \lim _{m \to \infty} \left[ \left( \mathrm e ^{-\frac t {2^{k_{l_m}}} H_\nabla ^{(j)}} \, \mathrm e ^{-\frac t {2^{k_{l_m}}} V} \right) ^{2^{k_{l_m}}} \eta \right] (x) \] for all $\eta \in \Gamma^2 (E)$ and almost all $x \in U_j$. It follows that if $\eta, \eta' \in \Gamma^2(E)$, then \begin{gather} \nonumber \langle \mathrm e ^{-t H_\nabla - t V} \eta, \eta' \rangle _{\Gamma^2(E)} = \lim _{j \to \infty} \langle \mathrm e ^{-t H_\nabla ^{(j)} -t V} \eta, \eta' \rangle _{\Gamma^2 (E | _{U_j})} = \\ \nonumber = \lim _{j \to \infty} \left< \lim _{m \to \infty} \left( \mathrm e ^{-\frac t {2^{k_{l_m}}} H_\nabla ^{(j)}} \, \mathrm e ^{-\frac t {2^{k_{l_m}}} V} \right) ^{2^{k_{l_m}}} \eta, \eta' \right> _{\Gamma^2 (E | _{U_j})} = \\ \nonumber = \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} \left< \int _{U_j} \mathrm d x_1 \, h_\nabla ^{(j)} \left( \frac t {2^{k_{l_m}}}, x, x_1 \right) \mathrm e ^{-\frac t {2^{k_{l_m}}} V(x_1)} \right. \dots \\ \nonumber \dots \left. \int _{U_j} \mathrm d x_{2^{k_{l_m}}} \, h_\nabla ^{(j)} \left( \frac t {2^{k_{l_m}}}, x_{2^{k_{l_m}} - 1}, x_{2^{k_{l_m}}} \right) \mathrm e ^{-\frac t {2^{k_{l_m}}} V(x_{2^{k_{l_m}}})} \eta(x_{2^{k_{l_m}}}) , \eta'_x \right> _{E_x} = \\ \nonumber = \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} W_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} \left( \operatorname V_{t, x, k_{l_m}} | _{\mathcal C_{t,x} (\overline{U_j})} \right) = \\ \nonumber = \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] \, [\operatorname V_{t, x, k_{l_m}} (c)] \, \mathrm d w_{t,x} ^{(j)} (c) = \\ = \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] \, [\operatorname V_{t, x, k_{l_m}} (c)] \, \mathrm d w_{t,x} (c) + \label{first summand} \\ + \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] \, [\operatorname V_{t, x, k_{l_m}} (c)] \, \mathrm d [w_{t,x} ^{(j)} - w_{t,x}] (c) \ .
\label{second summand} \end{gather} Due to the weak convergence of $\operatorname V_{t,x,{k_{l_m}}}$ to $\operatorname V_{t,x}$ in $\Gamma^2 (\mathcal E)$, and therefore in $\Gamma^2 (\mathcal E | _{\mathcal C_{t,x} (\overline{U_j})})$ (both spaces being considered with respect to the measure $w_{t,x}$), the term (\ref{first summand}) is \begin{gather*} \lim _{j \to \infty} \int _{U_j} \mathrm d x \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] \, [\operatorname V_{t, x} (c)] \, \mathrm d w_{t,x} (c) = \\ = \int _M \mathrm d x \int _{\mathcal C_{t,x}} [\rho_{t, {\eta'_x} ^\flat, \eta, x} (c)] \, [\operatorname V_{t, x} (c)] \, \mathrm d w_{t,x} (c) = \\ = \int _M \mathrm d x \int _{\mathcal C_{t,x}} \langle \eta_{c(t)}, \mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c) \, \eta'_x \rangle _{E_{c(t)}} \, \mathrm d w_{t,x} (c) = \\ = \int _M \mathrm d x \int _{\mathcal C_{t,x}} \langle [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, \eta_{c(t)}, \eta'_x \rangle _{E_x} \, \mathrm d w_{t,x} (c) = \\ = \int _M \mathrm d x \left\langle \int _{\mathcal C_{t,x}} [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, \eta_{c(t)} \, \mathrm d w_{t,x} (c), \eta'_x \right\rangle _{E_x} \ . \end{gather*} In order to obtain the limit when $j \to \infty$ we have applied the dominated convergence theorem on $M$ to the limit \[ \lim _{j \to \infty} 1 _{U_j} (x) \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] [\operatorname V_{t,x,{k_l}} (c)] \, \mathrm d w_{t,x} (c) = \int _{\mathcal C_{t,x}} [\rho_{t, {\eta'_x} ^\flat, \eta, x} (c)] [\operatorname V_{t,x,{k_l}} (c)] \, \mathrm d w_{t,x} (c) \] valid for almost all $x \in M$, where $1_{U_j}$ is the characteristic function of $U_j$. The domination is ensured by the fact that both $\| \rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c) \| _{\mathcal E ^* _c}$ and $\| \rho_{t, {\eta'_x} ^\flat, \eta, x} (c) \| _{\mathcal E ^* _c}$ are bounded by $\| \eta'_x \| _{E_x} \, \| \eta (c(t)) \| _{E_{c(t)}}$, and $\| \operatorname V_{t,x,{k_l}} (c) \| _{\mathcal E _c}$ is bounded by $\mathrm e ^{-t \beta}$, for almost all $c$, hence \begin{gather*} \left| 1 _{U_j} (x) \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] [\operatorname V_{t,x,{k_l}} (c)] \, \mathrm d w_{t,x} (c) \right| \le \mathrm e ^{-t \beta} \, \| \eta'_x \| _{E_x} \int _{\mathcal C_{t,x} (\overline{U_j})} \| \eta (c(t)) \| _{E_{c(t)}} \, \mathrm d w_{t,x} (c) \le \\ \le \mathrm e ^{-t \beta} \, \| \eta'_x \| _{E_x} \int _{\mathcal C_{t,x}} \| \eta (c(t)) \| _{E_{c(t)}} \, \mathrm d w_{t,x} (c) = \mathrm e ^{-t \beta} \, \| \eta' _x \| _{E_x} \int _M h(t,x,y) \, \| \eta_y \| _{E_y} \, \mathrm d y \le \\ \le \mathrm e ^{-t \beta} \, \| \eta' _x \| _{E_x} \, (\mathrm e ^{-t H} \| \eta \|) (x) \ , \end{gather*} and the latter function is finite at every $x \in M$ and has the integral \[ \int _M \| \eta' _x \| _{E_x} \, (\mathrm e ^{-t H} \| \eta \|) (x) \, \mathrm d x = \langle \| \eta' \|, \, \mathrm e ^{-t H} \| \eta \| \rangle _{L^2 (M)} \le \| \eta' \| _{\Gamma^2 (E)} \, \| \eta \| _{\Gamma^2 (E)} < \infty \ .
\] Using the same majorizations as above, and using that $w_{t,x} ^{(j)} \le w_{t,x}$, the term (\ref{second summand}) is $0$ because \begin{gather*} \lim _{j \to \infty} \left| \int _{U_j} \mathrm d x \lim _{m \to \infty} \int _{\mathcal C_{t,x} (\overline{U_j})} [\rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c)] \, [\operatorname V_{t, x, k_{l_m}} (c)] \, \mathrm d [w_{t,x} ^{(j)} - w_{t,x}] (c) \right| \le \\ \le \lim _{j \to \infty} \int _{U_j} \mathrm d x \lim _{m \to \infty} \int _{\mathcal C_{t,x} (\overline{U_j})} \| \rho_{t, {\eta'_x} ^\flat, \eta, x} ^{(j)} (c) \| _{\mathcal E ^* _c} \, \| \operatorname V_{t, x, k_{l_m}} (c) \| _{\mathcal E _c} \, \mathrm d [w_{t,x} - w_{t,x} ^{(j)}] (c) \le \\ \le \mathrm e ^{-t \beta} \lim _{j \to \infty} \int _{U_j} \mathrm d x \, \| \eta'_x \| _{E_x} \int _{\mathcal C_{t,x} (\overline{U_j})} \| \eta_{c(t)} \| _{E_{c(t)}} \, \mathrm d [w_{t,x} - w_{t,x} ^{(j)}] (c) \le \\ \le \mathrm e ^{-t \beta} \lim _{j \to \infty} \int _{U_j} \mathrm d x \, \| \eta'_x \| _{E_x} \left[ \int _{\mathcal C_{t,x}} \| \eta_{c(t)} \| _{E_{c(t)}} \, \mathrm d w_{t,x} (c) - \int _{\mathcal C_{t,x} (\overline{U_j})} \| \eta_{c(t)} \| _{E_{c(t)}} \, \mathrm d w_{t,x} ^{(j)} (c) \right] \le \\ \le \| \eta' \| _{\Gamma^2 (E)} \, \lim _{j \to \infty} \left\| \mathrm e ^{-t H} \| \eta \| - \mathrm e ^{-t H ^{(j)}} \| \eta \| \right\| _{L^2 (M)} = 0 \ . \end{gather*} We conclude that \[ \langle \mathrm e ^{-t H_\nabla - t V} \eta, \eta' \rangle _{\Gamma^2(E)} = \int _M \mathrm d x \left\langle \int _{\mathcal C_{t,x}} [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, \eta_{c(t)} \, \mathrm d w_{t,x} (c), \eta'_x \right\rangle _{E_x} \ , \] whence \[ (\mathrm e ^{-t H_\nabla - t V} \eta) (x) = \int _{\mathcal C_{t,x}} [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, \eta_{c(t)} \, \mathrm d w_{t,x} (c) \] for every $\eta \in \Gamma^2(E)$ and almost all $x \in M$. 
\end{proof} Notice that if we define the map $\mathcal V_{t,x} : \mathcal C_{t,x} \to \operatorname{End} E_x$ by \[ \mathcal V _{t,x} (c) = [\mathcal P_{t,x} ^2 (\operatorname V_{t, x}) (c)] ^* \, [\mathcal P_{t,x} ^2 (\operatorname {Id}) (c)] \] we may trivially rewrite the Feynman-Kac formula in the equivalent form \[ (\mathrm e ^{-t H_\nabla - t V} \eta) (x) = \int _{\mathcal C_{t,x}} [\mathcal V_{t,x} (c)] \, [\mathcal P_{t,x} ^2 (\operatorname {Id}) (c)] ^{-1} \, \eta_{c(t)} \, \mathrm d w_{t,x} (c) \ ; \] so rewritten, the Feynman-Kac formula in bundles has been obtained by other authors, too, but in other contexts and under different assumptions: \begin{itemize}[wide] \item the authors of \cite{BP08} use functional-analytic techniques (again based on Chernoff's theorem), but the potential $V$ is assumed smooth and $M$ is a closed manifold; \item the authors of \cite{DT01} use probabilistic techniques to give abstract conditions in proposition 4.5 under which the Feynman-Kac formula is valid, after which proposition 5.1 shows that these conditions are met when $M$ is closed; the potential (denoted therein by $\mathcal R$) is assumed smooth (p.~48); \item in \cite{Guneysu10} the Feynman-Kac formula is proved using functional-analytic techniques, but assuming the existence of the stochastic parallel transport and that the manifold is both metrically and stochastically complete, under very generous hypotheses on the potential (in theorem 3.1 it is assumed essentially bounded, and in theorem 3.3 the result is extended to the more general situation when the potential is locally square-integrable); in remark 1.4 therein the author sketches the modifications to be made to the proof in order for the assumption of metric completeness to be dropped, but does not give further details; \item in \cite{BG20} (an arXiv preprint, submitted for publication but still unpublished at the date of writing of this text), the potential $V$, which in our setting may be understood as a differential operator of order $0$, is now assumed to be a differential operator of order $0$ or $1$ acting on the smooth sections in $E$ (in particular, this means that $V$ has smooth coefficients), such that the operator $\nabla ^* \nabla + V$ is sectorial; such a potential gives rise naturally to a stochastic differential equation, the unique solution of which is assumed to be locally square-integrable in a certain uniform way with respect to $x \in M$ (in our notations); this hypothesis guarantees that the equality in the Feynman-Kac formula is valid everywhere, not just almost everywhere. No restrictions are placed on the manifold $M$. \end{itemize} It can be seen, by way of comparison with the cited previous work, that the Feynman-Kac formula presented here seems to be the most general one currently existing in the literature. \begin{corollary} If $V : M \to \mathbb R$ is continuous and lower-bounded, then the above Feynman-Kac formula reduces to \[ (\mathrm e ^{-t H_\nabla -t V} \eta) (x) = \int _{\mathcal C_{t,x}} \mathrm e ^{- \int _0 ^t V(c(s)) \, \mathrm d s} \, [\mathcal P_{t,x} ^2 (\operatorname {Id}) (c)] ^{-1} \, \eta_{c(t)} \, \mathrm d w_{t,x} (c) \ .
\] \end{corollary} \begin{proof} When $V$ is a continuous scalar function, \begin{align*} \operatorname V_{t,x,k} (c) & = \mathrm e ^{- \frac t {2^k} V \left( c \left( \frac {t} {2^k} \right) \right)} \otimes \dots \otimes \mathrm e ^{- \frac t {2^k} V(c (t))} = \mathrm e ^{- \frac t {2^k} \sum _{j=1} ^{2^k} V \left( c \left( \frac {jt} {2^k} \right) \right)} \, \operatorname{Id} _{E_{c(\frac t {2^k})}} \otimes \dots \otimes \operatorname{Id} _{E_{c(\frac {2^k t} {2^k})}} = \\ & = \mathrm e ^{- \frac t {2^k} \sum _{j=1} ^{2^k} V \left( c \left( \frac {jt} {2^k} \right) \right)} \, \operatorname{Id} \to \mathrm e ^{- \int _0 ^t V(c(s)) \, \mathrm d s} \, \operatorname{Id} \ , \end{align*} the convergence being valid for all $c \in \mathcal C_{t,x}$, and therefore also weakly in $\Gamma^2 (\mathcal E)$, by the dominated convergence theorem. It follows that \[ \mathcal P _{t,x} ^2 (\operatorname V_{t,x}) (c) = \mathrm e ^{- \int _0 ^t V(c(s)) \, \mathrm d s} \, \mathcal P _{t,x} ^2 (\operatorname {Id}) (c) \] for almost all $c \in \mathcal C_{t,x}$, and the conclusion is immediate. \end{proof} \begin{remark} Comparing the results herein with the ones obtained by the author in \cite{Mustatea22}, we notice that if $E = M \times \mathbb C$ and $\nabla = \mathrm d + \mathrm i \alpha$, then \[ \mathcal P_{t,x} ^2 (\operatorname{Id}) = \mathrm e^{- \mathrm i \operatorname{Strat} _{t,x} (\alpha)} \] for almost all $x \in M$. This shows once more that the Stratonovich stochastic integral is the ``most geometrically-flavoured'' of all the stochastic integrals considered therein, since its exponential (including the negative imaginary unit) is the stochastic parallel transport, in perfect analogy with how the parallel transport along some smooth curve $c$ with respect to $\nabla$ is $\mathrm e ^{- \mathrm i \int _c \alpha}$. \end{remark} When $V=0$, the Feynman-Kac formula and the disintegration theorem for measures allow us to derive a formula expressing the heat kernel $h_\nabla$ in the bundle $E$ in terms of the heat kernel $h$ acting on functions and the stochastic parallel transport in $E$. In order to state it, we shall need to introduce some notations. Let us endow the manifold $M$ with the measure $h(t, x, \cdot) \, \mathrm d x$, where $\mathrm d x$ is the natural measure on $M$. The map $p_t : \mathcal C_{t,x} \to M$ satisfies the hypotheses of the disintegration theorem (p.~78-III and following of \cite{DM78}), therefore there exists a family $(\nu_{t,x,y}) _{y \in M}$ of Borel regular probabilities on $\mathcal C_{t,x}$, uniquely determined for almost all $y \in M$, such that $\nu_{t,x,y}$ is concentrated on $p_t ^{-1} (\{y\}) = \{ c \in \mathcal C_{t,x} \mid c(t) = y \}$ for almost all $y \in M$, and \[ \int _{\mathcal C_{t,x}} f \, \mathrm d w_{t,x} = \int _M h(t, x, y) \left( \int _{p_t ^{-1} (\{y\})} f \, \mathrm d \nu_{t,x,y} \right) \mathrm d y \] for all $f \in L^1 (\mathcal C_{t,x})$. \begin{corollary} \[ h_\nabla (t,x,y) = h(t,x,y) \int_{p_t ^{-1} (\{y\})} [\mathcal P_{t,x} ^2 (\operatorname{Id})] ^{-1} \, \mathrm d \nu_{t,x,y} \in E_x \otimes E_y ^* \] for all $t>0$, all $x \in M$ and almost all $y \in M$.
\end{corollary} \begin{proof} Choosing $V=0$ in the Feynman-Kac formula in bundles, we obtain \begin{gather*} \omega \left[ \int _M h_\nabla (t,x,y) \, \eta_y \, \mathrm d y \right] = \omega \{ [\mathrm e ^{-t H_\nabla} \eta] (x) \} = \omega \left[ \int _{\mathcal C_{t,x}} [\mathcal P_{t,x} ^2 (\operatorname {Id}) (c)] ^{-1} \, \eta_{c(t)} \, \mathrm d w_{t,x} (c) \right] = \\ = \omega \left[ \int _M \mathrm d y \, h(t,x,y) \int _{p_t ^{-1} (\{y\})} [\mathcal P_{t,x} ^2 (\operatorname {Id}) (c)] ^{-1} \, \eta_y \, \mathrm d \nu_{t,x,y} (c) \right] \end{gather*} for all $\eta \in \Gamma^2 (E)$ and $\omega \in E_x ^*$, whence the conclusion is clear. \end{proof} \begin{remark} Since $h_\nabla (t,x,y) ^* = h_\nabla (t,y,x)$, the above equality may be rewritten, equivalently, as \[ h_\nabla (t,x,y) = h(t,x,y) \int_{p_t ^{-1} (\{x\})} [\mathcal P_{t,y} ^2 (\operatorname{Id})] \, \mathrm d \nu_{t,y,x} \in E_x \otimes E_y ^* \] for all $t>0$ and $y \in M$, and for almost all $x \in M$. \end{remark} If $M = \bigcup _{j \in \mathbb N} U_j$, we already know that $h^{(j)} \to h$ pointwise; as a final application of all the results obtained above, we shall show that $h_\nabla ^{(j)} \to h_\nabla$ pointwise, too. It is clear that the disintegration theorem may be used, analogously, on each space $\mathcal C_{t,x} (\overline{U_j})$ endowed with the measure $w_{t,x} ^{(j)}$, in order to obtain \[ \int _{\mathcal C_{t,x} (\overline{U_j})} f \, \mathrm d w_{t,x} ^{(j)} = \int _{U_j} h ^{(j)} (t, x, y) \left( \int _{p_t ^{-1} (\{y\})} f \, \mathrm d \nu_{t,x,y} ^{(j)} \right) \mathrm d y \] for all $f \in L^1 (\mathcal C_{t,x} (\overline{U_j}), w_{t,x} ^{(j)}) \subseteq L^1 (\mathcal C_{t,x}, w_{t,x})$. \begin{lemma} \[ \lim _{j \to \infty} \int _{p_t ^{-1} (\{y\})} f \, \mathrm d \nu_{t,x,y} ^{(j)} = \int _{p_t ^{-1} (\{y\})} f \, \mathrm d \nu_{t,x,y} \] for all $f \in L^1 (\mathcal C_{t,x})$, all $t>0$ and $x \in M$, and almost all $y \in M$. \end{lemma} \begin{corollary} $h_\nabla (t,x,y) = \lim _{j \to \infty} h_\nabla ^{(j)} (t,x,y)$ for all $t>0$ and $x,y \in M$. \end{corollary} \begin{proof} We know that $h(t,x,y) = \lim _{j \to \infty} h^{(j)} (t,x,y)$ for all $t>0$ and $x,y \in M$, whence, by combining the preceding results, we obtain that there exists a co-null $C \subseteq M$ such that $h_\nabla (t,x,y) = \lim _{j \to \infty} h_\nabla ^{(j)} (t,x,y)$ for all $t>0$, $x \in M$ and all $y \in C$. Since $h_\nabla$ and $h_\nabla ^{(j)}$ are smooth on $(0, \infty) \times M \times M$, and since $C$ is dense in $M$ by virtue of being co-null, an elementary argument shows that we may take $C=M$. \end{proof}
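\begin{remark} The scalar form of the Feynman-Kac formula obtained above admits a direct numerical illustration. The following Python sketch is our own illustration and not part of any proof; it assumes the trivial bundle over $M = \mathbb R$ (where the stochastic parallel transport is the identity) and the convention $H = - \mathrm d ^2 / \mathrm d x ^2$, under which the heat kernel is $h(t,x,y) = (4 \pi t)^{-1/2} \, \mathrm e ^{-(x-y)^2/(4t)}$ and the associated process has increments of variance $2 \, \mathrm d s$. All names and discretization parameters are hypothetical choices made for readability.
\begin{verbatim}
# Monte Carlo sketch of the scalar Feynman-Kac formula on M = R:
#   (e^{-tH - tV} eta)(x) ~ E[ exp(-int_0^t V(c(s)) ds) eta(c(t)) ],
# with H = -d^2/dx^2, so Brownian increments have variance 2*dt.
import numpy as np

rng = np.random.default_rng(0)

def feynman_kac_mc(eta, V, t, x, n_paths=200000, n_steps=256):
    dt = t / n_steps
    pos = np.full(n_paths, float(x))
    int_V = np.zeros(n_paths)
    for _ in range(n_steps):
        int_V += V(pos) * dt                      # left-endpoint Riemann sum
        pos += np.sqrt(2.0 * dt) * rng.standard_normal(n_paths)
    return np.mean(np.exp(-int_V) * eta(pos))

# sanity check with a constant potential V = beta, where the formula
# reduces to e^{-t beta} (e^{-tH} eta)(x), computable by quadrature
t, x, beta = 0.5, 0.3, 0.7
eta = lambda y: np.exp(-y ** 2)
y = np.linspace(-10.0, 10.0, 4001)
kernel = np.exp(-(x - y) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
exact = np.exp(-t * beta) * np.sum(kernel * eta(y)) * (y[1] - y[0])
print(exact, feynman_kac_mc(eta, lambda z: beta + 0.0 * z, t, x))
\end{verbatim}
The two printed values should agree to roughly three decimal places, the discrepancy being due to the Monte Carlo error and to the Riemann-sum discretization of $\int _0 ^t V(c(s)) \, \mathrm d s$. \end{remark}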
\section{Introduction} \label{sec::intro} Let $f$ be a smooth, real-valued function defined on a compact set $\K\subset \R^d$. In this paper, $f$ will be a regression function or a density function. The Morse-Smale complex of $f$ is a partition of $\K$ based on the gradient flow induced by $f$. Roughly speaking, the complex consists of sets, called \emph{crystals} or \emph{cells}, comprising regions where $f$ is increasing or decreasing. Figure \ref{Fig::ex_MS} shows the Morse-Smale complex for a two-dimensional function. The cells are the intersections of the basins of attraction (under the gradient flow) of the function's maxima and minima. Over each cell, $f$ is monotonic with respect to certain directions. In a sense, the Morse-Smale complex provides a generalization of isotonic regression. Because the Morse-Smale complex represents a multivariate function in terms of regions on which the function has simple behavior, it has useful applications in statistics, including in clustering, regression, testing, and visualization. For instance, when $f$ is a density function, the basins of attraction of $f$'s modes are the (population) clusters for density-mode clustering (also known as mean shift clustering \citep{fukunaga1975estimation, chacon2015population}), each of which is a union of cells from the Morse-Smale complex. Similarly, when $f$ is a regression function, the cells of the Morse-Smale complex give regions on which $f$ has simple behavior. Fitting $f$ over the Morse-Smale cells provides a generalization of nonparametric, isotone regression; \cite{gerber2013morse} proposes such a method. The Morse-Smale representation of a multivariate function $f$ is a useful tool for visualizing $f$'s structure, as shown by \cite{gerber2010visual}. In addition, suppose we want to compare two multi-dimensional datasets $X=(X_1,\ldots, X_n)$ and $Y=(Y_1,\ldots, Y_m)$. We start by forming the Morse-Smale complex of $\hat p-\hat q$, where $\hat p$ is a density estimate from $X$ and $\hat q$ is a density estimate from $Y$. Figure \ref{Fig::ex0} shows a visualization built from this complex. The circles represent cells of the Morse-Smale complex. Attached to each cell is a pie chart showing what fraction of the cell has $\hat p$ significantly larger than $\hat q$. This visualization is a multi-dimensional extension of the method proposed for two or three dimensions in \cite{duong2013local}. \begin{figure} \center \subfigure[Descending manifold]{ \includegraphics[trim=0in 0in 0in 2.5in, clip,width=2.2in]{figures/ex01_1} } \subfigure[Ascending manifold]{ \includegraphics[trim=0in 0in 0in 2.5in,clip, width=2.2in]{figures/ex01_2} } \subfigure[$d$-cell]{ \includegraphics[trim=0in 0in 0in 2.5in,clip, width=2.2in]{figures/ex01_3} } \subfigure[Morse-Smale complex]{ \includegraphics[trim=0in 0in 0in 2.5in,clip, width=2.2in]{figures/ex01_4} } \caption{An example of a Morse-Smale complex. The green dots are local minima; the blue dots are local modes; the violet dots are saddle points. Panels (a) and (b) give examples of a descending $d$-manifold (blue region) and an ascending $0$-manifold (green region). Panel (c) shows the corresponding $d$-cell (yellow region). Panel (d) shows all $d$-cells. } \label{Fig::ex_MS} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.3]{figures/cells04-crop} \end{center} \caption{Graft-versus-Host Disease (GvHD) dataset \citep{brinkman2007high}. This is a $d=4$ dimensional dataset.
We estimate the density difference based on the kernel density estimator and find regions where the two densities are significantly different. Then we visualize the density difference using the Morse-Smale complex. Each green circle denotes a $d$-cell; the $d$-cells partition the support $\K$. The size of each circle is proportional to the size of the corresponding cell. If two cells are neighbors, we add a line connecting them; the thickness of the line denotes the amount of boundary they share. The pie charts show the ratio of the regions within each cell where the two densities are significantly different from each other. See Section \ref{sec::two} for more details.} \label{Fig::ex0} \end{figure} For all these applications, the Morse-Smale complex needs to be estimated. To the best of our knowledge, no theory has been developed for this estimation problem prior to this paper. We have three goals in this paper: to show that many existing problems can be cast in terms of the Morse-Smale complex, to develop some new statistical methods based on the Morse-Smale complex, and to develop the statistical theory for estimating the complex. \vspace{1cm} \noindent \emph{Main results.} The main results of this paper are: \begin{enumerate} \item \emph{Consistency of the Morse-Smale complex}. We prove the stability of the Morse-Smale complex (Theorem~\ref{thm::Haus}) in the following sense: if $B$ and $\tilde{B}$ are the boundaries of the descending $d$-manifolds (or ascending $0$-manifolds) of $p$ and $\tilde{p}$ (defined in Section \ref{sec::morse}), then $$ \Haus(B,\tilde{B}) = O\left(\|\nabla p -\nabla \tilde{p}\|_\infty\right). $$ \item \emph{Risk bound for mode clustering (mean-shift clustering; Section~\ref{sec::mode})}: We bound the risk of mode clustering in Theorem \ref{thm::mode}. \item \emph{Morse-Smale regression (Section~\ref{sec::MSR})}: In Theorems \ref{thm::MSR} and \ref{thm::MSR2}, we bound the risk of Morse-Smale regression, a multivariate regression method proposed in \cite{gerber2010visual, gerber2011data, gerber2013morse} that synthesizes nonparametric regression and linear regression. \item \emph{Morse-Smale signatures (Section~\ref{sec::MSS})}: We introduce a new visualization method for densities and regression functions. \item \emph{Morse-Smale two-sample testing (Section~\ref{sec::two})}: We develop a new method for multivariate two-sample testing that can have good power. \end{enumerate} \emph{Related work.} The mathematical foundations for the Morse-Smale complex come from Morse theory \citep{morse1925relations, morse1930foundations, milnor1963morse}. Morse theory has many applications, including computer vision \citep{paris2007topological}, computational geometry \citep{cohen2007stability}, and topological data analysis \citep{chazal2014robust}. Previous work on the stability of the Morse-Smale complex can be found in \cite{chen2016comprehensive} and \cite{chazal2014robust}, but these papers only consider critical points rather than the whole Morse-Smale complex. \cite{arias2016estimation} prove pointwise convergence of the gradient ascent curves, but this is not sufficient for proving the stability of the complex: the convergence of complexes requires the convergence of multiple curves, and the constants in the convergence rate derived from \cite{arias2016estimation} vary from point to point, with some constants diverging near the boundaries of the complexes. Thus, we cannot obtain a uniform convergence of the gradient ascent curves directly from their results.
Morse-Smale regression and visualization were proposed in \cite{gerber2010visual, gerber2011data, gerber2013morse}. The R code for the algorithms used in this paper (Algorithms \ref{Alg::MSS}, \ref{Alg::vis}, and \ref{Alg::two}) can be found at \url{https://github.com/yenchic/Morse_Smale}. \section{Morse Theory} \label{sec::morse} \begin{figure} \center \includegraphics[width=1.5 in]{figures/ex_G1} \includegraphics[width=1.5 in]{figures/ex_G2} \includegraphics[width=1.5 in]{figures/ex_G4} \caption{A one dimensional example. The blue dots are local modes and the green dots are local minima. Left panel: the basins of attraction of the two local modes are colored brown and orange. Middle panel: the basins of attraction (under the negative gradient flow) of the local minima are colored red, purple, and violet. Right panel: the intersections of the basins, which are called $d$-cells. } \label{Fig::ex_1d} \end{figure} To motivate the formal definitions, we start with the simple, one-dimensional example depicted in Figure \ref{Fig::ex_1d}. The left panel shows the sets associated with each local maximum (i.e. the basins of attraction of the maxima). The middle panel shows the sets associated with each local minimum. The right panel shows the intersections of these basins, which give the Morse-Smale complex defined by the function. Each interval in the complex, called a cell, is a region where the function is increasing or decreasing. Now we give a formal definition. Let $f:\mathbb{K}\subset\mathbb{R}^d\mapsto \mathbb{R}$ be a function with bounded third derivatives that is defined on a compact set $\K$. Let $g(x)= \nabla f(x)$ and $H(x) = \nabla \nabla f(x)$ be the gradient and Hessian matrix of $f$, respectively, and let $\lambda_j(x)$ be the $j$th largest eigenvalue of $H(x)$. Define $\cC = \{x\in\mathbb{K}: g(x)=0\}$ to be the set of all of $f$'s critical points, which we call the \emph{critical set}. Using the signs of the eigenvalues of the Hessian, the critical set $\cC$ can be partitioned into $d+1$ distinct subsets $C_0,\cdots,C_d$, where \begin{equation} C_k = \{x\in\mathbb{K}: g(x)=0, \lambda_k(x)>0, \lambda_{k+1}(x)<0\}, \quad k=1,\cdots, d-1. \end{equation} We define $C_0, C_d$ to be the sets of all local maxima and minima (corresponding to all eigenvalues being negative and positive, respectively). The set $C_k$ is called the $k$-th order critical set. A smooth function $f$ is called a \emph{Morse function} \citep{morse1925relations, milnor1963morse} if its Hessian matrix is non-degenerate at each critical point. That is, $|\lambda_j(x)|>0$ for all $x\in \cC$ and all $j$. In what follows we assume $f$ is a Morse function (actually, later we will assume further that $f$ is a Morse-Smale function). \begin{figure} \center \subfigure[]{ \includegraphics[trim=1in 0in 1in 2.5in, clip,width=2.2in]{figures/ex04_2} } \subfigure[]{ \includegraphics[trim=1in 0in 1in 2.5in, clip,width=2.2in]{figures/ex04_1} } \subfigure[]{ \includegraphics[trim=1in 0in 1in 2.5in, clip,width=2.2in]{figures/ex04_3} } \subfigure[]{ \includegraphics[trim=1in 0in 1in 2.5in, clip,width=2.2in]{figures/ex04_4} } \caption{Two-dimensional examples of critical points, descending manifolds, ascending manifolds, and $2$-cells. This is the same function as in Figure~\ref{Fig::ex_MS}. (a): The sets $C_k$ for $k=0,1,2$. The four blue dots are $C_0$, the collection of local modes (each of them is $c_{0,j}$ for some $j=1,\cdots, 4$). The four orange dots are $C_1$, the collection of saddle points (each of them is $c_{1,j}$ for some $j=1,\cdots, 4$).
The green dots are $C_2$, the collection of local minima (each green dot is $c_{2,j}$ for some $j=1,\cdots,9$). (b): The sets $D_k$ for $k=0,1,2$. The yellow area is $D_2$ (the subregions separated by the blue curves are $D_{2,j}$, $j=1,\cdots, 4$). The two blue curves are $D_1$ (each of the 4 blue segments is a $D_{1,j}$, $j=1,\cdots, 4$). The green dots are $D_0$ (also $C_2$), the collection of local minima (each green dot is $D_{0,j}$ for some $j=1,\cdots,9$). (c): The sets $A_k$ for $k=0,1,2$. The yellow area is $A_0$ (the subregions separated by the red curves are $A_{0,j}$, $j=1,\cdots, 9$). The two red curves are $A_1$ (each of the 4 red segments is an $A_{1,j}$, $j=1,\cdots, 4$). The blue dots are $A_2$ (also $C_0$), the collection of local modes (each blue dot is $A_{2,j}$ for some $j=1,\cdots,4$). (d): Examples of $2$-cells. The thick blue curves are $D_1$ and the thick red curves are $A_1$. } \label{Fig::ex_D} \end{figure} Given any point $x\in\K$, we define the gradient ascent flow starting at $x$, $\pi_x: \mathbb{R}^+ \mapsto \K$, by \begin{equation} \begin{aligned} \pi_x(0)& = x\\ \pi'_x(t) & = g(\pi_x(t)). \end{aligned} \label{eq::GS} \end{equation} A particle on this flow moves along the gradient from $x$ towards a ``destination'' given by $$ \dest(x) \equiv \lim_{t\rightarrow \infty} \pi_x(t). $$ It can be shown that $\dest(x) \in \cC$ for $x\in\K$. We can thus partition $\K$ based on the value of $\dest(x)$. The elements of this partition are called \emph{descending manifolds} in Morse theory \citep{morse1925relations, milnor1963morse}. Recall that $C_k$ is the set of $k$-th order critical points; we assume $C_k=\{c_{k,1},\cdots,c_{k,m_k}\}$ contains $m_k$ distinct elements. For each $k$, define \begin{equation} \begin{aligned} D_k &= \left\{x: \dest(x)\in C_{d-k}\right\}\\ D_{k,j} & = \left\{x: \dest(x)= c_{d-k,j}\right\}, \quad j=1,\cdots, m_{d-k}. \end{aligned} \end{equation} That is, $D_k$ is the collection of all points whose gradient ascent flow converges to a $(d-k)$-th order critical point, and $D_{k,j}$ is the collection of points whose gradient ascent flow converges to the $j$-th element of $C_{d-k}$. Thus, $D_k = \bigcup_{j=1}^{m_{d-k}}D_{k,j}$. From Theorem 4.2 in \cite{banyaga2004lectures}, each $D_k$ is a disjoint union of $k$-dimensional manifolds (each $D_{k,j}$ is a $k$-dimensional manifold). We call $D_{k,j}$ a \emph{descending k-manifold} of $f$. Each descending k-manifold is a $k$-dimensional manifold such that the gradient flow from every point in it converges to the same $(d-k)$-th order critical point. Note that $\{D_0,\cdots,D_d\}$ forms a partition of $\K$. The top panels of Figure~\ref{Fig::ex_D} give an example of the descending manifolds for a two-dimensional case. The \emph{ascending manifolds} are similar to the descending manifolds but are defined through the gradient descent flow. More precisely, given any $x\in \K$, the gradient descent flow $\gamma_x: \mathbb{R}^+ \mapsto \K$ starting from $x$ is given by \begin{equation} \begin{aligned} \gamma_x(0)& = x\\ \gamma'_x(t) & = -g(\gamma_x(t)). \end{aligned} \label{eq::GD} \end{equation} Unlike the ascent flow defined in \eqref{eq::GS}, $\gamma_x$ is a flow that moves along the gradient descent direction. The descent flow $\gamma_x$ shares similar properties with the ascent flow $\pi_x$; the limiting point $\lim_{t\rightarrow\infty}\gamma_x(t) \in \cC$ is also in the critical set when $f$ is a Morse function.
Thus, similarly to $D_k$ and $D_{k,j}$, we define \begin{equation} \begin{aligned} A_k &= \left\{x: \lim_{t\rightarrow \infty}\gamma_x(t)\in C_{d-k}\right\}\\ A_{k,j} &= \left\{x: \lim_{t\rightarrow \infty}\gamma_x(t)= c_{d-k,j}\right\}, \quad j=1,\cdots, m_{d-k}.\\ \end{aligned} \end{equation} Both $A_k$ and the $A_{k,j}$ have dimension $d-k$; the sets $A_{k,1},\cdots,A_{k,m_{d-k}}$ partition $A_k$, and $\{A_0,\cdots,A_d\}$ forms a partition of $\K$. We call each $A_{k,j}$ an \emph{ascending k-manifold} of $f$. A smooth function $f$ is called a \emph{Morse-Smale function} if it is a Morse function and any pair of the ascending and descending manifolds of $f$ intersect each other transversely (which means that pairs of manifolds are not parallel at their intersections); see e.g. \cite{banyaga2004lectures} for more details. In this paper, we also assume that $f$ is a Morse-Smale function. Note that by the Kupka-Smale Theorem (see e.g. Theorem 6.6 in \cite{banyaga2004lectures}), Morse-Smale functions are generic (dense) in the collection of smooth functions. For more details, we refer to Section 6.1 in \cite{banyaga2004lectures}. A \emph{k-cell} (also called a Morse-Smale cell or crystal) is the non-empty intersection between any descending $k_1$-manifold and any ascending $(d-k_2)$-manifold such that $k = \min\{k_1,k_2\}$ (the ascending $(d-k_2)$-manifold has dimension $k_2$). When we simply say a cell, we are referring to a $d$-cell, since the $d$-cells make up the majority of $\K$ (the totality of $k$-cells with $k<d$ has Lebesgue measure 0). The \emph{Morse-Smale complex} for $f$ is the collection of all $k$-cells for $k=0,\cdots, d$. The bottom panels of Figure~\ref{Fig::ex_D} give examples of the ascending manifolds and the $d$-cells for $d=2$. Another example is given in Figure~\ref{Fig::ex_MS}. The cells of a smooth function can be used to construct an additive decomposition that is useful in data analysis. For a Morse-Smale function $f$, let $E_1,\cdots, E_L$ be its associated cells. Then we can decompose $f$ into \begin{equation} f(x) = \sum_{\ell=1}^L f_\ell (x) 1(x\in E_\ell), \label{eq::additive} \end{equation} where each $f_\ell(x)$ behaves like a multivariate isotonic function \citep{barlow1972statistical,bacchetti1989additive}; namely, $f(x) = f_\ell(x)$ when $x\in E_\ell$. This decomposition exists because each $E_\ell$ is associated with exactly one local mode and one local minimum, which lie on the boundary of $E_\ell$. The fact that $f$ admits such a decomposition will be used frequently in Sections~\ref{sec::MSR} and \ref{sec::MSS}. Among all descending/ascending manifolds, the descending $d$-manifolds and the ascending $0$-manifolds are often of the greatest interest. For instance, mode clustering \citep{Li2007, azzalini2007clustering} uses the descending $d$-manifolds to partition the domain $\K$ into clusters. Morse-Smale regression \citep{gerber2011data, gerber2013morse} fits a linear regression individually over each $d$-cell (the non-empty intersection of a pair of a descending $d$-manifold and an ascending $0$-manifold). Regions outside the descending $d$-manifolds or the ascending $0$-manifolds have Lebesgue measure $0$. Thus, later in our theoretical analysis, we will focus on the stability of the sets $D_d$ and $A_0$ (see Section~\ref{sec::thm::stability}). We define the boundary of $D_d$ as \begin{equation} B\equiv\partial D_d = D_{d-1}\cup\cdots \cup D_{0}. \end{equation} The set $B$ will be used frequently in Section \ref{sec::thm}.
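To make these definitions concrete, the following Python sketch (our own illustration; the toy function, grids, step size, and rounding are hypothetical choices, not part of any estimator studied below) approximates $\dest(x)$ by Euler integration of the flows \eqref{eq::GS} and \eqref{eq::GD} and labels each starting point by the pair formed by its two destinations; each distinct pair identifies one $d$-cell.
\begin{verbatim}
# Sketch: label the d-cells of the toy Morse function
# f(x1, x2) = sin(x1) * sin(x2) by the pair (local mode reached by
# gradient ascent, local minimum reached by gradient descent).
import numpy as np

def grad_f(x):
    return np.array([np.cos(x[0]) * np.sin(x[1]),
                     np.sin(x[0]) * np.cos(x[1])])

def dest(x, sign=1.0, step=0.05, n_iter=2000):
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        x = x + sign * step * grad_f(x)   # Euler step along +/- gradient
    return tuple(np.round(x, 1))          # nearby limits hash to one point

cells = {}
# the two grids are offset so that no point lies exactly on a separatrix
for a in np.linspace(0.3, 5.9, 20):
    for b in np.linspace(0.35, 5.85, 19):
        key = (dest((a, b), +1.0), dest((a, b), -1.0))
        cells[key] = cells.get(key, 0) + 1
# this f is periodic, so flows may converge to critical points outside
# the sampled square; each distinct pair still labels one d-cell of f
print(len(cells), "distinct (mode, minimum) pairs")
\end{verbatim}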
\section{Applications in Statistics} \subsection{Mode Clustering} \label{sec::mode} \begin{figure} \center \subfigure[Basins of attraction]{ \includegraphics[width=1.45in]{figures/EX3l} } \subfigure[Gradient ascent]{ \includegraphics[width=1.45in]{figures/EX3-5l} } \subfigure[Mode clustering]{ \includegraphics[width=1.45in]{figures/EX3-6l} } \caption{An example of mode clustering. (a): The basin of attraction of each local mode (red $+$). The black dots are data points. (b): The gradient flow (blue lines) for each data point. The gradient flow starts at a data point and ends at a local mode. (c): Mode clustering; we use the destination of the gradient flow to cluster the data points. } \label{Fig::ex_Modeclustering} \end{figure} Mode clustering \citep{Li2007,azzalini2007clustering, Chacon2012,arias2016estimation,chacon2015population, chen2016comprehensive} is a clustering technique based on the Morse-Smale complex; it is also known as mean-shift clustering \citep{fukunaga1975estimation,cheng1995mean,comaniciu2002mean}. Mode clustering uses the descending $d$-manifolds of the density function $p$ to partition the whole space $\K$ (although the descending $d$-manifolds do not contain all points of $\K$, the region outside them has Lebesgue measure $0$). See Figure~\ref{Fig::ex_Modeclustering} for an example. Now, we briefly describe the procedure of mode clustering. Let $\cX = \{X_1,\cdots,X_n\}$ be a random sample from a density $p$ that is defined on a compact set $\K$ and is assumed to be a Morse function. Recall that $\dest(x)$ is the destination of the gradient ascent flow starting from $x$. Mode clustering partitions the sample based on $\dest(x)$ for each point; specifically, it partitions $\cX = \cX_1\bigcup\cdots\bigcup\cX_L$ such that $$ \cX_\ell = \{X_i\in\cX: \dest(X_i)= m_\ell\}, $$ where each $m_\ell$ is a local mode of $p$ and $L$ is the number of local modes. We can also view mode clustering as a clustering technique based on the descending $d$-manifolds. Let $D_d = D_{d,1}\bigcup\cdots\bigcup D_{d,L}$ be the descending $d$-manifolds of $p$. Then each cluster is $\cX_\ell = \cX\bigcap D_{d,\ell}$. In practice, however, we do not know $p$, so we have to use a density estimator $\hat{p}_n$. A common density estimator is the kernel density estimator (KDE): \begin{equation} \hat{p}_n(x) = \frac{1}{nh^d}\sum_{i=1}^n K\left(\frac{x-X_i}{h}\right), \end{equation} where $K$ is a smooth kernel function and $h>0$ is the smoothing parameter. Note that mode clustering is not limited to the KDE; other density estimators also yield a sample-based mode clustering. Based on the KDE, we are able to estimate the gradient $\hat{g}_n(x)$, the gradient flows $\hat{\pi}_x(t)$, and the destination $\hat{\dest}_n(x)$ (the mean shift algorithm performs exactly these tasks). Thus, we can estimate the descending $d$-manifolds by plugging in $\hat{p}_n$. Let $\hat{D}_d = \hat{D}_{d,1}\bigcup\cdots\bigcup \hat{D}_{d,\hat{L}}$ be the descending $d$-manifolds of $\hat{p}_n$, where $\hat{L}$ is the number of local modes of $\hat{p}_n$. The estimated clusters will be $\hat{\cX}_1,\cdots,\hat{\cX}_{\hat{L}}$, where each $\hat{\cX}_\ell = \cX \bigcap \hat{D}_{d,\ell}$. Figure~\ref{Fig::ex_Modeclustering} displays an example of mode clustering using the KDE. A nice property of mode clustering is that there is a clear population quantity that our estimator (the clusters based on the given sample) is estimating: the population partition of the data points.
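To make the procedure concrete, the following Python sketch (our own illustration, not the implementation referenced elsewhere in this paper; the toy data and bandwidth are hypothetical choices) runs the mean shift iteration for a Gaussian KDE and clusters the sample by the estimated destinations $\hat{\dest}_n(X_i)$.
\begin{verbatim}
# Sketch of mode clustering with a Gaussian KDE: the mean-shift update
# moves each point along the estimated gradient flow of p_hat to a mode.
import numpy as np

def mean_shift(X, h, n_iter=300):
    """Approximate destinations dest_hat(X_i) for all sample points."""
    Z = X.astype(float).copy()
    for _ in range(n_iter):
        d2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        W = np.exp(-d2 / (2.0 * h ** 2))             # kernel weights
        Z = (W @ X) / W.sum(axis=1, keepdims=True)   # mean-shift update
    return Z

# toy data: two well-separated blobs, so p_hat should have two modes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(3.0, 0.3, (100, 2))])
dest_hat = np.round(mean_shift(X, h=0.5), 1)
print(len({tuple(z) for z in dest_hat}), "estimated clusters")  # expect 2
\end{verbatim}
The distinct estimated destinations play the role of the local modes of $\hat{p}_n$, and the induced partition of the sample is exactly the estimator $\hat{\cX}_1,\cdots,\hat{\cX}_{\hat{L}}$ of the population partition.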
Thus we can consider properties of the procedure such as consistency, which we discuss in detail in Section \ref{sec::thm::MC}. \subsection{Morse-Smale Regression} \label{sec::MSR} Let $(X,Y)$ be a random pair where $Y\in \mathbb{R}$ and $X \in\mathbb{K}\subset \mathbb{R}^d$. Estimating the regression function $m(x) = \mathbb{E}[Y|X=x]$ is challenging for $d$ of even moderate size. A common way to address this problem is to use a simple regression function that can be estimated with low variance. For example, one might use an additive regression of the form $m(x) = \sum_j m_j(x_j)$, which is a sum of one-dimensional smooth functions. Although the true regression function is unlikely to be of this form, it is often the case that the resulting estimator is useful. A different approach, \emph{Morse-Smale regression} (MSR), is suggested in \cite{gerber2013morse}. This takes advantage of the (relatively) simple structure of the Morse-Smale complex and the isotone behavior of the function on each cell. Specifically, MSR constructs a piecewise linear approximation to $m(x)$ over the cells of the Morse-Smale complex. We first define the population version of the MSR. Let $m(x) = \E(Y|X=x)$ be the regression function, which is assumed to be a Morse-Smale function. Let $E_1,\cdots, E_L$ be the $d$-cells for $m$. The Morse-Smale regression for $m$ is a piecewise linear function within each cell $E_\ell$ such that \begin{equation} m_{\MSR}(x) = \mu_\ell+\beta_\ell^Tx, \mbox{ for }x \in E_\ell, \end{equation} where $(\mu_\ell,\beta_\ell)$ are obtained by minimizing the mean squared error: \begin{equation} \begin{aligned} (\mu_\ell, \beta_\ell)&= \underset{\mu,\beta}{\sf \mathop{\mathrm{argmin}}}\,\, \mbox{ }\mathbb{E}\left((Y-m_{\MSR}(X))^2|X\in E_\ell\right)\\ &= \underset{\mu,\beta}{\sf \mathop{\mathrm{argmin}}}\,\, \mbox{ }\mathbb{E}\left((Y- \mu-\beta^TX)^2|X\in E_\ell\right). \end{aligned} \label{eq::LSE1} \end{equation} That is, $m_{\MSR}$ is the best piecewise linear predictor based on the $d$-cells. One can also view MSR as using a linear function to approximate $f_\ell$ in the additive model \eqref{eq::additive}. Note that $m_{\MSR}$ is well defined except on the boundaries of the cells $E_\ell$, which have Lebesgue measure $0$. Now we define the sample version of the MSR. Let $(X_1,Y_1),\cdots,(X_n,Y_n)$ be a random sample from the joint distribution of $(X,Y)$, where $X_i\in\K\subset\R^d$ and $Y_i\in \R$. Throughout Section \ref{sec::MSR}, we assume that the density of the covariate $X$ is bounded and positive with compact support $\K$, and that the response $Y$ has a finite second moment. Let $\hat{m}_n$ be a smooth nonparametric regression estimator for $m$. We call $\hat{m}_n$ the pilot estimator. For instance, one may use the kernel regression estimator \citep{nadaraya1964estimating} $\hat{m}_n(x) = \frac{\sum_{i=1}^n Y_i K\left(\frac{x-X_i}{h}\right)}{\sum_{i=1}^n K\left(\frac{x-X_i}{h}\right)}$ as the pilot estimator. We define the $d$-cells for $\hat{m}_n$ as $\hat{E}_1,\cdots,\hat{E}_{\hat{L}}$.
Using the data $(X_i,Y_i)$ within each estimated $d$-cell, $\hat{E}_\ell$, the MSR for $\hat{m}_n$ is given by \begin{equation} \hat{m}_{n, \MSR}(x) = \hat{\mu}_\ell+\hat{\beta}_\ell^Tx, \mbox{ for }x \in \hat{E}_\ell, \end{equation} where $(\hat{\mu}_\ell,\hat{\beta}_\ell)$ are obtained by minimizing the empirical squared error: \begin{equation} \begin{aligned} (\hat{\mu}_\ell, \hat{\beta}_\ell)= \underset{\mu,\beta}{\sf \mathop{\mathrm{argmin}}} \mbox{ }\sum_{i: X_i\in \hat{E}_\ell}(Y_i- \mu-\beta^TX_i)^2. \end{aligned} \label{eq::LSE2} \end{equation} This MSR is slightly different from the original version in \cite{gerber2013morse}; we discuss the difference in Remark \ref{rm::MSR}. Computing the parameters of MSR is not very difficult: we only need to compute the cell label of each observation (this can be done by the mean shift algorithm or fast variants such as the quick-shift algorithm; see \citealt{vedaldi2008quick}) and then fit a linear regression within each cell. MSR may give low prediction error in some cases; see \cite{gerber2013morse} for concrete examples. In Theorem~\ref{thm::MSR2}, we prove that we may estimate $m_{\MSR}$ at a fast rate. Moreover, the regression function may be visualized by the methods discussed later. \begin{remark} \label{rm::MSR} The original version of Morse-Smale regression proposed in \cite{gerber2013morse} does not use the $d$-cells of a pilot nonparametric estimate $\hat{m}_n$. Instead, it directly finds local modes and minima using the original data points $(X_i,Y_i)$. This saves computational effort but comes at a price: there is no clear population quantity being estimated by their approach. That is, when the sample size increases to infinity, there is no guarantee that their method will converge. In our case, we apply a consistent pilot estimate for $m$ and construct the $d$-cells on this pilot estimate. As is shown in Theorem~\ref{thm::MSR}, our method is consistent for this population quantity. \end{remark} \subsection{Morse-Smale Signatures and Visualization} \label{sec::MSS} In this section we define a new method for visualizing multivariate functions based on the Morse-Smale complex, called \emph{Morse-Smale signatures}. The idea is very similar to Morse-Smale regression, but the signatures can be applied to any Morse-Smale function. Let $E_1,\cdots, E_K$ be the $d$-cells (non-empty intersections of a descending $d$-manifold and an ascending $0$-manifold) for a Morse-Smale function $f$ that has a compact support $\K$. The function $f$ depends on the context of the problem. For density estimation, $f$ is the density $p$ or its estimator $\hat{p}_n$. For a regression problem, $f$ is the regression function $m$ or a nonparametric estimator $\hat{m}_n$. For a two-sample test, $f$ is the density difference $p_1-p_2$ or the estimated density difference $\hat{p}_1-\hat{p}_2$. Note that $E_1,\cdots, E_K$ form a partition of $\K$ up to a set of Lebesgue measure $0$. Each cell corresponds to a unique pair of a local mode and a local minimum. Thus, the local modes and minima along with the $d$-cells form a \emph{bipartite} graph, which we call the \emph{signature graph}. The signature graph contains geometric information about $f$. See Figures~\ref{fig::ex::MSS} and \ref{Fig::ex::MSvis} for examples. The signature is defined as follows. We project the maxima and minima of the function into $\mathbb{R}^2$ using multidimensional scaling. We connect a maximum and a minimum by an edge if there exists a cell that connects them.
The width of the edge is proportional to the norm of the linear coefficients of the linear approximation to the function within the cell. The linear approximation is \begin{equation} f_{\MS}(x) = \eta^\dagger_\ell+\gamma^{\dagger T}_\ell x, \quad \mbox{for }x \in E_\ell, \end{equation} where $\eta_\ell^\dagger\in\R$ and $\gamma_\ell^\dagger \in \R^d$ are the parameters from \begin{equation} (\eta_\ell^\dagger, \gamma_\ell^\dagger) = \underset{\eta, \gamma}{\sf argmin} \int_{E_\ell} \left(f(x)-\eta-\gamma^Tx\right)^2 dx. \label{eq::MSS1} \end{equation} This is again a linear approximation to $f_\ell$ in the additive model \eqref{eq::additive}. Note that $f_{\MS}$ may not be continuous when we move from one cell to another. The summary statistics for the edge associated with cell $E_\ell$ are the parameters $(\eta_\ell^\dagger, \gamma_\ell^\dagger)$. We call the function $f_{\MS}$ the \emph{(Morse-Smale) approximation function}; it is the best piecewise-linear representation of $f$ (linear within each cell) under $\cL_2$ error given the $d$-cells. This function is well-defined except on a set of Lebesgue measure $0$ (the boundaries of the cells). See Figure~\ref{fig::ex::MSS} for an example of the approximation function. The details are in Algorithm \ref{Alg::MSS}. \begin{figure} \center \subfigure[Original function]{ \includegraphics[width=1.7 in]{figures/exsig_01_2} } \subfigure[Approximation function]{ \includegraphics[width=1.7 in]{figures/exsig_02_1} } \subfigure[Signature graph]{ \includegraphics[width=1.7 in]{figures/Bgraph3} } \caption{Morse-Smale signatures for a smooth function. (a): The original function. The blue dots are local modes, the green dots are local minima, and the pink dot is a saddle point. (b): The Morse-Smale approximation to (a). This is the best piecewise linear approximation to the original function given the $d$-cells. (c): The signature graph: a bipartite graph whose nodes are the local modes and minima and whose edges represent the $d$-cells. Note that we can summarize the smooth function in (a) by the signature graph in (c) and the parameters used to construct the approximation function in (b). The signature graph and the parameters of the approximation function define the Morse-Smale signatures. } \label{fig::ex::MSS} \end{figure} \begin{figure} \center \includegraphics[width=2.5in]{figures/cells04_3} \caption{Morse-Smale signature visualization (Algorithm \ref{Alg::MSS}) of the density difference for the GvHD dataset (see Figure~\ref{Fig::ex0}). The blue dots are local modes; the green dots are local minima; the brown lines are $d$-cells. These dots and lines form the signature graph. The width of each line indicates the $\cL_2$ norm of the corresponding regression slope, i.e. $\norm{\gamma^\dagger_\ell}$. The locations of the modes and minima are obtained by multidimensional scaling so that relative distances are preserved. } \label{Fig::ex::MSvis} \end{figure} {\bf Example.} Figure~\ref{Fig::ex::MSvis} is an example using the GvHD dataset. We first apply multidimensional scaling \citep{kruskal1964multidimensional} to the local modes and minima of $f$ and plot them on the 2-D plane. In Figure~\ref{Fig::ex::MSvis}, the blue dots are local modes and the green dots are local minima. These dots act as the nodes of the signature graph. Then we add edges, representing the cells of $f$ that connect pairs of local modes and minima, to form the signature graph. Lastly, we adjust the width of the edges according to the strength ($\cL_2$ norm) of the linear approximation within each cell (i.e.
$\norm{\gamma^\dagger_\ell}$). Algorithm \ref{Alg::MSS} provides a summary for visualizing a general multivariate function using what we described in this paragraph. \begin{algorithm} \caption{Visualization using Morse-Smale Signatures} \label{Alg::MSS} \begin{algorithmic} \State \textbf{Input:} Grid points $x_1,\cdots,x_N$ and the functional evaluations $f(x_1),\cdots,f(x_N)$. \State 1. Find local modes and minima of $f$ on the discretized points $x_1,\cdots, x_N$. Let $M_1,\cdots M_K$ and $m_1,\cdots, m_S$ denote the grid points for modes and minima. \State 2. Partition $\{x_1,\cdots, x_N\}$ into $\mathcal{X}_1,\cdots\mathcal{X}_L$ according to the $d$-cells of $f$ (1. and 2. can be done by using a k-nearest neighbor gradient ascent/descent method; see Algorithm 1 in \cite{gerber2013morse}). \State 3. For each cell $\mathcal{X}_\ell$, fit a linear regression with $(X_i, Y_i) = (x_i, f(x_i))$, where $x_i \in \mathcal{X}_\ell$. Let the regression coefficients (without intercept) be $\beta_\ell$. \State 4. Apply multidimensional scaling to modes and minima jointly. Denote their 2 dimensional representation points as $$ \{M^*_1,\cdots M^*_K, m^*_1,\cdots, m^*_S\}. $$ \State 5. Plot $\{M^*_1,\cdots M^*_K, m^*_1,\cdots, m^*_S\}$. \State 6. Add edge to a pair of mode and minimum if there exist a cell that connects them. The width of the edge is in proportional to $\norm{\beta_\ell}$ (for cell $\mathcal{X}_\ell$). \end{algorithmic} \end{algorithm} \subsection{Two Sample Comparison} \label{sec::two} The Morse-Smale complex can be used to compare two samples. There are two ways to do this. The first one is to test the difference in two density functions locally and then use the Morse-Smale signatures to visualize regions where the two samples are different. The second approach is to conduct a nonparametric two sample test within each Morse-Smale cell. The advantage of the first approach is that we obtain a visual display on where the two densities are different. The merit of the second method is that we gain additional power in testing the density difference by using the shape information. \subsubsection{Visualizing the Density Difference} Let $X_1,\ldots X_n$ and $Y_1,\ldots, Y_m$ be two random sample with densities $p_X$ and $p_Y$. In a two sample comparison, we not only want to know if $p_X=p_Y$ but we also want to find the regions that they significantly disagree. That is, we are doing the local tests \begin{equation} H_0(x): p_X(x)= p_Y(x) \end{equation} simultaneously for all $x\in\K$ and we are interested in the regions where we reject $H_0(x)$. A common approach is to estimate the density for both sample by the KDE and set a threshold to pickup those regions that the density difference is large. Namely, we first construct density estimates \begin{equation} \hat{p}_X(x) = \frac{1}{nh^d}\sum_{i=1}^nK\left(\frac{x-X_i}{h}\right), \quad \hat{p}_Y(x) = \frac{1}{mh^d}\sum_{i=1}^mK\left(\frac{x-Y_i}{h}\right) \end{equation} and then compute $\hat{f}(x) = \hat{p}_X(x)-\hat{p}_Y(x)$. The regions \begin{equation} \Gamma(\lambda) = \left\{x\in\K: |\hat{f}(x)|> \lambda\right\} \end{equation} are where we have strong evidence to reject $H_0(x)$. The threshold $\lambda$ can be picked by quantile values of the bootstrapped $\cL_{\infty}$ density deviation to control type 1 error or can be chosen by controlling the false discovery rate \citep{duong2013local}. Unfortunately, $\Gamma(\lambda)$ is hard to visualize when $d > 3$. 
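To fix ideas, a minimal sketch of the estimation and thresholding steps with a Gaussian kernel follows; the bootstrap choice of $\lambda$ appears in simplified form, and the names, grid, and number of bootstrap replicates are illustrative choices.
\begin{verbatim}
import numpy as np

def kde(x, data, h):
    """Gaussian-kernel density estimate at the rows of x."""
    d = data.shape[1]
    d2 = ((x[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    norm = len(data) * (np.sqrt(2 * np.pi) * h) ** d
    return np.exp(-d2 / (2 * h ** 2)).sum(1) / norm

def density_difference(x, X, Y, h):
    return kde(x, X, h) - kde(x, Y, h)

def bootstrap_threshold(x, X, Y, h, B=200, alpha=0.95, seed=0):
    """lambda = alpha-quantile of sup_x |f*_b(x) - f_hat(x)| over
    bootstrap resamples, evaluated on the grid x."""
    rng = np.random.default_rng(seed)
    f_hat = density_difference(x, X, Y, h)
    devs = []
    for _ in range(B):
        Xb = X[rng.integers(len(X), size=len(X))]
        Yb = Y[rng.integers(len(Y), size=len(Y))]
        devs.append(np.abs(density_difference(x, Xb, Yb, h) - f_hat).max())
    return np.quantile(devs, alpha)

# Gamma(lambda) is then {x on the grid : |f_hat(x)| > lambda}, a
# subset of R^d that admits no direct plot when d > 3.
\end{verbatim}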
So we use the Morse-Smale complex of $\hat{f}$ and visualize $\Gamma(\lambda)$ by its behavior on the $d$-cells of the complex. Algorithm \ref{Alg::vis} gives a method for visualizing density differences like $\Gamma(\lambda)$ in the context of comparing two independent samples.
\begin{algorithm}
\caption{Visualization For Two Sample Test}
\label{Alg::vis}
\begin{algorithmic}
\State \textbf{Input:} Sample 1: $\{ X_1,...X_n\}$, Sample 2: $\{Y_1,\cdots, Y_m\}$, threshold $\lambda$, and radius constant $r_0$.
\State 1. Compute the density estimates $\hat{p}_X$ and $\hat{p}_Y$.
\State 2. Compute the difference function $\hat{f} = \hat{p}_X-\hat{p}_Y$ and the significant regions
\begin{equation}
\Gamma^+(\lambda) =\left \{x\in\K: \hat{f}(x)>\lambda\right\},\quad \Gamma^-(\lambda) = \left\{x\in\K: \hat{f}(x)< -\lambda\right\}
\end{equation}
\State 3. Find the $d$-cells of $\hat{f}$, denoted by $E_1,\cdots, E_L$.
\State 4. For each cell $E_\ell$, do (4-1) and (4-2):
\State 4-1. Compute the cell center $e_\ell$ and the cell size $V_\ell = \Vol(E_\ell)$.
\State 4-2. Compute the positive and negative significant ratios
\begin{equation}
r^+_\ell = \frac{\Vol(E_\ell \cap \Gamma^+(\lambda))}{\Vol (E_\ell)}, \quad r^-_\ell = \frac{\Vol(E_\ell \cap \Gamma^-(\lambda))}{\Vol (E_\ell)}.
\end{equation}
\State 5. For every pair of cells $E_j$ and $E_\ell$ $(j\neq \ell)$, compute the shared boundary size:
\begin{equation}
B_{j\ell} = \Vol_{d-1} (\bar{E}_j\cap \bar{E}_\ell),
\end{equation}
where $\Vol_{d-1}$ is the $(d-1)$-dimensional Lebesgue measure.
\State 6. Apply multidimensional scaling \citep{kruskal1964multidimensional} to $e_1,\cdots, e_L$ to obtain the low-dimensional representations $\tilde{e}_1,\cdots,\tilde{e}_L$.
\State 7. Place a ball centered at each $\tilde{e}_\ell$ with radius $r_0\times\sqrt{V_\ell}$.
\State 8. If $r^+_\ell + r^-_\ell>0$, add a pie chart centered at $\tilde{e}_\ell$ with radius $r_0\times\sqrt{V_\ell}\times(r^+_\ell + r^-_\ell)$. The pie chart contains two groups with ratios $\left(\frac{r^+_\ell }{r^+_\ell + r^-_\ell}, \frac{ r^-_\ell}{r^+_\ell + r^-_\ell}\right)$.
\State 9. Add a line connecting two nodes $\tilde{e}_j$ and $\tilde{e}_\ell$ if $B_{j\ell}>0$. We may adjust the thickness of the line according to $B_{j\ell}$.
\end{algorithmic}
\end{algorithm}
An example of Algorithm \ref{Alg::vis} is in Figure~\ref{Fig::ex0}, in which we apply the visualization algorithm to the GvHD dataset using the kernel density estimator. We choose the threshold $\lambda$ by bootstrapping the $\cL_{\infty}$ difference for $\hat{f}$, i.e., $\sup_x |\hat{f}^*(x)-\hat{f}(x)|$, where $\hat{f}^*$ is the density difference for the bootstrap sample. We pick the $\alpha=95\%$ upper quantile of the bootstrap deviation as the threshold. The radius constant $r_0$ is defined by the user; it is a constant for visualization and does not affect the analysis. Algorithm \ref{Alg::vis} preserves the relative position of each cell and visualizes each cell according to its size. The pie chart gives the proportion of each cell where the two densities are significantly different. The lines connecting two cells provide geometric information about how the cells are connected to each other. By applying Algorithm \ref{Alg::vis} to the GvHD dataset (Figure~\ref{Fig::ex0}), we find that there are six cells, one of which is much larger than the others. Moreover, in most cells, the blue regions are larger than the red regions.
This indicates that, compared to the density of the control group, the density of the GvHD group seems to concentrate more, so that the regions above the threshold are larger.
\subsubsection{Morse-Smale Two-Sample Test}
Here we introduce a technique combining the energy test \citep{baringhaus2004new,szekely2004testing,szekely2013energy} and the Morse-Smale complex to conduct a two-sample test. We call our method the \emph{Morse-Smale Energy test (MSE test)}. The advantage of the MSE test is that it is nonparametric and its power can be higher than that of the energy test; see Figure~\ref{Fig::ex_GMM}. Moreover, we can combine our test with the visualization tool proposed in the previous section (Algorithm \ref{Alg::vis}); see Figure~\ref{Fig::p_values} for an example displaying p-values from the MSE test when visualizing the density difference. Before we introduce our method, we first review the ordinary energy test. Given two random variables $X\in\R^d$ and $Y\in\R^d$, the energy distance is defined as
\begin{equation}
\cE(X,Y) = 2\E\norm{X-Y} -\E\norm{X-X'}-\E\norm{Y-Y'},
\end{equation}
where $X'$ and $Y'$ are iid copies of $X$ and $Y$. The energy distance has several useful applications, such as goodness-of-fit testing \citep{szekely2005new}, two-sample testing \citep{baringhaus2004new,szekely2004testing,szekely2013energy}, clustering \citep{szekely2005hierarchical}, and distance components \citep{rizzo2010disco}, to name but a few. We refer to \cite{szekely2013energy} for an excellent review. For the two-sample test, let $X_1,\cdots,X_n$ and $Y_1,\cdots,Y_m$ be the two samples we want to test. The sample version of the energy distance is
\begin{equation}
\hat{\cE}(X,Y) = \frac{2}{nm}\sum_{i=1}^n \sum_{j=1}^m \norm{X_i-Y_j} - \frac{1}{n^2}\sum_{i=1}^n \sum_{j=1}^n \norm{X_i-X_j} - \frac{1}{m^2}\sum_{i=1}^m \sum_{j=1}^m \norm{Y_i-Y_j}.
\end{equation}
If $X$ and $Y$ are from the same population (the same density), $\hat{\cE}(X,Y)\overset{P}{\rightarrow} 0$. Numerically, we use the permutation test to compute the p-value for $\hat{\cE}(X,Y)$; this can be done quickly with the R package `energy' \citep{rizzo2008energy}. Now we formally introduce our testing procedure, the MSE test (see Algorithm \ref{Alg::two} for a summary). Our test consists of three steps. First, we split the data into two halves. Second, we use one half of the data (containing both samples) to construct nonparametric density estimates (e.g., KDEs) and then compute the Morse-Smale complex ($d$-cells) of their difference. Last, we use the other half of the data to conduct the energy-distance two-sample test `within each $d$-cell'. That is, we partition the second half of the data by the $d$-cells and do the energy-distance test within each cell. If we have $L$ cells, we obtain $L$ p-values from the energy-distance test; we reject $H_0$ if any one of the $L$ p-values is smaller than $\alpha/L$ (the Bonferroni correction). Figure~\ref{Fig::p_values} provides an example of using this procedure (Algorithm \ref{Alg::two}) along with the visualization method proposed in Algorithm \ref{Alg::vis}. Data splitting is used to avoid using the same data twice, which ensures that we have a valid test.
\begin{algorithm}
\caption{Morse-Smale Energy Test (MSE test)}
\label{Alg::two}
\begin{algorithmic}
\State \textbf{Input:} Sample 1: $\{ X_1,...X_n\}$, Sample 2: $\{Y_1,\cdots, Y_m\}$, smoothing parameter $h$, significance level $\alpha$.
\State 1.
Randomly split the data into halves $\cD_1$ and $\cD_2$, each containing equal numbers of $X$'s and $Y$'s (assuming $n$ and $m$ are even).
\State 2. Compute the KDEs $\hat{p}_X$ and $\hat{p}_Y$ from the first sample $\cD_1$.
\State 3. Find the $d$-cells of $\hat{f}=\hat{p}_X-\hat{p}_Y$, denoted by $E_1,\cdots, E_L$.
\State 4. For each cell $E_\ell$, do 4-1 and 4-2:
\State 4-1. Find the $X$'s and $Y$'s in the second sample $\cD_2$ that fall in $E_\ell$.
\State 4-2. Apply the energy two-sample test to them; let the p-value be $p(\ell)$.
\State 5. Reject $H_0$ if $p(\ell)<\alpha/L$ for some $\ell$.
\end{algorithmic}
\end{algorithm}
{\bf Example.} Figure~\ref{Fig::ex_GMM} shows a simple comparison of the proposed MSE test with the usual energy test. We consider a $K=4$ Gaussian mixture model in $d=2$ dimensions, with each component having the same standard deviation $\sigma=0.2$ and with component proportions $(0.2, 0.5, 0.2, 0.1)$. The left panel displays a sample with $N=500$ from this mixture distribution. We draw the first sample from this Gaussian mixture model. For the second sample, we draw from a similar Gaussian mixture model except that we change the standard deviation of one component. In the middle panel, we change the standard deviation of the third component (C3 in the left panel, which contains $20\%$ of the data points). In the right panel, we change the standard deviation of the fourth component (C4 in the left panel, which contains $10\%$ of the data points). We use significance level $\alpha=0.05$; for the MSE test, we apply the Bonferroni correction, and the smoothing bandwidth is chosen by Silverman's rule of thumb \citep{Silverman1986}. Note that in both the middle and the right panels, the leftmost case (no added deviation) is where $H_0$ should not be rejected. As can be seen from Figure~\ref{Fig::ex_GMM}, the MSE test has much higher power than the usual energy test. This is because the two distributions differ only in a small portion of the space, so a global test like the energy test requires large sample sizes to detect the difference. The MSE test, on the other hand, partitions the space according to the density difference, so it is capable of detecting the local difference.
\begin{figure}
\center
\includegraphics[width=1.5 in]{figures/GMM_ex500}
\includegraphics[width=1.5 in]{figures/GMM_C3_02}
\includegraphics[width=1.5 in]{figures/GMM_C4_02}
\caption{An example comparing the Morse-Smale Energy test to the original energy test. We consider a $d=2$, $K=4$ Gaussian mixture model. Left panel: an instance of the Gaussian mixture. We have four mixture components, denoted C1, C2, C3, and C4. They have equal standard deviation ($\sigma=0.2$) and the proportions of the components are $(0.2, 0.5, 0.2, 0.1)$. Middle panel: we change the standard deviation of component C3 to $0.3, 0.4$, and $0.5$ and compute the power of the MSE test and the usual energy test at sample sizes $N=500$ and $1000$. (Standard deviation equal to $0.2$ is where $H_0$ should not be rejected.) Right panel: we increase the standard deviation of component C4 (the smallest component) and do the same comparison as in the middle panel. We pick the significance level $\alpha=0.05$ (gray horizontal line), and in the MSE test we reject $H_0$ if the minimal p-value is less than $\alpha/L$, where $L$ is the number of cells (i.e. we are using the Bonferroni correction).
} \label{Fig::ex_GMM} \end{figure}
{\bf Example.} In addition to the higher power, we may combine the MSE test with the visualization tool in Algorithm~\ref{Alg::vis}. Figure~\ref{Fig::p_values} displays an example where we visualize the density difference and simultaneously indicate the p-value of the energy test within each cell, using the GvHD dataset. This provides more information about how the two distributions differ from each other.
\begin{figure}
\center
\includegraphics[scale=0.3]{figures/cells_E01}
\caption{An example applying both Algorithms \ref{Alg::vis} and \ref{Alg::two} to the GvHD dataset introduced in Figure~\ref{Fig::ex0}. We use data splitting as described in Algorithm \ref{Alg::two}. For the first part of the data, we compute the cells and visualize them using Algorithm \ref{Alg::vis}. Then we apply the energy-distance two-sample test to each cell as described in Algorithm \ref{Alg::two} and annotate each cell with a p-value. Note that the visualization is slightly different from Figure~\ref{Fig::ex0} since we use only half of the original dataset in this case.}
\label{Fig::p_values}
\end{figure}

\section{Theoretical Analysis}
\label{sec::thm}
We first define some notation for the theoretical analysis. Let $f$ be a smooth function. We define $\|f\|_{\infty} = \sup_x |f(x)|$ to be the $\cL_\infty$-norm of $f$. In addition, let $\norm{f}_{j,\max}$ denote the elementwise $\cL_{\infty}$-norm of the $j$-th derivatives of $f$. For instance,
$$
\norm{f}_{1,\max} = \max_i \|g_i(x)\|_\infty, \quad\norm{f}_{2,\max} = \max_{i,j}\|H_{ij}(x)\|_\infty,
$$
where $g$ and $H$ denote the gradient and Hessian of $f$. We also define $\|f\|_{0,\max} = \|f\|_{\infty}$. We further define
\begin{equation}
\norm{f}^*_{\ell, \max} = \max\left\{\norm{f}_{j,\max}: j=0,\cdots, \ell\right\}.
\label{eq::norm}
\end{equation}
The quantity $\norm{f-h}^*_{\ell,\max}$ measures the difference between two functions $f$ and $h$ up to their $\ell$-th order derivatives. For two sets $A,B$, the Hausdorff distance is
\begin{equation}
\Haus(A,B) = \inf\{r: A\subset B\oplus r, B\subset A\oplus r\},
\label{eq::Haus}
\end{equation}
where $A\oplus r = \{y: \min_{x\in A}\norm{x-y}\leq r\}$. The Hausdorff distance is like the $\cL_\infty$ distance for sets. Let $\tilde{f}:\mathbb{K}\subset\mathbb{R}^d\mapsto \mathbb{R}$ be a smooth function with bounded third derivatives. Note that as long as $\norm{\tilde{f}-f}^*_{3,\max}$ is small, $\tilde{f}$ is also a Morse function by Lemma~\ref{lem::critical}. Let $\tilde{D}$ denote the boundaries of the descending $d$-manifolds of $\tilde{f}$. We will show that if $\norm{f-\tilde{f}}^*_{3,\max}$ is sufficiently small, then $\Haus(\tilde{D},D) = O(\norm{\tilde{f}-f}_{1,\max})$.

\subsection{Stability of the Morse-Smale Complex}
\label{sec::thm::stability}
Before we state our theorem, we first derive some properties of descending manifolds. Recall that we are interested in $B=\partial D_d$, the boundary of the descending $d$-manifolds ($B$ is also the union of all $j$-descending manifolds for $j<d$). Since each $D_j$ is a collection of smooth $j$-dimensional manifolds embedded in $\mathbb{R}^d$, for every $x\in D_j$ there exists a basis $v_{1}(x),\cdots,v_{d-j}(x)$ such that each $v_k(x)$, $k=1,\cdots, d-j$, is perpendicular to $D_j$ at $x$ \citep{bredon1993topology,helgason1979differential}. That is, $v_{1}(x),\cdots,v_{d-j}(x)$ span the normal space to $D_j$ at $x$. For simplicity, we write
\begin{equation}
V(x) = (v_{1}(x),\cdots,v_{d-j}(x))\in\mathbb{R}^{d\times (d-j)}
\end{equation}
for $x\in D_j$.
Note that the number of columns $d-j\equiv d-j(x)$ of $V(x)$ depends on which $D_j$ the point $x$ belongs to; we write $j$ rather than $j(x)$ to simplify the notation. For instance, if $x\in D_1$, then $V(x) \in\mathbb{R}^{d\times (d-1)}$, and if $x\in D_{d-1}$, then $V(x)\in \mathbb{R}^{d\times 1}$. We also let
\begin{equation}
\mathbb{V}(x) = {\rm span}\{v_{1}(x),\cdots,v_{d-j}(x)\}
\end{equation}
denote the normal space to $B$ at $x$. One can view $\mathbb{V}(x)$ as the normal map of the manifold $D_j$ at $x\in D_j$. For each $x\in B$, define the projected Hessian
\begin{equation}
H_V(x) = V(x)^TH(x)V(x),
\end{equation}
which is the Hessian matrix obtained by taking derivatives along the column space of $V(x)$. If $x\in D_j$, $H_V(x)$ is a $(d-j)\times (d-j)$ matrix. The eigenvalues of $H_V(x)$ determine how the gradient flows move away from $B$. We let $\lambda_{\min}(M)$ be the smallest eigenvalue of a symmetric matrix $M$; if $M$ is a scalar, then $\lambda_{\min}(M)=M$.

\vspace{1cm}
\noindent {\bf Assumption (D):} We assume that $H_{\min} = \min_{x\in B} \lambda_{\min} (H_V(x))>0.$
\vspace{1cm}

This assumption is very mild; it requires that the gradient flow moves away from the boundary of the descending $d$-manifolds. In terms of mode clustering, this requires the gradient flow to move away from the boundaries of clusters. For a point $x\in D_{d-1}$, let $v_1(x)$ be the corresponding normal direction. Then the gradient $g(x)$ is normal to $v_1(x)$ by definition. That is, $v_1(x)^T g(x) =v_1(x)^T\nabla p(x)=0$, which means that the gradient along $v_1(x)$ is $0$. Assumption (D) means that the second derivative along $v_1(x)$ is positive, which implies that the density along direction $v_1(x)$ behaves like a local minimum at the point $x$. Intuitively, this is how we expect the density to behave around the boundaries: gradient flows move away from the boundaries (except for those flows that are already on the boundaries).
\begin{thm}[Stability of descending $d$-manifolds]
Let $f,\tilde{f}:\mathbb{K}\subset\mathbb{R}^d\mapsto \mathbb{R}$ be two smooth functions with bounded third derivatives defined as above, and let $B,\tilde{B}$ be the boundaries of the associated descending $d$-manifolds. Assume $f$ is a Morse function satisfying condition {\bf (D)}. When $\norm{f-\tilde{f}}^*_{3,\max}$ is sufficiently small,
\begin{equation}
\Haus(\tilde{B},B) = O(\norm{\tilde{f}-f}_{1,\max}).
\end{equation}
\label{thm::Haus}
\end{thm}
This theorem shows that the boundaries of the descending $d$-manifolds of two nearby Morse functions are close to each other, and that the difference between the boundaries is controlled by the difference of the first derivatives. Analogously to descending manifolds, we can define all the corresponding quantities for ascending manifolds. We introduce the following assumption:

\vspace{1cm}
{\bf Assumption (A):} We assume $H_{\max} = \max_{x\in \partial A_0} \lambda_{\max} (H_V(x)) < 0.$
\vspace{1cm}

Here $\lambda_{\max}(M)$ denotes the largest eigenvalue of a matrix $M$; if $M$ is a scalar, $\lambda_{\max}(M)=M$. Under assumption (A), we have a stability result for ascending manifolds analogous to Theorem~\ref{thm::Haus}. Assumptions (A) and (D) together imply the stability of the $d$-cells. Theorem~\ref{thm::Haus} can be applied to nonparametric density estimation. Our goal is to estimate the boundary of the descending $d$-manifolds, $B$, of the unknown population density function $p$.
Our estimator is $\hat{B}_n$, the boundary of the descending $d$-manifolds of a nonparametric density estimator, e.g., the kernel density estimator $\hat{p}_n$. Then, under certain regularity conditions, their difference is
$$
\Haus \left(\hat{B}_n, B\right) = O\left(\norm{\hat{p}_n-p}_{1,\max}\right).
$$
We will see this result in the next section when we discuss mode clustering. Similar reasoning works for nonparametric regression. Assume that we are interested in $B$, the boundary of the descending $d$-manifolds of the regression function $m(x) = \mathbb{E}(Y|X=x)$, and that our estimator $\hat{B}_n$ is again a plug-in estimate based on $\hat{m}_n(x)$, a nonparametric regression estimator (e.g., a kernel estimator). Then, under mild regularity conditions,
$$
\Haus \left(\hat{B}_n, B\right) = O\left(\norm{\hat{m}_n-m}_{1,\max}\right).
$$

\subsection{Consistency of Mode Clustering}\label{sec::thm::MC}
A direct application of Theorem~\ref{thm::Haus} is the consistency of mode clustering. Let $K^{(\alpha)}$ be the $\alpha$-th derivative of $K$ and let $\mathbf{BC}^r$ denote the collection of functions with bounded, continuous derivatives up to the $r$-th order. We consider the following two assumptions on the kernel function:
\begin{itemize}
\item[\bf(K1)] The kernel function $K\in\mathbf{BC}^3$ and is symmetric, non-negative, and
$$\int x^2K^{(\alpha)}(x)dx<\infty,\qquad \int \left(K^{(\alpha)}(x)\right)^2dx<\infty $$
for all $\alpha=0,1,2,3$.
\item[\bf(K2)] The kernel function satisfies condition $K_1$ of \cite{Gine2002}. That is, there exist some $A,v>0$ such that for all $0<\epsilon<1$, $\sup_Q N(\mathcal{K}, L_2(Q), C_K\epsilon)\leq \left(\frac{A}{\epsilon}\right)^v,$ where $N(T,d,\epsilon)$ is the $\epsilon$-covering number for a semi-metric space $(T,d)$ and
$$
\mathcal{K} = \Biggl\{u\mapsto K^{(\alpha)}\left(\frac{x-u}{h}\right) : x\in\R^d, h>0,|\alpha|=0,1,2\Biggr\}.
$$
\end{itemize}
(K1) is a common assumption; see \cite{wasserman2006all}. (K2) is a weak assumption that guarantees the consistency of the KDE under the $\cL_{\infty}$ norm; it first appeared in \cite{Gine2002} and has been widely assumed \citep{Einmahl2005,rinaldo2010generalized, genovese2012geometry, rinaldo2012stability,genovese2014nonparametric, chen2015asymptotic}.
\begin{thm}[Consistency of mode clustering]
Let $p$ and $\hat{p}_n$ be the density function and the KDE, and let $B$ and $\hat{B}_n$ be the boundaries of the clusters from mode clustering on $p$ and $\hat{p}_n$, respectively. Assume (D) for $p$ and (K1--2). Then, when $\frac{\log n }{nh^{d+6}}\rightarrow 0, h\rightarrow 0$,
$$
\Haus\left(\hat{B}_n,B\right) = O(\norm{\hat{p}_n-p}_{1,\max}) = O(h^2) + O_{\P}\left(\sqrt{\frac{\log (n)}{nh^{d+2}}}\right).
$$
\label{thm::mode}
\end{thm}
The proof simply combines Theorem~\ref{thm::Haus} with the rate of convergence for estimating the gradient of a density by the KDE (Theorem~\ref{thm::KDE}), so we omit it. Theorem~\ref{thm::mode} bounds the rate of convergence of the cluster boundaries in mode clustering. The rate can be decomposed into two parts: the bias $O(h^2)$ and the (square root of the) variance $O_{\P}\left(\sqrt{\frac{\log (n)}{nh^{d+2}}}\right)$. This is the same rate as the $\cL_{\infty}$-loss for estimating the gradient of a density function, which makes sense since mode clustering is completely determined by the gradient of the density.
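Since the clustering is determined entirely by the gradient, the cluster assignments can be computed by the mean shift algorithm. A minimal sketch with a Gaussian kernel is given below; the iteration cap, tolerance, and mode-merging radius are illustrative choices, not prescriptions from our theory.
\begin{verbatim}
import numpy as np

def mean_shift(data, h, iters=500, tol=1e-7):
    """Iterate the mean shift map, which ascends the Gaussian KDE;
    returns the estimated modes and one cluster label per point."""
    x = data.copy()
    for _ in range(iters):
        d2 = ((x[:, None, :] - data[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * h ** 2))
        x_new = (w[:, :, None] * data[None, :, :]).sum(1)
        x_new /= w.sum(1, keepdims=True)
        done = np.abs(x_new - x).max() < tol
        x = x_new
        if done:
            break
    # merge destinations within a small radius into common modes
    modes, labels = [], np.zeros(len(x), dtype=int)
    for i, xi in enumerate(x):
        for k, m in enumerate(modes):
            if np.linalg.norm(xi - m) < h / 10:
                labels[i] = k
                break
        else:
            labels[i] = len(modes)
            modes.append(xi)
    return np.array(modes), labels
\end{verbatim}
Running this on the sample gives the partition induced by $\hat{p}_n$; comparing it with the partition induced by $p$ leads to the Rand index defined next.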
Another way to describe the consistency of mode clustering is to show that the proportion of data points that are \emph{incorrectly clustered (mis-clustered)} converges to $0$. This can be quantified by the Rand index \citep{rand1971objective, hubert1985comparing, vinh2009information}, which measures the similarity between two partitions of the data points. Let $\dest(x)$ and $\hat{\dest}_n(x)$ be the destinations of the gradient ascent flows of the true density $p$ and the KDE $\hat{p}_n$, respectively. For a pair of points $x,y$, we define
\begin{equation}
\Psi(x,y) = \left\{
\begin{array}{l l}
1& \quad \text{if $\dest(x)=\dest(y)$}\\
0 & \quad \text{if $\dest(x)\neq \dest(y)$}
\end{array}
\right. ,\quad
\hat{\Psi}_n(x,y) = \left\{
\begin{array}{l l}
1& \quad \text{if $\hat{\dest}_n(x)=\hat{\dest}_n(y)$}\\
0 & \quad \text{if $\hat{\dest}_n(x)\neq \hat{\dest}_n(y)$}
\end{array}
\right.
\label{eq::rand1}
\end{equation}
Thus, $\Psi(x,y)=1$ if $x,y$ are in the same cluster and $0$ if they are not. The Rand index for mode clustering using $p$ versus using $\hat{p}_n$ is
\begin{equation}
\rand\left(\hat{p}_n,p\right) = 1 - {n \choose 2}^{-1}\sum_{i\neq j}\left|\Psi(X_i,X_j)-\hat{\Psi}_n(X_i,X_j)\right|,
\label{eq::rand2}
\end{equation}
which is one minus the proportion of pairs of data points on which the two clusterings disagree. If the two clusterings output the same partition, the Rand index is $1$.
\begin{thm}[Bound on Rand Index]
Assume (D) for $p$ and (K1--2). Then, when $\frac{\log n }{nh^{d+6}}\rightarrow 0, h\rightarrow 0$, the Rand index satisfies
$$
\rand\left(\hat{p}_n,p\right)= 1-O(h^2) - O_{\P}\left(\sqrt{\frac{\log (n)}{nh^{d+2}}}\right).
$$
\label{thm::number}
\end{thm}
Theorem~\ref{thm::number} shows that the Rand index converges to $1$ in probability, which establishes the consistency of mode clustering in an alternative way: the proportion of data points that are incorrectly assigned (compared with mode clustering using the population $p$) is asymptotically bounded by $O(h^2) + O_{\P}\left(\sqrt{\frac{\log (n)}{nh^{d+2}}}\right)$. \cite{azizyan2015risk} also derived a convergence rate for mode clustering under the Rand index; we briefly compare our result to theirs. They consider a low-noise condition that leads to a fast convergence rate when the clusters are well separated, and their approach can even be applied to the case of increasing dimension. In Theorem~\ref{thm::number} we consider a fixed-dimension scenario but do not assume the low-noise condition. The main difference is thus the assumptions being made, so our result complements the findings in \cite{azizyan2015risk}.

\subsection{Consistency of Morse-Smale Regression}\label{sec::thm::MSR}
In what follows, we show that $\hat{m}_{n, \MSR}(x)$ is a consistent estimator of $m_{\MSR}(x)$. Recall that
\begin{equation}
m_{\MSR}(x) = \mu_{ \ell} + \beta_{\ell}^T x, \mbox{ for }x \in E_{\ell},
\end{equation}
where $E_{\ell}$ is the $d$-cell defined by $m$ and the parameters are
\begin{equation}
\begin{aligned}
(\mu_{\ell}, \beta_{\ell}) &= \underset{\mu,\beta}{\sf \mathop{\mathrm{argmin}}} \mbox{ }\mathbb{E}\left((Y- \mu-\beta^TX)^2|X\in E_{\ell}\right).
\end{aligned}
\label{eq::LSE3}
\end{equation}
The estimator $\hat{m}_{n,\MSR}$ is the two-stage estimator of $m_{\MSR}$ defined by
\begin{equation}
\hat{m}_{n, \MSR}(x) = \hat{\mu}_\ell+\hat{\beta}_\ell^Tx, \mbox{ for }x \in \hat{E}_\ell,
\end{equation}
where $\{\hat{E}_\ell: \ell=1,\cdots, \hat{L}\}$ is the collection of cells of the pilot nonparametric regression estimator $\hat{m}_n$ and $\hat{\mu}_\ell, \hat{\beta}_\ell$ are the regression parameters from equation \eqref{eq::LSE2}:
\begin{equation}
\begin{aligned}
(\hat{\mu}_\ell, \hat{\beta}_\ell)= \underset{\mu,\beta}{\sf\mathop{\mathrm{argmin}}} \mbox{ }\sum_{i: X_i\in \hat{E}_\ell}(Y_i- \mu-\beta^TX_i)^2.
\end{aligned}
\end{equation}
\begin{thm}[Consistency of Morse-Smale Regression]
Assume (A) and (D) for $m$ and assume $m$ is a Morse-Smale function. Then, when $\frac{\log n }{nh^{d+6}}\rightarrow 0, h\rightarrow 0$, we have
\begin{equation}
\left|m_{\MSR}(x)-\hat{m}_{n,\MSR}(x)\right| = O_{\P}\left(\frac{1}{\sqrt{n}}\right) + O\left(\norm{\hat{m}_n-m}_{1,\max}\right)
\label{eq::thm::MSR1}
\end{equation}
uniformly for all $x$ except for a set $\mathbb{N}_n$ with Lebesgue measure $O_\P(\norm{\hat{m}_n-m}_{1,\max})$.
\label{thm::MSR}
\end{thm}
Theorem~\ref{thm::MSR} states that when we have a consistent pilot nonparametric regression estimator (such as the kernel regression), the proposed MSR estimator converges to the population MSR. As in Theorem~\ref{thm::MSS}, the set $\mathbb{N}_n$ consists of regions around the boundaries of the cells, where the host cell of a point cannot be distinguished. Note that when we use kernel regression as the pilot estimator $\hat{m}_n$, the bound in Theorem~\ref{thm::MSR} becomes
$$
\left|m_{\MSR}(x)-\hat{m}_{n,\MSR}(x)\right| = O(h^2)+O_{\P}\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)
$$
under standard smoothness conditions. Now we consider a special case in which we obtain a parametric rate of convergence for estimating $m_{\MSR}$. Let $\mathcal{E} = \partial \left(E_1\bigcup\cdots\bigcup E_L\right)$ be the boundaries of all cells. We consider the following low-noise condition:
\begin{equation}
\P\left(X\in \mathcal{E}\oplus \epsilon\right) \leq A \epsilon^\beta,
\label{eq::low_noise}
\end{equation}
for some $A,\beta>0$. Equation \eqref{eq::low_noise} is Tsybakov's low-noise condition \citep{audibert2007fast} applied to the boundaries of the cells: it states that it is unlikely to observe many data points near the boundaries of the cells of $m$. Under this condition, we obtain the following result using kernel regression.
\begin{thm}[Fast Rate of Convergence for Morse-Smale Regression]
Let the pilot estimator $\hat{m}_n$ be the kernel regression estimator. Assume (A) and (D) for $m$ and assume $m$ is a Morse-Smale function. Assume also that \eqref{eq::low_noise} holds for the covariate $X$, that (K1--2) holds for the kernel function, and that $h = O\left(\left(\frac{\log n}{n}\right)^{1/(d+6)}\right)$. Then, uniformly for all $x$ except for a set $\mathbb{N}_n$ with Lebesgue measure $O_\P\left(\left(\frac{\log n}{n}\right)^{2/(d+6)}\right)$,
\begin{equation}
\left|m_{\MSR}(x)-\hat{m}_{n,\MSR}(x)\right| = O_{\P}\left(\frac{1}{\sqrt{n}}\right) + O_{\P}\left(\left(\frac{\log n}{n}\right)^{2\beta/(d+6)}\right).
\label{eq::thm::MSR2}
\end{equation}
Therefore, when $\beta>\frac{6+d}{4}$, we have
\begin{equation}
\left|m_{\MSR}(x)-\hat{m}_{n,\MSR}(x)\right| = O_{\P}\left(\frac{1}{\sqrt{n}}\right).
\label{eq::thm::MSR3}
\end{equation}
\label{thm::MSR2}
\end{thm}
Theorem~\ref{thm::MSR2} shows that when the low-noise condition holds, we obtain a fast rate of convergence for estimating $m_{\MSR}$. Note that the pilot estimator $\hat{m}_n$ does not have to be a kernel estimator; other approaches, such as local polynomial regression, also work.

\subsection{Consistency of the Morse-Smale Signature}\label{sec::thm::MSS}
Another application of Theorem~\ref{thm::Haus} is to bound the difference between two Morse-Smale signatures. Let $f$ be a Morse-Smale function with cells $E_1,\ldots, E_L$. Recall that the Morse-Smale signatures are the bipartite graph and the summary statistics (locations, density values) for the local modes, local minima, and cells. It is known in the literature (see, e.g., Lemma \ref{lem::critical}) that when two functions $\tilde{f},f$ are sufficiently close,
\begin{equation}
\max_j\|\tilde{c}_j-c_j\| = O\left(\|\tilde{f}-f\|_{1,\max}\right),\quad \max_j\|\tilde{f}(\tilde{c}_j)-f(c_j)\| = O\left(\|\tilde{f}-f\|_{\infty}\right),
\label{eq::critical}
\end{equation}
where $\tilde{c}_j, c_j$ are the critical points of $\tilde{f}$ and $f$, respectively. This implies the stability of the local modes and minima. What remains is the stability of the summary statistics $(\eta_\ell^\dagger, \gamma_\ell^\dagger)$ associated with the edges (cells). Recall that these summaries are defined through \eqref{eq::MSS1}:
\begin{equation*}
(\eta_\ell^\dagger, \gamma_\ell^\dagger) = \underset{\eta, \gamma}{\sf argmin} \int_{E_\ell} \left(f(x)-\eta-\gamma^Tx\right)^2 dx.
\end{equation*}
For another function $\tilde{f}$, let $(\tilde{\eta}_\ell^\dagger, \tilde{\gamma}_\ell^\dagger)$ be its signatures for cell $\tilde{E}_\ell$. The following theorem shows that if two functions are close, their corresponding Morse-Smale signatures are also close.
\begin{thm}
Let $f$ be a Morse-Smale function satisfying assumptions (A) and (D), and let $\tilde{f}$ be a smooth function. Then, when $\norm{\tilde{f}-f}^*_{3,\max}$ is sufficiently small, after relabeling the indices of the cells of $\tilde{f}$,
$$
\max_\ell\left\{\|\tilde{\eta}_\ell^\dagger-\eta_\ell^\dagger\|, \|\tilde{\gamma}_\ell^\dagger-\gamma_\ell^\dagger\|\right\} = O\left(\norm{\tilde{f}-f}^*_{1,\max}\right).
$$
\label{thm::MSS}
\end{thm}
Theorem~\ref{thm::MSS} shows the stability of the signatures $(\eta^\dagger_\ell, \gamma^\dagger_\ell)$. Note that Theorem~\ref{thm::MSS} also implies the stability of the piecewise approximation:
$$
|f_{\MS}(x)-\tilde{f}_{\MS}(x)| = O\left(\norm{\tilde{f}-f}^*_{1,\max}\right).
$$
Together with the stability of critical points \eqref{eq::critical}, Theorem~\ref{thm::MSS} proves the stability of the Morse-Smale signatures.

\subsubsection{Example: Morse-Smale Density Estimation}
As an example for Theorem~\ref{thm::MSS}, we consider density estimation. Let $p$ be the density of the random sample $X_1,\cdots,X_n$ and recall that $\hat{p}_n$ is the kernel density estimator. Let $(\eta_\ell^\dagger, \gamma_\ell^\dagger)$ be the signature of $p$ in cell $E_\ell$ and $(\hat{\eta}_\ell^\dagger, \hat{\gamma}_\ell^\dagger)$ be the signature of $\hat{p}_n$ in cell $\hat{E}_\ell$. The following corollary guarantees the consistency of the Morse-Smale signatures of the KDE.
\begin{cor}
Assume (A,D) holds for $p$ and the kernel function satisfies (K1--2).
Then, when $\frac{\log n }{nh^{d+6}}\rightarrow 0, h\rightarrow 0$, after relabeling we have
$$
\max_\ell\left\{\|\hat{\eta}_\ell^\dagger-\eta_\ell^\dagger\|, \|\hat{\gamma}_\ell^\dagger-\gamma_\ell^\dagger\|\right\} = O(h^2)+O_{\P}\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right).
$$
\label{thm::density}
\end{cor}
The proof of Corollary~\ref{thm::density} is a simple application of Theorem~\ref{thm::MSS} together with the rate of convergence for the first derivative of the KDE (Theorem~\ref{thm::KDE}), so we omit it. The optimal rate in Corollary~\ref{thm::density} is $O_\P\left(\left(\frac{\log n}{n}\right)^{\frac{2}{d+6}}\right)$, attained when we choose $h$ of order $O\left(\left(\frac{\log n}{n}\right)^{\frac{1}{d+6}}\right)$.
\begin{remark}
When we compute the Morse-Smale approximation function, we may encounter numerical problems in low-density regions because the density estimate $\hat{p}_n$ may have unbounded support. In this case, some cells may be unbounded, and the majority of such a cell may have extremely low density, which makes the approximation function $0$. Thus, in practice, we restrict ourselves to the regions whose density is above a pre-defined threshold $\lambda$, so that every cell is bounded. A simple data-driven threshold is $\lambda = 0.05 \sup_{x} \hat{p}_n(x)$. Theorem~\ref{thm::density} still works in this case with a slight modification: the cells are defined on the region $\{x: p_h(x)\geq 0.05\times \sup_x p_h(x)\}$.
\end{remark}
\begin{remark}
Note that for a density function, local minima may not exist, or the gradient flow may not lead to a local minimum in some regions. For instance, a Gaussian distribution has no local minimum and, except at the center, the gradient descent path moves off to infinity. In this case we only consider the boundaries of the ascending $0$-manifolds corresponding to well-defined local minima, and assumption (A) applies only to the boundaries of these ascending manifolds.
\end{remark}
\begin{remark}
When we apply the Morse-Smale complex to nonparametric density estimation or regression, we need to choose a tuning parameter. For instance, in the MSR we may use kernel regression or local polynomial regression, so we need to choose the smoothing bandwidth; for density estimation and mode clustering, we need to choose the bandwidth of the kernel smoother. In regression, because we have a response variable, we recommend choosing the tuning parameter by cross-validation. For the kernel density estimator (and mode clustering), because the optimal rate depends on gradient estimation, we recommend choosing the smoothing bandwidth by the normal reference rule for gradient estimation or by cross-validation for gradient estimation \citep{duong2007ks,Chacon2011}.
\end{remark}

\section{Discussion}
In this paper, we introduced the Morse-Smale complex and its summary signatures for nonparametric inference. We demonstrated that the Morse-Smale complex can be applied to various statistical problems such as clustering, regression, and two-sample comparison. We showed that a smooth multivariate function can be summarized by a few parameters associated with a bipartite graph representing the local modes, the local minima, and the complex of the underlying function. Moreover, we proved a fundamental theorem about the stability of the Morse-Smale complex.
Based on the stability theorem, we derived the consistency of mode clustering and of Morse-Smale regression. The Morse-Smale complex provides a way to synthesize parametric and nonparametric inference. Compared to parametric inference, we have a more flexible model for studying the structure of the underlying distribution; compared to nonparametric inference, the Morse-Smale complex yields a visualizable representation of the underlying multivariate structure. This shows that we may gain additional insight in data analysis by using geometric features. Although the Morse-Smale complex has many potential statistical applications, we need to be careful when applying it to a data set whose dimension is large (say $d>10$). When the dimension is large, the curse of dimensionality kicks in and the nonparametric estimators (in both density estimation and regression) are inaccurate, so the errors of the estimated Morse-Smale complex can be large. Here we list some possible extensions for future research:
\begin{itemize}
\item \emph{Asymptotic distribution.} We have proved the consistency (and the rate of convergence) for estimating the complex, but the limiting distribution is still unknown. If we can derive the limiting distribution and show that some resampling method (e.g., the bootstrap \cite{Efron1979}) converges to the same distribution, we can construct confidence sets for the complex.
\item \emph{Minimax theory.} Although we have derived the rate of convergence for a plug-in estimator of the complex, we did not prove its optimality. We conjecture that the minimax rate for estimating the complex is related to the rate for estimating the gradient and to the smoothness around the complex \citep{audibert2007fast, singh2009adaptive}.
\end{itemize}
\section*{Acknowledgement}
We thank the referees and the Associate Editor for their very constructive comments and suggestions.
\section{INTRODUCTION}
In a geometrically frustrated magnet, a macroscopic degeneracy remains in the ground state at zero temperature as long as the geometry is preserved. Such a situation contradicts the third law of thermodynamics, and small perturbations, which can induce non-trivial quantum states, play an important role in avoiding a breakdown of this basic law \cite{Geo_Rev,Balents}. A classic example of the violation of the third law is given by a regular tetrahedron of $S = 1/2$ Heisenberg spins; this has a nonmagnetic ground state with a two-fold degeneracy. In nature, however, neither perfect isolation nor the absence of coupling to other degrees of freedom is achieved, and a non-degenerate state is induced by a perturbation. In the presence of spin-lattice coupling, the lifting of the degeneracy is accompanied by a distortion of the tetrahedron, which is called the spin Jahn-Teller effect \cite{SpinJT}. In the case of a three-dimensional (3D) lattice of corner-sharing tetrahedra, i.e., the pyrochlore lattice\cite{SpinJT,Tchernyshyov}, the distortion propagates cooperatively over the crystal, causing a magnetostructural phase transition\cite{ZnV2O4::1,ZnV2O4::2}. For an isolated regular tetrahedral system, on the other hand, experimental studies are rare for lack of model compounds. The search for a simple and isolated system is a challenge to the third law, leading to the discovery of new states of matter at very low temperatures. In the absence of spin-lattice coupling in the Heisenberg spin pyrochlore system, the degeneracy is lifted by the 3D coupling of the spins in the magnetic ground state. This leads to a quantum spin liquid\cite{pyro::theo::1}, ordering of spin singlet states\cite{pyro::theo::2,Berg03}, or a chirally ordered state\cite{pyro::theo::3,pyro::theo::4}. The breathing pyrochlore lattice, i.e., one consisting of arrays of alternating large and small tetrahedra, has been found in the $S = 3/2$ spinels Li$A$Cr$_4$O$_8$ ($A$ = In and Ga)\cite{br::pyro::1,Goran}. The lattice is an experimental realization of a theoretical perturbation expansion method used for the pyrochlore lattice\cite{pyro::theo::1,pyro::theo::2,pyro::theo::3}. Theory predicts a spin liquid ground state for this model\cite{BrPy::theo}; however, LiInCr$_4$O$_8$\cite{Goran} exhibits a magnetostructural transition due to the spin Jahn-Teller effect similar to that observed in conventional pyrochlore compounds\cite{ZnV2O4::1,ZnV2O4::2,ACr2O4::1,ACr2O4::2,ACr2O4::3}. Thus a material that preserves the breathing pyrochlore geometry at low temperature is important. \BaYbZnO\ is an experimental realization of a breathing pyrochlore lattice formed by Yb$^{3+}$ ions\cite{br::pyro::2}, with both the small and large tetrahedra being regular. The oxygen ions surrounding the Yb$^{3+}$ ions are shared by the neighboring Yb$^{3+}$ ions in the small tetrahedra, while they are not shared in the large tetrahedra. This results in the small Yb$_4$O$_{16}$ tetrahedra being surrounded by Zn$_{10}$O$_{20}$ supertetrahedra. This crystal structure suggests that the intertetrahedron interaction is small and that a local distortion of a small tetrahedron, if it appears at low temperature, does not propagate to the neighboring small tetrahedra. The magnetic susceptibility has been reported and can be explained by an $S$ = 1/2 tetrahedron model; no phase transition was observed for $T \ge {\rm 0.38}$ K\cite{br::pyro::2}.
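For later reference, the level scheme of an isolated $S = 1/2$ Heisenberg tetrahedron follows from a standard total-spin argument (a textbook identity, not specific to \BaYbZnO ). With the sign convention $\mathcal{H}=J\sum_{i<j}{\bm S}_i\cdot{\bm S}_j$, $J>0$ antiferromagnetic,
\begin{equation*}
\sum_{i<j}{\bm S}_i\cdot{\bm S}_j = \frac{1}{2}\Bigl[{\bm S}_{\rm total}^2-\sum_{i=1}^{4}{\bm S}_i^2\Bigr]
= \frac{1}{2}\bigl[S_{\rm total}(S_{\rm total}+1)-3\bigr],
\qquad {\bm S}_{\rm total}=\sum_{i=1}^4 {\bm S}_i,
\end{equation*}
so the eigenenergies are $E(S_{\rm total}) = \frac{J}{2}\left[S_{\rm total}(S_{\rm total}+1)-3\right]$ with $S_{\rm total} = 0, 1, 2$ and degeneracies $2$, $9$, and $5$, respectively; the two-fold degenerate nonmagnetic $S_{\rm total}=0$ level is the ground state doublet referred to above.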
Crystalline electric field (CEF) excitations have been measured by inelastic neutron scattering (INS)\cite{BYZO::CEF}; the data were explained by four Kramers doublets with a first eigenenergy of 38.2 meV. This means that the low energy excitations are dominated by the ground state doublet and that the effective spin 1/2 is a good approximation. Furthermore, the eigenfunction of the ground state was shown to exhibit an easy-plane type magnetic moment. Even including this anisotropy term, the ground state of the tetrahedral spin system is a doublet in the absence of any intertetrahedron interaction or spin-lattice coupling. As such, \BaYbZnO\ is a candidate for the classic example of a frustrated magnet. In this communication we study the low energy excitations by INS to identify the effective spin Hamiltonian, and the macroscopic properties at very low temperatures, to see how nature keeps the third law of thermodynamics. We demonstrate how the degeneracy of the ground state is lifted and a unique quantum state is selected in \BaYbZnO . INS experiments were performed using the neutron spectrometer PELICAN\cite{PELICAN_Perf1} at ANSTO. We utilized setup I with an incident energy $E_i$ of 2.1 meV and setup II with an $E_i$ of 3.6 meV. Setup I afforded a resolution of 0.059 meV full width at half maximum (FWHM) at the elastic line, while setup II gave 0.135 meV.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{./fig1.eps}
\caption{\label{fig1} (a)-(d) INS spectra measured at 1.5 K (a), 6 K (b), 12 K (c), and 40 K (d) using setup I. (e) INS spectrum measured at 1.5 K using setup II. (f) $Q$ dependence of the integrated intensity obtained from the spectrum in (e). The red solid curve is the calculation (see text).}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{./fig2rev.eps}
\caption{\label{fig2} (a), (b) The $\hbar \omega$ dependences of the neutron intensities at 1.5 K (a) and 12 K (b). (c) Temperature dependences of the intensities of the excitations in (a). (d) Those of the excitations additionally observed at high temperatures in (b). (e) $\hbar \omega$ dependence of the intensity at 1.5 K obtained using setup II. Throughout the panels, the red and black solid curves are the calculation using the same parameters as in Eqs.~(3)-(6). (f) Energy levels of the $S = 1/2$ Heisenberg spin tetrahedron model of the previous study\cite{br::pyro::2} and of the $S = 1/2$ anisotropic spin tetrahedron of the present study.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{./fig3.eps}
\caption{\label{fig4} (a) Heat capacity reported in the previous study\cite{br::pyro::2}. The red solid curve is the calculation. (b) Magnetic field dependence of the magnetization at 0.5 K. The red solid curve is the calculation. (c) Filled circles indicate the heat capacity. The blue dashed curve is the calculation for the empirical model in which the lifting of the ground state degeneracy is introduced as a single energy gap. The red solid curve is the calculation for the model in which the empirical energy gap has a distribution. (d) Entropy change estimated from (c). The calculated entropy is shifted so that the calculation has the same value as the data at 0.94 K.}
\end{center}
\end{figure}
The INS spectra measured using setup I are shown in Figs. \ref{fig1}(a)-\ref{fig1}(d).
Three flat bands are observed at 1.5 K; the absence of dispersion suggests that these bands are approximately cluster excitations and that the effect of any intercluster coupling is small and hidden within the instrumental resolution. At 6 K the intensities of these three excitations are suppressed and additional flat bands are observed at different $\hbar \omega$'s. In all panels several streaks are observed in the range $\hbar \omega \lesssim$ 0.4 meV, which were ascribed to acoustic phonons. Figure \ref{fig1}(e) shows the INS spectrum obtained using setup II, while panel (f) shows the $Q$ dependence of the intensity integrated over the range $0.25~{\rm meV} < \hbar \omega < 0.95~{\rm meV}$. The symbols in Figs. \ref{fig2}(a) and \ref{fig2}(b) show one-dimensional energy cuts from the data presented in Figs.~\ref{fig1}(a) and \ref{fig1}(c), respectively. The peaks are fitted by Gaussian functions, with the FWHM restricted to that of the instrumental resolution, to estimate the peak energies and intensities. The peak energies at 1.5 K are 0.39, 0.52, 0.73, and 0.78 meV. At 12 K additional peaks are observed in Fig.~\ref{fig2}(b). The temperature dependences of the four excitations observed at 1.5 K are shown in Fig.~\ref{fig2}(c), while those of the additional excitations at 12 K are shown in Fig.~\ref{fig2}(d). The former monotonically decrease with increasing temperature, while the latter show the opposite behavior. This implies that the excitations at 1.5 K are ground state transitions and those at 12 K originate from excited states. Figure \ref{fig2}(e) shows one-dimensional energy cuts from the INS spectra at 1.5 K in Fig.~\ref{fig1}(e) using setup II. A peak is observed at 1.75 meV in addition to the peaks seen in setup I. The $Q$ dependence of the intensity in Fig.~\ref{fig1}(f) exhibits a broad maximum at $Q_{\rm max} \sim 1.25 {\rm \AA}^{-1}$. This means that the antiferromagnetic correlation between the spins, the characteristic length scale of which is ${\pi}/Q_{\rm max}$, is enhanced. The dispersionless excitations with $Q$-dependent intensity mean that the neutron spectrum is dominated by an antiferromagnetic cluster within the instrumental resolution. For the analysis of the INS spectra we assume a spin tetrahedron model. The number of excitations observed at the base temperature is four, which is inconsistent with a Heisenberg $S$ = 1/2 spin tetrahedron. We, therefore, consider the following general expression\cite{SL}:
\begin{align}
\mathcal{H} = - \sum_{i<j} \sum_{\nu\mu} J_{ij}^{\nu\mu} S_i^{\nu} S_j^{\mu}.
\label{hamiltonian}
\end{align}
Here $i$ and $j$ label the spins on the tetrahedron, and ${\nu}$ and ${\mu}$ represent the Cartesian coordinates $x$, $y$, and $z$, defined along the crystallographic $a$, $b$, and $c$-axes, respectively. The position vectors of the spins are ${\bm r}_1 = d/\sqrt{3}(1,1,1)$, ${\bm r}_2 = d/\sqrt{3}(-1,-1,1)$, ${\bm r}_3 = d/\sqrt{3}(1,-1,-1)$, and ${\bm r}_4 = d/\sqrt{3}(-1,1,-1)$, where $2{\sqrt 2}d$ is the length of the side of the tetrahedron. The symmetry of the regular tetrahedron determines the form of the interaction tensor $\hat{J}_{ij}$\cite{SL}; for example, in the case of $\hat{J}_{12}$ one gets
\begin{align}
\hat{J}_{12} = \begin{pmatrix} J_{xx} & J_{yx}& J_{zx} \\ J_{xy} & J_{yy}& J_{zy} \\ J_{xz} & J_{yz}& J_{zz} \end{pmatrix} = \begin{pmatrix} J_{1} & J_{3}& -J_{4} \\ J_{3} & J_{1}& -J_{4} \\ J_{4} & J_{4}& J_{2} \end{pmatrix}
\label{interaction}.
\end{align}
Using the Hamiltonian in Eq.
(\ref{hamiltonian}) and the interaction tensor $\hat{J}_{ij}$, we calculate the neutron scattering cross section. We performed a numerical fit to the peak energies and intensities measured at $T$ = 1.5 K in setup I using a genetic algorithm. The fitting parameters are $J_1$, $J_2$, $J_3$, $J_4$, and a scale factor. Here $J_1$ is an $XY$-type interaction, $J_2$ is Ising type, $J_3$ is pseudo-dipole type, and $J_4$ is Dzyaloshinskii-Moriya (DM) type; the magnitude of the DM vector corresponds to $\sqrt{2}J_4$. The best fit parameters are:
\begin{align}
J_1 = -0.570 \pm 0.033 ~{\rm meV},\label{parameters1} \\
J_2 = -0.558 \pm 0.028 ~{\rm meV},\label{parameters2} \\
J_3 = 0.000 \pm 0.023 ~{\rm meV},\label{parameters3} \\
J_4 = 0.113 \pm 0.014 ~{\rm meV}.\label{parameters4}
\end{align}
These are consistent within error with those obtained from independent INS experiments reported recently by another group\cite{recent}. The results show that the system has an easy-plane $XXZ$-type anisotropy with a DM-type interaction of about 30\% of the main interaction. The predicted intensities have been convoluted with the instrumental resolution and are indicated by the solid red curves in Figs.~\ref{fig2}(a) and \ref{fig2}(b). Similarly, the predicted temperature dependence is shown in Figs.~\ref{fig2}(c) and \ref{fig2}(d) by the solid black curves. All the data are reproduced by the calculations. The data collected using setup II in Fig.~\ref{fig1}(e) are also reproduced using the same set of parameters. Thus we conclude that the neutron data are explained by an $S = 1/2$ spin tetrahedron including the DM interaction in the temperature range $1.5~{\rm K} \le T \le 40~{\rm K}$. The energy levels obtained in the INS experiment are shown in the right-hand panel of Fig.~\ref{fig2}(f). The eigenenergies are calculated using the parameters of Eqs.~(\ref{parameters1})-(\ref{parameters4}). For reference, the energy levels obtained from bulk measurements\cite{br::pyro::2} on the basis of the Heisenberg model are shown in the left panel. The energy levels of the Heisenberg model are characterized by the total spin ${\bm S_{\rm total}}= {\bm S_1}+{\bm S_2}+{\bm S_3}+{\bm S_4}$. The nine-fold degeneracy of the first excited state with $S_{\rm total}$ = 1 is lifted by the $XXZ$ anisotropy and the DM interaction; similarly, the degeneracy of the $S_{\rm total}$ = 2 state is lifted. The degeneracy of the ground state doublet, however, is lifted neither by the anisotropy nor by the DM interaction within the framework of the isolated tetrahedron Hamiltonian. In the present model, the total spin $S_{\rm total}$ is no longer a good quantum number because the DM interaction mixes the eigenstates of the isotropic Hamiltonian. This modifies the selection rules of neutron scattering, allows finite matrix elements for transitions among all the eigenstates, and leads to the observation of many excitations in the INS spectra. The anisotropic exchange parameters determined using INS were then used to calculate the thermodynamic properties of the Hamiltonian Eq.~(\ref{hamiltonian}), and a comparison to experiment was made. Figure \ref{fig4}(a) shows the heat capacity calculated using Eq. (\ref{hamiltonian}), together with that measured in a previous study\cite{br::pyro::2}. The data are reasonably reproduced by the calculation for $T \ge$ 1.5 K; a broad peak at $T \sim$ 2.5 K indicates the Schottky anomaly associated with the excited states in the range 0.39 meV $\le \hbar \omega \le$ 0.78 meV shown in Fig. \ref{fig2}(f).
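The thermodynamic curves discussed here follow from exact diagonalization of the $16\times16$ Hamiltonian in Eqs.~(\ref{hamiltonian}) and (\ref{interaction}). A minimal numpy sketch is given below; it takes the six symmetry-related bond tensors $\hat{J}_{ij}$ as given inputs (their construction follows Ref.~\onlinecite{SL} and is not reproduced here), and all function names are illustrative.
\begin{verbatim}
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
S = [sx, sy, sz]

def spin_op(nu, site, n=4):
    """S_site^nu embedded in the 2^n-dimensional product space."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[site] = S[nu]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def hamiltonian(J):
    """H = -sum_{i<j} sum_{nu,mu} J[(i,j)][nu,mu] S_i^nu S_j^mu,
    with J a dict mapping bonds (i,j) to 3x3 tensors in meV."""
    H = np.zeros((16, 16), dtype=complex)
    for (i, j), Jij in J.items():
        for nu in range(3):
            for mu in range(3):
                H -= Jij[nu, mu] * spin_op(nu, i) @ spin_op(mu, j)
    return H

def heat_capacity(E, T):
    """C in units of k_B from eigenvalues E (meV) at temperature T (K)."""
    kB = 0.08617  # meV/K
    b = 1.0 / (kB * T)
    w = np.exp(-b * (E - E.min()))
    p = w / w.sum()
    mean, mean2 = (p * E).sum(), (p * E ** 2).sum()
    return b ** 2 * (mean2 - mean ** 2)

# E = np.linalg.eigvalsh(hamiltonian(J_tensors))
# C = [heat_capacity(E, T) for T in np.linspace(0.5, 10, 100)]
\end{verbatim}
With the fitted parameters of Eqs.~(\ref{parameters1})-(\ref{parameters4}), the eigenvalues correspond to the level scheme in the right panel of Fig.~\ref{fig2}(f); converting $C$ to molar units requires multiplying by $R$ per mole of tetrahedra (i.e., $R/4$ per mole of Yb).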
An excellent agreement between experiment and calculation is also observed for the full magnetization curve at 0.5 K in Fig. \ref{fig4}(b). Here the best reproduction of the data was obtained with an anisotropic $g$ tensor, $g_{\perp}$ = 2.78 and $g_{\parallel}$ = 2.22, where the principal axis of the $g$ factor is taken along $\hat{r}_i$. This easy-plane anisotropy of the $g$ tensor is consistent with the previous study. The magnetization curve exhibits two pronounced steps at $H_{C1} \sim 3.5$ T and $H_{C2} \sim 8.8$ T; these correspond to transitions from the doublet ground state to the first set of excited states in the range 0.39 meV $\le \hbar \omega \le$ 0.78 meV, and from the first set to the second set at $\hbar \omega \sim$ 1.75 meV in Fig.~\ref{fig2}(f). The effect of the anisotropic exchange interactions manifests itself in the non-equivalent spacing of the steps and in the ramp-like, rather than stair-like, structure; the former is not expected for a Heisenberg spin tetrahedron model. The anisotropic exchange interactions mix the $S_{\rm total}$ = 0 and $S_{\rm total}$ = 1 states, giving rise to a finite magnetic moment of the lowest energy doublet with $\braket{S_{\rm total}}_{\rm GS}= 0.13$, which is consistent with the finite slope at $H < 2.5$ T. Thus, the anisotropic $S = 1/2$ single tetrahedron model accounts for multiple experimental data sets: the INS spectra at 1.5 K in the range $\hbar \omega \gtrsim 0.15$ meV, the magnetization curve at 0.5 K, and the heat capacity above 1.5 K. However, this model has a two-fold degenerate lowest energy state, from which the real ground state should be selected by additional interactions not included in the model. Therefore we performed heat capacity measurements in the range 24 mK to 1 K; the results are shown in Fig. \ref{fig4}(c). Rather than a sharp peak indicative of a phase transition, the heat capacity exhibits a broad peak at $T \sim$ 63 mK. The entropy change per mole of Yb$^{3+}$ is calculated to be 1.4 J/K $\sim (R/4) \ln(2)$; the estimate is shown as the solid line in Fig. \ref{fig4}(d). Such a change in entropy corresponds to the release of two degrees of freedom per spin tetrahedron. Thus, the heat capacity measurements at low temperature demonstrate that a unique ground state is finally selected from the doublet ground state of the single tetrahedron model. To explain the heat capacity and entropy change, we first assume that the doublet ground state of every spin tetrahedron is lifted by a single energy gap $E_g$; this corresponds to assuming that all the Yb$_4$ tetrahedra are uniformly distorted. The dashed curve in Fig.~\ref{fig4}(c) is the calculation using $E_g$ = 0.012 meV; the calculated heat capacity is dominated by the Schottky behavior of the two-level system of the split doublet. The peak is much narrower than the experimental one, and this model does not explain the data. Second, we assume that $E_g$ has a distribution, which includes the possibilities of a different $E_g$ for each spin tetrahedron and of a dispersive $E_g$ due to intertetrahedron interactions. We use a Lorentzian function for the distribution of $E_g$, with peak center $E_c$ = 0.010 meV and FWHM $E_l$ = 0.016 meV. The calculated heat capacity and entropy change are indicated by the red solid curves; the calculation reasonably reproduces the data (a sketch of this averaged Schottky model is given below). The doublet ground state of the isolated tetrahedral Hamiltonian, identified within the instrumental resolution of the INS experiment, is thus lifted by a perturbation.
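A minimal sketch of this averaged two-level fit, assuming the quoted $E_c$ and $E_l$; the truncation of the Lorentzian to positive gaps and the integration grid are our illustrative choices.
\begin{verbatim}
import numpy as np

kB = 0.08617  # meV/K

def schottky(T, Eg):
    """Two-level Schottky heat capacity in units of k_B
    (numerically stable form using exp(-x))."""
    x = Eg / (kB * T)
    return x ** 2 * np.exp(-x) / (1 + np.exp(-x)) ** 2

def schottky_lorentzian(T, Ec=0.010, El=0.016, n=4000):
    """Average the Schottky curve over a Lorentzian distribution
    of gaps (center Ec, FWHM El), truncated to Eg > 0."""
    Eg = np.linspace(1e-4, Ec + 20 * El, n)
    w = 1.0 / (1.0 + ((Eg - Ec) / (El / 2)) ** 2)
    w /= w.sum()  # renormalize after truncation
    return sum(wi * schottky(T, e) for wi, e in zip(w, Eg))
\end{verbatim}
The averaging broadens the Schottky peak relative to the single-gap curve, which is the qualitative difference between the dashed and solid curves in Fig.~\ref{fig4}(c).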
Furthermore, the energy gap exhibits a distribution or dispersion. The ground state of \BaYbZnO\ is, therefore, not a solution of the spin Hamiltonian in Eq.~(\ref{hamiltonian}) but a non-trivial quantum state. Possible perturbations that lift this degeneracy should be discussed. One possibility is a small interaction between spin tetrahedra. The theory of Heisenberg spin tetrahedra predicts that a partial ordering is induced and that the energy gap is estimated as $10^{-3}J_{\rm inter}^3/48 J_{\rm intra}^2$\cite{pyro::theo::2}, where $J_{\rm intra}$ and $J_{\rm inter}$ are the intra- and inter-tetrahedron interactions, respectively. Since no dispersion is observed within the instrumental resolution of 0.059 meV, we may assume that the bandwidth of the excitations is smaller than this value; within the RPA this yields an upper bound of 0.015 meV for $J_{\rm inter}$. Note that $J_{\rm intra} \sim$ 0.5 meV from Eq.~(\ref{parameters1}); with these values the calculated gap is of order $10^{-7}$ meV or smaller, far too small to explain the observed gap. However, a large antisymmetric interaction $J_4$ is obtained, amounting to 20\% of $J_1$ and $J_2$. A theory that includes DM interactions with a magnitude of a few percent of the main Heisenberg interaction suggests that the energy gap of the partial ordering of dimers is substantially enhanced\cite{DM_pyro}. Furthermore, a chirally ordered phase is predicted at lower temperatures. The candidates for the quantum ground state are therefore either a partial ordering of dimers or a chirally ordered state, induced by a combination of inter-tetrahedra interactions and a large DM interaction. Another possible perturbation is spin-lattice coupling. In a single spin $S = 1/2$ tetrahedron coupled to the lattice, the ground-state doublet is split by the spin Jahn--Teller (JT) mechanism\cite{SpinJT}. In the spinel compounds ZnV$_2$O$_4$ and MgV$_2$O$_4$\cite{SpinJT,Tchernyshyov}, magnetostructural transitions were observed, due to a coupling of the interaction pathways to the three-dimensional lattice. In MgCr$_2$O$_4$, in contrast, a precursor to the spin JT transition was observed above the transition temperature and attributed to the dynamical spin JT (DJT) effect\cite{Watanabe}. In contrast to the uniform array of tetrahedra in the pyrochlore lattice, the Yb$_4$O$_{16}$ tetrahedra in \BaYbZnO\ are surrounded by JT-inactive Zn$_{10}$O$_{20}$ supertetrahedra and are comparatively isolated. This circumstance is quite similar to that in the honeycomb compound Ba$_3$CuSb$_2$O$_9$\cite{Zhou,Nakatsuji}, where the Cu$^{2+}$ tetrahedra are face-shared with JT-inactive SbO$_6$ octahedra and thus isolated; this suppresses the static JT distortion, and a quantum spin liquid is induced by the DJT\cite{Nasu,Han}. In analogy to Ba$_3$CuSb$_2$O$_9$, the spin DJT is a candidate microscopic mechanism for the suppression of the structural transition and the appearance of a quantum spin liquid in \BaYbZnO . However, no equivalent theory has been reported for the breathing pyrochlore lattice, and the ground state remains an open question. \begin{acknowledgments} We are grateful to Prof. Tsuyoshi Kimura for helpful discussions. T. Haku was supported by the Japan Society for the Promotion of Science through the Program for Leading Graduate Schools (MERIT). This work was supported by JSPS KAKENHI Grants-in-Aid for Scientific Research (B) No. 24340077 and No. 24340075.
Travel expenses for the experiment performed using PELICAN at ANSTO, Australia, were supported by the General User Program for Neutron Scattering Experiments, Institute for Solid State Physics, The University of Tokyo (proposal No. 15543), at JRR-3, Japan Atomic Energy Agency, Tokai, Japan. Magnetization measurements were carried out under the joint research program of the Institute for Solid State Physics, The University of Tokyo. \end{acknowledgments}
\section{INTRODUCTION} \label{sec:intro}\vspace{-0.2cm} Salient object detection (SOD) aims to locate image regions that attract much human visual attention. It is useful in many computer vision tasks, \emph{e.g.,} object segmentation \cite{SaliencyAwareVO}, tracking \cite{2019Non}, and image/video compression \cite{2010A}. Though RGB SOD methods have made great progress in recent years thanks to deep learning\cite{RGBsurvey}, they still encounter problems in challenging scenarios, \emph{e.g.,} similar foreground and background, cluttered/complex background, or low-contrast environments. With the increasing availability of depth sensors, RGB-D SOD has recently become a hot research topic \cite{BBSNet,JLDCF,UCNet,HDFNet}. The additional spatial information embedded in depth maps can help overcome the aforementioned challenges. Although many advances \cite{RGBDsurvey} have been made in this field by exploring cross-modal complementarity\cite{PCF,HDFNet,UCNet,DRMA,JLDCF,SSF,cmMS,PGAR,PDNet,CPFP,BBSNet,MMCI,A2dele,CoNet,D3Net,DANet,LSSA}, we notice that existing models are still insufficient at extracting robust saliency features. As shown in Fig. \ref{class} (a) and (b), in their encoder stages, modality-aware features are usually extracted with \emph{no interactions} or only \emph{unidirectional interactions}. For instance, in Fig.~\ref{class} (a), parallel encoders\cite{PCF,DRMA,JLDCF,SSF,HDFNet,PDNet} are deployed to extract individual features of RGB and depth, and cross-modal fusion is then handled by the following decoder. In Fig. \ref{class} (b), tailor-made sub-networks\cite{PDNet,BBSNet,CPFP} are adopted to inject depth cues into RGB as guidance/enhancement; the resulting features are then decoded to obtain the saliency map. We argue that both of the above strategies may ignore the quality issue of depth maps: depth maps, whether obtained from depth sensors or taken from existing datasets, are often noisy and of low quality. Clearly, in Fig. \ref{class} (a) and (b), if the input depth is inaccurate, the extracted/injected depth features will be easily affected and may degrade the final saliency map produced by the decoder. \begin{figure} \centering \centerline{\epsfig{figure=fig/structure3,width=0.48\textwidth}}\vspace{-0.3cm} \caption{Feature extraction strategies of existing RGB-D SOD models ((a) \cite{PCF,DRMA,JLDCF,SSF,HDFNet,PDNet} and (b)\cite{PDNet,BBSNet,CPFP}) as well as the proposed bi-directional strategy for the encoder (c).}\vspace{-0.3cm} \label{class} \end{figure} To address this issue, we propose to conduct progressive bi-directional interactions early in the encoder stage, instead of late in the decoder stage. This idea is illustrated in Fig. \ref{class} (c). In this paper, we propose a novel bi-directional transfer-and-selection network, named BTS-Net, which is characterized by a new bi-directional transfer-and-selection (BTS) module applied to the encoder, enabling RGB and depth to mutually correct/refine each other as early as possible. Thus, the burden on the decoder can be well relieved. Our BTS is inspired by the attention mechanism \cite{CBAM} and cross attention \cite{CrossAtt}, and it makes features from the two modalities refine each other to achieve purified features with less noise. In addition, thanks to the proposed early interaction strategy, the extracted robust hierarchical features enable us to design an effective light-weight group decoder to generate the final saliency map.
The contributions of this paper are three-fold: \vspace{-0.2cm} \begin{itemize} \item We propose BTS-Net, which is the first RGB-D SOD model to introduce bi-directional interactions across RGB and depth during the encoder stage. \vspace{-0.2cm} \item To achieve bi-directional interactions, we design a bi-directional transfer-and-selection (BTS) module based on spatial-channel attention. \vspace{-0.2cm} \item We design an effective light-weight group decoder to achieve accurate final prediction. \end{itemize} \section{RELATED WORK}\vspace{-0.2cm} The utilization of RGB-D data for SOD has been extensively explored for years. Traditional methods rely on hand-crafted features \cite{cheng2014Depth,2013An,2012Context,2015Exploiting}, while recently, deep learning-based methods have made great progress \cite{PCF,HDFNet,UCNet,DRMA,JLDCF,SSF,cmMS,PGAR,PDNet,CPFP,BBSNet,MMCI,A2dele,CoNet,D3Net,DANet}. Within the scope of this paper, we divide existing deep models into two types according to how they extract RGB and depth features, namely: parallel independent encoders (Fig. \ref{class} (a)), and tailor-made sub-networks from depth to RGB (Fig. \ref{class} (b)). \textbf{Parallel Independent Encoders}. This strategy, illustrated in Fig. \ref{class} (a), first extracts features from RGB and depth images in parallel, and then fuses them in the decoder. Chen \textit{et al.} \cite{PCF} proposed a cross-modal complementarity-aware fusion module. Piao \textit{et al.} \cite{DRMA} fused RGB and depth features via residual connections and refined them with a depth vector and a recurrent attention module. Fu \textit{et al.} \cite{JLDCF} extracted RGB and depth features in a parallel manner but through a Siamese network; the features are then fused and refined in a densely connected manner. Zhang \textit{et al.} \cite{SSF} introduced a complementary interaction module to select useful features. In \cite{cmMS}, Li \textit{et al.} enhanced feature representations by taking depth features as priors. Chen \textit{et al.} \cite{PGAR} proposed to extract depth features with a light-weight depth branch and conduct progressive refinement. Pang \textit{et al.} \cite{HDFNet} combined cross-modal features to generate dynamic filters, which were used to filter and enhance the decoder features. \textbf{Tailor-made Sub-networks from Depth to RGB}. Recently, this unidirectional interaction from depth to RGB has been introduced into the encoding stage (Fig. \ref{class} (b)), leveraging depth cues as guidance or enhancement. Zhu \textit{et al.} \cite{PDNet} used depth features extracted from a subsidiary network as a weight matrix to enhance RGB features. Zhao \textit{et al.} \cite{CPFP} computed a contrast-enhanced depth map and treated it as attention to enhance the feature representations of the RGB stream. In \cite{BBSNet}, depth features were enhanced by an attention mechanism and then fused with RGB features; the fused features were later fed to an elaborately designed cascaded decoder. Different from all the above methods, our BTS-Net introduces progressive bi-directional interactions in the encoder (Fig. \ref{class}(c)) to enforce mutual correction and refinement across the RGB and depth branches, yielding robust encoder features. \begin{figure*}[htb] \centering \centerline{\epsfig{figure=fig/fig2a,width=0.92\textwidth}}\vspace{-0.4cm} \caption{Block diagram of the proposed BTS-Net, which follows the typical encoder-decoder architecture.
The encoder is shown on the left, whereas the decoder is shown on the right.}\vspace{-0.3cm} \label{blockdiagram} \end{figure*} \begin{figure}[htb] \centering \centerline{\epsfig{figure=fig/fig2b,width=0.45\textwidth}}\vspace{-0.3cm} \caption{Detailed structure of the proposed BTS (bi-directional transfer-and-selection) module.}\vspace{-0.3cm} \label{BTSdiagram} \end{figure} \section{PROPOSED METHOD}\label{sec:method} \vspace{-0.2cm} Fig. \ref{blockdiagram} shows the block diagram of the proposed BTS-Net. It follows the typical encoder-decoder architecture, where the encoder is equipped with several BTS (bi-directional transfer-and-selection) modules to enforce cross-modal interaction and compensation during encoder feature extraction, resulting in hierarchical modality-aware features. Meanwhile, the decoder is elaborately designed for group-wise decoding and ultimate saliency prediction. Specifically, the encoder consists of an RGB-related branch, a depth-related branch, and five BTS modules. The two master branches adopt the widely used ResNet-50\cite{ResNet} as backbones, leading to five feature hierarchies (note that the stride of the last hierarchy is modified from 2 to 1). The input to an intermediate hierarchy in either branch is the corresponding output of the previous BTS module. Besides, in order to capture multi-scale semantic information, we add ASPP (atrous spatial pyramid pooling\cite{ASPP}) modules at the end of each branch. Let the feature outputs of the five RGB/depth ResNet hierarchies be denoted by $bf_{m}^{i} (m\in\{r,d\},i=0,...,4)$, and the enhanced cross-modal features from the BTS modules as well as the ASPPs by $f_{m}^{i} (m\in\{r,d\}, i=0,...,5)$. We regard $f_{m}^{i} (m\in\{r,d\}, i=0,1,2)$ as low-level features and $f_{m}^{i} (m\in\{r,d\}, i=3,4,5)$ as high-level features; these low-/high-level features are then fed to the subsequent light-weight group decoder. In the following, we describe in detail the proposed BTS modules utilized in the encoder, as well as the light-weight group decoder. \textbf{Bi-directional Transfer-and-Selection (BTS)}. The detailed structure of the BTS module is shown in Fig.~\ref{BTSdiagram}; it is inspired by the well-known spatial-channel attention mechanisms\cite{CBAM}. A BTS module has two processing stages: bi-directional transfer and feature selection. The former is based on spatial attention, while the latter is associated with channel attention. The underlying rationale of BTS is that applying spatial attention first tells ``where'' a salient object is, and channel attention can then select feature channels to tell ``what'' features matter. In detail, the bi-directional transfer stage performs cross-modal attention transfer, namely applying the spatial attention map derived from either modality to the other, as illustrated in Fig.~\ref{BTSdiagram}. Since BTS is designed in a symmetric manner, for brevity, below we only elaborate the operations transferring RGB features to the depth branch. \begin{table*}[ht] \centering \caption{\small Quantitative RGB-D SOD results. $\uparrow$/$\downarrow$ denotes that a larger/smaller value is better.
The best results are highlighted in \textbf{bold}.}\vspace{-0.2cm} \label{table:QuantitativeResults} \vspace{8pt} \footnotesize \renewcommand{\arraystretch}{0.7} \renewcommand{\tabcolsep}{0.25mm} \begin{tabular}{lr|cccccccccccccccc||c} \hline\toprule & \multirow{3}{*}{Metric}\centering & PCF & MMCI & CPFP & DMRA & D3Net &SSF &A2dele &UCNet &JL-DCF &cmMS &CoNet &PGAR &Cas-Gnn &DANet &HDFNet &BBS-Net & BTS-Net \\& &\scriptsize CVPR18 &\scriptsize PR19 &\scriptsize CVPR19 &\scriptsize ICCV19 &\scriptsize TNNLS20 &\scriptsize CVPR20 &\scriptsize CVPR20 &\scriptsize CVPR20 &\scriptsize CVPR20 &\scriptsize ECCV20 &\scriptsize ECCV20 &\scriptsize ECCV20 &\scriptsize ECCV20 &\scriptsize ECCV20 &\scriptsize ECCV20 &\scriptsize ECCV20& \scriptsize Ours\\ & &\cite{PCF}&\cite{MMCI}&\cite{CPFP} &\cite{DRMA} &\cite{D3Net} &\cite{SSF} &\cite{A2dele} &\cite{UCNet} &\cite{JLDCF} &\cite{cmMS} &\cite{CoNet} &\cite{PGAR} &\cite{Cas-Gnn} &\cite{DANet} &\cite{HDFNet} &\cite{BBSNet} &- \\ \specialrule{0em}{1pt}{0pt} \hline\hline \specialrule{0em}{0pt}{1pt} \multirow{4}{*}{\begin{sideways}\textit{NJU2K}\end{sideways}} & $S_{\alpha}\uparrow$ & 0.877 & 0.858 & 0.879 & 0.886 & 0.900 & 0.899 &0.868 &0.897 &0.903 &0.900 &0.895 &0.909 &0.912 &0.891 &0.908 &0.921 & \bf0.921\\ & $F_{\beta}^{\rm max}\uparrow$ & 0.872 & 0.852 & 0.877 & 0.886 & 0.950 &0.896 &0.872 &0.895 &0.903 &0.897 &0.892 &0.907 &0.916 &0.880 &0.910 &0.920 & \bf{0.924} \\ & $E_{\xi}^{\rm max}\uparrow$ & 0.924 & 0.915 & 0.926 & 0.927 &0.950 & 0.935 &0.914 &0.936 &0.944 &0.936 &0.937 &0.940 &0.948 &0.932 &0.944 &0.949 &\bf{0.954} \\ & $\mathcal{M}\downarrow$ & 0.059 & 0.079 & 0.053 & 0.051 & 0.041 &0.043 &0.052 &0.043 &0.043 &0.044 &0.047 &0.042 &0.036 &0.048 &0.039 &\bf0.035 &{0.036} \\ \midrule \multirow{4}{*}{\begin{sideways}\textit{NLPR}\end{sideways}} & $S_{\alpha}\uparrow$ & 0.874 & 0.856 &0.888 & 0.899 & 0.912 &0.914 &0.890 &0.920 &0.925 &0.915 &0.908 &0.930 &0.920 &0.915 &0.923 &0.930 &\bf{0.934} \\ & $F_{\beta}^{\rm max}\uparrow$ & 0.841 & 0.815 &0.867 & 0.879 & 0.897 &0.896 &0.875 &0.903 &0.916 &0.896 &0.887 &0.916 &0.906 &0.901 &0.917 &0.918 &\bf{0.923} \\ & $E_{\xi}^{\rm max}\uparrow$ & 0.925 & 0.913 &0.932 & 0.947 & 0.953 &0.953 &0.937 &0.956 &0.961 &0.949 &0.945 &0.961 &0.955 &0.953 &0.963 &0.961 &\bf {0.965} \\ & $\mathcal{M}\downarrow$ & 0.044 & 0.059 & 0.036 &0.031 &0.025 &0.026 &0.031 &0.025 &\bf0.022 &0.027 &0.031 &0.024 &0.025 &0.029 &0.023 &0.023 &{0.023} \\ \midrule \multirow{4}{*}{\begin{sideways}\textit{STERE}\end{sideways}} & $S_{\alpha}\uparrow$& 0.875 & 0.873 & 0.879 & 0.835 & 0.899 &0.893 &0.885 &0.903 &0.905 &0.895 &0.908 &0.907 &0.899 &0.892 &0.900 &0.908 &\bf{0.915} \\ & $F_{\beta}^{\rm max}\uparrow$ & 0.860 & 0.863 & 0.874 & 0.847 & 0.891 &0.890 &0.885 &0.899 &0.901 &0.891 &0.904 &0.898 &0.901 &0.881 &0.900 &0.903 &\bf{0.911} \\ & $E_{\xi}^{\rm max}\uparrow$& 0.925 & 0.927 & 0.925 & 0.911 & 0.938 &0.936 &0.935 &0.944 &0.946 &0.937 &0.948 &0.939 &0.944 &0.930 &0.943 &0.942 &\bf{0.949} \\ & $\mathcal{M}\downarrow$ & 0.064 & 0.068 & 0.051 & 0.066 & 0.046 &0.044 &0.043 &0.039 &0.042 &0.042 &0.040 &0.041 &0.039 &0.048 &0.042 &0.041 &\bf{0.038} \\ \midrule \multirow{4}{*}{\begin{sideways}\textit{RGBD135}\end{sideways}} & $S_{\alpha}\uparrow$ & 0.842 & 0.848 & 0.872 & 0.900 & 0.898 &0.905&0.884 &0.934 &0.929 &0.932 &0.910 &0.913 &0.899 &0.904 &0.926 &0.933 &\bf{0.943} \\ & $F_{\beta}^{\rm max}\uparrow$ & 0.804 & 0.822 & 0.846 & 0.888 & 0.885 &0.883 &0.873 &0.930 &0.919 &0.922 &0.896 &0.902 &0.896 &0.894 &0.921 &0.927 &\bf{0.940} \\ 
& $E_{\xi}^{\rm max}\uparrow$ & 0.893 & 0.928 & 0.923 & 0.943 & 0.946 &0.941 &0.920 &0.976 &0.968 &0.970 &0.945 &0.945 &0.942 &0.957 &0.970 &0.966 &\bf{0.979} \\ & $\mathcal{M}\downarrow$ & 0.049 & 0.065 & 0.038 & 0.030 & 0.031 &0.025 &0.030 &0.019 &0.022 &0.020 &0.029 &0.026 &0.026 &0.029 &0.022 &0.021 &\bf{0.018} \\ \midrule \multirow{4}{*}{\begin{sideways}\textit{LFSD}\end{sideways}} & $S_{\alpha}\uparrow$ & 0.786 & 0.787 & 0.828 & 0.839 & 0.825 &0.859 &0.834 &0.864 &0.854 &0.849 &0.862 &0.853 &0.847 &0.845 &0.854 &0.864 &\bf{0.867}\\ & $F_{\beta}^{\rm max}\uparrow$ & 0.775 & 0.771 & 0.826 & 0.852 & 0.810 &0.867 &0.832 &0.864 &0.862 &0.869 &0.859 &0.843 &0.847 &0.846 &0.862 &0.859 &\bf{0.874}\\ & $E_{\xi}^{\rm max}\uparrow$ & 0.827 & 0.839 & 0.863 & 0.893 & 0.862 &0.900 &0.874 &0.905 &0.893 &0.896 &0.906 &0.890 &0.888 &0.886 &0.896 &0.901 &\bf{0.906}\\ & $\mathcal{M}\downarrow$ & 0.119 & 0.132 & 0.088 & 0.083 & 0.095 &\bf 0.066 &0.077 &\bf0.066 &0.078 &0.074 &0.071 &0.075 &0.074 &0.083 &0.077 &0.072 &{0.070}\\ \midrule \multirow{4}{*}{\begin{sideways}\textit{SIP}\end{sideways}} & $S_{\alpha}\uparrow$ & 0.842 & 0.833 &0.850 & 0.806 & 0.860 &0.874 &0.829 &0.875 &0.879 &0.867 &0.858 &0.876 &0.842 &0.878 &0.886 &0.879 &\bf{0.896} \\ & $F_{\beta}^{\rm max}\uparrow$ & 0.838 & 0.818 &0.851 & 0.821 & 0.861 &0.880 &0.834 &0.879 &0.885 &0.871 &0.867 &0.876 &0.848 &0.884 &0.894 &0.883 &\bf{0.901} \\ & $E_{\xi}^{\rm max}\uparrow$ & 0.901 & 0.897 &0.903 & 0.875 & 0.909 &0.921 &0.889 &0.919 &0.923 &0.907 &0.913 &0.915 & 0.890 &0.920 &0.930 &0.922 &\bf{0.933}\\ & $\mathcal{M}\downarrow$ & 0.071 & 0.086 &0.064 & 0.085 & 0.063 &0.053 &0.070 &0.051 &0.051 &0.061 &0.063 &0.055 &0.068 &0.054 &0.048 &0.055 &\bf{0.044}\\ \bottomrule \hline \end{tabular} \vspace{-8pt} \end{table*} Given the RGB features $bf_{r}$ at a certain hierarchy, we first compute the corresponding spatial attention map $SA_{r}$ as: \begin{gather} SA_{r}=Sigmoid(Conv_{sr}(bf_{r})), \end{gather} where $Sigmoid$ denotes the sigmoid activation function, and $Conv_{sr}$ represents a ($3\times3, 1$) convolutional layer with a single-channel output. Next, the resulting spatial attentive cue $SA_{r}$ is transferred to the depth branch, which is mainly implemented by element-wise multiplication (denoted by the symbol ``$\times$''). Before the multiplication, $SA_{r}$ is augmented by a term $SA_{r} \times SA_{d}$, where $SA_{d}$ is the counterpart from the depth branch, in order to preserve a certain degree of modality individuality. Therefore, the depth features compensated by RGB information are formulated as: \begin{gather} cf_{d}=(SA_{r}+SA_{r}\times SA_{d})\times bf_{d}, \end{gather} where $bf_{d}$ are the corresponding depth features. After this stage, the features of each modality are spatially compensated by the information from the other modality. Next, the obtained features $cf_{d}$ are selected along the channel dimension, which is implemented by a typical channel-attention operation\cite{CBAM}: \begin{gather} CA_{d}=Softmax(Conv_{cd}(GAP(cf_{d}))),\\ f_{d}=CA_{d}\times cf_{d}, \end{gather} where $f_{d}$ denote the features that BTS outputs, as in Fig.~\ref{blockdiagram} and Fig.~\ref{BTSdiagram}. $CA_{d}$ denotes the channel weight vector, $GAP$ is the global average pooling operation, $Softmax$ denotes the softmax function, and $Conv_{cd}$ is a $1\times1$ convolution whose input and output channel numbers are equal. Note that after the entire transfer and selection stages, our BTS preserves the channel and spatial dimensions of the features.
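For concreteness, a minimal PyTorch sketch of one BTS module implementing Eqs.~(1)-(4) and their mirrored counterparts is given below. This is an illustrative re-implementation under our reading of the equations, not the released model; the names of the symmetric-direction convolutions ($Conv_{sd}$, $Conv_{cr}$) are assumed by symmetry.
\begin{verbatim}
import torch
import torch.nn as nn

class BTS(nn.Module):
    """Bi-directional transfer-and-selection, cf. Eqs. (1)-(4)."""
    def __init__(self, ch_r, ch_d):
        super().__init__()
        self.conv_sr = nn.Conv2d(ch_r, 1, 3, padding=1)  # spatial att., RGB
        self.conv_sd = nn.Conv2d(ch_d, 1, 3, padding=1)  # spatial att., depth
        self.conv_cr = nn.Conv2d(ch_r, ch_r, 1)          # channel att., RGB
        self.conv_cd = nn.Conv2d(ch_d, ch_d, 1)          # channel att., depth
        self.gap = nn.AdaptiveAvgPool2d(1)               # GAP

    def forward(self, bf_r, bf_d):
        sa_r = torch.sigmoid(self.conv_sr(bf_r))         # Eq. (1)
        sa_d = torch.sigmoid(self.conv_sd(bf_d))
        # Bi-directional transfer, Eq. (2) and its mirror:
        cf_d = (sa_r + sa_r * sa_d) * bf_d
        cf_r = (sa_d + sa_d * sa_r) * bf_r
        # Channel selection, Eqs. (3)-(4):
        ca_d = torch.softmax(self.conv_cd(self.gap(cf_d)), dim=1)
        ca_r = torch.softmax(self.conv_cr(self.gap(cf_r)), dim=1)
        return ca_r * cf_r, ca_d * cf_d                  # f_r, f_d
\end{verbatim}
As the code makes explicit, the outputs keep the shapes of the inputs, which is what permits stacking a BTS module between any two backbone hierarchies.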
This dimension preservation makes the proposed BTS applicable in a ``plug-and-play'' manner to most parallel independent encoders (Fig. \ref{class} (a)). We also note that we choose not to use the widely adopted residual attention strategy \cite{ResidualAN}, which would add the attended features $f_d$ ($f_r$) to the original features $bf_d$ ($bf_r$). This is because the residual connection may limit the extent to which complementary information can be transferred; our design instead allows the encoder to determine this extent adaptively. Ablation experiments in Section \ref{sec:AblationStudy} show that more improvement is obtained without this residual connection. \textbf{Group Decoder}. Our group decoder is characterized by feature grouping and three-way supervision. As is well known, deeper features from a convolutional neural network encode high-level knowledge that helps locate objects, whereas shallower features characterize low-level edge details. Our motivation for grouping is that same-level features have better compatibility, which facilitates subsequent decoding. Therefore, after visualizing the features extracted by the encoder, we roughly divide the 12 hierarchical features into four groups, \emph{i.e.}, high-level RGB features ($f_{r}^{i}, i=3,4,5$), low-level RGB features ($f_{r}^{i}, i=0,1,2$), high-level depth features ($f_{d}^{i}, i=3,4,5$), and low-level depth features ($f_{d}^{i}, i=0,1,2$). During decoding, we first conduct feature merging within each group to save memory and computation cost. The 12 hierarchical features, denoted by $f_{m}^{i}$ and having different channel numbers, are first all transformed into unified $k$-channel features $f_{mt}^{i}$ (in practice $k=256$) by a process consisting of convolution, BatchNorm, and ReLU. Such a process is denoted by ``BConv'' in Fig.~\ref{blockdiagram}. They are then grouped into the four types according to their properties, \emph{i.e.}, low-/high-level features and modalities, which can be defined as:\begin{gather} f_{m}^{h}=f_{mt}^{3}+f_{mt}^{4}+f_{mt}^{5},\\ f_{m}^{l}=f_{mt}^{0}+Up(f_{mt}^{1})+Up(f_{mt}^{2}), \end{gather} where the subscript $m\in\{r,d\}$ indicates the RGB/depth modality, and $Up$ is the bilinear up-sampling operation. Then we utilize the grouped features $f_{m}^{h},f_{m}^{l}, m\in\{r,d\}$ to predict three ultimate saliency maps. To achieve the fused saliency prediction $S_{c}$, we excavate cross-modal complementarity by multiplication and addition at different levels, which guarantees explicit information fusion across RGB and depth. The fused features at different levels are then concatenated and fed to a prediction head. The above operations can be summarized as: \begin{gather} f_{c}^{h}=BConv([f_{r}^{h}\times f_{d}^{h},f_{r}^{h}+f_{d}^{h}]),\\ f_{c}^{l}=BConv([f_{r}^{l}\times f_{d}^{l},f_{r}^{l}+f_{d}^{l}]),\\ S_{c}=P([Up(f_{c}^{h}),f_{c}^{l}]), \end{gather} where $P$ is a prediction head consisting of two ``BConv'' units, a ($1\times 1$, 1) convolution, a sigmoid layer, and an up-sampling operation, $[\cdot]$ denotes concatenation, and $BConv$ is the ``BConv'' process mentioned before. Moreover, in order to enhance feature learning efficacy and avoid degradation in BTS, allowing both branches to fully play their roles, we impose extra supervision on both the RGB and depth branches simultaneously.
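As an illustration of Eqs.~(5)-(9), a condensed sketch of the grouping and fused prediction path is given below; it follows our reading of the equations (the kernel size inside ``BConv'' and the exact composition of the head $P$ are assumptions), and hierarchies 3--5 are taken to share one spatial resolution and hierarchies 0--2 another, as implied by Eq.~(5).
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def bconv(cin, cout):  # "BConv": convolution + BatchNorm + ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def up(x, ref):        # bilinear up-sampling to the size of ref
    return F.interpolate(x, size=ref.shape[2:], mode='bilinear',
                         align_corners=False)

class GroupDecoderC(nn.Module):
    """Fused path of the group decoder, cf. Eqs. (5)-(9)."""
    def __init__(self, k=256):
        super().__init__()
        self.fuse_h = bconv(2 * k, k)                    # Eq. (7)
        self.fuse_l = bconv(2 * k, k)                    # Eq. (8)
        self.head = nn.Sequential(bconv(2 * k, k), bconv(k, k),
                                  nn.Conv2d(k, 1, 1), nn.Sigmoid())

    def forward(self, fr, fd):   # fr, fd: lists of six k-channel maps
        fr_h = fr[3] + fr[4] + fr[5]                        # Eq. (5)
        fd_h = fd[3] + fd[4] + fd[5]
        fr_l = fr[0] + up(fr[1], fr[0]) + up(fr[2], fr[0])  # Eq. (6)
        fd_l = fd[0] + up(fd[1], fd[0]) + up(fd[2], fd[0])
        fc_h = self.fuse_h(torch.cat([fr_h * fd_h, fr_h + fd_h], 1))
        fc_l = self.fuse_l(torch.cat([fr_l * fd_l, fr_l + fd_l], 1))
        return self.head(torch.cat([up(fc_h, fc_l), fc_l], 1))  # S_c
\end{verbatim}
The branch-wise predictions $S_r$ and $S_d$ discussed next reuse the same form of head on the grouped features of a single modality.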
The two saliency maps $S_{r}$ and $S_{d}$ are generated from the individual branches using their own features: \begin{gather} S_{r}=P([Up(f_{r}^{h}),f_{r}^{l}]),~~S_{d}=P([Up(f_{d}^{h}),f_{d}^{l}]), \end{gather} where $P$, $Up$ and $[\cdot]$ are defined as in Eqs. (7)-(9). \textbf{Supervision}. Similar to previous works\cite{JLDCF,BBSNet,HDFNet,D3Net}, we use the standard cross-entropy loss to impose three-way supervision on $S_r$, $S_d$ and $S_c$, which is formulated as: \begin{gather} \mathcal{L}_{total}=\sum\limits_{m\in\{r,d,c\}}\lambda_{m}\mathcal{L}_{bce}(S_m,G), \end{gather} where $\mathcal{L}_{total}$ is the total loss, $\mathcal{L}_{bce}$ is the binary cross-entropy loss, $G$ denotes the ground truth, and $\lambda_{m}$ weights each supervision term. We set $\lambda_{c}=1$ and $\lambda_{r}=\lambda_{d}=0.5$ in our experiments. During inference, $S_{c}$ is used as the final prediction. \section{EXPERIMENTS} \vspace{-0.2cm} \subsection{Datasets, Metrics and Implementation Details}\vspace{-0.2cm} We test BTS-Net on six widely used RGB-D datasets, \emph{i.e.}, NJU2K, NLPR, STERE, RGBD135, LFSD, and SIP. Following \cite{JLDCF,UCNet,CPFP}, we use the same 1500 samples from NJU2K and 700 samples from NLPR for training, and the remaining samples for testing. Four metrics are adopted for evaluation, including S-measure ($S_{\alpha}$), maximum E-measure ($E_{\xi}^{\rm max}$), maximum F-measure ($F_{\beta}^{\rm max}$), and mean absolute error (MAE, $\mathcal{M}$). We implemented BTS-Net in PyTorch, and each input RGB-depth pair is resized to $352\times 352$ resolution. The learning rate is set to 1e-4 for the Adam optimizer and is decayed by a factor of 10 after 60 epochs. The batch size is set to 10, and the model is trained for 100 epochs in total. \vspace{-0.2cm} \subsection{Comparison with State-of-the-Arts}\vspace{-0.2cm} To demonstrate the effectiveness of the proposed method, we compare it with 16 state-of-the-art (SOTA) methods, \emph{i.e.}, PCF \cite{PCF}, MMCI \cite{MMCI}, CPFP \cite{CPFP}, DMRA \cite{DRMA}, D3Net \cite{D3Net}, SSF \cite{SSF}, A2dele \cite{A2dele}, UCNet \cite{UCNet}, JL-DCF \cite{JLDCF}, cmMS \cite{cmMS}, CoNet \cite{CoNet}, PGAR \cite{PGAR}, Cas-Gnn \cite{Cas-Gnn}, DANet \cite{DANet}, HDFNet \cite{HDFNet}, and BBS-Net \cite{BBSNet}. Quantitative results are shown in Table~\ref{table:QuantitativeResults}. It can be seen that our BTS-Net achieves superior performance over the SOTAs consistently on almost all metrics. Fig.~\ref{visualcomparison} further shows several visual comparisons of BTS-Net with the latest representative models. From top to bottom, the quality of the depth maps varies from poor to good: (a) the depth almost misses the entire object; (b) the depth lacks details of the bird's head and feet; (c) the depth has good contrast but adjoins non-salient regions; (d) the depth is relatively good but the RGB has low contrast. BTS-Net performs well and robustly in all the above cases, especially in (a) and (b), where the depth is of low quality and misses information. \begin{figure} \centering \centerline{\epsfig{figure=fig/visual_comparison,width=0.48\textwidth}} \vspace{-0.3cm} \caption{Visual comparisons with SOTA RGB-D SOD models.}\vspace{-0.3cm} \label{visualcomparison} \end{figure} \begin{table}[t] \centering \caption{Results of different interaction strategies.
Details are in Section~\ref{sec:AblationStudy}: ``Interaction Directions of BTS''.}\vspace{-0.2cm} \label{table:AS1} \vspace{8pt} \footnotesize \renewcommand{\tabcolsep}{0.4mm} \begin{tabular}{c|c|c|ccc|ccc|ccc} \hline\toprule \multirow{2}{*}{\#} &\multirow{2}{*}{Direction} &\multirow{2}{*}{Res} &\multicolumn{3}{c|}{\textbf{NJU2K}} &\multicolumn{3}{c|}{\textbf{STERE}} &\multicolumn{3}{c}{\textbf{SIP}} \\ && &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ \\ \midrule 1 &None & &0.868 &0.862 &0.064 &0.730 &0.685 &0.117 &0.873 &0.875 &0.060 \\ 2 &R$\leftarrow$D & &0.912 &0.914 &0.040 &0.892 &0.888 &0.047 &0.891 &0.896 &0.048 \\ 3 &R$\rightarrow$D & &0.920 &0.921 &\bf{0.035} &0.911 &0.905 &0.039 &0.890 &0.895 &0.047 \\ 4 &R$\leftrightarrow$D & &\bf{0.921} &\bf{0.924} &0.036 &\bf{0.915} &\bf{0.911} &\bf{0.038} &\bf{0.896} &\bf{0.901} &\bf{0.044} \\ 5 &R$\leftrightarrow$D &\checkmark &0.918 &0.918 &0.037 &0.912 &0.909 &0.038 &0.890 &0.896 &0.048 \\ \bottomrule \hline \end{tabular} \vspace{-8pt} \end{table} \vspace{-0.5cm} \subsection{Ablation Study} \label{sec:AblationStudy} \vspace{-0.1cm} \textbf{Interaction Directions of BTS.} To validate the rationality of the proposed BTS module, we set up five experiments with different settings. Notation ``R$\leftarrow$D'' means introducing depth into the RGB branch, and ``R$\rightarrow$D'' means the reverse. ``R$\leftrightarrow$D'' means the proposed bi-directional interaction. ``Res'' means introducing residual connections into BTS for both branches, as mentioned in Section \ref{sec:method}. For a fair comparison, these settings are obtained by only switching connections inside BTS while keeping the main components (\emph{e.g.}, spatial and channel attention) unchanged. Ablation results are shown in Table~\ref{table:AS1}, where row \#1 means no interaction exists between the two branches, leading to the worst results. Rows \#2 and \#3 are better than \#1, showing that uni-directional interaction is better than none. Notably, row \#3 shows much better results than \#2 on the STERE dataset, indicating that transferring RGB to depth can mitigate the influence of inaccurate depth\footnote{According to our observation and also \cite{JLDCF}, the depth quality of STERE is relatively poor among the six datasets. Image (a) in Fig. \ref{visualcomparison} is from STERE.}, which supports our claim. Comparing row \#4 (the default BTS) to \#2 and \#3, the improvement is consistent and notable. This validates the proposed bi-directional interaction strategy in BTS. Lastly, row \#5 yields no boost over \#4. This may be caused by the limitation on transfer ability brought by the residual connection. Fig. \ref{heatmap} shows a comparative example of visualized features from settings \#1 and \#4, where one can see that \#4 results in more robust features as well as a better final saliency map. \begin{figure} \centering \centerline{\epsfig{figure=fig/heatmap,width=0.48\textwidth}} \vspace{-0.3cm} \caption{Visualized features ($f_{r}^{3}$ and $f_{d}^{3}$ in BTS-Net) from setting \#4 (with BTS) and \#1 (w/o BTS) in Table \ref{table:AS1}.}\vspace{-0.3cm} \label{heatmap} \end{figure} \begin{table}[t] \centering \caption{Results of different internal attention designs.
Details are in Section~\ref{sec:AblationStudy}: ``Internal Attention Designs of BTS''.} \vspace{-0.2cm} \label{table:AS2} \vspace{8pt} \footnotesize \renewcommand{\tabcolsep}{0.8mm} \begin{tabular}{c|ccc|ccc|ccc} \hline\toprule \multirow{2}{*}{\textbf{Settings}} &\multicolumn{3}{c|}{\textbf{NJU2K}} &\multicolumn{3}{c|}{\textbf{STERE}} &\multicolumn{3}{c}{\textbf{SIP}} \\ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ \\ \midrule Only SA &0.914 &0.917 &0.039 &0.903 &0.900 &0.044 &0.887 &0.892 &0.050 \\ CA-SA &0.914 &0.915 &0.039 &0.901 &0.896 &0.044 &0.892 &0.899 &0.047 \\ SA-CA &\bf{0.921} &\bf{0.924} &\bf{0.036} &\bf{0.915} &\bf{0.911} &\bf{0.038} &\bf{0.896} &\bf{0.901} &\bf{0.044} \\ \bottomrule \hline \end{tabular} \vspace{-8pt} \end{table} \begin{table}[t] \centering \caption{Results from the U-net and our group decoder (GD). Details are in Section~\ref{sec:AblationStudy}: ``Light-weight Group Decoder''.}\vspace{-0.2cm} \label{table:AS3} \vspace{8pt} \footnotesize \renewcommand{\tabcolsep}{0.18mm} \begin{tabular}{c|c|ccc|ccc|ccc} \hline\toprule \multirow{2}{*}{Decoder} &\multirow{2}{*}{Parameters} &\multicolumn{3}{c|}{\textbf{NJU2K}} &\multicolumn{3}{c|}{\textbf{STERE}} &\multicolumn{3}{c}{\textbf{SIP}} \\ & &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ &$S_{\alpha}$ & $F_{\beta}^{\rm max}$ &$\mathcal{M}$ \\ \midrule U-net &32.4M&0.913 &0.913 &0.041 &0.908 &0.902 &0.042 &0.889 &0.892 &0.049 \\ GD-D &1.8M&0.912 &0.910 &0.041 &0.904 &0.895 &0.044 &0.892 &0.895 &0.047 \\ GD-R &1.8M&0.917 &0.918 &0.038 &0.914 &0.909 &0.039 &0.892 &0.897 &0.047 \\ GD-C &4.1M &\bf{0.921} &\bf{0.924} &\bf{0.036} &\bf{0.915} &\bf{0.911} &\bf{0.038} &\bf{0.896} &\bf{0.901} &\bf{0.044} \\ \bottomrule \hline \end{tabular} \vspace{-8pt} \end{table} \textbf{Internal Attention Designs of BTS.} To validate the current attention design in BTS, we also set up three different experiments, whose results are shown in Table~\ref{table:AS2}. The notations ``Only SA'', ``CA-SA'', and ``SA-CA'' denote: applying only spatial attention without channel attention, reversing the order of the spatial and channel attention (\emph{i.e.}, the latter comes first), and the default design of BTS-Net (\emph{i.e.}, spatial attention comes first), respectively. Comparing the default design, namely SA-CA, to the other two variants, one can see that it consistently achieves the best performance. These comparative experiments show that the order of spatial and channel attention is crucial for introducing attention-aware interactions, and that combining channel attention with bi-directional spatial attention transfer is effective. \textbf{Light-weight Group Decoder.} To validate our light-weight group decoder, we evaluate four decoder configurations. Performance during inference is shown in Table~\ref{table:AS3}, where ``U-net'' denotes the results generated by a typical U-net decoder. Basically, we first concatenate RGB and depth features at the same hierarchies and then feed the concatenated 512-channel features to a typical U-net decoder consisting of progressive up-sampling, concatenation, and convolution, at the end of which the same prediction head is applied to obtain the saliency map $S_c$. Note that in this experiment, the three-way supervision was preserved.
The notations ``GD-D'', ``GD-R'', and ``GD-C'' denote the decoder paths deployed to obtain the results $S_d$, $S_r$ and $S_c$ in BTS-Net, as shown in Fig. \ref{blockdiagram}. Note that the parameters of GD-D/GD-R come mainly from the prediction heads. From Table~\ref{table:AS3}, one can see that the proposed decoder, which consists of GD-D, GD-R and GD-C, has much fewer parameters and is more light-weight. Specifically, the parameters of GD-C are only $\sim$12.7\% of those of the U-net, and meanwhile GD-C, which outputs $S_c$, achieves the best performance. Also note that GD-D/GD-R are even lighter, since they only involve the prediction heads. We attribute the success of the proposed group decoder partly to the efficacy of the BTS modules in the encoder (with BTS, the encoder parameters increase from 80.3M to 91.5M), as the resulting robust features from the two branches make relatively simple decoding possible. Also, the superior performance of GD-C compared to GD-D and GD-R shows that fusing encoder features of the two modalities is essential for better RGB-D SOD. \vspace{-0.2cm} \section{Conclusion}\vspace{-0.2cm} We introduce BTS-Net, the first RGB-D SOD model that adopts bi-directional interactions between RGB and depth in the encoder. A light-weight group decoder is proposed to collaborate with the encoder in order to achieve high-quality saliency maps. Comprehensive comparisons to SOTA approaches as well as ablation experiments have validated the proposed bi-directional interaction strategy, the internal designs of the BTS module, and the group decoder. Since BTS can be applied in a ``plug-and-play'' fashion, it will be interesting to use it to boost existing models in the future. \vspace{-2pt} \small{\vspace{.1in}\noindent\textbf{Acknowledgments.}\quad This work was supported by the NSFC, under No. 61703077, 61773270, 61971005, the Chengdu Key Research and Development Support Program (2019-YF09-00129-GX), and the SCU-Luzhou Municipal People's Government Strategic Cooperation Project (No. 2020CDLZ-10).} \footnotesize \bibliographystyle{IEEEbib}
\section{Introduction} \subsection{Setting and Main Theorem} We will study periodic finite-range Schr\"{o}dinger operators of the form \begin{equation} \label{eq:A+Vdef} H = A + V, \end{equation} acting in $\ell^2({\mathbbm{Z}}^d)$, where $V$ is periodic and $A$ is a Toeplitz operator given by \[[A\psi]_n = \sum_{m \in {\mathbbm{Z}}^d} a_{n-m} \psi_m.\] Here, $\{a_n\}_{n\in{\mathbbm{Z}}^d}$ is finitely supported, and $V$ will as usual denote both the potential $V:{\mathbbm{Z}}^d \to {\mathbbm{C}}$ and the corresponding multiplication operator $[V\psi]_n = V_n\psi_n$. We say that $V$ is $q$-periodic for $q=(q_1,\ldots,q_d) \in {\mathbbm{N}}^d$ if $V_{n+q_je_j} = V_n$ for all $n \in {\mathbbm{Z}}^d$ and each $1 \le j \le d$, where $e_j$ denotes the standard $j$th basis vector. In particular, let us note that the approach discussed herein does not rely on reality of the potential or self-adjointness of $A$. The case in which \[ a_n = \begin{cases} - 1 & n = \pm e_j \text{ for some } 1 \le j \le d \\ 0 & \text{otherwise} \end{cases} \] corresponds to $A = -\Delta$, the discrete Laplacian, considered in \cite{LiuPreprint:Irreducibility}. Our main result is irreducibility of the Bloch variety for all operators of the form \eqref{eq:A+Vdef} subject to a suitable condition on $A$. In particular, under mild assumptions on $A$, the result holds universally for all periodic $V$, including complex-valued potentials. Starting with $A$, we generate the Laurent polynomial \begin{equation}\label{eq:pdefinition} p(z) = p_A(z) = \sum_{n \in {\mathbbm{Z}}^d} a_n z^n, \end{equation} where we employ the standard multi-index notation $z^n = z_1^{n_1}\cdots z_d^{n_d}$. Let us state our assumptions here. For further definitions and details, we refer the reader to Section~\ref{sec:mainresult} (where we precisely define the component of lowest degree, the character action $\mu_n$, and the fundamental domain $W$). Let $h$ denote the lowest degree component of $p$ in the sense that $p(z)=h(z)+\text{higher order terms}$. Our main assumptions are the following: \begin{enumerate} \item[($A_1$)] The degree of $h$ is negative. \item[($A_2$)] The polynomials $h(\mu_n z)$, $n \in W$, are pairwise distinct (cf.~\eqref{eq:fundcelldef}, \eqref{action}, and \eqref{eq:characterActionDef}). \end{enumerate} \begin{theorem}\label{t:blochIrr} Let $q=(q_1,q_2,\ldots,q_d)$ be given and let $V$ be $q$-periodic. If $p_A$ satisfies Assumptions~{\rm\ref{assump1}} and {\rm\ref{assump2}}, then the Bloch variety of $H = A+V$ is irreducible {\rm(}modulo periodicity{\rm)}. \end{theorem} \begin{remark} Assumption \ref{assump1} only depends on $p_A$, whereas Assumption \ref{assump2} depends on $p_A$ and $q$ (via the character action). \end{remark} Theorem~\ref{t:blochIrr} is the main motivation for this work. It will follow from a more general result formulated in Theorem~\ref{mainthm} below. The above assumptions are satisfied and straightforward to verify in many cases of interest. To illustrate the variety of applications, we enumerate some corollaries. We first note that Theorem \ref{t:blochIrr} provides a direct proof of the irreducibility of the Bloch variety for all discrete Schr\"odinger operators on ${\mathbbm{Z}}^d$. \begin{coro} \label{coro:square} If $A = -\Delta$ denotes the Laplacian on $\ell^2({\mathbbm{Z}}^d)$, then for any periodic $V$, the Bloch variety of $A+V$ is irreducible {\rm(}modulo periodicity{\rm)}.
\end{coro} The above corollary was first proved by Liu in \cite{LiuPreprint:Irreducibility} as a consequence of irreducibility of the Fermi variety in the case $A = -\Delta$ (away from one energy in $d=2$ and for all energies in $d \geq 3$). See Corollary~\ref{corbv1}. Thus, we supply an alternative argument, working directly on the Bloch variety. More significantly, Theorem \ref{t:blochIrr} also enables one to prove irreducibility of the Bloch variety for other lattice geometries in arbitrary dimension. To remain concrete, we present a couple of two-dimensional examples, but the reader may readily recognize from the proofs that many generalizations are possible. \begin{coro} \label{coro:ehm} If $A$ denotes the Laplacian on the extended Harper lattice, then for any periodic $V$, the Bloch variety of $A+V$ is irreducible {\rm(}modulo periodicity{\rm)}. \end{coro} \begin{coro} \label{coro:tri} If $A$ denotes the Laplacian on the triangular lattice, then for any periodic $V$, the Bloch variety of $A+V$ is irreducible {\rm(}modulo periodicity{\rm)}. \end{coro} Generally speaking, irreducibility of the Bloch variety is potentially sensitive to modifications in the hopping terms. To the best of our knowledge, even the results of Corollaries \ref{coro:ehm} and \ref{coro:tri} are new. For further details, including definitions of the triangular and extended Harper lattices, see Section~\ref{sec:examples}. To emphasize the distinction between the above models, we present the corresponding polynomials below, recalling that Equation~\eqref{eq:pdefinition} provides the dictionary between $A$ and $p_A$. \begin{enumerate}[label=(\roman*)] \item For the discrete Laplacian on ${\mathbbm{Z}}^d$, \begin{equation*}p_{-\Delta}(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\cdots+z_d+\frac{1}{z_d}\right) \end{equation*} \item For the extended Harper lattice, \[p_{\rm EHM}(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\frac{z_1}{z_2}+\frac{z_2}{z_1}+z_1z_2+\frac{1}{z_1z_2}\right)\] \item For the triangular lattice, \begin{equation*} p_{\rm tri}(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\frac{z_1}{z_2}+\frac{z_2}{z_1}\right). \end{equation*} \end{enumerate} In particular, in dimension $d=2$, $p_{\rm EHM}(z)$ adds to $p_{-\Delta}(z)$ next-nearest-neighbour terms and is symmetric with respect to the map $z_j \mapsto z^{-1}_{j}$ for $j=1,2$. The polynomial $p_{\rm tri}(z)$ does not possess this symmetry; nonetheless, the corresponding variety still falls within the scope of Theorem \ref{t:blochIrr}. The triangular lattice is depicted in Figure~\ref{fig:trilat}. Applying a simple shear transformation reduces the triangular lattice to the square lattice with additional edges, as shown in Figure~\ref{fig:trishear}, and hence places the Laplacian on the triangular lattice into the context of the paper after a suitable change of coordinates.
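As a concrete check of Assumption~\ref{assump1} for these three examples (here ``degree'' is the total degree of a Laurent monomial, as read off \eqref{eq:pdefinition}), isolating the components of lowest degree gives
\begin{align*}
h_{-\Delta}(z) &= -\sum_{j=1}^{d} \frac{1}{z_j}, & \deg h_{-\Delta} &= -1,\\
h_{\rm EHM}(z) &= -\frac{1}{z_1 z_2}, & \deg h_{\rm EHM} &= -2,\\
h_{\rm tri}(z) &= -\frac{1}{z_1}-\frac{1}{z_2}, & \deg h_{\rm tri} &= -1.
\end{align*}
In each case the degree of $h$ is negative, so Assumption~\ref{assump1} holds. Assumption~\ref{assump2} then amounts to the pairwise distinctness of the polynomials $h(\mu_n z)$, $n \in W$, which depends on the period $q$ through the character action (cf.~\eqref{eq:characterActionDef}).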
\begin{figure*}[h] \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[yscale=.84,xscale=.84] \filldraw[color=black, fill=black](-3,-3) circle (0.18); \filldraw[color=black, fill=black](-1,-3) circle (0.18); \filldraw[color=black, fill=black](1,-3) circle (0.18); \filldraw[color=black, fill=black](3,-3) circle (0.18); \filldraw[color=black, fill=black](-2,{sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](0,{sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](2,{sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](-3,{2*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](-1,{2*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](1,{2*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](3,{2*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](-2,{3*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](0,{3*sqrt(3)-3}) circle (0.18); \filldraw[color=black, fill=black](2,{3*sqrt(3)-3}) circle (0.18); \draw [-,line width = .05cm] (-3,-3) -- (3,-3); \draw [-,line width = .05cm] (-3,{sqrt(3)-3}) -- (3,{sqrt(3)-3}); \draw [-,line width = .05cm] (-3,{2*sqrt(3)-3}) -- (3,{2*sqrt(3)-3}); \draw [-,line width = .05cm] (-3,{3*sqrt(3)-3}) -- (3,{3*sqrt(3)-3}); \draw [-,line width=.05cm] (-3,{2*sqrt(3)-3}) -- (-2,{3*sqrt(3)-3}); \draw [-,line width=.05cm] (-3,-3) -- (0,{3*sqrt(3)-3}); \draw [-,line width=.05cm] (-1,-3) -- (2,{3*sqrt(3)-3}); \draw [-,line width=.05cm] (1,-3) -- (3,{2*sqrt(3)-3}); \draw [-,line width=.05cm] (-3,{2*sqrt(3)-3}) -- (-1,-3); \draw [-,line width=.05cm] (-2,{3*sqrt(3)-3}) -- (1,-3); \draw [-,line width=.05cm] (0,{3*sqrt(3)-3}) -- (3,-3); \draw [-,line width=.05cm] (2,{3*sqrt(3)-3}) -- (3,{2*sqrt(3)-3}); \draw [->,line width=.05cm,color=blue] (-2,{sqrt(3)-3}) -- (-1,{2*sqrt(3)-3}); \draw [->,line width=.06cm,color=blue] (-2,{sqrt(3)-3}) -- (-1,{2*sqrt(3)-3}); \draw [->,line width=.06cm,color=blue] (-2,{sqrt(3)-3}) -- (0,{sqrt(3)-3}); \node [above] at (-2,{sqrt(3)+.3-3}) {\cold{$\bm{b}_2$}}; \node [below] at (-1.3,{sqrt(3)-.1-3}) {\cold{$\bm{b}_1$}}; \end{tikzpicture} \caption{A portion of the triangular lattice}\label{fig:trilat} \end{minipage} \hfill \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[yscale=.75,xscale=.75] \filldraw[color=black, fill=black](-3,-3) circle (0.18); \filldraw[color=black, fill=black](-1,-3) circle (0.18); \filldraw[color=black, fill=black](1,-3) circle (0.18); \filldraw[color=black, fill=black](3,-3) circle (0.18); \filldraw[color=black, fill=black](-3,-1) circle (0.18); \filldraw[color=black, fill=black](-1,-1) circle (0.18); \filldraw[color=black, fill=black](1,-1) circle (0.18); \filldraw[color=black, fill=black](3,-1) circle (0.18); \filldraw[color=black, fill=black](-3,1) circle (0.18); \filldraw[color=black, fill=black](-1,1) circle (0.18); \filldraw[color=black, fill=black](1,1) circle (0.18); \filldraw[color=black, fill=black](3,1) circle (0.18); \filldraw[color=black, fill=black](-3,3) circle (0.18); \filldraw[color=black, fill=black](-1,3) circle (0.18); \filldraw[color=black, fill=black](1,3) circle (0.18); \filldraw[color=black, fill=black](3,3) circle (0.18); \draw [-,line width = .06cm] (-3,-3) -- (3,-3); \draw [-,line width = .06cm] (-3,-3) -- (-3,3); \draw [-,line width=.06cm] (-3,-1) -- (3,-1); \draw [-,line width = .06cm] (-1,-3) -- (-1,3); \draw [-,line width=.06cm] (-3,1) -- (3,1); \draw [-,line width = .06cm] (1,-3) -- (1,3); \draw [-,line width=.06cm] (-3,3) -- (3,3); \draw [-,line width = .06cm] (3,-3) -- (3,3); \draw [-,line 
width = .06cm] (-3,3) -- (3,-3); \draw [-,line width = .06cm] (-1,-3) -- (-3,-1); \draw [-,line width = .06cm] (1,-3) -- (-3,1); \draw [-,line width = .06cm] (-1,3) -- (3,-1); \draw [-,line width = .06cm] (3,1) -- (1,3); \end{tikzpicture} \caption{The triangular lattice after shearing.}\label{fig:trishear} \end{minipage} \end{figure*} \subsection{Definitions and Context} Let us now give the relevant definitions and context. Given $q_i\in {\mathbbm{N}}$, $i=1,2,\ldots,d$, let $\Gamma = \Gamma_q :=q_1{\mathbbm{Z}}\oplus q_2 {\mathbbm{Z}}\oplus\cdots\oplus q_d{\mathbbm{Z}}$. We say that a function $V: {\mathbbm{Z}}^d\to {\mathbbm{C}}$ is $q$-periodic ($\Gamma$-periodic, or just periodic) if $V_{n+\gamma} = V_n$ for all $n \in {\mathbbm{Z}}^d$ and all $\gamma\in \Gamma$. \begin{definition} Let ${\mathbbm{C}}^\star = {\mathbbm{C}}\setminus \{0\}$. For $z = (z_1,\ldots,z_d) \in ({\mathbbm{C}}^\star)^d$ and $q = (q_1,\ldots,q_d) \in {\mathbbm{N}}^d$, the space ${\mathscr{H}}(z,q)$ consists of those $\psi :{\mathbbm{Z}}^d \to {\mathbbm{C}}$ for which \begin{equation} \psi_{n+j\odot q} = z^j \psi_n \quad \forall\, n,j\in {\mathbbm{Z}}^d, \end{equation} where we write $j\odot q=(j_1q_1,\ldots,j_d q_d)$ and use the multi-index notation $z^j = z_1^{j_1} \cdots z_d^{j_d}$. Naturally, ${\mathscr{H}}(z,q)$ is a Hilbert space of finite dimension $Q:=q_1\cdots q_d$. If $V:{\mathbbm{Z}}^d \to {\mathbbm{C}}$ is $q$-periodic, the corresponding Bloch variety is given by \[ B=B(H) = \{(k,\lambda) \in {\mathbbm{C}}^{d+1} : H\psi = \lambda \psi \text{ admits a nonzero solution in } {\mathscr{H}}(e^{2\pi i k},q)\}, \] where we write $e^{2\pi i k} = (e^{2\pi i k_1},\ldots,e^{2\pi i k_d}) \in ({\mathbbm{C}}^\star)^d$. We employ here a standard abuse of notation in which $H$ represents both the self-adjoint operator in $\ell^2({\mathbbm{Z}}^d)$ and the difference operator acting in, say, $\ell^\infty({\mathbbm{Z}}^d)$. \end{definition} \begin{definition} Given $\lambda\in {\mathbbm{C}}$, the Fermi surface (variety) $F_{\lambda}(H)$ is defined as the level set of the Bloch variety: \begin{equation*} F_{\lambda}(H)=\{k\in{\mathbbm{C}}^d: (k,\lambda)\in B(H)\}. \end{equation*} \end{definition} We should mention that reducible Fermi and Bloch varieties are known to occur for periodic graph operators; see, e.g., \cite{shi1, fls}. One challenging problem in the study of periodic operators is to prove the (ir)reducibility of the Bloch and Fermi varieties~\cite{GKTBook, ktcmh90, bktcm91, bat1, batcmh92, ls, shi2, fls, GKToverview,shva}. For instance, if the Bloch variety is irreducible, then whenever $B(H)\cap U\neq \emptyset$ for some open set $U\subset {\mathbbm{C}}^{d+1}$, the knowledge of $B(H)\cap U$ allows one to recover all of $B(H)$. Besides its own importance in algebraic geometry, the (ir)reducibility of these varieties is crucial in the study of spectral properties of periodic elliptic operators. In particular, it has implications for the structure of spectral band edges~\cite{LiuPreprint:Irreducibility}, for questions of isospectrality~\cite{LiuPreprint:fermi}, and for the existence of embedded eigenvalues for operators perturbed by a local defect~\cite{kv06cmp, kvcpde20, shi1, IM14, AIM16,dks}. Based on existing evidence, Kuchment conjectures that the Bloch variety of any periodic second-order elliptic operator is irreducible \cite[Conjecture 5.17]{Kuchment2016BAMS}.
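Before reviewing prior work, let us illustrate these definitions with the simplest example. For the free Laplacian $H=-\Delta$ on ${\mathbbm{Z}}^d$, $[-\Delta\psi]_n = -\sum_{\|m-n\|_1=1}\psi_m$, with trivial periods $q=(1,\ldots,1)$, every $\psi \in {\mathscr{H}}(e^{2\pi i k},q)$ is a plane wave $\psi_n = e^{2\pi i \langle k,n\rangle}\psi_0$, and one computes directly that \[ B(-\Delta) = \Big\{(k,\lambda)\in {\mathbbm{C}}^{d+1} : \lambda = -2\sum_{j=1}^d \cos(2\pi k_j)\Big\}, \] with $F_\lambda(-\Delta)$ the corresponding level set; compare the symbol computed in Section~\ref{sec:floquet}.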
After a substantial amount of important work (see, e.g.,~\cite{bat1, batcmh92, battig1988toroidal, bktcm91, GKTBook, ktcmh90}), the irreducibility of the Fermi variety of discrete periodic Schr\"odinger operators came to be well understood in a recent paper of the second author~\cite{LiuPreprint:Irreducibility}. \begin{theorem}\label{gcf1} \cite{LiuPreprint:Irreducibility} Let $d\geq3$. Then the Fermi variety $F_{\lambda}(-\Delta+V)/{\mathbbm{Z}}^d$ is irreducible for any $\lambda\in {\mathbbm{C}}$. \end{theorem} Denote by $[V]$ the average of $V$ over one periodicity cell. \begin{theorem}\label{thm21} \cite{LiuPreprint:Irreducibility} Let $d=2$. Then the Fermi variety $F_{\lambda}(-\Delta+V)/{\mathbbm{Z}}^2$ is irreducible for any $\lambda\in {\mathbbm{C}}$ except for $\lambda=[V]$. Moreover, if $F_{[V]}(-\Delta+V)/{\mathbbm{Z}}^2$ is reducible, it has exactly two irreducible components. \end{theorem} \begin{remark} The statements in Theorem \ref{thm21} are sharp: when $d=2$ and $V$ is a constant function, $F_{[V]}(-\Delta+V)/{\mathbbm{Z}}^2$ has two irreducible components. \end{remark} \begin{coro}\label{corbv1} \cite{LiuPreprint:Irreducibility} Let $d\geq2$. Then the Bloch variety $B(-\Delta+V)$ is irreducible (modulo periodicity). \end{coro} We emphasize that the proof does not require $V$ to be real-valued. When $d=2$, Corollary \ref{corbv1} was proved by B{\"a}ttig \cite{battig1988toroidal}. In \cite{GKTBook}, Gieseker, Kn\"orrer and Trubowitz proved that $F_{\lambda}(-\Delta+V)/{\mathbbm{Z}}^2$ is irreducible except for finitely many values of $\lambda$, which immediately implies Corollary \ref{corbv1} for $d=2$. When $d=3$, Theorem \ref{gcf1} was proved by B{\"a}ttig \cite{batcmh92}. For continuous periodic Schr\"odinger operators, Kn\"orrer and Trubowitz proved that the Bloch variety is irreducible (modulo periodicity) when $d=2$ \cite{ktcmh90}. When the periodic potential is separable, B{\"a}ttig, Kn\"orrer and Trubowitz proved that the Fermi variety at any level is irreducible (modulo periodicity) for $d=3$ \cite{bktcm91}. The proofs in \cite{GKTBook,ktcmh90,bktcm91,bat1,batcmh92,battig1988toroidal} depend heavily on the construction of toroidal and directional compactifications of the Fermi and Bloch varieties. The perspective employed in the current manuscript is inspired by \cite{LiuPreprint:Irreducibility}, in which the second author introduced a new approach to the study of the Bloch and Fermi varieties on ${\mathbbm{Z}}^d$, $d\geq 2$. In general terms, the goal is to explicitly calculate ``asymptotics'' of the (Laurent) polynomials at $z\in\{z: z_j=0 \text{ or } z_j=\infty, j=1,2, \ldots,k\}$ and to show that these ``asymptotics'' contain enough information about the original variety. Concretely, the proof is based on a change of variables, the study of the lowest degree components of a family of (Laurent) polynomials in several variables, and degree arguments. With regard to the Bloch variety, we expand the approach of \cite{LiuPreprint:Irreducibility} in different directions. As a consequence, for the main result of Theorem~\ref{mainthm} below, the underlying lattice may be of very general nature and contain somewhat arbitrary finite-range connections (see Assumptions~\ref{assump1} and~\ref{assump2} below for precise statements).
In particular, we obtain irreducibility of the Bloch variety corresponding to periodic Schr\"{o}dinger operators on the triangular lattice and the extended Harper lattice; see Section~\ref{sec:examples} for a precise description of these examples. While our approach is inspired by \cite{LiuPreprint:Irreducibility}, we do not follow the same path. By working directly with the lowest degree components, we can eschew a discussion of asymptotic statements about the varieties themselves. The structure of the paper is as follows. We precisely formulate Theorem~\ref{mainthm}, our main result, in Section~\ref{sec:mainresult}. Section~\ref{sec:technical lemmas} contains preparatory technical results, which are then employed in Section~\ref{sec:mainproof} to prove Theorem~\ref{mainthm}. We elucidate the connection between this result and periodic operators in Section~\ref{sec:floquet}, which also contains some relevant background on periodic long-range Schr\"odinger operators. We also give the proof of Theorem~\ref{t:blochIrr} in Section~\ref{sec:floquet}. We conclude in Section~\ref{sec:examples} with some relevant examples and applications. \section{Main Result} \label{sec:mainresult} To state the main result, we begin by recalling some crucial terminology. \begin{definition} Suppose $f$ is a Laurent monomial in $m$ variables, that is, $f(z) =cz^a= cz_1^{a_1}z_2^{a_2}\cdots z_m^{a_m}$ with $a_i\in{\mathbbm{Z}}$ for $i=1, \ldots, m$ and $c \neq 0$. The degree of $f$ is defined as $\deg (f)=a_1+a_2+\cdots +a_m$. Abusing notation slightly, we also denote $\deg(a)=a_1+a_2+\cdots +a_m$ for the multi-index $a=(a_1,\ldots,a_m) \in {\mathbbm{Z}}^m$. \end{definition} \begin{definition}\label{lowestdfn} Given a Laurent polynomial \[p(z)= \sum c_az^{a},\] let $L_- = \min\{\deg(a) : c_a \neq 0\}$. Then, the \emph{lowest degree component} of $p$ is defined to be the Laurent polynomial \[h(z)=\sum_{\deg a =L_{-}}c_az^{a}.\] \end{definition} One of the crucial properties of this notion is the following: denoting the lowest degree component of a Laurent polynomial $p$ by $\underline{p}$, one has $\underline{(fg)} = \underline{f}\cdot\underline{g}$, which enables one to relate factorizations of a polynomial to factorizations of its lowest degree component. Obviously, some care is needed to deduce nontrivial consequences from this observation, but this is the first idea. Let us write ${\mathbbm{C}}[z_1,\ldots,z_m]=:{\mathbbm{C}}[z]$ for the set of polynomials in $z_1,\ldots,z_m$. Similarly, we write ${\mathbbm{C}}[z_1,z_1^{-1},\ldots,z_m,z_m^{-1}]=:{\mathbbm{C}}[z,z^{-1}]$ for the set of Laurent polynomials in $z_1,\ldots,z_m$. \begin{definition} Recall that a polynomial $\mathcal{P} \in {\mathbbm{C}}[z]$ is called reducible if there exist nonconstant polynomials $f,g \in {\mathbbm{C}}[z]$ such that $\mathcal{P}=fg$ and \emph{irreducible} otherwise. Similarly, we say that a Laurent polynomial $\mathcal{P} \in {\mathbbm{C}}[z,z^{-1}]$ is irreducible if it cannot be factorized non-trivially, that is, if there are no non-monomial Laurent polynomials $f,g$ such that $\mathcal{P}= fg$. Notice that nonconstant monomials are units in the algebra of Laurent polynomials, which accounts for a small subtlety. That is, one must be somewhat careful here with zeros at $z=0$ and $z = \infty$. The polynomial $z^2$ is reducible in ${\mathbbm{C}}[z]$ but is a unit in ${\mathbbm{C}}[z,z^{-1}]$.
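To further illustrate the role of monomial units, note that the Laurent polynomials $z_1 - z_2$ and $1 - z_1^{-1}z_2 = z_1^{-1}(z_1 - z_2)$ differ only by the unit $z_1^{-1}$; hence they are associates in ${\mathbbm{C}}[z,z^{-1}]$, and one is irreducible if and only if the other is.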
In practice, this should cause no confusion, and we will write that $\mathcal{P}$ is irreducible in ${\mathbbm{C}}[z]$ (respectively in ${\mathbbm{C}}[z,z^{-1}]$) if we wish to emphasize the sense in which irreducibility is meant in a specific context. \end{definition} \begin{remark} If $\mathcal{P}$ is an irreducible Laurent polynomial in $m$ variables, then the corresponding variety $\{z\in ({\mathbbm{C}}^{\star})^m: \mathcal{P}(z)=0\}$ is irreducible as an analytic set. Thus, the overall strategy of our work is to show that a suitable Laurent polynomial that describes the Bloch variety is irreducible. Concretely, we may consider the set $\mathcal{B}(H)$, which consists of those $(z,\lambda)\in ({\mathbbm{C}}^\star)^d \times {\mathbbm{C}}$ such that $H\psi = \lambda \psi$ admits a nontrivial solution $\psi \in {\mathscr{H}}(z,q)$. By Floquet theory, one may determine a suitable Laurent polynomial ${\mathcal{P}}(z,\lambda)$ such that $\mathcal{B}(H)$ is precisely the zero set of ${\mathcal{P}}$ (see Section~\ref{sec:floquet}). Thus, since $(k,\lambda) \in B(H)$ if and only if $(e^{2\pi i k},\lambda) \in \mathcal{B}(H)$, to show that $B(H)$ is irreducible modulo periodicity, it suffices to show that the corresponding Laurent polynomial is irreducible. However, let us observe that the converse is not always true. Indeed, consider the case in which $\mathcal{P}(z)=f(z)^2$ and $f$ is irreducible. In this case, $\mathcal{P}$ is reducible as a Laurent polynomial, but the corresponding variety is irreducible. \end{remark} We now collect some notation that will be used throughout the paper. Given $q=(q_1,\ldots,q_d) \in{\mathbbm{N}}^d$, we define the lattice $\Gamma$ by \begin{equation} \Gamma = \bigoplus_{j=1}^d q_j{\mathbbm{Z}}= \{ n \in {\mathbbm{Z}}^d : q_j | n_j \ \forall\,\, 1\le j \le d\} \end{equation} and the fundamental cell, $W$, by \begin{equation} \label{eq:fundcelldef} W=\{n=(n_1,n_2,\ldots,n_d)\in{\mathbbm{Z}}^d: 0\leq n_j\leq q_{j}-1, j=1,2,\ldots, d\} = {\mathbbm{Z}}^d \cap \prod_{j=1}^d [0,q_j). \end{equation} Given $n\in W$ and $j\in\{1,\ldots,d\}$, let \begin{equation}\label{action} \rho^{j}_{n_j}=e^{2\pi i\frac{n_j}{q_j}}. \end{equation} We denote the corresponding character by $\mu_n$ and define its action on ${\mathbbm{C}}^d$ by \begin{equation} \label{eq:characterActionDef} \mu_n \cdot\left(z_{1}, z_{2},\ldots, z_d\right)=\left(\rho^{1}_{n_1}z_{1}, \rho^{2}_{n_2}z_{2},\ldots,\rho^{d}_{n_d}z_d\right). \end{equation} Let $p$ be a Laurent polynomial and define \begin{equation} p_{n}(z)=p(\mu_n\cdot z), \quad n \in W, \ z \in ({\mathbbm{C}}^{\star})^d. \end{equation} We shall work with Laurent polynomials in $m=d+1$ variables $z_1,\ldots,z_d,\lambda$. Abusing notation somewhat, we write ${\mathbbm{C}}[z,\lambda]$ (respectively ${\mathbbm{C}}[z,\lambda,z^{-1},\lambda^{-1}]$) for the set of polynomials (respectively the set of Laurent polynomials) in $z$ and $\lambda$. The polynomials of interest are those of the form \begin{equation}\label{mathcalP} \widetilde{\mathcal{P}}(z,\lambda)=\prod_{n\in W} (p_{n}(z)-\lambda)+\sum_{X\in \mathcal{S}}C_{X}\prod_ { n\in X}(p_n(z)-\lambda),\end{equation} where the summation runs over $X$ in an arbitrary collection $\mathcal{S}$ of proper subsets of $W$ and $C_{X}\in {\mathbbm{C}}$. Collecting terms, we see that \begin{equation} \label{eq:PmonicInlambda} \widetilde{\mathcal{P}}(z,\lambda) = (-1)^Q \lambda^Q + \sum_{k=0}^{Q-1}b_k(z)\lambda^k,\end{equation} where $b_k \in {\mathbbm{C}}[z,z^{-1}]$ and $Q=q_1\cdots q_d$.
To see why \eqref{eq:PmonicInlambda} holds, note that every $X\in\mathcal{S}$ is a proper subset of $W$, so each summand $C_{X}\prod_{n\in X}(p_n(z)-\lambda)$ has degree at most $Q-1$ in $\lambda$; the top coefficient $(-1)^Q\lambda^Q$ therefore arises from the product over all of $W$ alone. Note that we do not exclude the case $\emptyset \in\mathcal{S}$, our convention being that $\prod_ { n\in \emptyset}(p_n(z) - \lambda)=1$. These are exactly the types of polynomials that one produces by expanding the determinant of the Floquet operator associated to a suitable periodic operator, hence their relevance to the current work. For each $X$, the constant $C_{X}$ is assumed to be independent of $\lambda$ and $z$. Assume further that $\widetilde{\mathcal{P}}(z,\lambda)$ is invariant under the action of each $\mu_n$, i.e., \begin{equation} \label{eq:mathcalPCharInv} \widetilde{\mathcal{P}}(z,\lambda)=\widetilde{\mathcal{P}}(\mu_n\cdot z,\lambda) \text{ for all } n\in W. \end{equation} \begin{remark} \label{rem:polyToSchrod} The assumptions \eqref{mathcalP} and \eqref{eq:mathcalPCharInv} include the central example where \begin{equation*} \widetilde{\mathcal{P}}(z,\lambda)=\mathrm{det}\left(D+B-\lambda I \right) \end{equation*} and the matrices $D=D(z)$ and $B$ are defined by \begin{equation} \label{eq:rem21Adef} D(n,n')=p_n(z) \delta_{n,n'}, \end{equation} \begin{equation} B(n,n')=\widehat V\left(\frac{n_1-n'_1}{q_1},\ldots,\frac{n_d-n'_{d}}{q_d}\right),\,\,\, n,n'\in W. \end{equation} Compare with the discussion in Section~\ref{sec:floquet}, especially Proposition~\ref{prop:floquetTransf}. Let us note that the key properties are that $D$ is a diagonal matrix and that the entries of $B$ are independent of $z$. Consequently, neither self-adjointness of $A$ nor real-valuedness of $V$ is a crucial ingredient. \end{remark} Since $\widetilde{\mathcal{P}}(z,\lambda)$ is invariant under the action of each $\mu_n$, it is elementary to check (cf.\ Lemma~\ref{liftlemma}) that there exists $\mathcal{P}(z,\lambda)$ such that \begin{equation}\label{Pdef}\widetilde{\mathcal{P}}(z,\lambda)=\mathcal{P}(z_1^{q_1},z_2^{q_2},\ldots,z_d^{q_d},\lambda). \end{equation} Our goal is to show that $\mathcal{P}(z,\lambda)$ is irreducible as a Laurent polynomial under the assumptions below. \begin{enumerate}[label=(\subscript{A}{{\arabic*}})] \item\label{assump1} $\deg(h) < 0$, where $h$ denotes the lowest degree component of $p$ (see Definition~\ref{lowestdfn}). \item\label{assump2} The polynomials $h_n(z)=h(\mu_n z)$, $n \in W$, are pairwise distinct. \end{enumerate} The reader may readily check that $p_{n+m}(z) = p_n(\mu_m z)$ (with addition of indices computed mod $\Gamma$). Thus, to check Assumption~\ref{assump2} in practice, it suffices to show that $h_0 \neq h_n$ for every $n \in W\setminus\{0\}$. \begin{theorem}\label{mainthm} Let $p \in {\mathbbm{C}}[z,z^{-1}]$, $q \in {\mathbbm{N}}^d$, $\mathcal{S}$ a collection of proper subsets of $W$, and complex numbers $\{C_X\}_{X \in \mathcal{S}}$ be given. Assume that $\widetilde{\mathcal{P}}$ is a polynomial of the form \eqref{mathcalP} obeying \eqref{eq:mathcalPCharInv}, and let $\mathcal{P}$ be the polynomial given by \eqref{Pdef}. Under Assumptions~{\rm\ref{assump1}} and {\rm\ref{assump2}}, we conclude that $\mathcal{P}$ is irreducible as a Laurent polynomial. \end{theorem} As mentioned in Remark~\ref{rem:polyToSchrod}, the connection to Schr\"odinger operators and the proof of Theorem~\ref{t:blochIrr} will be given in Section~\ref{sec:floquet}. \begin{remark}\label{rem:notation} Let us collect some notation from the previous paragraphs that will be used repeatedly throughout the proofs.
\begin{enumerate} \item ${\mathbbm{C}}[z]$ (resp.\ ${\mathbbm{C}}[z,z^{-1}]$) denotes the set of polynomials (resp.\ Laurent polynomials) in $z_1,\ldots,z_d$.\smallskip \item $p \in {\mathbbm{C}}[z,z^{-1}]$.\smallskip \item $h(z)$ is the lowest degree component of $p(z)$.\smallskip \item $\Gamma = q_1{\mathbbm{Z}} \oplus \cdots \oplus q_d {\mathbbm{Z}}$, $W={\mathbbm{Z}}^d \cap \prod_{j=1}^d [0,q_j)$, $\mathcal{S} \subset 2^{W}\setminus\{W\}$ is arbitrary.\smallskip \item $\rho^{j}_{n_j}=e^{2\pi in_j/q_j}$, $n \in {\mathbbm{Z}}^d$, $j=1,\ldots,d$.\smallskip \item For $n \in W$, $\mu_n$ is given by~\eqref{eq:characterActionDef}, namely $\mu_n \cdot\left(z_{1}, z_{2},\ldots, z_d\right)=\left(\rho^{1}_{n_1}z_{1}, \rho^{2}_{n_2}z_{2},\ldots,\rho^{d}_{n_d}z_d\right)$.\smallskip \item $p_n(z)=p(\mu_n z).$\smallskip \item $\widetilde{\mathcal{P}}(z,\lambda)$ is given by \begin{equation*}\widetilde{\mathcal{P}}(z,\lambda)=\prod_{n\in W} (p_{n}(z)-\lambda)+\sum_{X\in \mathcal{S}}C_{X}\prod_ { n\in X}(p_n(z)-\lambda).\end{equation*}\smallskip \item $Q=q_1\cdots q_d$.\smallskip \item $z^k = z_1^{k_1} \cdots z_d^{k_d}$ for $z \in ({\mathbbm{C}}^\star)^d$, $k \in {\mathbbm{Z}}^d$.\smallskip \item $\mathcal{P}(z,\lambda)$ is defined by \[\widetilde{\mathcal{P}}(z,\lambda)=\mathcal{P}(z_1^{q_1},z_2^{q_2},\ldots,z_d^{q_d},\lambda).\] \item $a\odot b = (a_1b_1,\ldots,a_db_d)$ for ordered $d$-tuples $a=(a_1,\ldots,a_d)$ and $b = (b_1,\ldots,b_d)$. \end{enumerate} \end{remark} \section{Technical Lemmas} \label{sec:technical lemmas} \begin{lemma}\label{liftlemma} With notation as in Remark~\ref{rem:notation}, one has that $\widetilde{g}(z,\lambda) \equiv \widetilde{g}(\mu_n\cdot z,\lambda)$ for every $n \in W$ if and only if there is a polynomial $g(w,\lambda)$ such that \begin{equation} \label{eq:fromPtocalP} \widetilde{g}(z,\lambda) \equiv g(z_1^{q_1},\ldots,z_d^{q_d},\lambda). \end{equation} \end{lemma} \begin{proof} If \eqref{eq:fromPtocalP} holds, then writing \[g(w,\lambda) = \sum_{\ell \in {\mathbbm{Z}}^d,m \in {\mathbbm{Z}}} c_{\ell,m}w^\ell\lambda^m \] gives \[g(z_1^{q_1},\ldots,z_d^{q_d},\lambda) = \sum_{\ell \in {\mathbbm{Z}}^d,m \in {\mathbbm{Z}}} c_{\ell,m}z^{\ell\odot q}\lambda^m,\] and each monomial $z^{\ell\odot q}$ is invariant under every $\mu_n$. Conversely, if $\widetilde{g}(z,\lambda)=\sum_{\ell,m}\widetilde{c}_{\ell,m}z^\ell\lambda^m$ is invariant under each $\mu_n$, then $\widetilde{c}_{\ell,m}\neq 0$ forces $e^{2\pi i\sum_j n_j\ell_j/q_j}=1$ for all $n\in W$; taking $n$ to be a standard basis vector (the case $q_j=1$ being trivial) yields $q_j\mid\ell_j$ for each $j$, so every exponent lies in $\Gamma$ and $g$ exists as claimed. \end{proof} \begin{definition}\label{gammadfn'} For each $j\in \{1,2,\ldots,d\}$, define $\gamma_j' \geq 0$ as follows. We let $-\gamma_{j}'$ be the lowest exponent of $z_j$ in $h(z)$ in case this exponent is negative and set $\gamma_j'=0$ otherwise. \end{definition} \begin{lemma}\label{rnirred} Let $p$ be a Laurent polynomial in $z_1,\ldots,z_d$ and let $h$ be the lowest degree component of $p$. Then, the polynomials \[ r_n(z,{\widetilde{\lambda}})=\widetilde{\lambda} z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}h(\mu_n z)-z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d} \] are irreducible in ${\mathbbm{C}}[z,\widetilde\lambda]$ for each $n\in W$. Moreover, under Assumption {\rm\ref{assump2}}, we conclude that for any distinct $n, n' \in W$, $r_n$ and $r_{n'}$ are relatively prime. \end{lemma} \begin{proof} Assume for the sake of contradiction that $r_n(z,{\widetilde{\lambda}})$ is reducible. Since the degree of ${\widetilde{\lambda}}$ in $r_n(z,{\widetilde{\lambda}})$ is one, we must have that \begin{equation}\label{rirred}r_n(z,{\widetilde{\lambda}})=f(z,{\widetilde{\lambda}})g(z) \end{equation} for non-constant polynomials $f(z,{\widetilde{\lambda}})$ and $g(z)$.
Since ${\widetilde{\lambda}}$ does not divide $r_n(z,{\widetilde{\lambda}})$ in ${\mathbbm{C}}[z,{\widetilde{\lambda}}]$, we see that there exist non-zero polynomials $f_1(z)$ and $f_2(z)$ such that \[f(z,{\widetilde{\lambda}})={\widetilde{\lambda}} f_1 (z) - f_2(z).\] From \eqref{rirred} and the definition of $r_n(z,{\widetilde{\lambda}})$ we obtain $f_2(z)g(z) = z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}$. In particular, $g(z)= z^{m_1}_{1}\cdots z^{m_d}_{d}$ where $m_1,\ldots,m_d$ are integers with $0\leq m_j\leq \gamma_j'$ for $j\in\{1,\ldots,d\}$. Since $g$ is nonconstant, $m_l >0$ for at least one $l$. In particular, $$\gamma_l' \geq m_l>0.$$ Consequently, \eqref{rirred} implies that the polynomial $z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}h(\mu_n z)$ is divisible by $z_l$ for some $l\in \{1,2,\ldots,d\}.$ However, the lowest exponent of $z_l$ in $h(\mu_n z)$ is, by definition, equal to $-\gamma_{l}'$ (recall that $\gamma_l'>0$). Thus $z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}h(\mu_n z)$ is not divisible by $z_l$, contradicting \eqref{rirred}. Consequently, $r_n$ is irreducible. The second statement of the lemma follows immediately. Concretely, if $r_n$ and $r_{n'}$ share a nontrivial common factor, then they must be constant multiples of one another by irreducibility; since both contain the monomial $-z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}$, the constant must be one, i.e., $r_n=r_{n'}$. This forces $h_n=h_{n'}$, which contradicts Assumption~\ref{assump2}. \end{proof} Let us introduce the auxiliary polynomial \begin{equation} \label{eq:tildeADef} \widetilde{a}(z,{\widetilde{\lambda}})=\prod_{n\in W}r_n(z,{\widetilde{\lambda}}) \end{equation} with $r_n(z,{\widetilde{\lambda}})$ as in Lemma~\ref{rnirred} for $n\in W$. By a direct calculation, $\widetilde{a}(z,{\widetilde{\lambda}})$ is invariant under the action of each $\mu_n$ (the action of $\mu_m$ permutes the factors $r_n$ up to roots of unity whose product over $W$ equals one), so, as a consequence of Lemma~\ref{liftlemma}, there exists $a(z,{\widetilde{\lambda}})$ such that \begin{equation}\label{tiladfn} \widetilde{a}(z,{\widetilde{\lambda}})=a(z_1^{q_1},\ldots,z_d^{q_d},{\widetilde{\lambda}}). \end{equation} \begin{lemma}\label{airred} Under Assumption~{\rm\ref{assump2}}, the polynomial $a(z,{\widetilde{\lambda}})$ given by \eqref{tiladfn} is irreducible in ${\mathbbm{C}}[z,{\widetilde{\lambda}}]$. \end{lemma} \begin{remark} It is important that we pass to the lift $a$ here, since $\widetilde a$ is clearly reducible. \end{remark} \begin{proof}[Proof of Lemma~\ref{airred}] Suppose for the sake of establishing a contradiction that $a(z,{\widetilde{\lambda}})$ is reducible, and write \begin{equation}\label{areducible} a(z,{\widetilde{\lambda}})={f}_1(z,{\widetilde{\lambda}}){g}_1(z,{\widetilde{\lambda}}) \end{equation} for non-constant polynomials $f_1$ and $g_1$. Let $\widetilde{f}_1(z,{\widetilde{\lambda}})={f}_1(z_1^{q_1},\ldots,z_d^{q_d},{\widetilde{\lambda}})$ and $\widetilde{g}_1(z,{\widetilde{\lambda}})={g}_1(z_1^{q_1},\ldots,z_d^{q_d},{\widetilde{\lambda}})$. Combining \eqref{tiladfn} and \eqref{areducible} yields \[\widetilde {a}(z,{\widetilde{\lambda}})=\widetilde{f}_1(z,{\widetilde{\lambda}})\widetilde{g}_1(z,{\widetilde{\lambda}}).\] Moreover, by definition, $\widetilde{f}_1(z,{\widetilde{\lambda}})$ and $\widetilde{g}_1(z,{\widetilde{\lambda}})$ are both invariant under the action of each $\mu_n$. Recall from Lemma~\ref{rnirred} that each $r_n(z,{\widetilde{\lambda}})$ is irreducible. Therefore, each $r_n(z,{\widetilde{\lambda}})$ is a factor of either $\widetilde{f}_1$ or $\widetilde{g}_1$.
By invariance of $\widetilde{f}_1(z,{\widetilde{\lambda}})$ (respectively $\widetilde{g}_1(z,{\widetilde{\lambda}})$) under the action of each $\mu_n$ and since, by Lemma~\ref{rnirred}, $r_n$ and $r_{n'}$ are relatively prime for $n\neq n'$, we conclude the following: if $\widetilde{f}_1(z,{\widetilde{\lambda}})$ (respectively $\widetilde{g}_1(z,{\widetilde{\lambda}})$) has a factor of $r_n(z,{\widetilde{\lambda}})$, then it must have a factor of \[ \prod_{n\in W}r_n(z,{\widetilde{\lambda}})=\widetilde{a}(z,{\widetilde{\lambda}}).\] However, this, together with \eqref{areducible}, implies that either $\widetilde{f}_1(z,{\widetilde{\lambda}})$ or $\widetilde{g}_1(z,{\widetilde{\lambda}})$ must be constant, which is a contradiction. Thus, we conclude that ${a}(z,{\widetilde{\lambda}})$ is irreducible. \end{proof} \begin{lemma}\label{Lemma:irredmeets} Let $\mathcal{P}(z,\lambda)$ be given by \eqref{Pdef} and let $f$ be any irreducible factor of $\mathcal{P}$. Then $f$ must depend on $\lambda$. \end{lemma} \begin{proof} If $f$ is an irreducible factor of $\mathcal{P}$, then $f$ must depend on $\lambda$, since otherwise there would be a suitable choice of $z=(z_1,\ldots,z_d)$, namely any solution of $f(z)=0$, for which $\mathcal{P}(z,\lambda)=0$ for every $\lambda$. This, in turn, contradicts the fact that the highest-degree term in $\lambda$ of $\mathcal{P}(z,\lambda)$ is ${(-1)^Q}\lambda^Q$ (see \eqref{eq:PmonicInlambda} and \eqref{Pdef}). \end{proof} \section{Proof of Theorem~\ref{mainthm}} \label{sec:mainproof} Before proceeding with the proof of the main result, Theorem~\ref{mainthm}, let us introduce some notation. \begin{definition}\label{gammadfn} For each $j\in \{1,2,\ldots,d\}$, denote by $-\gamma_{j}$ the lowest exponent of $z_j$ in $p(z)$ in case this exponent is negative and set $\gamma_j=0$ otherwise. Clearly, $\gamma_j \geq \gamma_j'$ with $\gamma_j'$ given in Definition~\ref{gammadfn'}. \end{definition} \begin{proof}[Proof of Theorem~\ref{mainthm}] Let ${\widetilde{\lambda}}=\lambda^{-1}$. Then $\mathcal{P}(z,\lambda)=\mathcal{P}(z,{\widetilde{\lambda}}^{-1})$ is a Laurent polynomial in the variables $(z,{\widetilde{\lambda}})$. Let $\gamma_j$, $j=1,\ldots,d$, be as in Definition~\ref{gammadfn}. In case $\gamma_j>0$ for some $j\in\{1,\ldots,d\}$, the lowest power of $z_j$ in $\mathcal{P}(z,{\widetilde{\lambda}}^{-1})$ is $-\gamma_jQ/q_j$. Moreover, the lowest power of ${\widetilde{\lambda}}$ in $\mathcal{P}(z,{\widetilde{\lambda}}^{-1})$ is $-Q$ (cf.\ \eqref{eq:PmonicInlambda}), so \begin{equation} \mathcal{R}(z,{\widetilde{\lambda}}) = \left(\widetilde{\lambda}z^{\frac{\gamma_1}{q_1}}_{1}\cdots z^{\frac{\gamma_d}{q_d}}_{d}\right)^Q\mathcal{P}(z,{\widetilde{\lambda}}^{-1}) \end{equation} defines a polynomial $\mathcal{R} \in {\mathbbm{C}}[z,{\widetilde{\lambda}}]$. \begin{claim}\label{cl:mainproof} For each $1 \le j \le d$, $z_j$ does not divide $\mathcal{R}(z,{\widetilde{\lambda}})$. \end{claim} \begin{claimproof} Indeed, if $\gamma_j>0$, this is clear from the definitions, since $-\gamma_j$ is the smallest power of $z_j$ in $p$ and hence $-\gamma_j Q/q_j$ is the smallest power of $z_j$ in $\mathcal{P}$.
Otherwise, $\gamma_j = 0$, and the claim can be seen from \eqref{eq:PmonicInlambda}.\end{claimproof} Since ${\widetilde{\lambda}}$ also does not divide $\mathcal{R}(z,{\widetilde{\lambda}})$, Claim~\ref{cl:mainproof} implies that reducibility of the Laurent polynomial $\mathcal{P}(z,{\widetilde{\lambda}}^{-1})$ is equivalent to reducibility of the polynomial $\mathcal{R}(z,{\widetilde{\lambda}})$. Now, assume for the sake of contradiction that $\mathcal{P}(z,{\widetilde{\lambda}}^{-1})$ is reducible. Then there exist $m>1$ and non-constant polynomials $f_{l}(z,{\widetilde{\lambda}})$, $l=1,2,\ldots,m$, in ${\mathbbm{C}}[z,{\widetilde{\lambda}}]$ such that \begin{equation}\label{contradictionsetup} \left(\widetilde{\lambda}z^{\frac{\gamma_1}{q_1}}_{1}\cdots z^{\frac{\gamma_d}{q_d}}_{d}\right)^Q\mathcal{P}(z,{\widetilde{\lambda}}^{-1})=\prod_{l=1}^mf_l(z,{\widetilde{\lambda}}).\end{equation} Let us recall the auxiliary polynomial $\widetilde{a}$ from \eqref{eq:tildeADef}, which may be rewritten as \[\widetilde{a}(z,{\widetilde{\lambda}})=\left(\widetilde{\lambda}z^{\gamma_{1}'}_{1}\cdots z^{\gamma_{d}'}_{d}\right)^Q\prod_{n\in W}(h(\mu_n z) -{\widetilde{\lambda}}^{-1}).\] Let $\widetilde{f}_l(z,{\widetilde{\lambda}})=f_l(z_1^{q_1},\ldots,z_d^{q_d},{\widetilde{\lambda}})$. Then, by \eqref{Pdef} and \eqref{contradictionsetup}, we have that \begin{equation}\label{contradicteq} \left(\widetilde{\lambda}z^{\gamma_{1}}_{1}\cdots z^{\gamma_{d}}_{d}\right)^Q\widetilde{\mathcal{P}}(z,{\widetilde{\lambda}}^{-1})=\prod_{l=1}^m\widetilde{f}_l(z,{\widetilde{\lambda}}). \end{equation} By the definition of $\widetilde{\mathcal{P}}$ in \eqref{mathcalP}, one sees that replacing ${\widetilde{\lambda}}$ by ${\widetilde{\lambda}}^\gamma$ for $\gamma=-\deg (h)>0$ allows us to conclude that the lowest degree component of $\left({\widetilde{\lambda}}^\gamma z^{\gamma_{1}}_{1}\cdots z^{\gamma_{d}}_{d}\right)^Q\widetilde{\mathcal{P}}(z,{\widetilde{\lambda}}^{-\gamma})$ is given by $\widetilde{a}_1(z,{\widetilde{\lambda}}^\gamma)$, where \begin{equation}\label{g11} \widetilde{a}_1(z,{\widetilde{\lambda}}^\gamma)=\left(\widetilde{\lambda}^\gamma z^{\gamma_{1}}_{1}\cdots z^{\gamma_{d}}_{d}\right)^Q\prod_{n\in W}(h(\mu_n z) -{\widetilde{\lambda}}^{-\gamma})= (z_1^{\gamma_1-\gamma_1^\prime}\cdots z_d^{\gamma_d-\gamma_d'})^Q\widetilde{a}(z,{\widetilde{\lambda}}^\gamma). \end{equation} We denote by $\widetilde{f}_l^1(z,{\widetilde{\lambda}}^{\gamma})$ the lowest degree component of $\widetilde{f}_l(z,{\widetilde{\lambda}}^{\gamma})$, $l=1,2,\ldots,m$. From \eqref{contradicteq}, it follows that \begin{equation} \prod_{l=1}^m\widetilde{f}_l^1 (z,{\widetilde{\lambda}}^{\gamma})=\widetilde{a}_1(z,{\widetilde{\lambda}}^\gamma) \end{equation} and hence \begin{equation}\label{g12} \prod_{l=1}^m\widetilde{f}_l^1 (z,{\widetilde{\lambda}})=\widetilde{a}_1(z,{\widetilde{\lambda}}). \end{equation} Given $l\in\{1,\ldots,m\}$, $\widetilde{f}_l^1 (z,{\widetilde{\lambda}})$ is a polynomial in $z_1^{q_1},z_2^{q_2}, \ldots, z_d^{q_d}$. Thus, by Lemma~\ref{liftlemma}, there exists $f_l^1 (z,{\widetilde{\lambda}})$ such that \begin{equation}\label{g13} \widetilde{f}_l^1 (z,{\widetilde{\lambda}})= f_l^1 (z_1^{q_1},\ldots,z_d^{q_d},{\widetilde{\lambda}}).
\end{equation} Combining \eqref{g11}, \eqref{g12} and \eqref{g13}, and recalling the definition of ${a}(z,{\widetilde{\lambda}})$ in \eqref{tiladfn}, we arrive at \begin{equation} \prod_{l=1}^mf_l^1 (z,{\widetilde{\lambda}})=\left( z_1^{\frac{\gamma_1-\gamma_1'}{q_1}} z_2^{\frac{\gamma_2-\gamma_2'}{q_2}} \cdots z_d ^{\frac{\gamma_d-\gamma_d'}{q_d}} \right)^Q{a}(z,{\widetilde{\lambda}}). \end{equation} By Lemma \ref{airred}, ${a}(z,{\widetilde{\lambda}})$ is irreducible, so there exists $j\in\{1,2,\ldots,m\}$ such that $f_j^1 (z,{\widetilde{\lambda}})$ has ${a}(z,{\widetilde{\lambda}})$ as a factor. We conclude that the highest power of ${\widetilde{\lambda}}$ in $\widetilde{f}_j(z,\widetilde{\lambda})$ (hence in $f_j (z,{\widetilde{\lambda}})$) is at least $Q$. Since $m>1$ and since, by Lemma~\ref{Lemma:irredmeets} and Claim~\ref{cl:mainproof}, each $\widetilde{f}_l (z,{\widetilde{\lambda}})$, $l=1,2,\ldots, m$, must depend on ${\widetilde{\lambda}}$, we reach a contradiction: the highest power of ${\widetilde{\lambda}}$ on the left-hand side of \eqref{contradicteq} is equal to $Q$. \end{proof} \section{Floquet Theory for Long-Range Operators} \label{sec:floquet} Let us summarize some of the important points about Floquet theory for operators with long-range interactions. This is well known, especially in the continuum case; see the survey \cite{Kuchment2016BAMS} and references therein. We are unaware of a precise reference in the discrete setting for long-range operators, so we include the details for the reader's convenience. Let us assume that $A:\ell^2({\mathbbm{Z}}^d) \to \ell^2({\mathbbm{Z}}^d)$ is bounded. Writing $A_{n,m} = \langle \delta_n,A\delta_m\rangle$ for the matrix elements, we further assume that $A$ is translation-invariant in the sense that \[ A_{n+k,m+k} = A_{n,m} \ \forall n,m,k \in {\mathbbm{Z}}^d, \] and that $A$ satisfies the decay estimate \[|A_{n,m}| \leq C e^{-\nu|n-m|}\] for constants $C,{\nu}>0$. By translation-invariance, $A$ is fully encoded by $\{a_n := A_{n,0}\}_{n \in {\mathbbm{Z}}^d}$ via \[ [A\psi]_n = \sum_{m \in {\mathbbm{Z}}^d} a_{n-m} \psi_m.\] We denote the Fourier transform on $\ell^2({\mathbbm{Z}}^d)$ by ${\mathscr{F}}:u \mapsto \widehat{u}$, where \[\widehat{u}(x) =\sum_{n \in {\mathbbm{Z}}^d} e^{-2\pi i \langle n,x\rangle} u_n \] for $u \in \ell^1({\mathbbm{Z}}^d)$, extended to $\ell^2$ by Plancherel. By the assumptions on $A$, the \emph{symbol} $\widehat{a}$ is analytic, real-valued whenever $a_n = a_{-n}^*$, and a trigonometric polynomial whenever $a$ is finitely supported. For example, when $A = -\Delta$ denotes the Laplacian on ${\mathbbm{Z}}^d$, \[\widehat{a}(x) = -2\sum_{j=1}^d\cos(2\pi x_j).\] Recall that $V:{\mathbbm{Z}}^d \to {\mathbbm{C}}$ is $q$-periodic and $\Gamma= \{q \odot k : k \in {\mathbbm{Z}}^d\}$ denotes the period lattice. We define the dual lattice $\Gamma^* = \{ (k_1/q_1,\ldots,k_d/q_d) : k_j \in {\mathbbm{Z}}\}$ and \[W^* := \Gamma^* \cap [0,1)^d = \set{0,\frac{1}{q_1}, \ldots,\frac{q_1-1}{q_1}} \times \cdots \times \set{0,\frac{1}{q_d}, \ldots,\frac{q_d-1}{q_d}}. \] The discrete Fourier transform of a $q$-periodic function $g:{\mathbbm{Z}}^d \to {\mathbbm{C}}$ is defined by \[ \widehat{g}_\ell = \frac{1}{\sqrt{Q}} \sum_{n \in W} e^{-2\pi i \langle n, \ell \rangle}g_n, \quad \ell \in W^*. \] Of course, this also makes sense for $\ell \in \Gamma^*$ and satisfies $\widehat{g}_{\ell+n}= \widehat{g}_{\ell}$ for any $\ell \in W^*$ and any $n\in {\mathbbm{Z}}^d$.
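To illustrate the notation, consider $d=2$ and $q=(2,3)$: then $\Gamma^* = \frac{1}{2}{\mathbbm{Z}}\times\frac{1}{3}{\mathbbm{Z}}$ and $W^* = \{0,\frac{1}{2}\}\times\{0,\frac{1}{3},\frac{2}{3}\}$, so that $W^*$ contains exactly $Q=6$ elements, matching $\dim {\mathbbm{C}}^W = Q$.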
One can check the inversion formula \begin{align} \label{eq:floq:inversion} \frac{1}{\sqrt{Q}} \sum_{\ell \in W^*} e^{2\pi i \langle \ell,n \rangle} \widehat{g}_{\ell} & = g_n, \ \forall n \in {\mathbbm{Z}}^d, \end{align} which holds for any $q$-periodic $g$. Let ${\mathbbm{T}}^d = {\mathbbm{R}}^d/{\mathbbm{Z}}^d$ denote the torus. \begin{prop} For any $f \in L^2({\mathbbm{T}}^d)$, \[[{\mathscr{F}} A {\mathscr{F}}^* f](x)= \widehat{a}(x) f(x)\] and \[[{\mathscr{F}} V {\mathscr{F}}^* f](x) = \frac{1}{\sqrt{Q}} \sum_{\ell \in {W}^*} \widehat{V}_\ell f(x-\ell).\] \end{prop} \begin{proof} These follow from direct calculations using the definitions of and assumptions on $A$ and $V$ and the inversion formula \eqref{eq:floq:inversion}. \end{proof} Let us now define ${\mathbbm{T}}^d_* = {\mathbbm{R}}^d/\Gamma^*$, \[{\mathscr{H}}_q = \int_{{\mathbbm{T}}^d_*}^\oplus {\mathbbm{C}}^W \, \frac{dx}{|{\mathbbm{T}}^d_*|} = L^2\left({\mathbbm{T}}^d_*,{\mathbbm{C}}^W; \frac{dx}{|{\mathbbm{T}}^d_*|}\right) \] and ${\mathscr{F}}_q : \ell^2({\mathbbm{Z}}^d) \to {\mathscr{H}}_q$ by $u \mapsto \widehat{u}$ where \[\widehat{u}_j(x) = \sum_{n \in {\mathbbm{Z}}^d} e^{-2 \pi i \langle n \odot q,x\rangle}u_{j+n\odot q}, \ x \in {\mathbbm{T}}^d_*, \ j \in W.\] As usual, this is initially defined for (say) $\ell^1$ vectors, but has a unique extension to a unitary operator from $\ell^2({\mathbbm{Z}}^d)$ onto ${\mathscr{H}}_q$ via Plancherel. \begin{prop} The operator ${\mathscr{F}}_q$ is unitary. If $V$ is $q$-periodic, then \[ {\mathscr{F}}_q H {\mathscr{F}}_q^* = \int^\oplus_{{\mathbbm{T}}^d_*} \widetilde{H}(x) \, \frac{dx}{|{\mathbbm{T}}^d_*|}, \] where $ \widetilde{H}(x)$ denotes the restriction of $H$ to $W$ with boundary conditions \begin{equation}\label{g1} u_{n+k\odot q} = e^{2\pi i \langle k\odot q,x\rangle } u_n, \ n,k\in{\mathbbm{Z}}^d. \end{equation} \end{prop} \begin{proof} Unitarity of ${\mathscr{F}}_q$ follows from Parseval's formula. The form of ${\mathscr{F}}_q H {\mathscr{F}}_q^*$ follows from a direct calculation.\end{proof} Given $x \in {\mathbbm{R}}^d$, let ${\mathscr{F}}^{x}$ be the Floquet-Bloch transform defined on ${\mathbbm{C}}^W$ as follows: for any vector $u=\{u_n\}_{n\in W}$, we set \[ [{\mathscr{F}}^{x} u]_l = \frac{1}{\sqrt{Q}} \sum_{n \in W} e^{-2\pi i \sum_{j=1}^d (\frac{l_j}{q_j}+x_j) n_j }u_n, \quad l\in W .\] Therefore, \[ [({\mathscr{F}}^{x} )^*u]_l = \frac{1}{\sqrt{Q}} \sum_{n \in W} e^{2\pi i \sum_{j=1}^d (\frac{n_j}{q_j}+x_j) l_j }u_n, \quad l\in W .\] Let $z_j=e^{2\pi i x_j}$, $j=1,2,\ldots,d$, and define the Laurent series $p(z)$ by \begin{equation} p(e^{2\pi i x_1},e^{2\pi i x_2},\ldots, e^{2\pi i x_d}) ={\widehat{a}}(x_1,x_2,\ldots,x_d). \end{equation} Using multi-index notation, we may rewrite this as \begin{equation}\label{pzequation} p(z) = \widehat{a}(x) = \sum_{n \in {\mathbbm{Z}}^d} e^{-2\pi i \langle n,x\rangle} a_n = \sum_{n \in {\mathbbm{Z}}^d} a_nz_1^{-n_1} z_2^{-n_2}\cdots z_d^{-n_d} = \sum_{n \in {\mathbbm{Z}}^d} a_n z^{-n}. \end{equation} \begin{prop} \label{prop:floquetTransf} Assume $V$ is $q$-periodic.
Then $\widetilde{H}(x)$, defined via the boundary conditions \eqref{g1}, is unitarily equivalent to $ D^z+B_V, $ where $z_j = e^{2\pi i x_j}$, $D^z$ is a diagonal matrix with entries \begin{equation}\label{D} D^z(n,n^\prime) = p(\mu_{n}\cdot z)\delta_{n,n^{\prime}}, \end{equation} $\mu_n$ is the action as in \eqref{eq:characterActionDef}, and $B=B_V$ has entries related to the discrete Fourier transform of $V$ via $$ B(n,n')=\widehat V\left(\frac{n_1-n'_1}{q_1},\ldots,\frac{n_{d}-n'_d}{q_d}\right).$$ \end{prop} \begin{remark} In particular, $D^z$ depends on $A$ and is independent of $V$, while $B_V$ depends only on $V$ with no dependence on $A$. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:floquetTransf}] By a direct calculation, we see that ${\mathscr{F}}^{x}$ is unitary, so it suffices to prove that $D^z+B_V= {\mathscr{F}}^{x}\widetilde{H}(x) ({\mathscr{F}}^{x} )^*$. Let $\widetilde{H}_0(x)$ be $\widetilde{H}(x)$ with the potential $V$ set to zero. We are going to show ${\mathscr{F}}^{x}\widetilde{H}_0(x) ({\mathscr{F}}^{x})^* =D^z$ and ${\mathscr{F}}^{x}V({\mathscr{F}}^{x})^* =B$ separately. To prove that ${\mathscr{F}}^{x}\widetilde{H}_0(x) ({\mathscr{F}}^{x})^* =D^z$, it suffices to show that for any $u=\{u_n\}_{n\in W}$, \begin{equation*} ({\mathscr{F}}^{x} )^*D^z u=\widetilde{H}_0(x) ({\mathscr{F}}^{x} )^*u . \end{equation*} It is worth mentioning that $({\mathscr{F}}^{x} )^*u$ satisfies \eqref{g1}, so that $\widetilde{H}_0(x)( {\mathscr{F}}^{x})^* u$ is well defined. With the given definitions, for any $m\in W$, \begin{align}\nonumber (\widetilde{H}_0(x)({\mathscr{F}}^{x})^* u)_m &= \sum_{l\in{\mathbbm{Z}}^d} a_{m-l }[({\mathscr{F}}^{x} )^*u]_l \\ \nonumber &= \frac{1}{\sqrt{Q}} \sum_{l\in{\mathbbm{Z}}^d} a_{m-l}\sum_{n \in W} e^{2\pi i \sum_{j=1}^d (\frac{n_j}{q_j}+x_j) l_j }u_n\\ \nonumber &= \frac{1}{\sqrt{Q}} \sum_{l\in{\mathbbm{Z}}^d} a_{l}\sum_{n \in W} e^{2\pi i \sum_{j=1}^d (\frac{n_j}{q_j} +x_j) (m_j-l_j) }u_n\\ &= \frac{1}{\sqrt{Q}} \sum_{n \in W} e^{2\pi i \sum_{j=1}^d (\frac{n_j}{q_j} +x_j) m_j } {\widehat{a}}\left(\frac{n_1}{q_1}+x_1, \ldots, \frac{n_d}{q_d}+x_d\right) u_n.\label{g2} \end{align} Putting together \eqref{eq:characterActionDef} and \eqref{pzequation}, \begin{equation}\label{g4} {\widehat{a}}\left(\frac{n_1}{q_1}+x_1, \ldots, \frac{n_d}{q_d}+x_d\right) = p(\mu_{n} \cdot z) = D^z(n,n). \end{equation} Similarly, \begin{align} (({\mathscr{F}}^{x} )^*D^z u)_m &= \frac{1}{\sqrt{Q}} \sum_{n \in W} e^{2\pi i \sum_{j=1}^d (\frac{n_j}{q_j} +x_j) m_j }u_n D^z(n,n).\label{g3} \end{align} By \eqref{g2}, \eqref{g3} and \eqref{g4}, we conclude that ${\mathscr{F}}^{x}\widetilde{H}_0(x) ({\mathscr{F}}^{x} )^{*}=D^z$. The proof of ${\mathscr{F}}^{x}V({\mathscr{F}}^{x} )^{*}=B$ is similar. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:blochIrr}] The Bloch variety precisely consists of those $(k,\lambda)$ such that there is a nontrivial solution of $Hu=\lambda u$ satisfying the boundary conditions \eqref{g1}. Thus, with $D$ and $B$ as in Proposition~\ref{prop:floquetTransf}, the Bloch variety is the zero set of the polynomial $\mathcal{P}(z,\lambda)$ defined by \eqref{Pdef}, where \[\widetilde{\mathcal{P}}(z,\lambda) = \det(D^z+B-\lambda I).\] Expanding this determinant via the standard permutation expansion, we see that $\widetilde{\mathcal{P}}$ is of the form \eqref{mathcalP} (with $p$ given via \eqref{pzequation}). By a brief calculation, one can check that $\widetilde{\mathcal{P}}$ satisfies \eqref{eq:mathcalPCharInv}.
Namely, if $S_m$ denotes the shift $e_n \mapsto e_{n+m}$ with addition computed modulo $\Gamma$, one can check that \begin{align*} \widetilde{\mathcal{P}}(\mu_mz,\lambda) & = \det(D^{\mu_m z}+B-\lambda) \\ & = \det(S_m^* D^z S_m + B-\lambda). \end{align*} Since $S_m^* B S_m = B$, \eqref{eq:mathcalPCharInv} follows. Thus, the result follows from Theorem~\ref{mainthm}. \end{proof} \section{Examples} \label{sec:examples} Let us conclude by discussing a few examples of how to obtain the generator $p(z)$ to which Theorem~\ref{mainthm} is applicable. In particular, the examples below show that the framework of the present paper allows one to consider different discrete geometries. We start with the most basic example of the Laplacian on ${\mathbbm{Z}}^d$, where \begin{equation*} [A \psi]_{n} = - \sum_{\|m-n\|_1=1}\psi_{m}. \end{equation*} In this case, it readily follows from \eqref{pzequation} that \begin{equation} \label{eq:sqsymbol} p(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\cdots+z_d+\frac{1}{z_d}\right). \end{equation} \begin{proof}[Proof of Corollary~\ref{coro:square}] From \eqref{eq:sqsymbol}, we see that the minimal degree component of $p$ is precisely \begin{equation*} h(z)=-\left(\frac{1}{z_1}+\frac{1}{z_2}+\cdots+\frac{1}{z_d}\right). \end{equation*} Here, Assumptions~\ref{assump1} and~\ref{assump2} are fulfilled with $\deg(h)=-1$: the polynomials $h(\mu_n z) = -\sum_{j=1}^d (\rho^{j}_{n_j})^{-1} z_j^{-1}$ are pairwise distinct, since the coefficient vectors $(\rho^{1}_{n_1},\ldots,\rho^{d}_{n_d})$ are distinct for distinct $n\in W$. \end{proof} We then proceed to the description of a couple of two-dimensional examples. The triangular lattice is given by specifying the vertex set \[\mathcal{V} = \{ nb_1+mb_2 : n,m \in {\mathbbm{Z}} \}, \quad b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \ b_2 = \frac{1}{2}\begin{bmatrix} 1 \\ \sqrt{3} \end{bmatrix}\] with edges given by $u\sim v \iff \|u-v\|_2 = 1$. Applying the shear transformation $b_1\mapsto b_1$, $b_2 \mapsto [0,1]^\top$, one can view this graph as having vertices in ${\mathbbm{Z}}^2$ and \[ u \sim v \iff u-v \in \{\pm e_1, \pm e_2, \pm(e_1-e_2)\}; \] compare Figures~\ref{fig:trilat} and~\ref{fig:trishear}. In particular, the nearest-neighbor Laplacian on the triangular lattice is equivalent to the operator $A_{\rm tri}:\ell^2({\mathbbm{Z}}^2) \to \ell^2({\mathbbm{Z}}^2)$ such that \[[A_{\rm tri} \psi]_{n_1,n_2} = -\psi_{n_1-1,n_2}-\psi_{n_1+1,n_2} -\psi_{n_1,n_2-1}-\psi_{n_1,n_2+1} -\psi_{n_1-1,n_2+1}-\psi_{n_1+1,n_2-1}. \] Making use of \eqref{pzequation}, one finds that \begin{equation} \label{eq:trisymb} p_{\rm tri}(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\frac{z_1}{z_2}+\frac{z_2}{z_1}\right). \end{equation} \begin{proof}[Proof of Corollary~\ref{coro:tri}] From \eqref{eq:trisymb}, we see that \[h_{\rm tri}(z)=-\frac{1}{z_1}-\frac{1}{z_2},\] from which it is straightforward to check Assumptions~\ref{assump1} and \ref{assump2}. \end{proof} Finally, in the extended Harper model, \begin{align*} [A_{\rm EHM} \psi]_{n_1,n_2} =& -\psi_{n_1-1,n_2}-\psi_{n_1+1,n_2} -\psi_{n_1,n_2-1}-\psi_{n_1,n_2+1}\\ &-\psi_{n_1-1,n_2+1}-\psi_{n_1+1,n_2-1}-\psi_{n_1-1,n_2-1}-\psi_{n_1+1,n_2+1}. \end{align*} Equation \eqref{pzequation} now implies that \[p_{\rm EHM}(z)=-\left(z_1+\frac{1}{z_1}+z_2+\frac{1}{z_2}+\frac{z_1}{z_2}+\frac{z_2}{z_1}+z_1z_2+\frac{1}{z_1z_2}\right).\] The proof of Corollary~\ref{coro:ehm} follows in just the same way as before. \bibliographystyle{abbrv}
\section{Introduction} III-V semiconductor nanowires combined with superconducting electrodes are versatile building blocks for various applications in the field of quantum computation and for experiments addressing fundamental aspects of quantum nanostructures. Josephson junctions formed by two superconducting electrodes bridged by a nanowire segment allow the control of the critical current by a gate voltage.\cite{Doh05,Xiang06,Guenel12} Such an approach results in a much more compact superconducting circuit layout compared to the common flux-controlled one. This advantage is used, e.g., in a gatemon qubit, a special form of the transmon qubit, in which the Josephson junction in the qubit resonator circuit is controlled by a gate.\cite{deLange15,Larsen15,Luthi18,Kringhoj20} Furthermore, owing to the large Fermi wavelength of the electrons in the semiconductor combined with the small diameter of the nanowire, a finite number of discrete Andreev bound states that carry the Josephson supercurrent are formed. Coherent transitions between these discrete states can be used in Andreev level qubits for quantum circuits.\cite{Zazunov03,Woerkom17,Tosi19} Apart from these more conventional qubit applications, nanowire-superconductor hybrids are also very promising for topological qubits based on Majorana fermions.\cite{Mourik12,Deng12,Das12,Albrecht16,Zhang17} InAs and InSb are the common semiconductors of choice to form a highly transparent interface with the superconducting electrodes, a prerequisite for a sufficiently large supercurrent. Depending on the application, different superconductor materials are deposited. Thus, for example, aluminium, with a small superconducting gap and a small critical magnetic field but a large superconducting coherence length, is a common choice.\cite{Doh05,Das12,Guenel14} For higher-temperature or higher-magnetic-field operation, superconductors such as Nb and its alloys,\cite{Guenel12,Guel17,Zhang17,Carrad19} Pb,\cite{Paajaste15,Kanne20} or V\cite{Spathis11,Bjergfelt19} are used. However, detailed analyses of these semiconductor/superconductor systems are just starting to emerge. Until recently, the conventional methods to produce superconductor-semiconductor interfaces were based on cleaning the semiconductor surface (by wet chemical etching or Ar$^+$ sputtering) prior to the superconductor deposition.\cite{Guel17} The major issues with these ex-situ approaches are the presence of residual atoms and the semiconductor surface damage, which result in a non-ideal interface. Consequently, a soft induced gap might form in the semiconductor nanowire, with a significant density of states present within the superconducting gap induced by the proximity effect.\cite{Guel17} This effect is especially detrimental for topological qubits based on Majorana fermions. A successful method to circumvent these problems is the in-situ deposition of the superconducting material on the semiconductor nanowires.\cite{Krogstrup15,Guesgen17,Bjergfelt19,Carrad19} Another important issue is the residual material left on the structure by the wet-chemical etching technique normally used for the fabrication of Josephson junctions with a small gap. However, it was recently demonstrated that closely separated superconducting electrodes can also be achieved by a shadow evaporation technique, i.e.
either by using a nanowire which crosses another\cite{Gazibegovic17,Khan20} or by using a patterned, suspended SiO$_2$ layer as a shadow or stencil mask.\cite{Bjergfelt19,Carrad19} Here, we combine both of the above approaches and report the fabrication of a fully in-situ Josephson junction based on the evaporation of superconductor half-shells on InAs nanowires by a specially designed shadow evaporation technique. Pairs of nanowires (NWs) are selectively grown on adjacent Si(111) facets in such a way that one NW shadows the other one situated at a small distance. Thus, the deposited junction width is determined by the nanowire diameter and the distance between the two wires. Although our approach is applicable to almost any superconductor material, here we focus on Nb, since small separations between Nb superconducting electrodes are difficult to fabricate by other methodologies. The InAs/Nb interfaces of the Josephson junctions fabricated by our novel approach are analysed in depth by electron microscopy techniques. Low-temperature transport properties of the Josephson junctions are also reported. \section{Results and Discussion} \textbf{In-situ prepared InAs-Nb junctions.} InAs-nanowire growth was carried out by molecular beam epitaxy (MBE). To enable selective-area growth, 3\,$\mu$m wide square-shaped troughs were etched into SiO$_2$-covered Si(100) substrates to obtain Si(111) side facets. Subsequently, nano-holes were etched into the side facets, defining the positions of the nanowires. An offset of 100\,nm from the center of the facet was imposed to enable nanowires from neighboring facets to cross each other closely without merging. Following the growth of $4-5\,\mu$m long and 80\,nm diameter InAs nanowires, the sample was transferred to a metal MBE chamber for the Nb half-shell deposition. A scanning electron microscopy (SEM) image with crossed nanowires is shown in Figure~\ref{fig:SEM-selective}a. A false-coloured magnified top-view image of one square trough is presented in Figure~\ref{fig:SEM-selective}b with two Nb-covered nanowires grown on adjacent Si(111) facets. The Nb gap on the bottom nanowire is clearly observed in the further magnified image in Figure~\ref{fig:SEM-selective}c, which confirms the formation of the Josephson junction, with the separation of the two Nb electrodes defined by the shadow of the upper nanowire. \begin{figure}[ht!] \centering \includegraphics[width=1.0\columnwidth]{Fig1-SEM-selective.pdf} \caption{Scanning electron microscopy images of selectively grown nanowire junctions: (a) Overview of the 3\,$\mu$m square troughs with selectively grown nanowires (top view). (b) False-coloured single square with two Nb-covered nanowires grown off the adjacent Si(111) facets. The shadow depicts the direction of the metal (Nb) deposition (green arrow). (c) Close-up of the crossing section, showing the gap in the Nb layer (purple) in the bottom nanowire (orange) due to the shadowing from the top nanowire (30$^\circ$ tilted image). } \label{fig:SEM-selective} \end{figure} \textbf{Transmission Electron Microscopy.} Due to the shadowing from the closely placed thin upper nanowire, junctions with separations of a few tens of nanometers can be formed. Figure~\ref{fig:TEM-Nb-InAs}a shows a bright-field scanning transmission electron microscope (STEM) image of a Nb-InAs junction and the corresponding energy-dispersive x-ray (EDX) elemental map, superimposed on the annular dark field (ADF) image. The separation is $\sim 55$\,nm and the junction is clean, i.e.
with no traces of Nb within the gap. Along the rest of the nanowire, a uniform and continuous Nb layer is formed, with only $\pm 2$\,nm variation in thickness along the nanowire. The side-view TEM studies also revealed that the nanowire has a polytypic crystal structure with thin wurtzite (WZ) and zincblende (ZB) segments, and a high density of stacking faults within the segments (see Figure~S5a in the Supporting Information (SI)). Figure~\ref{fig:TEM-Nb-InAs}b shows an ADF image of a nanowire cross-section, along with two higher-magnification images of the interface from the regions indicated. Nb is deposited on three of the six (\{11$\bar{2}$0\}-type) side facets, in agreement with the SEM observations in Figure~\ref{fig:SEM-selective}c. The deposit on the middle facet is thicker ($\sim$22\,nm), smooth, and formed by relatively large grains of $\sim15-30$\,nm (one grain boundary is indicated by a red arrow). In contrast, the deposits on the two facets on the sides are column-like, polycrystalline in structure, and lower in thickness ($\sim$16\,nm). This is due to the difference between the effective deposition angles on the different facets. The metal flux is almost perpendicular to the nanowire axis at 87$^\circ$ and is directed at the middle facet. This, also aided by the substrate temperature,\cite{Guesgen17} results in smooth growth on this facet. The effective angles created with the two facets on either side are steeper, with smaller deposition angles, resulting in column-like growth.\cite{Tanvir08} Closer inspection of the InAs-Nb interface reveals a $\sim$1\,nm uniform amorphous layer on all three facets (marked by the red dashed lines). Figure~\ref{fig:TEM-Nb-InAs}c shows an EDX line scan across the interface at the position indicated by the yellow arrow in Figure~\ref{fig:TEM-Nb-InAs}b. Two interesting observations can be made. Firstly, the amorphous region contains a high percentage of As, with a clear lag visible in the decrease of the As curve compared to that of In. Secondly, In, which initially shows a dip within the amorphous region, slightly increases afterwards, before decreasing again (red arrow). The compositions of the amorphous layer measured across different facets and nanowires were found to vary between As:\,25-40\%, In:\,5-20\%, and Nb:\,45-60\%\ (considering only Nb, As and In). One example is shown in SI Figure~S5b. Although the composition values of the 1\,nm layer cannot be ascertained precisely due to the contributions from the layers on either side, it is clear that this region contains a higher percentage of As and Nb. Considering the ternary phase diagram between In-As and Nb at room temperature,\cite{Klingbeil89} one can see that there is no tie-line between InAs and Nb. This means that InAs and Nb cannot exist in equilibrium. Instead, they react in a dominant reaction\cite{Klingbeil89} and form compounds, even at room temperature. The layer formed by mixing (or solid diffusion) is amorphous, as observed at many semiconductor-metal interfaces that show solid-state amorphisation.\cite{Zhang17,Chen20,Sinclair94} \begin{figure*} \centering \includegraphics[width=0.80\textwidth]{Fig2-TEM-Nb-InAs.pdf} \caption{(a) Bright-field and annular dark field (ADF) STEM images of a junction, with the EDX elemental maps superimposed on the ADF image. Both images show the clean gap in Nb. (b) ADF image of a nanowire cross section and higher-magnification images from the regions indicated by squares.
The amorphous layers are marked by the red broken lines in the higher-magnification images. The red arrow points to a grain boundary. (c) EDX line scan profile along the yellow arrow in (b). The red arrow indicates the increase in In within the Nb layer, and the greyed area marks the interface region shown in the inset high-magnification ADF image.} \label{fig:TEM-Nb-InAs} \end{figure*} As the amorphous layer contains much less In than As, the excess In from the InAs decomposition is expelled and segregates on the far (Nb) side, forming an In-rich band. The existence of tie-lines between a number of Nb$_x$As$_y$ compounds and In/Nb$_3$In in the In-As-Nb phase diagram\cite{Klingbeil89} suggests that compounds of the former can co-exist with In or Nb$_3$In. Similar observations have been made in other material systems such as Pt-GaAs, where gallide formation took place close to the metal interface and arsenide formation close to the semiconductor interface during re-crystallisation.\cite{Sinclair94} Although the subsequent transport measurements do not indicate significant effects from this amorphous layer, the current results highlight the important aspect of room-temperature reactions and amorphisation at semiconductor-superconductor/metal interfaces. \textbf{Electrical Characterization.} The nanowire Josephson junction was integrated in an on-chip bias tee, containing an inter-digital capacitor and a planar coil, to perform both AC and DC measurements (cf. Figure~\ref{fig:device}a). \begin{figure*}[ht!] \centering \includegraphics[width=0.80\textwidth]{Fig3-Device-bias-tee-junction.pdf} \caption{(a) Optical microscope image of a bias-tee chip implemented by combining a coil (1) and an inter-digital capacitor (2) connected to one side of the junction. The other side is connected to the global ground plane. For electrostatic tuning, we use a bottom gate electrode, which is terminated by a large bonding pad (3). The junction is located at (4). (b) Scanning electron micrograph of an InAs nanowire covered by Nb half-shells, which are contacted by NbTi fingers. The junction is placed on a bottom-gate electrode. The metal finger grid on either side of the gate is for mechanical support of the nanowire.} \label{fig:device} \end{figure*} The Nb shells were contacted by NbTi fingers, while control of the carrier concentration of the InAs segment between the Nb electrodes was achieved by means of a bottom gate. A scanning electron micrograph of the junction device is depicted in Figure~\ref{fig:device}b. In order to obtain an overview of the junction properties, the current-voltage ($IV$) characteristics were measured at three different gate voltages at a temperature of 15\,mK. As can be seen in Figure~\ref{fig:IV}a, at zero gate voltage a relatively large switching current of $I_\mathrm{c}=75$\,nA and a retrapping current of $I_\mathrm{r}=60$\,nA are observed, which is in line with previous studies on Nb/InAs nanowire junctions.\cite{Guenel12} For larger gate voltages, i.e. $V_\mathrm{g}=7$\,V, those values increase to $I_\mathrm{c}=133$\,nA and $I_\mathrm{r}=67$\,nA, respectively. In contrast, for $V_\mathrm{g}=-7$\,V the switching and retrapping currents are lowered to values of about 40\,nA and 25\,nA, respectively. Thus, between the smallest and largest gate voltage, the switching current of the junction almost tripled. The fact that $I_\mathrm{r}$ is only slightly lower than $I_\mathrm{c}$ indicates that the heating caused by dissipation in the resistive state is only moderate.
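To put these switching currents into perspective, one may estimate the corresponding Josephson coupling energy; assuming the standard tunnel-limit relation $E_\mathrm{J}=\hbar I_\mathrm{c}/2e$ (only an approximation for a transparent nanowire junction), the zero-gate-voltage value $I_\mathrm{c}=75$\,nA corresponds to \[ E_\mathrm{J} = \frac{\hbar I_\mathrm{c}}{2e} \approx 0.15\,\mathrm{meV}, \quad \text{i.e.} \quad E_\mathrm{J}/h \approx 37\,\mathrm{GHz}, \] an energy scale compatible with the gatemon-type circuit applications mentioned in the introduction.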
\begin{figure*}[!t] \centering \includegraphics[width=0.90\textwidth]{Fig4-InAs-Nb-IVC_excess.pdf} \caption{(a) Current-voltage characteristics of an InAs/Nb shadow junction measured at $V_{\mathrm{g}}=0$\,V, $-7$\,V, and $7$\,V, showing an almost three-fold reduction of the critical current from the largest to the smallest gate voltage. The sweep direction is indicated by arrows. The junction is slightly underdamped and shows a small hysteretic behaviour due to overheating. (b) $IV$ characteristics of the same junction for large bias currents and a gate voltage of $V_{\mathrm{g}}=7$\,V. Based on the measurement we obtain an excess current $I_\mathrm{exc}=327.8\,$nA and a normal state resistance $R_\mathrm{N}=2850\,\Omega$.} \label{fig:IV} \end{figure*} As a result of the special circuit geometry, namely the coupling to a coplanar waveguide transmission line, the junction is affected by the emission and self-absorption of photons due to the AC Josephson effect, resulting in so-called self-induced Shapiro steps on the re-trapping branch. The decrease of $I_\mathrm{c}$ with decreasing $V_\mathrm{g}$ can be attributed to the fact that by lowering the carrier concentration the number of transport channels carrying supercurrent via phase-coherent Andreev reflections is reduced. However, for even more negative gate voltages, no complete suppression of $I_\mathrm{c}$ could be achieved. We attribute this to the incomplete pinch-off of the electron gas in the InAs nanowire bridge segment. This is in contrast to our Al-based nanowire junctions for which all transport could be completely suppressed.\cite{Zellenkens20a} A possible reason is that the in-situ deposited Nb layer changes the Fermi level pinning at the interface, leading to an enhanced carrier accumulation at the interface. Furthermore, the structural properties, like the alloying and the amorphous layer at the interface, may have an effect as well. As a consequence of the inability to pinch off the junction, no tunnel spectroscopy could be performed to assess the hardness of the induced gap. However, due to the large superconducting gap of Nb, e.g. compared to the gap of Al, a more robust supercurrent is maintained in the junction. \begin{figure*}[!t] \centering \includegraphics[width=0.90\textwidth]{Fig5-Shapiro-InAs-Nb-4GHz-histo_IVCs.pdf} \caption{(a) Current-voltage traces for different microwave excitation powers at a fixed frequency of $f=5$\,GHz. While the trace at $-30\,$dBm (dark red) reproduces the current-voltage curve without any additional AC component, the pronounced plateau region within the zero voltage state is replaced by equidistant voltage steps when the power is increased. The sweep direction for all measurements is indicated with the black arrow. (b) Histogram of the power-dependent Shapiro response for a constant microwave frequency of $f=4\,$GHz.} \label{fig:Shapiro} \end{figure*} In order to obtain information about the junction transparency, we measured the $IV$ characteristics up to large bias voltages at $V_\mathrm{g}=0\,$V. By linear extrapolation in the bias voltage range above $2\Delta/e$ we were able to extract an excess current of $I_{\mathrm{exc}}=327.8\,$nA and a normal state resistance of $R_{\mathrm{N}}=2850\,\Omega$. By utilizing the framework of the corrected Octavio--Tinkham--Blonder--Klapwijk theory,\cite{Octavio83,Flensberg88} we obtain a ratio of $eI_\mathrm{exc}R_\mathrm{N}/\Delta=0.623$, which results in a barrier strength of $Z=0.69$.
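The arithmetic behind this extraction is compact enough to summarize in a short sketch (Python; the Nb-shell gap $\Delta\approx1.5$\,meV used below is an illustrative assumption consistent with the quoted ratio, and in practice $R_\mathrm{N}$ and $I_\mathrm{exc}$ are obtained from a linear fit to the measured $IV$ curve above $2\Delta/e$):
\begin{verbatim}
e = 1.602176634e-19              # elementary charge (C)
R_N, I_exc = 2850.0, 327.8e-9    # values quoted in the text
Delta = 1.5e-3 * e               # assumed gap (J), ~1.5 meV

ratio = e * I_exc * R_N / Delta
print(round(ratio, 3))           # ~0.623, as in the text

# The corrected OTBK curves map this ratio onto the barrier
# strength Z = 0.69 quoted above; the corresponding BTK
# transparency is then T = 1/(1 + Z^2):
Z = 0.69
print(round(1 / (1 + Z**2), 2))  # ~0.68
\end{verbatim}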
The corresponding transparency is $\mathcal{T}=1/(1+Z^2)\approx 0.68$, which is a typical value for a nanowire Josephson junction with a wide-gap superconductor like Nb in the many-channel regime.\cite{Guenel12} The junction transparency is large, with no significant detrimental effect apparent from the amorphous interfacial layer observed in the TEM studies. For mesoscopic Josephson junctions, the device response to an applied microwave signal provides information about the internal state structure, such as the presence of a topological state.\cite{Dominguez17,Bocquillon17} Thanks to the combination of the coplanar waveguide transmission line and the bias-tee, the nanowire Josephson junction can be supplied with an AC and a DC signal simultaneously, with efficient transmission to the junction over a wide frequency range, which would be difficult to achieve with an external antenna. Figure~\ref{fig:Shapiro}a shows a set of current-voltage traces for a fixed frequency of $f=5\,$GHz and different microwave powers at $V_{\mathrm{g}}=7\,$V. For low power, i.e. $-30\,$dBm, the curve mimics the behavior of a purely DC-driven junction. However, if the power is increased, the zero voltage state is gradually suppressed and replaced by equidistant voltage plateaus, so-called Shapiro steps, of height $n \times hf/2e$, with $h$ Planck's constant and $n=1,2,3, \dots$. Originating from the AC Josephson effect, they are tightly bound to the current-phase relation and damping behavior. However, while all integer steps are well pronounced, indicating a single well-defined junction, there is no obvious indication of any non-trivial features such as missing steps. Most importantly, there are no signs of subharmonic steps, which may be observed as a consequence of a non-sinusoidal current-phase relationship due to an interface transparency close to unity. As one can see in Figure~\ref{fig:Shapiro}a, the quality and shape of the Shapiro steps also depend on the effective microwave power that is applied to the junction. Thus, we performed a more systematic mapping of the AC response for constant frequencies. Figure~\ref{fig:Shapiro}b shows the histogram of binned voltage data scaled by the current step size of the measurement at $f=4\,$GHz for microwave powers at the input port of the fridge between $-20\,$dBm and $+5\,$dBm. For low powers, the junction exhibits a chevron-like pattern without well-defined steps. The latter can probably be attributed to the fact that the small amplitude of the AC drive is not sufficient to maintain a resonant motion of the phase particle in the washboard potential of the resistively shunted junction (RSJ) model. However, when the microwave power is increased above $-5\,$dBm, clearly pronounced Shapiro steps are observed. \begin{figure*}[!t] \centering \includegraphics[width=0.90\textwidth]{Fig6-InAs-Nb-oop-zero-Gate-map_and_trace.pdf} \caption{(a) Magnetic field dependent differential resistance for $V_\mathrm{g}=0$\,V. The field is oriented in-plane along the nanowire axis. For both sweep directions, the nanowire junction exhibits a fluctuating resistance that corresponds to the alternating suppression and revival of the supercurrent. This effect can be attributed to a mixture of spin-orbit interaction and the interference between multiple transverse modes in the nanowire.\cite{Zuo17} The observed behavior is maintained for magnetic fields above 2\,T, indicating a comparably large critical field $B_\mathrm{c}$.
(b) Field-dependent magnitude of the switching current, clearly showing the reappearance of the supercurrent for fields up to 2\,T.} \label{fig:Fraunhofer} \end{figure*} Operation in the presence of a magnetic field is common to all structures that are based on few-channel mesoscopic nanowire Josephson junctions, such as the Andreev qubit or topological systems. In the case of the former, for example, the system can operate as intended if the junction and the connected superconducting loop are exposed to a magnetic flux $\Phi=\Phi_0/2$ corresponding to a phase bias of $\pi$, with magnetic flux quantum $\Phi_0=h/2e$. For the creation of Majorana zero modes, on the other hand, one needs a strong in-plane field that can easily exceed hundreds of millitesla. This is especially true in the case of InAs due to its smaller g-factor compared with InSb.\cite{Winkler03} Thus, the magnetic field robustness of the induced superconductivity in the nanowire junction is of special interest to benchmark the device performance. Figure~\ref{fig:Fraunhofer}a shows the device response in terms of the change of the differential resistance when the system is penetrated by a magnetic field parallel to the nanowire axis. Here, the most obvious feature is the lobe-like pattern centered around zero magnetic field. Considering the center lobe, the supercurrent is suppressed at magnetic field magnitudes of around $\pm 0.7$\,T. The strong asymmetry along the current axis close to zero magnetic field can be attributed to the difference between the switching and re-trapping currents. The stripe-like structure in the re-trapping branch is due to the existence of self-induced Shapiro steps. We find that the field dependence of the supercurrent does not follow the monotonic decrease expected as the superconducting gap energy decreases with the magnetic field. Instead, the device exhibits an alternating, non-periodic series of sections with and without a supercurrent. These lobe-like structures at higher magnetic fields, in which the supercurrent reappears for a finite field range, can probably be attributed to the intermixing and interference of multiple but not-too-many transverse modes.\cite{Zuo17} The corresponding field-dependent magnitude of the switching current depicted in Figure~\ref{fig:Fraunhofer}b clearly shows that the lobe structures do not follow a typical Fraunhofer-like pattern and the junction can still host a finite supercurrent even up to 2\,T. \section{Conclusion} Our results show that highly transparent Josephson junctions can be fabricated by combining selective-area growth with a shadow evaporation scheme for the superconducting electrodes. The transmission electron microscopy investigations confirmed the absence of any foreign residue at the interface between the superconductor and the InAs nanowire. However, a very thin amorphous layer is observed at the interface. The Nb growth on the middle facet is found to be smooth, consisting of large grains, while the Nb layers on the side facets are polycrystalline and column-like. Owing to the large interface transparency, the junctions showed a clear signature of a Josephson supercurrent in the transport experiments. Gate control was possible; however, compared to Al-based junctions prepared in a similar fashion, no complete pinch-off was achieved, which may be due to an enhanced surface accumulation in InAs when in contact with Nb. The Shapiro response observed in the $IV$ characteristics shows pronounced integer steps, indicating a sinusoidal current-phase relation.
Taking the magnetic interference effects into account, we have provided strong evidence for the successful fabrication of a weak link that works well even in high magnetic fields. Our Nb/InAs nanowire-based junctions prove to be very interesting devices, with great potential for applications in superconducting quantum circuits that require large magnetic fields. In fact, most of the advanced approaches for the detection, manipulation and utilization of topological excitations, like the Majorana-transmon, rely on phase-sensitive and well-controlled detector structures. Here, our InAs/Nb in-situ nanowire shadow Josephson junctions can act as ideal gate-tunable components in superconducting quantum interference device (SQUID) structures which do not exhibit pronounced quantum fluctuations of the supercurrent and, additionally, maintain their superconducting properties even at large magnetic fields. The exact origin of the non-ideal transparency remains unclear. Even though we do not see any obvious indications in our measurements that the transport properties are altered by the interlayer, it is still interesting to investigate its effects experimentally. Further improvement of the nanowire Josephson junctions may be possible by post-growth annealing, thereby inducing re-crystallisation of the amorphous layer at the Nb/InAs interface. \section{Methods} \textbf{Growth and Fabrication.} The pre-patterned Si substrates used for the selective-area growth of the InAs nanowires were prepared by using a three-step electron beam lithography process, i.e. the first step places the alignment markers, the second defines the square-shaped troughs, and the third step is employed to define the nanoholes on the side facets of the square troughs. The substrates used for the template fabrication are Si (100) wafers, thermally oxidized to an oxide thickness of 20\,nm. The alignment markers for electron beam lithography are defined by deep etching using reactive ion processing. Next, several sets of 3\,$\mu$m wide square troughs with a pitch of 10\,$\mu$m are defined on the substrate surface (cf. Figure~S1a in Supporting Information). Here, a PMMA resist layer is used as a mask to anisotropically etch the SiO$_2$ layer by reactive ion etching using CHF$_3$ and oxygen, revealing the Si(100) surface. Subsequently, the resist mask is removed and the Si(100) surface is etched for 90\,s with tetramethyl ammonium hydroxide (TMAH) to form the 300\,nm deep square-shaped troughs with the Si(111) facets (cf. Figure~S1b). Next, the oxide on the substrate surface is removed completely with buffered HF. Subsequently, a thermal re-oxidation is performed, resulting in a 23 and 16\,nm thick oxide layer on the (111) and (100) surfaces, respectively (cf. Figure~S2). As a part of the third lithography step, 80\,nm wide growth holes are defined in the oxide layer on the side facets for the subsequent selective-area growth. The holes are etched using a combination of reactive ion etching and HF wet etching (cf. Figure~S1c). A cross-sectional cut of an 80\,nm wide hole on a Si(111) facet, prepared by focused ion beam etching, is depicted in Figure~S1d. Regarding the position of the holes, an offset of 100\,nm from the center of the facet is imposed to enable nanowires from neighboring facets to cross each other closely rather than merging into one crystal. The InAs nanowires are selectively grown in the holes on the Si(111) facets via molecular beam epitaxy (MBE). A vapour-solid method without any catalyst is employed.
In the first step, the nanowires are grown at a substrate temperature of 480\,$^{\circ}$C with an indium growth rate of 0.08\,$\mu$m/h and an As$_4$ beam equivalent pressure (BEP) of $\approx 4 \times 10^{-5}$\,mbar for 10\,min to sustain an optimal growth window. In the second step, the substrate temperature is decreased to 460\,$^{\circ}$C with an indium growth rate of 0.03\,$\mu$m/h and an As$_3$ BEP of $\approx 3 \times 10^{-5}$\,mbar for 2.5\,h, resulting in $4-5\,\mu$m long and 80\,nm wide nanowires. After the growth of the InAs nanowires, the substrate undergoes an arsenic desorption at 400\,$^\circ$C for 20\,min and at 450\,$^\circ$C for 5\,min. Subsequently, the sample is transferred to a metal MBE chamber, where the Nb metal shell is deposited at an angle of 87$^\circ$ to the nanowire axis. The Nb is evaporated at a substrate temperature of 50$\,^\circ$C. The measured growth rate of 0.082\,nm/s resulted in an average Nb thickness on the nanowire of 17\,nm. Further processing details on substrate preparation and growth can be found in the Supporting Information. In addition, as elaborated in the Supporting Information, in-situ InAs nanowire-based Josephson junctions can also be fabricated by using random nanowire growth on adjacent Si(111) side facets. This approach offers the advantage of easier fabrication but has the disadvantage of uncontrolled junction formation (cf. Figure~S4). \noindent \textbf{Device fabrication.} The devices for electrical characterization were fabricated on highly resistive Si substrates with pre-patterned bottom gate structures and a superconducting circuit, as shown in Figure~\ref{fig:device}a. As the devices are intended to work for both AC and DC measurements, we use a transmission line in coplanar waveguide geometry to form the source contact of the nanowire Josephson junction. The latter is terminated by an on-chip bias tee, consisting of an inter-digital capacitor and a planar coil. All three elements, together with the surrounding ground plane, were made of reactively sputtered TiN with a thickness of 80$\,$nm. Subsequently, the nanowires were deposited onto the electrostatic gates by means of an SEM-based micro-manipulator setup. To ensure an ohmic coupling between the contacts, made of NbTi, and the Nb shell, we used an in-situ Ar$^+$ dry etching step prior to the metal deposition. The contact separation is chosen to be at least 1.5$\,\mu$m in order to reduce the effect of the wide-gap superconductor NbTi on the actual junction characteristics. The finished junction device is depicted in Figure~\ref{fig:device}b. \\ \noindent \textbf{Transmission electron microscopy.} For the side-view analysis, the nanowires were transferred from the growth arrays to holey carbon grids by gently rubbing the two surfaces. The cross-section samples were prepared using a focused ion beam (FIB). TEM analysis was carried out using doubly corrected JEOL ARM 200F and JEOL 2100 microscopes, both operating at 200\,kV. The EDX measurements were carried out using an Oxford Instruments $100\,\mathrm{mm}^2$ windowless detector installed within the JEOL ARM 200F. \noindent \textbf{Electrical measurements.} The electrical measurements were performed in a $^3$He/$^4$He dilution refrigerator with a base temperature of 13\,mK. The current-voltage characteristics were measured in a quasi four-terminal configuration using a current bias. For the differential resistance measurements a standard lock-in technique was employed.
The rf signal for the measurements of the Shapiro steps was applied to the junction via the capacitor of the bias-tee. \section*{Acknowledgements} We thank Tobias Ziegler and Anton Faustmann for helpful discussions and assistance with the micro-manipulator, Michael Schleenvoigt for the metal deposition, and Christoph Krause and Herbert Kertz for technical assistance. Dr. Florian Lentz and Dr. Stefan Trellenkamp are also gratefully acknowledged for electron beam lithography. Dr. Elmar Neumann and Stephany Bunte are thanked for their immense help with the FIB and the Magellan SEM. Dr. Gianluigi Catelani is gratefully acknowledged for theory support regarding the magnetic field measurements. All samples have been prepared at the Helmholtz Nano Facility.\cite{GmbH2017} The work at RIKEN was partially supported by Grant-in-Aid for Scientific Research (B) (No. 19H02548), Grants-in-Aid for Scientific Research (S) (No. 19H05610), and Scientific Research on Innovative Areas ``Science of hybrid quantum systems'' (No. 15H05867). The work at Forschungszentrum J\"ulich was partly supported by the project ``Scalable solid state quantum computing'', financed by the Initiative and Networking Fund of the Helmholtz Association. UK EPSRC is acknowledged for funding through grant No. EP/P000916/1.
\section{Introduction} It is well-established that the symmetries of a superconducting order parameter influence many of the physical properties of a superconductor\cite{anderson1959theory,ferrell1959knight,anderson1959knight,sigrist1991phenomenological,van1995phase,tsuei2000pairing,balatsky2006impurity,qi2011topological}, including robustness to impurities\cite{anderson1959theory,balatsky2006impurity}, Knight shift as measured by nuclear magnetic resonance\cite{ferrell1959knight,anderson1959knight}, anisotropy in phase-sensitive measurements\cite{van1995phase}, and topological properties\cite{qi2011topological}. Within the standard BCS theory of superconductivity the order parameter, or mean field, usually denoted $\Delta$, is given in terms of equal-time expectation values of the form $\Delta\sim \langle \psi(t) \psi(t)\rangle$, which can be viewed as a many-body wavefunction describing pairs of electrons at the same time, $t$. Since electrons are fermions, this wavefunction must be antisymmetric under the simultaneous permutation of all quantum numbers describing the electrons, i.e.~spin and position degrees of freedom, for single band superconductors. This implies that, for superconductors with a single relevant band, order parameters with even spatial parity (like $s$- or $d$-wave) must correspond to a spin-singlet configuration, while order parameters with odd spatial parity ($p$- or $f$-waves) must correspond to spin-triplet states. A more accurate description of conventional superconductivity, however, starts with a retarded phonon-mediated interaction, described by Eliashberg theory\cite{eliashberg1960interactions,abrikosov2012methods,mahan2013many} and its generalizations\cite{scalapino1966strong,berk1966effect}. In this formalism the superconducting mean field is related to time-ordered expectation values, $\Delta(t-t')\sim \langle T \psi(t) \psi(t')\rangle$, and therefore necessarily depends on the relative time, $t-t'$, or, equivalently, the relative frequency, $\omega$. As a consequence, we obtain a symmetry constraint for this time-ordered expectation value of Cooper pairs, also known as the pair correlator or anomalous Green's function. As Berezinskii showed in 1974\cite{Berezinskii1974}, this allows for the possibility of odd-frequency (odd-$\omega$) order parameters which possess the opposite relationship between the spatial parity and spin configurations, i.e.~even-parity spin-triplet and odd-parity spin-singlet. We stress that this does not imply the breaking of time-reversal symmetry, since odd-$\omega$ order parameters are simply odd functions of the relative time coordinate, while the time-reversal operation includes complex conjugation\cite{kuzmanovski2017multiple,geilhufe2018symmetry,linder2017odd}. While Berezinskii's original proposal was made in the context of superfluid $^3$He, later works generalized the possibility of odd-$\omega$ order parameters to superconductivity\cite{kirkpatrick_1991_prl, belitz_1992_prb, BalatskyPRB1992, coleman_1993_prl, coleman_1994_prb, coleman_1995_prl}. However, since these original proposals, the thermodynamic stability of intrinsically odd-$\omega$ superconductors has been called into question\cite{heid1995thermodynamic, belitz_1999_prb, solenov2009thermodynamical, kusunose2011puzzle, FominovPRB2015}.
Still, while the existence of such intrinsic odd-$\omega$ order parameters remains an intriguing theoretical question, a great deal of progress has been made in studying the emergence of odd-$\omega$ pair correlations in systems with conventional equal-time order parameters\cite{bergeret2005odd,linder2017odd}. This latter possibility relies on the conversion of intrinsic even-$\omega$ superconducting correlations to odd-$\omega$ correlations, with a number of proposals in the literature realizing such symmetry conversion through a variety of different mechanisms\cite{BergeretPRL2001, bergeret2005odd, halterman2007odd, yokoyama2007manifestation, houzet2008ferromagnetic, EschrigNat2008, LinderPRB2008, crepin2015odd, YokoyamaPRB2012, Black-SchafferPRB2012, Black-SchafferPRB2013, TriolaPRB2014, tanaka2007theory, TanakaPRB2007,cayao2017odd, cayao2018odd, LinderPRL2009, LinderPRB2010_2, TanakaJPSJ2012, triola2016prl, triolaprb2016, black2013odd, sothmann2014unconventional, parhizgar_2014_prb, asano2015odd, komendova2015experimentally, burset2016all, komendova2017odd, kuzmanovski2017multiple, triola2017pair,keidel2018tunable, triola2018odd,fleckenstein2018conductance,asano2018green,triola2018oddnw}. The prototypical example is the superconductor-ferromagnet (SF) junction, for which numerous theoretical works have demonstrated that the breaking of spin-rotational symmetry can convert conventional $s$-wave spin-singlet Cooper pairs to odd-$\omega$ spin-triplet pairs\cite{BergeretPRL2001, bergeret2005odd, halterman2007odd, yokoyama2007manifestation, houzet2008ferromagnetic, EschrigNat2008, LinderPRB2008, crepin2015odd}. Furthermore, experiments on these junctions have observed multiple signatures of the odd-$\omega$ spin-triplet pair correlations\cite{petrashov1994conductivity,giroud1998superconducting,petrashov1999giant,aumentado2001mesoscopic,zhu2010angular,di2015signature,di2015intrinsic}. Interestingly, it has also been demonstrated that odd-parity odd-$\omega$ pairing can emerge at the interface between a conventional even-parity superconductor and a normal metal (SN junction) due to broken spatial translation symmetry\cite{tanaka2007theory,TanakaPRB2007}. The magnitudes of the odd-$\omega$ correlations have been shown to dominate over the even-$\omega$ amplitudes at discrete energy levels coinciding exactly with peaks in the local density of states (LDOS)\cite{TanakaPRB2007}, establishing a relationship between odd-$\omega$ pairing and McMillan-Rowell oscillations\cite{rowell1966electron,rowell1973tunneling} as well as midgap Andreev resonances\cite{alff1997spatially,covington1997observation,wei1998directional}. In this article we focus on several recent works exploring the intriguing possibility of realizing odd-$\omega$ pair correlations in multiband superconductors, without the need for magnetism or interfaces. In 2013 it was shown that odd-$\omega$ pairing should arise ubiquitously in such systems due to the presence of interband hybridization\cite{black2013odd}. Since this interband hybridization is generally uniform throughout the bulk of a multiband superconductor, this establishes the existence of a class of systems that should host bulk odd-$\omega$ pairing.
Since the original proposal, multiple additional works have built on this concept\cite{ sothmann2014unconventional, parhizgar_2014_prb,asano2015odd, komendova2015experimentally,triola2016prl,triolaprb2016, burset2016all, ebisu2016theory, komendova2017odd, kuzmanovski2017multiple, triola2017pair, balatsky2018odd, keidel2018tunable, triola2018odd,fleckenstein2018conductance,asano2018green,triola2018oddnw}, generalizing the original idea to related systems, focusing on specific physical examples, or studying the experimental consequences of odd-$\omega$ pairing in these multiband systems. With many known multiband superconductors exhibiting highly unconventional features, such as Sr$_2$RuO$_4$\cite{maeno1994superconductivity,maeno2012}, iron-based superconductors \cite{hunte2008two, kamihara2008iron, ishida2009extent, cvetkovic2009multiband, stewart2011superconductivity}, MgB$_2$ \cite{nagamatsu2001superconductivity, bouquet2001specific, brinkman2002multiband, golubov2002specific,iavarone2002two}, and UPt$_3$\cite{stewart_prl_1984, adenwalla1990,sauls1994,strand_prl_2009,strand_science_2010}, it remains a very interesting question how much odd-$\omega$ superconductivity contributes to the physical properties of these and related systems. It is important to note that the presence of additional band degrees of freedom in these multiband superconductors leads not only to novel methods of inducing odd-$\omega$ pairing, but also to a broader classification scheme for the symmetry of the Cooper pairs. For a generic superconducting system with multiple bands crossing the Fermi level, we write the anomalous Green's function as the time-ordered expectation value: $F(1,2)=-\langle T \psi_{\sigma_1,x_1,\alpha_1}(t_1)\psi_{\sigma_2,x_2,\alpha_2}(t_2) \rangle$, where $\sigma_i$, $x_i$, $\alpha_i$, and $t_i$ represent the spin, position, band, and time degrees of freedom. We emphasize that this band index has its origin in the electronic degrees of freedom of the superconductor and could stem from any of the indices characterizing the system, including the atomic orbital, sublattice, layer, dot, lead, or valley indices, as we discuss in Sec.~\ref{sec:other}. Accounting for the antisymmetry under the simultaneous exchange of all of these pairs of indices, we necessarily have $F(1,2)=-F(2,1)$. This leads to eight possible symmetry classes\cite{black2013odd, asano2015odd, triolaprb2016, linder2017odd}, four even-$\omega$ and four odd-$\omega$ classes, as seen in Table \ref{table:classification}. \begin{table} \begin{tabular}{c || c | c | c | c || c | c | c | c |} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Spin ($\mathcal{S}$) & - & + & + & - & + & - & - & + \\ \hline Parity ($\mathcal{P}$) & + & - & + & - & + & - & + & - \\ \hline Orbital ($\mathcal{O}$) & + & + & - & - & + & + & - & - \\ \hline Time ($\mathcal{T}$) & + & + & + & + & - & - & - & - \\ \end{tabular} \caption{Characterization of the eight symmetry classes for superconducting pair amplitudes allowed by Fermi-Dirac statistics. Each column represents a different symmetry class, with the sign, $\pm$, representing the symmetry of the anomalous Green's function under the exchange of the index indicated in the far left column: $\mathcal{S}F_{\sigma_1,\sigma_2}= F_{\sigma_2,\sigma_1}$ (spin); $\mathcal{P}F_{x_1,x_2}=F_{x_2,x_1}$ (parity); $\mathcal{O}F_{\alpha_1,\alpha_2}=F_{\alpha_2,\alpha_1}$ (band or similar); and $\mathcal{T}F_{t_1,t_2}=F_{t_2,t_1}$ (time).} \label{table:classification} \end{table} In the remainder of this article we provide, in Sec.
\ref{sec:overview}, a pedagogical overview of the emergence of odd-$\omega$ pairing in a simple two-band model due to interband hybridization, as well as a discussion of when we should expect to find odd-$\omega$ pairing in a generic superconducting system. Following the overview, we focus in Sec.~\ref{sec:examples} on specific proposals for realizing odd-$\omega$ pairing in well-known multiband superconductors, and also in materials and systems possessing related active electronic degrees of freedom, such as layer, dot, lead or valley indices. In Sec.~\ref{sec:experiment} we discuss proposed experimental signatures of odd-$\omega$ pairing in multiband superconductors. Finally, in Sec.~\ref{sec:conclusions} we conclude our discussion. \section{General Results for Multiband Odd-frequency Pairing} \label{sec:overview} \subsection{Two Band Model with Interband Hybridization} \label{sec:twoband_criteria} To illustrate the emergence of odd-$\omega$ pairing in multiband superconductors we study the pair amplitudes associated with the following two-band Hamiltonian\cite{black2013odd}: \begin{equation} H=\frac{1}{2}\sum_{\textbf{k}} \Psi^\dagger_{\textbf{k}} \left( \begin{array}{cc} \hat{h}_{\textbf{k}} & \hat{\Delta}_{\textbf{k}} \\ \hat{\Delta}^\dagger_{\textbf{k}} & -\hat{h}^*_{-\textbf{k}} \end{array} \right) \Psi_{\textbf{k}}, \label{eq:ham2band_compact} \end{equation} using the basis: \begin{equation} \Psi^\dagger_{\textbf{k}}= \left( c^\dagger_{\uparrow,1,\textbf{k}} c^\dagger_{\uparrow,2,\textbf{k}} c^\dagger_{\downarrow,1,\textbf{k}} c^\dagger_{\downarrow,2,\textbf{k}} c_{\uparrow,1,-\textbf{k}} c_{\uparrow,2,-\textbf{k}} c_{\downarrow,1,-\textbf{k}} c_{\downarrow,2,-\textbf{k}} \right), \end{equation} where $c^{\dagger}_{\sigma,\alpha,\textbf{k}}$ ($c_{\sigma,\alpha,\textbf{k}}$) creates (annihilates) a fermionic quasiparticle with spin $\sigma$ in band $\alpha$ and with momentum $\textbf{k}$, together with the definitions: \begin{equation} \begin{aligned} \hat{h}_{\textbf{k}}=\left(\begin{array}{cc} \xi_{1,\textbf{k}} & \Gamma \\ \Gamma^* & \xi_{2,\textbf{k}} \end{array} \right) \otimes \hat{\sigma}_0, \ \hat{\Delta}_{\textbf{k}}=\left(\begin{array}{cc} \Delta_{1,\textbf{k}} & 0 \\ 0 & \Delta_{2,\textbf{k}} \end{array} \right)\otimes i\hat{\sigma}_2, \end{aligned} \label{eq:ham2band_compact_definitions} \end{equation} where $\hat{\sigma}_0$ and $\hat{\sigma}_{i=1,2,3}$ are the identity and Pauli matrices in spin space. Here $\xi_{\alpha,\textbf{k}}$ is the energy dispersion of band $\alpha$, $\Gamma$ is a measure of the interband hybridization, and $\Delta_{\alpha,\textbf{k}}$ is the superconducting order parameter in band $\alpha$, here assumed for simplicity to be spin-singlet in nature, although it is trivial to extend the derivation to spin-triplet pairing. We note that this kind of interband hybridization, $\Gamma$, is intrinsic to a superconductor whenever there is a mismatch between the quasiparticles of the normal state and the orbital character of the Cooper pairs or, alternatively, it can arise from scattering processes in the presence of disorder\cite{black2013odd,komendova2015experimentally,komendova2017odd}. Hence, for generic multiband superconductors we expect it to be nonzero.
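For concreteness, the Bogoliubov-de Gennes matrix of Eq.~(\ref{eq:ham2band_compact}) at a single $\textbf{k}$-point can be assembled numerically. The following minimal sketch (Python, with arbitrary illustrative parameters, and assuming $\xi_{\alpha,-\textbf{k}}=\xi_{\alpha,\textbf{k}}$ and a $\textbf{k}$-independent $\Gamma$ so that $-\hat{h}^*_{-\textbf{k}}=-\hat{h}^*_{\textbf{k}}$) is reused in the numerical checks further below:
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
s2 = np.array([[0, -1j], [1j, 0]])     # Pauli matrix in spin space

def h_normal(xi1, xi2, Gamma):
    """Normal-state block h_k, trivial in spin space."""
    band = np.array([[xi1, Gamma], [np.conj(Gamma), xi2]])
    return np.kron(band, s0)

def gap(D1, D2):
    """Spin-singlet pairing block Delta_k (band-diagonal)."""
    return np.kron(np.diag([D1, D2]), 1j * s2)

def bdg(xi1, xi2, Gamma, D1, D2):
    h, D = h_normal(xi1, xi2, Gamma), gap(D1, D2)
    return np.block([[h, D], [D.conj().T, -h.conj()]])

H = bdg(xi1=0.5, xi2=-0.3, Gamma=0.2, D1=0.3, D2=0.1)
assert np.allclose(H, H.conj().T)      # Hermiticity check
\end{verbatim}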
To study the pair amplitudes associated with Eq.~\eqref{eq:ham2band_compact}, it is convenient to define the Nambu-Gorkov Green's functions as follows\cite{abrikosov2012methods,mahan2013many}: \begin{equation} \begin{aligned} G_{\sigma,\alpha;\sigma',\alpha'}(\textbf{k},\tau)&=-\langle T_\tau c_{\sigma,\alpha,\textbf{k}}(\tau) c^\dagger_{\sigma',\alpha',\textbf{k}}(0) \rangle, \\ F_{\sigma,\alpha;\sigma',\alpha'}(\textbf{k},\tau)&=-\langle T_\tau c_{\sigma,\alpha,-\textbf{k}}(\tau) c_{\sigma',\alpha',\textbf{k}}(0) \rangle, \\ \bar{G}_{\sigma,\alpha;\sigma',\alpha'}(\textbf{k},\tau)&=-\langle T_\tau c^\dagger_{\sigma,\alpha,-\textbf{k}}(\tau) c_{\sigma',\alpha',-\textbf{k}}(0) \rangle, \\ \bar{F}_{\sigma,\alpha;\sigma',\alpha'}(\textbf{k},\tau)&=-\langle T_\tau c^\dagger_{\sigma,\alpha,\textbf{k}}(\tau) c^\dagger_{\sigma',\alpha',-\textbf{k}}(0) \rangle, \end{aligned} \label{eq:greens_definition} \end{equation} where $\tau$ is imaginary time and $T_\tau$ is the $\tau$-ordering operator. With these definitions, it is straightforward to derive the following equations of motion: \begin{equation} \left(\begin{array}{cc} i\omega_n-\hat{h}_{\textbf{k}} & -\hat{\Delta}_{\textbf{k}} \\ -\hat{\Delta}^\dagger_{\textbf{k}} & i\omega_n+\hat{h}_{-\textbf{k}}^* \end{array} \right) \left(\begin{array}{cc} \hat{G}(\textbf{k},i\omega_n) & \hat{F}(\textbf{k},i\omega_n) \\ \hat{\bar{F}}(\textbf{k},i\omega_n) & \hat{\bar{G}}(\textbf{k},i\omega_n) \end{array} \right) =\mathbb{1}, \label{eq:2band_eom} \end{equation} where we have Fourier-transformed the Green's functions from imaginary time, $\tau$, to Matsubara frequency, $i\omega_n$, and $\mathbb{1}$ is the 8$\times$8 identity matrix in band $\times$ spin $\times$ particle-hole space. For simplicity, we assume time-reversal symmetry such that $\xi_{\alpha,-\textbf{k}}=\xi_{\alpha,\textbf{k}}$ and for the moment we also set $\Gamma=\Gamma^*$. After some straightforward algebra we find that the anomalous Green's function, $\hat{F}$, is given by: \begin{widetext} \begin{equation} \begin{aligned} \hat{F}(\textbf{k},i\omega_n)=&\frac{1}{D_{\textbf{k},i\omega_n}}\left(\begin{array}{cc} \Delta_{1,\textbf{k}}\left[(i\omega_n)^2-E_{2,\textbf{k}}^2 \right]-\Delta_{2,\textbf{k}}\Gamma^2 & \Gamma\left[-i\omega_n\left(\Delta_{1,\textbf{k}}-\Delta_{2,\textbf{k}} \right)+\Delta_{1,\textbf{k}}\xi_{2,\textbf{k}}+\Delta_{2,\textbf{k}}\xi_{1,\textbf{k}} \right] \\ \Gamma\left[i\omega_n\left(\Delta_{1,\textbf{k}}-\Delta_{2,\textbf{k}} \right)+\Delta_{1,\textbf{k}}\xi_{2,\textbf{k}}+\Delta_{2,\textbf{k}}\xi_{1,\textbf{k}} \right] & \Delta_{2,\textbf{k}}\left[(i\omega_n)^2-E_{1,\textbf{k}}^2 \right]-\Delta_{1,\textbf{k}}\Gamma^2 \end{array} \right) \otimes i\hat{\sigma}_2, \end{aligned} \label{eq:2band_anomalous} \end{equation} \end{widetext} where we define: \begin{equation} \begin{aligned} D_{\textbf{k},i\omega_n}& =(i\omega_n)^4 -(i\omega_n)^2\left[ E_{1,\textbf{k}}^2+E_{2,\textbf{k}}^2+2\Gamma^2 \right]+E_{1,\textbf{k}}^2E_{2,\textbf{k}}^2 \\ &+ \Gamma^2\left(\Delta_{1,\textbf{k}}\Delta_{2,\textbf{k}}^*+\Delta_{1,\textbf{k}}^*\Delta_{2,\textbf{k}}+\Gamma^2-2\xi_{1,\textbf{k}}\xi_{2,\textbf{k}} \right), \\ E_{\alpha,\textbf{k}}&=\sqrt{\xi_{\alpha,\textbf{k}}^2 + \Delta_{\alpha,\textbf{k}}^2}. \end{aligned} \label{eq:denomenator} \end{equation} From Eq.
(\ref{eq:2band_anomalous}) we directly see that the pair amplitude's spin structure is given by $i\hat{\sigma}_2$; therefore, the spin-singlet nature of the Cooper pairs remains completely unaffected by the presence of the interband hybridization. Inspecting the intraband pairing, given by the diagonal elements of the matrix in Eq.~(\ref{eq:2band_anomalous}), we find that all amplitudes are even in Matsubara frequency and spatial parity, corresponding to pair amplitudes in the first column of Table \ref{table:classification}. However, turning our attention to the interband pairing, given by the off-diagonal elements of the matrix in Eq. (\ref{eq:2band_anomalous}), we find that these pair amplitudes have both even- and odd-$\omega$ terms. Notably, we see that the even-$\omega$ amplitude is also even in the band index, and, thus, also belongs to the symmetry class in column 1 of Table \ref{table:classification}, while the odd-$\omega$ amplitude is odd in the band index and, thus, belongs to the symmetry class in column 7 of Table \ref{table:classification}. As a consequence, for this model, all pair amplitudes that are even in the band index are also even in frequency while the odd-band pairing is entirely odd-$\omega$\cite{black2013odd,komendova2015experimentally,komendova2017odd}. This complete reciprocity between frequency and band parity also holds for more complicated models as long as the order parameter appearing in the Hamiltonian is even in the band index and no other symmetries are broken. If we relax the assumption that the interband hybridization is real, instead setting $\Gamma=|\Gamma|e^{i\phi}$, we find that the odd-$\omega$ pair amplitude is given by\cite{asano2015odd}: \begin{equation} F_{odd}(\textbf{k};i\omega_n)=\frac{i\omega_n |\Gamma|}{D'_{\textbf{k},i\omega_n}}\left(\Delta_{1,\textbf{k}}e^{-i\phi}-\Delta_{2,\textbf{k}}e^{i\phi}\right)\hat{\rho}_2\otimes\hat{\sigma}_2, \label{eq:odd_complex_gamma} \end{equation} where $\hat{\rho}_2$ is a Pauli matrix in band space and $D'_{\textbf{k},i\omega_n}$ is still an even function of $\textbf{k}$ and $i\omega_n$. From Eq. (\ref{eq:odd_complex_gamma}) we see that, when $\phi=0$ (real $\Gamma$), the odd-$\omega$ pairing in this two-band model is proportional to $i\omega_n\Gamma\left(\Delta_{1,\textbf{k}}-\Delta_{2,\textbf{k}} \right)$, and is therefore non-zero whenever there is both interband hybridization, $\Gamma\neq 0$, and a difference between the two gaps, $\Delta_{1,\textbf{k}}-\Delta_{2,\textbf{k}}\neq 0$, which is usually the case in multiband superconductors. When $\phi=\tfrac{\pi}{2}$ (imaginary $\Gamma$), the odd-$\omega$ pairing is instead proportional to $i\omega_n\Gamma\left(\Delta_{1,\textbf{k}}+\Delta_{2,\textbf{k}} \right)$ and is therefore non-zero as long as $\Gamma\neq 0$ and $\Delta_{1,\textbf{k}}\neq-\Delta_{2,\textbf{k}}$. Between these two extremes we find a non-zero odd-$\omega$ interband pair amplitude regardless of the values of the two gaps, as long as there is finite interband hybridization. Moreover, given that this kind of interband hybridization should be present in most multiband superconductors, we expect odd-$\omega$ pairing to be ubiquitous in multiband superconductors\cite{black2013odd,komendova2015experimentally,komendova2017odd}.
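These statements are straightforward to check numerically. A minimal sketch (Python, illustrative parameters, real $\Gamma$; repeating the BdG construction sketched above) inverts $i\omega_n-\hat{H}_{\mathrm{BdG}}$, reads off the electron-hole block as $\hat{F}(\textbf{k},i\omega_n)$, and splits it into even- and odd-$\omega$ parts:
\begin{verbatim}
import numpy as np

s0 = np.eye(2); s2 = np.array([[0, -1j], [1j, 0]])

def bdg(xi1, xi2, Gamma, D1, D2):
    h = np.kron(np.array([[xi1, Gamma], [np.conj(Gamma), xi2]]), s0)
    D = np.kron(np.diag([D1, D2]), 1j * s2)
    return np.block([[h, D], [D.conj().T, -h.conj()]])

def F(H, wn):
    """Electron-hole block of (i*wn - H)^(-1), i.e. F(k, i*wn)."""
    return np.linalg.inv(1j * wn * np.eye(8) - H)[:4, 4:]

H = bdg(0.5, -0.3, 0.2, 0.3, 0.1)      # Delta_1 != Delta_2
wn = 0.37                              # arbitrary Matsubara frequency
F_odd = (F(H, wn) - F(H, -wn)) / 2     # odd-frequency part

# F_odd is nonzero only in the band-off-diagonal (interband) entries:
print(np.round(F_odd, 4))

# ...and it vanishes identically when the two gaps are equal:
H_eq = bdg(0.5, -0.3, 0.2, 0.3, 0.3)
assert np.allclose(F(H_eq, wn), F(H_eq, -wn), atol=1e-12)
\end{verbatim}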
\subsection{Generalization to Arbitrary Hamiltonians} \label{sec:gen} To understand how the results for the simple two-band model generalize to more complicated models, it is instructive to consider a generic model: \begin{equation} \begin{aligned} \label{eq:hamgenband_compact} H=&\sum_{n,m} \left( \begin{array}{cc} c^\dagger_n & c_n \end{array} \right) \left( \begin{array}{cc} h_{nm} & \Delta_{nm} \\ \Delta^\dagger_{nm} & -h_{nm}^* \end{array} \right) \left( \begin{array}{c} c_m \\ c^\dagger_m \end{array} \right), \end{aligned} \end{equation} where the indices $n,m$ label all degrees of freedom for the quasiparticles, including spin, position, band/orbital, sublattice, etc. It is easy to see that the Hamiltonian in Eq. (\ref{eq:ham2band_compact}) is a particular example of a Hamiltonian of this form. Moreover, any Hermitian Hamiltonian with a BCS-like order parameter may be written in this form. To examine the pair amplitudes for the Hamiltonian in Eq.~\eqref{eq:hamgenband_compact} we use Green's functions which are merely generalized versions of Eqs. (\ref{eq:greens_definition}), using the anomalous Green's function $F_{nm}(\tau)=-\langle T_\tau c_{n}(\tau) c_{m}(0) \rangle$. It is straightforward to write down the equations of motion for these Green's functions as a generalized version of Eq. (\ref{eq:2band_eom}), from which we find that $\hat{F}$ is given by: \begin{equation} \begin{aligned} \hat{F}(i\omega_n)= &\left[\left(i\omega_n-\hat{h}\right)-\hat{\Delta}\left(i\omega_n+\hat{h}^*\right)^{-1}\hat{\Delta}^\dagger \right]^{-1}\hat{\Delta} \\ &\times\left(i\omega_n+\hat{h}^*\right)^{-1}. \end{aligned} \label{eq:f_general} \end{equation} Here, the hat symbol denotes matrices with indices $n$, $m$ running over all quantum numbers describing the quasiparticles, including position, spin, and any band or similar degrees of freedom. From Eq. (\ref{eq:f_general}) we see that, in general, this matrix should possess both even-$\omega$ and odd-$\omega$ terms, with the details depending on the precise form of the Hamiltonian. Further insight can be gained by expanding the right-hand side to leading order in $\hat{\Delta}$, in which case we find linearized expressions for both the even- and odd-$\omega$ pair amplitudes: \begin{equation} \begin{aligned} \hat{F}_{even}(i\omega_n)= &-\left[\omega_n^2+\hat{h}^2\right]^{-1}\left[\hat{h},\hat{\Delta}\right]_*\hat{h}^*\left[\omega_n^2+(\hat{h}^*)^2\right]^{-1}\\ &-\left[\omega_n^2+\hat{h}^2\right]^{-1}\hat{\Delta} , \\ \hat{F}_{odd}(i\omega_n)= &i\omega_n\left[\omega_n^2+\hat{h}^2\right]^{-1}\left[\hat{h},\hat{\Delta}\right]_*\left[\omega_n^2+(\hat{h}^*)^2\right]^{-1}, \end{aligned} \label{eq:foddeven_general} \end{equation} where we define $\left[\hat{h},\hat{\Delta}\right]_*\equiv \hat{h}\hat{\Delta}-\hat{\Delta}\hat{h}^*$. Clearly, when $\left[\hat{h},\hat{\Delta}\right]_*$ vanishes, the system has only even-$\omega$ pairing, given by $-\left[\omega_n^2+\hat{h}^2\right]^{-1}\hat{\Delta}$. Thus, the condition for the emergence of odd-$\omega$ pairing, in a general mean-field theory, is given by: \begin{equation} \hat{h}\hat{\Delta}-\hat{\Delta}\hat{h}^* \neq 0. \label{eq:odd_criterion} \end{equation} For the simple two-band model in Eq. (\ref{eq:ham2band_compact_definitions}) this condition is clearly satisfied, since there $\hat{h}$ and $\hat{\Delta}$ generically involve non-commuting matrices in band space, but Eq.~\eqref{eq:odd_criterion} is much more general.
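In practice, Eq.~\eqref{eq:odd_criterion} provides a one-line numerical test for any mean-field Hamiltonian. A minimal sketch (Python), applied to the two-band model with real $\Gamma$:
\begin{verbatim}
import numpy as np

def hosts_odd_w(h, Delta, tol=1e-12):
    """Linearized criterion: h @ Delta - Delta @ h.conj() != 0."""
    return not np.allclose(h @ Delta - Delta @ h.conj(), 0, atol=tol)

s0 = np.eye(2); s2 = np.array([[0, -1j], [1j, 0]])
h = np.kron(np.array([[0.5, 0.2], [0.2, -0.3]]), s0)   # real Gamma
D_unequal = np.kron(np.diag([0.3, 0.1]), 1j * s2)
D_equal   = np.kron(np.diag([0.3, 0.3]), 1j * s2)

print(hosts_odd_w(h, D_unequal))   # True: odd-frequency pairing
print(hosts_odd_w(h, D_equal))     # False: only even-frequency pairing
\end{verbatim}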
In particular, the connection between the structures of $\hat{\Delta}$ and $\hat{h}$ and the emergence of odd-$\omega$ pairing applies to superconductors with any number of bands or other internal electronic degrees of freedom. For example, if $\hat{h}$ describes a generic real-space tight-binding Hamiltonian and $\hat{\Delta}$ is an inhomogeneous on-site order parameter, then the inequality in Eq. (\ref{eq:odd_criterion}) is generically satisfied and we expect to find odd-$\omega$ pairing. Note that this is entirely consistent with previous results studying the emergence of odd-$\omega$ pairing in inhomogeneous systems, including at SN interfaces \cite{tanaka2007theory, TanakaPRB2007,Black-SchafferPRB2012,cayao2017odd,triola2018oddnw,triola2019odd}. In contrast, if $\hat{\Delta}$ is a spatially homogeneous on-site order parameter and $\hat{h}$ is trivial in spin space and has only real elements, no odd-$\omega$ pairing is possible. If the system under consideration is translation invariant, we can Fourier-transform from real space to momentum space, and we find the following condition for odd-$\omega$ pairing: \begin{equation} \hat{h}_{\textbf{k}}\hat{\Delta}_\textbf{k}-\hat{\Delta}_\textbf{k}\hat{h}^*_{-\textbf{k}}\neq 0. \label{eq:odd_criterion_k} \end{equation} This condition is in fact identical to a measure of ``superconducting fitness'' recently discussed by Ramires and Sigrist\cite{ramires2016identifying}. In that work, it was demonstrated that whenever this quantity is non-zero there is a reduction in the critical temperature. The authors, therefore, concluded that superconducting fitness can be a tool in the search for order parameters that are expected to be more thermodynamically stable\cite{ramires2016identifying}. It is here interesting to note that, just one year prior to the publication of Ref. \cite{ramires2016identifying}, a work by Asano and Sasaki\cite{asano2015odd} concluded that the emergence of odd-$\omega$ pairing in two-band superconductors is linked to a suppression of the critical temperature. Given the general nature of the results by Ramires and Sigrist, together with the results presented in this section, we conclude that the assertions of Asano and Sasaki are likely to hold in general, i.e. the emergence of odd-$\omega$ pairing appears to cause a suppression of the superconducting critical temperature. Importantly, even though the presence of odd-$\omega$ pairing is associated with a suppressed critical temperature, such a state can still easily be the most thermodynamically favored, as this depends crucially on the form of the interaction and the normal-state Hamiltonian. In the next section we discuss several examples of real systems believed to host exactly this kind of odd-$\omega$ pairing. \section{Examples of Multiband Odd-frequency Pairing} \label{sec:examples} Having derived the general criteria for odd-$\omega$ superconductivity to appear in multiband systems, we now discuss real examples of multiband superconductors in which odd-$\omega$ pairing has been predicted to emerge. We begin by covering examples in which band degrees of freedom are intrinsic to the superconductor, arising from either different atomic orbitals or a sublattice index, forming what would properly be known as a multiband superconductor.
We then discuss superconducting systems in which the additional electronic degrees of freedom are not strictly speaking band indices but have their origin in some other aspect of the system, considering the cases of two-dimensional (2D) bilayers, one-dimensional (1D) nanowires, zero-dimensional (0D) quantum dots, superconducting leads in Josephson junctions, and isolated valleys in momentum space. \subsection{Sr$_{2}$RuO$_4$} \label{sec:sr2ruo4} In this subsection we discuss the emergence of odd-$\omega$ pairing in the multiband superconductor Sr$_{2}$RuO$_4$, which was recently examined by Komendov\'{a} and Black-Schaffer\cite{komendova2017odd}. While it possesses a fairly low critical temperature, $T_c\approx 1$\,K, the superconducting phase of Sr$_{2}$RuO$_4$ has attracted a great deal of attention since its discovery in 1994\cite{maeno1994superconductivity} due to its highly unusual properties. Both Knight shift and neutron scattering measurements have indicated the possibility of spin-triplet pairing \cite{ishida1998spin,ishida2015spin,duffy2000polarized}. Additionally, it has been observed, using muon spin-relaxation measurements\cite{luke1998time,luke2000unconventional} as well as measurements of the Kerr effect\cite{xia_prl_2006}, that the superconducting phase exhibits spontaneous time-reversal symmetry breaking. Taken together, these observations strongly point to a chiral $p$-wave order parameter. However, measurements of the specific heat\cite{nishizaki1999effect,nishizaki2000changes,deguchi2004gap} are more consistent with a nodal gap structure. Furthermore, recent NMR studies revisiting the Knight shift have found evidence more consistent with spin-singlet pairing\cite{pustogow2019pronounced}. The lack of consistency between these complementary studies continues to make Sr$_{2}$RuO$_4$ both an interesting and hotly debated superconductor. While the superconducting state of Sr$_{2}$RuO$_4$ is controversial, the normal state properties are now quite well-understood, with experiments\cite{mackenzie1996quantum,bergemann2000detailed} and theory\cite{oguchi1995electronic,singh1995relationship} converging on the same picture of three quasi-2D Fermi sheets, with contributions primarily from the ruthenium $d_{xy}$, $d_{xz}$, and $d_{yz}$ orbitals. Therefore, to capture the relevant physics of Sr$_{2}$RuO$_4$, a three-orbital Hamiltonian, similar to Eq. (\ref{eq:ham2band_compact}), can be employed, with normal state Hamiltonian, $\hat{h}$, and order parameter, $\hat{\Delta}$, given by: \begin{equation} \begin{aligned} \hat{h}_{\textbf{k}}=\left(\begin{array}{ccc} \xi_1 & \epsilon_{12} & \epsilon_{13} \\ \epsilon_{12} & \xi_2 & \epsilon_{23} \\ \epsilon_{13} & \epsilon_{23} & \xi_3 \end{array} \right), \quad \hat{\Delta}_{\textbf{k}}= \left(\begin{array}{ccc} \Delta_1 & \Delta_{12} & \Delta_{13} \\ \Delta_{12} & \Delta_2 & \Delta_{23} \\ \Delta_{13} & \Delta_{23} & \Delta_{3} \end{array} \right), \end{aligned} \label{eq:tbsr2ruo4} \end{equation} where the $k$-dependence of the matrix elements has been suppressed for brevity and where the indices 1, 2, and 3 correspond to the ruthenium $d_{xy}$, $d_{xz}$, and $d_{yz}$ orbitals, respectively. Here, it is assumed that the order parameter is either spin-singlet or mixed spin-triplet\cite{komendova2017odd}. Before assuming precise values for the tight-binding model in Eq. (\ref{eq:tbsr2ruo4}), two special cases were considered analytically in Ref.~\cite{komendova2017odd}.
For the first case, the order parameter was assumed to be completely diagonal in the orbital basis, $\hat{\Delta}=\text{diag}\left(\Delta_1,\Delta_2,\Delta_3\right)$, and the interorbital terms in $\hat{h}$ were all assumed to be equal, $\epsilon_{ij}=\Gamma$. The analytic expressions for the pair amplitudes were examined\cite{komendova2017odd} and it was found that odd-$\omega$ pairing is present as long as at least two of the gaps are different, $\Delta_i\neq\Delta_j$ for some $i\neq j$. For the second case, the hybridization was assumed to only occur between the $d_{xz}$ and $d_{yz}$ orbitals, so that $\epsilon_{12}=\epsilon_{13}=0$ and $\Delta_{12}=\Delta_{13}=0$, which is often used as a simplification. In this case, two separate contributions to the odd-$\omega$ pair amplitudes were found, one proportional to the interorbital component of the order parameter, $\sim \Delta_{23}(\xi_3-\xi_2)$, and one proportional to the interorbital hybridization, $\sim\epsilon_{23}(\Delta_3-\Delta_2)$. It is straightforward to confirm that these same conditions can be obtained from the criterion in Eq. (\ref{eq:odd_criterion_k}). Focusing specifically on parameters which faithfully reproduce the three bands of Sr$_2$RuO$_4$, $\gamma$, $\alpha$, and $\beta$, Ref.~\cite{komendova2017odd} also provided a numerical evaluation of all even- and odd-$\omega$ components of the anomalous Green's functions. In this calculation, the $\gamma$ band was assumed to have contributions only from orbital 1 ($d_{xy}$) while the $\alpha$ and $\beta$ bands emerge from hybridization between orbital 2 and orbital 3 ($d_{xz}$ and $d_{yz}$). Each of these channels was summed over the positive Matsubara frequencies and plotted over the first Brillouin zone, with the result presented in Fig. \ref{fig:komendova_prl_2017_fig1}, which is adapted from Ref.~\cite{komendova2017odd}. From these color plots we see that both the even- and odd-$\omega$ interorbital pair amplitudes, $F_{even}$ and $F_{odd}$, possess all of their weight along the same bands as $F_{22}$ and $F_{33}$, $\alpha$ and $\beta$. Additionally, the phases associated with $F_{even}$ and $F_{odd}$ undergo a full $2\pi$ rotation around the $\Gamma$ point, consistent with the assumed chiral $p$-wave order parameter. However, from the analytic criteria discussed in the previous paragraph, we see that assuming another order parameter will not change the results significantly. This confirms that, regardless of the precise symmetry of the order parameter in Sr$_2$RuO$_4$, it is likely to host odd-$\omega$ interband pairing due to interband hybridization. \begin{figure*}[htb] \centering \includegraphics*[width=0.8\textwidth]{figure1_komendova_prl_2017.eps} \caption{Intraorbital pair amplitudes, $F_{11}$, $F_{22}$, $F_{33}$, and interorbital pair amplitudes, $F_{even}$ and $F_{odd}$, plotted over the first Brillouin zone for Sr$_2$RuO$_4$. Note that $F_{11}$ possesses all of its spectral weight on the $\gamma$ band, while $F_{22}$, $F_{33}$, $F_{even}$, and $F_{odd}$ possess spectral weight on both the $\alpha$ and $\beta$ bands, consistent with the form of the interorbital hybridization. The top row shows the magnitude of each function, while the bottom row shows the complex phase. For $F_{odd}$ the results are multiplied by a factor of 100. Reprinted figure with permission from [L. Komendov\'{a} and A. M. Black-Schaffer, Phys. Rev. Lett.
119, 087001 (2017)] Copyright (2017) by the American Physical Society.} \label{fig:komendova_prl_2017_fig1} \end{figure*} \subsection{UPt$_3$} \label{sec:upt3} Next we discuss a recent work demonstrating the emergence of odd-$\omega$ pairing in the heavy-fermion superconductor UPt$_3$\cite{triola2018odd}. In addition to possessing multiple relevant bands at the Fermi level, UPt$_3$ is a truly unconventional superconductor, exhibiting two zero-field superconducting phases, the $A$ phase and the $B$ phase, with critical temperatures $T_{c,+}\approx 550$ mK and $T_{c,-}\approx 500$ mK\cite{stewart_prl_1984,sauls1994}, respectively. Additionally, a third phase, the $C$ phase, emerges at high magnetic field\cite{adenwalla1990}. Knight shift observations point to a spin-triplet superconducting order parameter\cite{tou_prl_1996}. Josephson interferometry has revealed the presence of line nodes in the $A$ phase\cite{strand_science_2010}, as well as the onset of a complex order parameter in the $B$ phase\cite{strand_prl_2009,strand_science_2010}. Moreover, recent measurements of the Kerr effect have demonstrated time-reversal symmetry breaking in the $B$ phase, consistent with a complex order parameter \cite{schemm_2014}. To capture the essential features of the Fermi surfaces appearing at either the $\Gamma$-point or the $A$-point in UPt$_3$, the following normal-state tight-binding Hamiltonian has recently been employed\cite{yanase_prb_2016, yanase_2017_prb,wang_2017}: \begin{equation} \begin{aligned} \hat{h}_{\textbf{k}}&=\left( \begin{array}{cccc} \xi_\textbf{k} + g_\textbf{k} & \epsilon_\textbf{k} & 0 & 0 \\ \epsilon_\textbf{k}^* & \xi_\textbf{k} - g_\textbf{k} & 0 & 0 \\ 0 & 0 & \xi_\textbf{k} - g_\textbf{k} & \epsilon_\textbf{k} \\ 0 & 0 & \epsilon_\textbf{k}^* & \xi_\textbf{k} + g_\textbf{k} \end{array}\right), \end{aligned} \label{eq:hamiltonian} \end{equation} written in the basis described by $\Psi^\dagger=( c^\dagger_{\textbf{k}1\uparrow}, c^\dagger_{\textbf{k}2\uparrow}, c^\dagger_{\textbf{k}1\downarrow}, c^\dagger_{\textbf{k}2\downarrow})$, where $c^\dagger_{\textbf{k}m\sigma}$ creates a fermionic quasiparticle with crystal momentum $\textbf{k}$, on sublattice $m=\{1,2\}$, and with spin $\sigma=\{\uparrow,\downarrow\}$. Here, $\xi_\textbf{k}$ is an even function of $\textbf{k}$ describing the intra-sublattice hopping, $\epsilon_\textbf{k}$ is a complex-valued inter-sublattice hopping term, and the function $g_\textbf{k}$ is odd in $\textbf{k}$ and describes the spin-orbit coupling. In Eq. (\ref{eq:hamiltonian}) we note that, in contrast to Sr$_2$RuO$_4$, whose multiband character has its origin in the atomic orbitals of the Ru atoms, the multiple bands within this model have their origin in the sublattice degree of freedom, with contributions coming from only a single itinerant $5f$ orbital of the uranium atoms. The superconducting order parameter in UPt$_3$ is widely believed to belong to the $E_{2u}$ irreducible representation with spin-triplet $m_z = 0$ pairing\cite{sauls1994,joynt_rmp_2002,yanase_prb_2016}.
Following recent work explicitly accounting for the symmetries of the lattice \cite{yanase_prb_2016}, the order parameter is given by a linear combination of $d$-wave and $f$-wave basis functions: \begin{equation} \hat{\Delta}_{\textbf{k}}= f_\textbf{k} \hat{\rho}_1\otimes\hat{\sigma}_1 - d_{\textbf{k}}\hat{\rho}_2\otimes\hat{\sigma}_1, \label{eq:upt3gap} \end{equation} where $\hat{\sigma}_i$ and $\hat{\rho}_i$ are Pauli matrices in spin and sublattice space, respectively, $f_\textbf{k}=\eta_1 f_{(x^2-y^2)z}(\textbf{k})+\eta_2f_{xyz}(\textbf{k})$ and $d_\textbf{k}=\eta_1 d_{yz}(\textbf{k})+\eta_2d_{xz}(\textbf{k})$, and $\eta_i$ are complex numbers parameterizing the phase diagram\cite{sauls1994,joynt_rmp_2002,nomoto_prl_2016,yanase_prb_2016}. Notice the unusual combination of spin-triplet $f$-wave terms being odd in spatial parity and spin-triplet $d$-wave terms being even in parity. This combination is caused by the nonsymmorphic lattice symmetry \cite{yanase_prb_2016}. Note that these terms still satisfy the constraints imposed by Fermi-Dirac statistics on the Cooper pairs since the $f$-wave terms are even in the sublattice index while the $d$-wave terms are odd in the sublattice index, belonging to the symmetry classes in columns 2 and 3 of Table \ref{table:classification}, respectively, when viewing the sublattice index as a band degree of freedom. Using the Hamiltonian and order parameter in Eqs. (\ref{eq:hamiltonian}) and (\ref{eq:upt3gap}), the symmetries of the anomalous Green's function were explored in Ref.~\cite{triola2018odd}, using the same conventions as in Sec.~\ref{sec:gen}. In that analysis UPt$_3$ was found to exhibit a plethora of pairing channels, both even and odd in frequency. More specifically, four different kinds of odd-$\omega$ pair amplitudes were found, with the general form\cite{triola2018odd}: \begin{equation} \begin{aligned} \hat{F}_{odd}&=\psi_1 \hat{\rho}_3\otimes\hat{\sigma}_1 + \psi_2\hat{\rho}_0\otimes\hat{\sigma}_2+ \psi_3\hat{\rho}_1\otimes\hat{\sigma}_2 + \psi_4\hat{\rho}_2\otimes\hat{\sigma}_2. \end{aligned} \label{eq:fodd_upt3} \end{equation} From the matrix structure in Eq. (\ref{eq:fodd_upt3}) we see that the intra-sublattice spin-triplet amplitude, $\psi_1$, corresponds to the symmetry class in column 5 of Table \ref{table:classification}, while the intra-sublattice spin-singlet amplitude, $\psi_2$, and the even inter-sublattice spin-singlet amplitude, $\psi_3$, both correspond to the symmetry class in column 6. Finally, the odd inter-sublattice spin-singlet amplitude, $\psi_4$, belongs to the symmetry class in column 7. In terms of the physical parameters, the presence of a finite inter-sublattice term, $\epsilon_\textbf{k}\neq 0$, gives rise to the odd-$\omega$ intra-sublattice term $\psi_1$, despite the fact that the initial order parameter in Eq. (\ref{eq:upt3gap}) is entirely in the inter-sublattice channels. Moreover, the addition of spin-orbit coupling, $g_\textbf{k}$, gives rise to multiple odd-$\omega$ spin-singlet inter-sublattice pair amplitudes, one of which is sublattice even, $\psi_3$, and the other sublattice odd, $\psi_4$. Finally, the combination of both spin-orbit coupling and inter-sublattice hybridization leads to the odd-$\omega$ intra-sublattice spin-singlet term, $\psi_2$. \subsection{Buckled Honeycomb Materials} In this subsection we discuss the emergence of odd-$\omega$ pairing in buckled 2D honeycomb lattices with proximity-induced superconductivity, investigated in Refs.~\cite{black2013odd, kuzmanovski2017multiple}.
\subsection{Buckled Honeycomb Materials} In this subsection we discuss the emergence of odd-$\omega$ pairing in buckled 2D honeycomb lattices with proximity-induced superconductivity, investigated in Refs.~\cite{black2013odd, kuzmanovski2017multiple}. It is well-known that 2D honeycomb lattices are composed of two triangular sublattices\cite{katsnelson2012graphene}. This sublattice degree of freedom gives rise to two bands near the Fermi level, similar in spirit to the multiband nature of UPt$_3$ discussed in the previous subsection. Even more remarkably, in honeycomb materials the intersublattice hybridization is especially prominent, since the dominant nearest-neighbor hopping necessarily couples the two sublattices. This band structure is realized in many of the known 2D materials, including graphene\cite{novoselov2004electric,neto2009electronic,katsnelson2012graphene}, silicene\cite{vogt2012silicene}, germanene\cite{liu2011quantum,davila2014germanene}, and stanene\cite{zhu2015epitaxial}. While the two sublattices in graphene are symmetric and lie in the same plane, in silicene, germanene, and stanene the structures are naturally buckled, so that the two sublattices are staggered. Therefore, in the latter three materials an asymmetry between the two sublattices can be induced and controlled simply by applying a gate voltage perpendicular to the layer. Such an asymmetry between the sublattices has been shown to directly lead to odd-$\omega$ pairing in these materials \cite{black2013odd}, in complete analogy with the results in Sec.~\ref{sec:twoband_criteria}. Another interesting aspect of buckled honeycomb materials is that a sublattice asymmetry has also been shown to appear in finite-width nanoribbons due to the presence of sample edges \cite{kuzmanovski2017multiple}. More specifically, in Ref.~\cite{kuzmanovski2017multiple} the authors start by describing the normal state of a buckled honeycomb system with possibly finite spin-orbit coupling, using the Kane-Mele Hamiltonian in real space\cite{haldane1988model,kane2005qshi}: \begin{equation} \begin{aligned} H_0&=t\sum_{\langle i,j\rangle,\sigma} c^\dagger_{i\sigma}c_{j\sigma}+\frac{i\lambda_{\text{SO}}}{3\sqrt{3}}\sum_{\langle\langle i,j\rangle\rangle,\sigma\sigma'}\nu_{ij}(\hat{\sigma}_3)_{\sigma\sigma'} c^\dagger_{i\sigma}c_{j\sigma'} \\ &-\sum_{i,\sigma}\mu_i c^\dagger_{i\sigma}c_{i\sigma}, \end{aligned} \label{eq:kane-mele} \end{equation} where $c^\dagger_{i\sigma}$ ($c_{i\sigma}$) creates (annihilates) a fermionic quasiparticle at site $i$ with spin $\sigma$, $\langle i,j\rangle$ sums over nearest-neighbor (NN) sites $i,j$ of the honeycomb lattice, and $\langle\langle i,j\rangle\rangle$ sums over next-nearest-neighbor (NNN) sites. Here, $t$ represents the NN hopping parameter and $\lambda_{\text{SO}}$ is the spin-orbit coupling due to NNN hopping, where $\nu_{ij}=\pm 1$ depending on whether the vector from site $i$ to $j$ is oriented clockwise or counterclockwise around the hexagonal plaquette\cite{haldane1988model}. The possibility of gating is captured by a sublattice-dependent chemical potential $\mu_i=\mu+\zeta_i \lambda_V$, where $\mu$ is the chemical potential in the absence of any applied voltage, $\lambda_V$ is proportional to the applied voltage, and $\zeta_i=\pm 1$ depending on whether $i$ belongs to sublattice A or B. At finite doping, the normal state described by Eq. (\ref{eq:kane-mele}) possesses a large enough electronic density of states for bulk superconductivity to be induced by the proximity effect.
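Anticipating the momentum-space analysis discussed next, the following sketch (ours; the Bloch-space form, lattice vectors, and normalization are standard textbook assumptions rather than taken from Ref.~\cite{kuzmanovski2017multiple}) diagonalizes Eq. (\ref{eq:kane-mele}) at the Dirac point and illustrates the gap closing at $\lambda_V=\lambda_{\text{SO}}$ that bounds the topological regime mentioned below.
\begin{verbatim}
import numpy as np

# Bloch form of Eq. (kane-mele) with lattice constant set to 1 (our
# assumption; the text defines the model in real space).
d = [np.array([0.0, 1/np.sqrt(3)]), np.array([-0.5, -0.5/np.sqrt(3)]),
     np.array([0.5, -0.5/np.sqrt(3)])]          # nearest-neighbor vectors
b = [d[1] - d[2], d[2] - d[0], d[0] - d[1]]     # next-nearest neighbors
s0, s3 = np.eye(2), np.diag([1.0, -1.0])

def h_km(k, t=1.0, lso=0.1, lv=0.0, mu=0.0):
    f = t*sum(np.exp(1j*np.dot(k, di)) for di in d)           # NN hopping
    gam = (2*lso/(3*np.sqrt(3)))*sum(np.sin(np.dot(k, bi)) for bi in b)
    hop = np.array([[0.0, f], [np.conj(f), 0.0]])
    return (np.kron(s0, hop) + gam*np.kron(s3, s3)            # SOC
            + lv*np.kron(s0, s3) - mu*np.eye(4))              # gating

K = np.array([4*np.pi/3, 0.0])   # Dirac point
for lv in (0.0, 0.05, 0.10, 0.15):
    gap = 2*min(abs(np.linalg.eigvalsh(h_km(K, lso=0.1, lv=lv))))
    print(f"lambda_V = {lv:.2f}:  gap at K = {gap:.3f}")
# the gap 2|lambda_V - lambda_SO| closes at lambda_V = lambda_SO, the
# boundary between the topological and trivial regimes
\end{verbatim}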
In this limit, the bulk pair amplitudes have been studied by transforming the above model to momentum space and assuming a $k$-independent $s$-wave order parameter, as appropriate for a proximity effect from a conventional superconductor \cite{kuzmanovski2017multiple}. The resulting Hamiltonian possesses a similar form to the two-band model in Eq. (\ref{eq:ham2band_compact}) but with a momentum-dependent interband hybridization term and a non-trivial spin structure parameterized by $\lambda_{\text{SO}}$. Solving for the anomalous Green's function, odd-$\omega$ pair amplitudes in all four odd-$\omega$ symmetry classes of Table~\ref{table:classification} are possible in this system, although the authors of Ref. \cite{kuzmanovski2017multiple} did not mention the pair amplitudes belonging to columns 5 and 6 in their discussion. In particular, odd-$\omega$ spin-singlet pair amplitudes are present whenever there is an asymmetry between the order parameters on the different sublattices, i.e. $\Delta_{\text{A}}\neq\Delta_{\text{B}}$, with both even- and odd-sublattice contributions due to the momentum-dependent intersublattice NN hopping. The required order-parameter sublattice difference is present when $\lambda_V\neq 0$, and thus odd-$\omega$ pairing is controlled by gating\cite{kuzmanovski2017multiple}. Moreover, odd-$\omega$ spin-triplet pairing is present due to a finite spin-orbit coupling, $\lambda_\text{SO}\neq 0$. As is well-known, for $\lambda_V<\lambda_\text{SO}$ the Kane-Mele Hamiltonian in Eq.~\eqref{eq:kane-mele} describes a topological insulator with a bulk band gap and conducting edge modes. Ref.~\cite{kuzmanovski2017multiple} also studied this phase by considering nanoribbons with both zigzag (ZZ) and armchair (AC) terminations in the low-doping regime. In this case, superconductivity vanishes throughout the bulk, but a finite $\Delta_i$ was obtained using a self-consistent algorithm for each site along the edges\cite{kuzmanovski2017multiple}. However, in contrast to the translation-invariant case, the magnitudes of all pair amplitudes in these cases are largest in the absence of $\lambda_V$. Still, odd-$\omega$ pairing appears in these ribbons due to an inherent asymmetry between the two sublattices at the edges. In the case of the ZZ termination, the A and B sublattices are clearly different at the edge, since one sublattice has only two NNs while the other retains three. For the AC termination, the situation is a bit more subtle, as the two sublattices are equivalent, but an asymmetry exists between every other pair of sublattice sites. The latter induces a gradient of the order parameter along the edge, which is also known to induce odd-$\omega$ pairing in topological insulators \cite{Black-SchafferPRB2012}. \subsection{Other Analogous Systems} \label{sec:other} One common aspect of the previously discussed examples is that odd-$\omega$ pairing emerges from the hybridization of a discrete set of multiple bands. These multiple bands offer an expansion of the set of allowed Cooper pair symmetries, as illustrated in Table \ref{table:classification}, and have their origin in either the atomic orbitals associated with individual lattice sites of a bulk crystal or the sublattice structure defining the crystal's unit cell. However, there are other ways to obtain similar discrete sets of multiple ``bands'' in superconducting systems, as we now discuss.
One proposal by Parhizgar and Black-Schaffer\cite{parhizgar_2014_prb} involves the use of 2D bilayer systems proximity-coupled to conventional superconductors. In this case the layer index provides a band-like degree of freedom analogous to the preceding examples. Such 2D bilayer systems include bilayer graphene\cite{ohta2006controlling,sarma2011electronic,katsnelson2012graphene,mccann2013electronic}, bilayer transition metal dichalcogenides\cite{ramasubramaniam2011tunable,zhang2016visualizing}, other layered van der Waals heterostructures\cite{geim2013van,novoselov20162d}, as well as topological insulator thin films\cite{zhang2010crossover,cheng2010landau,zhang2011growth}. These kinds of layered systems have uniquely tunable electronic properties due to the variety of 2D systems available, as well as the ability to control their electronic properties through gating and by introducing a relative twist angle between the layers\cite{li2010observation,bistritzer2010transport,dos2012continuum,cao2018unconventional}. As shown in Ref. \cite{parhizgar_2014_prb}, when a generic bilayer 2D system is proximity-coupled to a conventional $s$-wave spin-singlet superconducting substrate, the layer closest to the substrate necessarily obtains a larger superconducting gap, thus directly producing a layer asymmetry. Further, when examining the symmetries of the anomalous Green's function, a rich variety of allowed symmetries was found, including both even- and odd-$\omega$ interlayer pairing. Moreover, it was determined that within these models there is a complete reciprocity between the layer symmetry and the frequency symmetry: all odd-layer amplitudes are odd-$\omega$, and all even-layer amplitudes are even-$\omega$\cite{parhizgar_2014_prb}, in complete analogy with the results for two-band superconductors\cite{black2013odd}. Another set of proposals relies on double quantum dots coupled to superconductors\cite{sothmann2014unconventional,burset2016all}. In this case the dot index acts as an effective band index, and interdot coupling can thus induce odd-$\omega$ pairing. The first proposal, by Sothmann and collaborators\cite{sothmann2014unconventional}, utilized two quantum dots proximitized by a conventional $s$-wave superconductor, in the presence of both interdot tunneling and an external magnetic field. They demonstrated a variety of possible odd-$\omega$ pair amplitudes in these systems, both spin-singlet and spin-triplet, tunable using either the externally applied magnetic field, a difference in on-site energy levels, or an asymmetry in the coupling to the normal and superconducting leads\cite{sothmann2014unconventional}. This possibility was explored further by Burset and colleagues\cite{burset2016all} in the absence of a magnetic field. In this case, they were able to find spin-triplet pairing by coupling the two dots to a spin-triplet superconductor. Both studies also explored tunable signatures of the odd-$\omega$ pairing observable in transport between superconducting or normal leads\cite{sothmann2014unconventional,burset2016all}. In a similar spirit to the proposals involving double quantum dots, odd-$\omega$ pairing has also been proposed in double nanowires coupled to a superconducting substrate\cite{ebisu2016theory,triola2018oddnw}. In these cases, the nanowire index acts as an effective band index.
In a work by Ebisu {\it et al.} \cite{ebisu2016theory}, an effective model was used to describe two nanowires with Rashba spin-orbit coupling in the presence of both intrawire and interwire superconducting mean fields. It was found that, for generic parameters, odd-$\omega$ pairing is present in these systems, and that it is strongly enhanced when the system is tuned into the topological regime, where interwire pairing dominates. In a later work, Triola and Black-Schaffer \cite{triola2018oddnw} studied a similar setup but explicitly considered the two nanowires coupled to a 2D superconductor and studied the emergent pair amplitudes of this system as a whole. In particular, they found that, in agreement with previous work, odd-$\omega$ interwire pairing is generically induced by coupling the two wires to the superconductor. Moreover, the authors showed that the presence of the nanowires also profoundly affects the pair symmetries of the superconducting substrate, leading to measurable signatures in local observables\cite{triola2018oddnw}. Odd-$\omega$ pairing has also been explored in conventional Josephson junctions\cite{linder2017odd,balatsky2018odd}, in which the two weakly coupled superconducting leads naturally provide a lead index, playing the role of bands. Interestingly, it was found that, in general, Josephson junctions should possess odd-$\omega$ interlead pairing proportional to $\sin{\tfrac{\phi}{2}}$, where $\phi$ is the phase difference across the junction. Comparing this dependence to the well-known formula for the Josephson current, it was concluded that whenever a Josephson current is expected to flow across the junction, odd-$\omega$ interlead pairing will also be present. The above examples involving bilayers, double quantum dots, double nanowires, and Josephson junctions all utilize spatial separation to obtain an additional index akin to the band index, but it is also possible to obtain such an index using a separation in reciprocal space. In particular, the transition metal dichalcogenides (TMDs) may be described by an effective model governing the physics of separate points in the Brillouin zone, so-called valleys. In these systems, the valley index can thus behave like an effective band degree of freedom. Using a low-energy effective model to describe the two $k$-space valleys of a single layer of TMD proximity-coupled to an $s$-wave superconductor with Rashba spin-orbit coupling, Ref. \cite{triola2016prl} found that the combination of valley-dependent spin-orbit coupling, intrinsic to the monolayer TMD, and the Rashba spin-orbit term at the TMD-superconductor interface necessarily leads to an odd-$\omega$ intervalley pair amplitude. \section{Experimental Signatures} \label{sec:experiment} Having shown how odd-$\omega$ superconductivity is ubiquitous in many superconducting systems, we now present several experimental signatures that have been proposed to measure the odd-$\omega$ pairing. Due to its intrinsically dynamical nature, with a vanishing equal-time amplitude, odd-$\omega$ pairing has proven to be notoriously hard to probe directly; still, as seen below, there is a growing number of known signatures of odd-$\omega$ pairing in multiband superconductors. \subsection{Hybridization Gaps} Shortly after the initial theoretical proposal for the emergence of odd-$\omega$ pairing in the two-band model defined in Eq.
(\ref{eq:ham2band_compact})\cite{black2013odd}, it was observed that the emergence of interband odd-$\omega$ pairing can be correlated with measurable signatures in the density of states (DOS)\cite{komendova2015experimentally}. In Ref. \cite{komendova2015experimentally} the simple two-band Hamiltonian, Eq. (\ref{eq:ham2band_compact}), was considered and the DOS was computed to search for features correlated with the emergence of odd-$\omega$ pairing. In addition to the total DOS, the separate contributions to the DOS coming from bands 1 and 2, $N_1$ and $N_2$, were examined, to highlight the features which are strictly intraband and those which obtain contributions from both. \begin{figure} \begin{center} \centering \includegraphics[width=0.5\textwidth]{figure2_komendova_prb_2015.eps} \caption{DOS computed for the two-band model in Eq. (\ref{eq:ham2band_compact}) using $m_1=20m_e$, $m_2=22m_e$, $\mu_1=100 \text{meV}$, $\mu_2=105 \text{meV}$, $\Delta_1=2.5 \text{meV}$, and $\Delta_2=1 \text{meV}$, for four values of $\Gamma$, specified in each panel. Reprinted figure with permission from [L. Komendov\'{a}, A. V. Balatsky, and A. M. Black-Schaffer, Phys. Rev. B 92, 094517 (2015)] Copyright (2015) by the American Physical Society.} \label{fig:ldos_2band} \end{center} \end{figure} Computing the DOS in this manner, it was found that, as expected, in the presence of two different gaps, $\Delta_1\neq\Delta_2$, and in the absence of interband hybridization, $\Gamma=0$, the DOS is simply a superposition of the DOS of two superconductors with coherence peaks at $E=\pm\Delta_1$ and $E=\pm\Delta_2$, respectively, see Fig. \ref{fig:ldos_2band}(a). However, for any finite value of $\Gamma$, additional gaps, and associated coherence peaks, were found to appear at energies away from the two gaps at the Fermi level, Figs. \ref{fig:ldos_2band}(b)-(d). These hybridization-induced gaps arise due to avoided crossings in the quasiparticle dispersion at the energies where the bands $E_1=\sqrt{\xi_1^2+|\Delta_1|^2}$ and $E_2=\sqrt{\xi_2^2+|\Delta_2|^2}$ meet. Solving for these crossing points, it can be shown that, in general, there could be two avoided crossings at different positive energies. However, assuming the initial quasiparticle bands do not intersect, i.e. $\xi_{1,\textbf{k}}\neq \xi_{2,\textbf{k}}$ for any $\textbf{k}$, only one of these is a true avoided crossing. For quadratic dispersions with effective masses $m_i$ and chemical potentials $\mu_i$, this condition holds as long as $(\mu_1-\mu_2)/(m_1-m_2)>0$. In this case, one can show that hybridization gaps emerge if and only if $\Gamma\neq 0$ and $\Delta_1\neq\Delta_2$, which are exactly the same as the conditions for odd-$\omega$ interband pairing \cite{komendova2015experimentally}. While hybridization gaps are a robust and simple probe of odd-$\omega$ pairing in the kind of two-band model considered in Ref. \cite{komendova2015experimentally}, they do not emerge in all models for multiband superconductors with odd-$\omega$ pairing, since their emergence requires that the Bogoliubov bands intersect in the absence of interband hybridization. For example, in the case of UPt$_3$, the hybridization term, $\epsilon_\textbf{k}$, induces odd-$\omega$ pair amplitudes; however, from the Hamiltonian in Eq. (\ref{eq:hamiltonian}) no avoided crossings emerge due to $\epsilon_\textbf{k}$, because, in the absence of spin-orbit coupling ($g_\textbf{k}=0$), the bands are degenerate for $\epsilon_\textbf{k}=0$ \cite{triola2018odd}.
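These hybridization gaps are straightforward to reproduce numerically. The sketch below is a rough 1D toy version of ours (the original calculation of Fig. \ref{fig:ldos_2band} uses 3D parabolic bands with the parameters listed in the caption; our values are illustrative): for $\Gamma=0$ the DOS near the energy where the two Bogoliubov bands would cross is finite, while for $\Gamma\neq0$ (with $\Delta_1\neq\Delta_2$) a hybridization gap depletes it.
\begin{verbatim}
import numpy as np

# Two 1D parabolic bands xi_i = k^2/(2 m_i) - mu_i with s-wave gaps D_i and
# constant interband hybridization Gamma (all values illustrative).
m1, m2, mu1, mu2 = 1.0, 1.2, 1.0, 1.1   # (mu1-mu2)/(m1-m2) > 0
D1, D2 = 0.4, 0.1
ks = np.linspace(0.0, 3.0, 3000)
Es = np.linspace(0.0, 2.0, 1000)
eta = 0.005                              # Lorentzian broadening

def dos(Gamma):
    N = np.zeros_like(Es)
    for k in ks:
        x1, x2 = k*k/(2*m1) - mu1, k*k/(2*m2) - mu2
        H = np.array([[x1, Gamma,  D1, 0.0],
                      [Gamma, x2,  0.0, D2],
                      [D1, 0.0,  -x1, -Gamma],
                      [0.0, D2, -Gamma, -x2]])
        for E in np.linalg.eigvalsh(H):
            N += (eta/np.pi)/((Es - E)**2 + eta**2)
    return N/len(ks)

# the Bogoliubov bands E_1 and E_2 cross near E ~ 0.45 for these parameters
win = (Es > 0.42) & (Es < 0.52)
for G in (0.0, 0.1):
    print(f"Gamma = {G}: min DOS near the crossing = {dos(G)[win].min():.3f}")
\end{verbatim}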
Beyond UPt$_3$, it can also be shown that neither Sr$_2$RuO$_4$\cite{komendova2017odd} nor the buckled honeycomb lattice\cite{kuzmanovski2017multiple} possesses these hybridization gaps, for similar reasons. Thus, it is necessary to also study alternative experimental signatures of odd-$\omega$ pairing in multiband superconductors. \subsection{Paramagnetic Meissner Effect} \label{sec:paramagnetic} One of the defining properties of superconducting states is their response to magnetic fields. As first discovered by Meissner and Ochsenfeld\cite{meissner1933neuer}, superconductors exhibit perfect diamagnetism, referred to as the Meissner effect, in which magnetic flux is completely expelled from the bulk of a superconductor\cite{tinkham2004introduction,abrikosov2012methods}. In contrast to these classic results, it has been established by numerous theoretical works that odd-$\omega$ pairing often attracts magnetic flux, in a phenomenon termed the \textit{paramagnetic} Meissner effect \cite{tanaka2005anomalous, asano2011unconventional, higashitani2013magnetic, asano2014consequences, asano2015odd, FominovPRB2015}, to contrast with the usual \textit{diamagnetic} Meissner effect. Such a paramagnetic response has been observed experimentally in magnetic-superconductor junctions using $\mu$SR, demonstrating that long-lived odd-$\omega$ pair amplitudes dominate deep within the magnetic bulk\cite{di2015intrinsic}. In Ref. \cite{asano2015odd} the magnetic response was studied using a two-band model similar to the one in Eq. (\ref{eq:ham2band_compact_definitions}), but with a normal-state Hamiltonian possessing two kinds of interband hybridization: a spin-independent hybridization similar to $\Gamma$, and a spin-dependent hybridization with components given by $-\textbf{L}\times\textbf{k}\cdot\boldsymbol{\sigma}$, where $\textbf{L}$ describes the spin-orbit coupling in the system. The authors also considered three different types of order parameters: (i) spin-singlet even-parity intraband, (ii) spin-singlet even-parity even-interband, and (iii) spin-triplet even-parity odd-interband order. In each of these three cases it was found that odd-$\omega$ pairing can be induced by some asymmetry between the two bands, but the particular asymmetry and the properties of the induced odd-$\omega$ pairing were found to be different in each case\cite{asano2015odd}. For the model in Ref. \cite{asano2015odd} the current density, $\textbf{j}$, can be related to a uniform applied vector potential, $\textbf{A}$, within linear response theory: \begin{equation} \textbf{j}=-K \textbf{A} \end{equation} where $K$ is the Meissner kernel, which can be written in terms of the Nambu-Gorkov Green's functions $\hat{G}$, $\hat{F}$, and $\hat{\bar{F}}$. Furthermore, assuming equal masses $m_1=m_2=m$ and chemical potentials $\mu_1=\mu_2=\mu$ for the two bands, the contribution of the anomalous propagators to the Meissner kernel, $K_F$, takes on a relatively simple form: \begin{equation} K_F=\frac{e^2}{c} \frac{1}{m^2} T\sum_{\omega_n} \frac{1}{V_{\text{vol}}}\sum_\textbf{k}\frac{k^2}{d} \text{Tr}[\hat{F}(\textbf{k};i\omega_n)\hat{\bar{F}}(\textbf{k};i\omega_n)], \label{eq:K_F} \end{equation} where $e$ is the charge of the electron, $c$ the speed of light, $T$ the temperature, and $V_{\text{vol}}$ the volume of the system in $d$ dimensions. Using Eq. (\ref{eq:K_F}), the authors examined the contributions to the Meissner effect coming from each of the different superconducting pair channels.
For case (i), and focusing on the simple case of $\textbf{L}=0$ and $\Gamma\neq 0$, only even-parity spin-singlet pairing can emerge. Here, since only spin-singlet and even-parity pairing are induced, we find that $\hat{\bar{F}}(\textbf{k};i\omega_n)=-\hat{F}(\textbf{k};i\omega_n)^*$ and: \begin{equation} \begin{aligned} \hat{F}(\textbf{k};i\omega_n)&=i\hat{\sigma}_2\otimes \sum_{i=0}^3 f_{i}(\textbf{k};i\omega_n) \hat{\rho}_i, \\ \end{aligned} \label{eq:f_asano} \end{equation} where the odd-$\omega$ pair amplitude is necessarily given by the coefficient proportional to $\hat{\rho}_2$ (the second Pauli matrix in band space), since that is the only possibility consistent with the symmetry constraints given by Fermi-Dirac statistics. From Eqs. (\ref{eq:K_F}) and (\ref{eq:f_asano}) it is easy to see that \begin{equation} \begin{aligned} K_F=&\frac{e^2}{c} \frac{1}{m^2} T\sum_{\omega_n} \frac{1}{V_{\text{vol}}}\sum_\textbf{k}\frac{k^2}{d} \left[|f_0(\textbf{k};i\omega_n)|^2 \right. \\ &\left.+|f_1(\textbf{k};i\omega_n)|^2-|f_2(\textbf{k};i\omega_n)|^2 +|f_3(\textbf{k};i\omega_n)|^2 \right], \end{aligned} \end{equation} where all of the terms are strictly positive except for the contribution from the odd-$\omega$ pairing. This explicitly demonstrates that, in this case, odd-$\omega$ pairing always contributes paramagnetically to the Meissner kernel, thus countering the flux repulsion due to the conventional even-$\omega$ Cooper pairs. The authors went on to show that this pattern holds for all of the even-$\omega$ and odd-$\omega$ pair amplitudes in all three cases described above, demonstrating that, in a generic two-band model, all even-$\omega$ Cooper pairs exhibit diamagnetism while all odd-$\omega$ pairs exhibit paramagnetism\cite{asano2015odd}. Their analysis thus establishes the paramagnetic Meissner effect as a direct probe of odd-$\omega$ pairing in multiband systems. However, since both even- and odd-$\omega$ pair amplitudes are usually present and only the total Meissner response can be measured, isolation of the paramagnetic contributions may be challenging.
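The sign structure above can be verified in a few lines. The sketch below (our own check, not from Ref. \cite{asano2015odd}) draws random channel amplitudes $f_i$ and confirms that, up to an overall normalization, $\text{Tr}[\hat{F}\hat{\bar{F}}]\propto|f_0|^2+|f_1|^2-|f_2|^2+|f_3|^2$, so that only the odd-$\omega$ ($\hat{\rho}_2$) channel enters with a negative sign.
\begin{verbatim}
import numpy as np

# F = i sigma_2 (x) sum_i f_i rho_i,  Fbar = -F^*  (Eq. (f_asano))
rho = [np.eye(2), np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
rng = np.random.default_rng(7)
f = rng.normal(size=4) + 1j*rng.normal(size=4)   # random channel amplitudes

F = np.kron(1j*rho[2], sum(fi*ri for fi, ri in zip(f, rho)))
Fbar = -F.conj()

lhs = np.trace(F @ Fbar).real
rhs = 4*(abs(f[0])**2 + abs(f[1])**2 - abs(f[2])**2 + abs(f[3])**2)
print(lhs, rhs)   # the two values agree
\end{verbatim}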
\subsection{Kerr Effect} It has long been known that when polarized light is reflected from the surface of a magnetic material, the polarization of the reflected light can be shifted by an angle $\theta_{\text{K}}$ relative to the incident beam. This phenomenon, known as the Kerr effect, provides a direct probe of the breaking of time-reversal symmetry in magnetic materials. In recent years, the Kerr effect has also been applied to study time-reversal symmetry breaking (TRSB) order parameters in superconductors, in the absence of magnetism\cite{xia_prl_2006,schemm_2014}. However, it was later established that multiband mechanisms are also necessary to observe the Kerr effect in clean superconductors, even if the order parameter breaks time-reversal symmetry \cite{taylor_prl_2012,taylor2013anomalous,wang_2017}. When applied to realistic tight-binding models, these calculations appear to match observations of the Kerr effect in both Sr$_2$RuO$_4$\cite{wysokinski_2012_prl,taylor_prl_2012,taylor2013anomalous,gradhand_2013_prb} and UPt$_3$\cite{wang_2017}. In particular, Taylor and Kallin\cite{taylor_prl_2012} studied the Kerr angle using a two-band model to describe superconducting Sr$_2$RuO$_4$. This model has the exact same form as Eq. (\ref{eq:ham2band_compact}) but with a real-valued momentum-dependent interband hybridization, $\Gamma_\textbf{k}$, and an order parameter that has both intraband components, $\Delta_{1}$ and $\Delta_2$, and an interband component $\Delta_{12}$. Using this model, they demonstrated that a necessary condition for the observation of a finite Kerr angle is: \begin{equation} \Gamma\text{Im}(\Delta_1^*\Delta_2)+\xi_{1}\text{Im}(\Delta_2^*\Delta_{12})-\xi_{2}\text{Im}(\Delta_1^*\Delta_{12})\neq 0, \label{eq:2band_kerr} \end{equation} where we have suppressed the $k$-dependence on the left-hand side for brevity. This implies that, in addition to a TRSB order parameter, either interband hybridization or a complex interband order parameter is essential for the observation of the Kerr effect in a clean two-band superconductor without magnetism. In Ref.~\cite{komendova2017odd} the criterion for a finite Kerr effect, Eq. (\ref{eq:2band_kerr}), was compared to the conditions for odd-$\omega$ pairing in that same model. There, it was demonstrated that whenever there is a finite Kerr effect, there will be odd-$\omega$ pairing in the system. The only possible exception is the case in which $\xi_1=\xi_2$ and $\Delta_1\neq\Delta_2$; however, this case is highly non-generic. The same conclusion, that a finite Kerr effect signals the existence of odd-$\omega$ pairing, was also found to hold for a more realistic three-band model of Sr$_2$RuO$_4$ \cite{wysokinski_2012_prl,gradhand_2013_prb}. These results were later extended to UPt$_3$\cite{wang_2017,triola2018odd}, demonstrating that the conditions giving rise to the Kerr effect are generically accompanied by odd-$\omega$ pairing. Taken together, these results solidify the status of the Kerr effect as a probe of odd-$\omega$ pairing in multiband superconductors with TRSB order parameters and strongly support the premise that both Sr$_2$RuO$_4$ and UPt$_3$ host odd-$\omega$ pairing. It is worth noting, however, that while these results show that the Kerr effect measures odd-$\omega$ pairing, it is possible to have odd-$\omega$ pairing without exhibiting a Kerr effect, since the Kerr effect requires TRSB, which is only present in a few odd-$\omega$ multiband superconductors. Therefore, the lack of a finite Kerr angle is not evidence for the absence of odd-$\omega$ pairing.
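The criterion in Eq. (\ref{eq:2band_kerr}) is easy to evaluate for toy parameters. The short sketch below (ours; all values are illustrative assumptions) shows that chiral intraband gaps with a relative phase of $\pi/2$ yield a finite Kerr signal only once interband hybridization (or an interband gap) is present.
\begin{verbatim}
import numpy as np

# Left-hand side of the Kerr criterion, Eq. (2band_kerr)
def kerr_lhs(xi1, xi2, Gamma, D1, D2, D12):
    return (Gamma*np.imag(np.conj(D1)*D2)
            + xi1*np.imag(np.conj(D2)*D12)
            - xi2*np.imag(np.conj(D1)*D12))

# TRSB intraband gaps (relative phase pi/2) plus interband hybridization:
print(kerr_lhs(0.3, -0.2, Gamma=0.1, D1=0.25, D2=0.25j, D12=0.0))  # nonzero
# same TRSB gaps, but no hybridization and no interband gap: it vanishes
print(kerr_lhs(0.3, -0.2, Gamma=0.0, D1=0.25, D2=0.25j, D12=0.0))  # zero
\end{verbatim}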
\section{Conclusions} \label{sec:conclusions} In this article we have reviewed recent work on the possibility of odd-$\omega$ pairing in multiband superconductors. After a brief pedagogical examination of the emergence of odd-$\omega$ pairing in a simple two-band model, we extended the formalism to derive a general criterion for the emergence of odd-$\omega$ pairing in any superconductor with an equal-time BCS order parameter, $\Delta$, and normal-state Hamiltonian, $h$, given in Eq. (\ref{eq:odd_criterion}): $h\Delta-\Delta h^*\neq 0$. We noted that this condition is identical to a recently proposed measure of superconducting fitness, which has been shown to suppress the superconducting critical temperature\cite{ramires2016identifying}. We then discussed several previous works in which multiband superconductors are predicted to host odd-$\omega$ pairing. In particular, we focused on Sr$_2$RuO$_4$\cite{komendova2017odd}, UPt$_3$\cite{triola2018odd}, and buckled honeycomb lattices\cite{kuzmanovski2017multiple}. In addition to these examples we also discussed several similar systems which have been predicted to host odd-$\omega$ pairing due to a band-like degree of freedom. These systems included proximitized bilayers\cite{parhizgar_2014_prb}, double quantum dots\cite{sothmann2014unconventional,burset2016all}, double nanowires\cite{ebisu2016theory,triola2018oddnw}, Josephson junctions\cite{linder2017odd,balatsky2018odd}, and monolayer transition metal dichalcogenides\cite{triola2016prl}. After discussing examples of systems which are predicted to host odd-$\omega$ pairing, we reviewed three different experimental probes which are relevant for odd-$\omega$ pairing in multiband systems: hybridization-induced gaps in the electronic density of states\cite{komendova2015experimentally}; the paramagnetic Meissner effect\cite{asano2015odd}; and the Kerr effect\cite{komendova2017odd,triola2018odd}. Each observable was found to have both distinct advantages and disadvantages. Hybridization-induced gaps always accompany odd-$\omega$ interband pairing in certain two-band models, thus providing a robust signature of odd-$\omega$ pairing. However, these gaps only appear when the Bogoliubov band structure exhibits specific avoided crossings and are therefore not observable in all multiband superconductors. A paramagnetic Meissner signal is a robust signature of odd-$\omega$ pairing, as it does not depend sensitively on the band structure. However, since even-$\omega$ pairing is expected to coexist with the odd-$\omega$ amplitudes, the net magnetic response is likely to be diamagnetic in generic multiband superconductors. Finally, a finite Kerr effect always signals odd-$\omega$ pairing, but only exists in superconductors which break time-reversal symmetry, which is not true for all odd-$\omega$ states. To conclude, the ubiquity of odd-$\omega$ superconductivity has been shown in a wide variety of superconducting materials and systems, ranging from traditional multiband superconductors to systems where other electronic degrees of freedom provide an effective band index, including systems with layer, dot, wire, lead, and valley indices. Most importantly, as we demonstrated, the basic principles leading to the emergence of odd-$\omega$ pairing in all of these diverse superconducting systems can be understood from a simple unifying criterion. Additionally, these odd-$\omega$ pair amplitudes have been demonstrated to play multiple roles in determining the properties of these systems. Considering the generality of these phenomena, we believe that many more systems are likely awaiting discovery as odd-$\omega$ superconductors and that, as odd-$\omega$ pairing is related to more observable properties, it will grow in importance as a means to characterize and understand these systems. \begin{acknowledgements} We thank A.~V.~Balatsky, Y.~Gaucher, R. M.~Geilhufe, D.~Kuzmanovski, E.~Langmann, T.~L\"{o}thman, M.~Mashkoori, F.~Parhizgar, B.~Sothmann, and Y.~Tanaka for useful discussions. This work was supported by the Swedish Research Council (Vetenskapsr\aa det) Grant Nos.~2014-3721 and 2018-03488, the Knut and Alice Wallenberg Foundation through the Wallenberg Academy Fellows program, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC-2017-StG-757553). \end{acknowledgements} \bibliographystyle{andp2012}
\section{Motivations and overview} The question of determining the asymptotic growth of the norm of the powers of a given matrix is a well-known exercise in linear algebra. Its Lyapunov spectrum, in terms of limit exponential behavior, which is defined by the Lyapunov exponents (i.e. the logarithms of the eigenvalues) and the eigendirections, is completely determined by standard linear-algebraic computations. Besides, the stability behavior under perturbations is a fairly well understood subject (see e.g. ~\cite{Ka}). However, another question, which is substantially harder, is to understand the spectral properties of a given product of a collection (finite or infinite) of matrices and their stability. It is easy to see that, even if we have only two matrices, the spectrum can change drastically under a small change of the initial elements. Consider, for instance, combining a $2\times 2$ diagonal matrix different from the identity with the identity matrix. The problem reduces to the one described above, yet a small perturbation of the identity causes a substantial change in the final result, depending on whether we choose to keep it a diagonal matrix or we decide to input some rotational behavior. In very general terms, there are mainly two ways of contextualizing products of matrices: within the \emph{random} framework or within the \emph{deterministic} one. In this paper we follow the deterministic viewpoint, in which the deterministic behavior is established once we fix a map $T$ on a closed manifold $X$, an ``automatic generator of matrices'' defined by a map $A$ from $X$ into a Lie subgroup of $\GL(d,\mathbb{R})$, and a mode of relating $T$ with $A$ (see \S\ref{cocycles} for full details). These objects are part of the language of the so-called \emph{linear cocycles} (see \cite[\S2 and \S3]{BP}). The existence of the previously mentioned objects, like eigendirections and Lyapunov exponents, is guaranteed once we have a $T$-invariant measure on $X$ and an integrability condition on $A$ (cf.~\cite{O}). Choosing the accuracy with which we measure the size of a perturbation of the initial system will be crucial to answer the question of knowing the changes produced in the Lyapunov spectrum. The goal of finding non-zero Lyapunov exponents is an old quest dating back to the early 1980s and the work of Cornelis and Wojtkowski~\cite{CW}. About twenty years ago Knill~\cite{Kn} proved that non-zero Lyapunov exponents are a $C^0$-dense phenomenon within bounded $\SL(2,\mathbb{R})$ cocycles. A much sharper update was developed by Bochi~\cite{B}, taking into account the pioneering ideas of Ma\~n\'e~\cite{M1,M2} on rotation solutions (see also ~\cite{No}). Bochi observed that, from the more accurate $C^0$-generic point of view, we have the coexistence of strata on the manifold displaying positive Lyapunov exponents and hyperbolic behavior with other strata where zero Lyapunov exponents appear (see also ~\cite{BV2} for generalizations). Observe that Cong \cite{Con2} improved the previous result for \emph{bounded} cocycles, obtaining that a generic \emph{bounded} $\SL(2,\mathbb{R})$-cocycle is uniformly hyperbolic, i.e., has a fibered exponential separateness. As far as we know, the best result on the abundance of simple spectrum (i.e. all Lyapunov exponents are different), over a quite large range of topologies and in the two-dimensional case, is given by a recent result of Avila (see~\cite{Av}).
From the continuous-time viewpoint we have the linear differential systems or skew-product flows, which are, in general, morphisms of vector bundles covering a flow. As a quintessential example, we consider a dynamics given by a smooth flow; in this case the morphism corresponds to the action of the tangent flow on the tangent bundle. These systems are the flow counterpart of the discrete cocycles, i.e., the $d$-dimensional ($d\geq 2$) \emph{linear differential systems} over continuous $\mu$-invariant flows on compact Hausdorff spaces $X$, where $\mu$ is a Borel regular measure. Linear differential systems are equipped with a dynamics on the base $X$, given by a continuous flow $\varphi^{t}:X\rightarrow{X}$, a dynamics on the $d$-dimensional tangent bundle, given by a linear cocycle $\Phi^{t}\colon X\rightarrow{\GL(d,\mathbb{R})}$ with time $t$ evolving in $\mathbb{R}$, and a certain relation between them (see \S\ref{LDS} for more details). This continuous-time case is somewhat different from its discrete counterpart. For the $L^p$-denseness results we recall the statement in \cite{AC0}: \emph{``...the results of this paper (with some appropriate changes) can be applied to the continuous-time case as well''}. In \S\ref{cc} we expose in detail those \emph{``appropriate changes''} pointed out by Arnold and Cong. Moreover, we give the continuous-time version of our strategy in order to obtain the $L^p$-residuality of the one-point spectrum. We stress that any perturbation must be performed upon a given differential equation. With respect to the continuous-time versions, in~\cite{Be,Be2} the Ma\~{n}\'{e}-Bochi-Viana theorem was proved for linear differential systems. We point out that several particular examples of genericity of hyperbolicity (exponential dichotomy) in the $C^{0}$-topology on the torus were already explored by Fabbri~\cite{F} and by Fabbri and Johnson~\cite{FJ1}. Some approaches have been proposed for establishing the positivity of Lyapunov exponents for linear differential systems (see~\cite{F2,FJ2}). This last result follows from the paper of Kotani~\cite{Ko}. We suggest~\cite{FJZ} for a quite complete survey of these issues. It is pretty clear that, for both the discrete and the continuous-time case, there exist many subtleties in this subject: the choice of the topology, the choice of $A$ being bounded or continuous, and the choice of whether we take the dense or the generic viewpoint. This thesis can be strengthened by recalling that, on the one hand, Arnold-Cong~\cite{AC0} and Arbieto-Bochi~\cite{AB} proved that, for feeble topologies like the $L^p$-topology (see ~\S\ref{topologies}), generic cocycles have zero Lyapunov exponents. On the other hand, Viana~\cite{V} proved that for stronger topologies positive Lyapunov exponents are prevalent (see also \cite{Av}). In between we have the Bochi-Ma\~n\'e dichotomy.
If we carefully scrutinize the strategy carried out by Arnold and Cong in \cite{AC0} to obtain simple spectrum, we observe that, besides the idiosyncrasy of the $L^p$-topology, which allows large uniform-norm perturbations by making small $L^p$-perturbations, they strongly used two properties of the group of matrices, one of a \emph{topological} and the other of a \emph{geometric} nature: \medskip \begin{enumerate} \item {\bf Topological Condition:} first, they needed to commingle any two different directions in the fiber space, which they called \emph{``the turning solution method of Millionshchikov''} (see \cite{Mi}); and \item {\bf Geometric Condition:} second, they input a small expansion along a predefined direction on which the Lyapunov exponent should grow, combined with a balancing contraction that yields volume invariance. \end{enumerate} \medskip In the present paper we consider two abstract properties of subgroups of matrices which reflect (1) and (2) above, following the insight from the $L^p$-topology. In brief terms, property (1) is called \emph{accessibility} and was already considered in ~\cite{BV2} (see also a related definition in ~\cite{N}), and property (2) is called \emph{saddle-conservativeness}. Once we formulate the results taking into account these two properties, we easily derive that theorems in the vein of those in \cite{AC0,AB} hold for the most important families of matrices, like, e.g., $\GL(d,\mathbb{R})$, $\SL(d,\mathbb{R})$ and $\Sp(2d,\mathbb{R})$. This was the strongest motivation for opting for this abstract approach. Another aspect that may raise some doubts to the reader, and which we intend to clarify at once, was our choice not to follow the strategy of Arnold and Cong \cite{AC0} when trying to achieve the $L^p$-denseness of the one-point spectrum. In fact, we opt to develop the argument first used in ~\cite{BV2}, allowing us to dispense with the ergodic hypothesis and deal with dynamical cocycles, and also allowing an approach to the infinite-dimensional case. As an application, in \S\ref{dynamical} we apply our results to the dynamical cocycle given by the derivative of an area-preserving diffeomorphism, endowed with the $L^p$-norm. In \S\ref{infinite}, we point out that for discrete $L^p$ cocycles evolving on compact operators of infinite dimension (cf. ~\cite{BeC}) the one-point spectrum is prevalent. In the following table we present an abbreviated summary of the prevalence of the different spectra (``o.p.s.'' stands for one-point spectrum) with respect to both discrete and continuous-time systems, considering different types of topologies. \bigskip \begin{center} \small{\begin{tabular}{|c|c|c|c|} \hline & {\tiny \textbf{$L^p$-topology}} & {\tiny \textbf{$C^0$-topology} } & {\tiny \textbf{$C^{r+\alpha}$-topology} ($r\geq 0$, $\alpha>0$)} \\ \hline {\tiny \textbf{maps}} & {\tiny o.p.s. (\cite{AC0,AB}; Theorems~\ref{ops}, \ref{simple}, \ref{dc} and \ref{BeCLp})} & {\tiny o.p.s. vs hyperbolicity (\cite{B, BV2})} & {\tiny hyperbolicity (\cite{V,Av,BVar})}\\ \hline {\tiny \textbf{flows}} & {\tiny o.p.s. (Theorems~\ref{ops2} and \ref{simple2})} & {\tiny o.p.s.
vs hyperbolicity (\cite{FJ2,Be,Be2})} & {\tiny hyperbolicity (\cite{BVar})}\\ \hline \end{tabular}} \end{center} \bigskip This paper is organized as follows: in \S\ref{discrete} we deal with discrete-time cocycles, establishing the existence of an $L^p$-residual subset of the accessible cocycles with one-point spectrum (Theorem \ref{ops}) and the $L^p$-denseness of saddle-conservative accessible cocycles having simple spectrum (Theorem~\ref{simple}). In \S\ref{cc} we treat the continuous-time results. We state the existence of an $L^p$-residual subset of the accessible linear differential systems with one-point spectrum (Theorem~\ref{ops2}) and the $L^p$-denseness of saddle-conservative accessible linear differential systems having simple spectrum (Theorem~\ref{simple2}). Finally, in \S\ref{app} we apply our results to the dynamical cocycles given by the derivative of area-preserving diffeomorphisms, and to discrete cocycles evolving on compact operators of infinite dimension. \section{The discrete-time case}\label{discrete} \subsection{Definitions and statement of the results}\label{discrete results} \subsubsection{Cocycles and Lyapunov exponents}\label{cocycles} Let $X$ be a compact Hausdorff space, $\mu$ a Borel regular non-atomic probability measure and $T:X\to X$ an automorphism preserving $\mu$. Consider the set $\mathcal G$ of the ($\mu$ mod 0 equivalence classes of) measurable maps $A:X\to \GL(d,\mathbb R)$, $d \geq2$, endowed with its Borel $\sigma$-algebra. The Euclidean space $\mathbb{R}^d$ is endowed with the canonical inner product. Each map $A$ generates a linear cocycle $$ \begin{array}{cccc} F_A: & X\times\mathbb R^d & \longrightarrow & X\times\mathbb R^d \\ & (x,v) & \longmapsto & (T(x),A(x) v), \end{array} $$ over the dynamical system $T:X\to X$. We set $$A^n(x):= A(T^{n-1}(x))\cdots A(x)$$ for the composition of the maps $A(T^{n-1}(x))$ down to $A(x)$ and, if $T$ is invertible, $$A^{-n}(x):= A^{-1}(T^{-n}(x))\cdots A^{-1}(T^{-1}(x)).$$ As usual, we consider $A^0:=\textrm{Id}$, where $\textrm{Id}$ stands for the $d\times d$ identity matrix. By an abuse of language we will often identify $F_A$ and $A$. Let $\|\cdot\|$ be an operator norm on the set of $d\times d$ matrices with real entries. Consider the subset $\mathcal{G}_{\!I\!C}$ of $\mathcal G$ of all maps $A\in\mathcal G$ satisfying the following \emph{integrability condition}: \begin{equation*} \int_X\log^+\|A^{\pm1}(x)\|\,d\mu<\infty, \end{equation*} where $\log^{+}(y)=\text{max}\,\{0,\log(y)\}$. The multiplicative ergodic theorem of Oseledets \cite{O} ensures that the Lyapunov exponents $\lambda_1(A,x)\geq\ldots\geq\lambda_d(A,x)$ of the \emph{integrable} cocycle $A\in\mathcal{G}_{\!I\!C}$ are defined for almost every point $x$. If $T$ is ergodic, these functions are constant almost everywhere, being the possible values of the limits $$\lim_{n\to\pm\infty}\frac1n\log \|A^n(x)v\|,$$ for $\mu$ almost every ($\mu$-a.e.) $x\in X$ and all $v\in\mathbb R^d\setminus\{0\}$. We say that $A\in\mathcal{G}_{\!I\!C}$ has \emph{one-point (Lyapunov) spectrum} if all Lyapunov exponents are equal. If, in addition, we assume that the cocycle $A$ takes values in $\SL(d,\mathbb R)$, then $A$ has one-point spectrum if and only if all Lyapunov exponents are zero. On the other hand, we say that $A\in\mathcal{G}_{\!I\!C}$ has \emph{simple (Lyapunov) spectrum} if all Lyapunov exponents are different.
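For the computationally-minded reader, the following sketch (an illustration of ours; the base dynamics and the map $A$ are hypothetical choices) estimates the top Lyapunov exponent $\lambda_1(A,x)$ through the limit above, for an $\SL(2,\mathbb R)$-valued cocycle over an irrational rotation of the circle.
\begin{verbatim}
import numpy as np

# Finite-time estimate of lim (1/n) log ||A^n(x) v|| for the cocycle
# A^n(x) = A(T^{n-1} x) ... A(x) over the rotation T(x) = x + alpha (mod 1).
alpha = (np.sqrt(5) - 1)/2          # golden-mean rotation number

def A(x):
    t = 2*np.cos(2*np.pi*x)         # an illustrative SL(2,R)-valued map
    return np.array([[t, -1.0], [1.0, 0.0]])

def top_lyapunov(x, n):
    v, s = np.array([1.0, 0.0]), 0.0
    for _ in range(n):
        v = A(x) @ v
        s += np.log(np.linalg.norm(v))   # accumulate growth...
        v /= np.linalg.norm(v)           # ...then renormalize
        x = (x + alpha) % 1.0
    return s/n

print(top_lyapunov(0.1, 200000))    # numerical approximation of lambda_1
\end{verbatim}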
\subsubsection{A topology on cocycles}\label{topologies} Let us endow $\mathcal G$ with an $L^p$-like topology as in \cite{AC0}. For $A, B\in\mathcal G$ and $1\leq p \leq \infty$ set \begin{equation*} \|A\|_p:=\left\{\begin{array}{lll} \Big(\displaystyle\int_X \|A(x)\|^p\, d\mu\Big)^{1/p}, & &\textrm{if}\, 1\leq p < \infty \\ \displaystyle\esssup_{x\in X}\|A(x)\|, & & \textrm{if}\, p=\infty \\ \end{array} \right., \end{equation*} and $$\Delta_p(A,B):=\|A-B\|_p+\|A^{-1}-B^{-1}\|_p.$$ We now define $$ d_p(A,B):=\frac{\Delta_p(A,B)}{1+\Delta_p(A,B)},$$ where $d_p(A,B)=1$ if $\Delta_p(A,B)=\infty$. According to \cite{AC0}, $(\mathcal G,d_p)$, and hence $(\mathcal{G}_{\!I\!C},d_p)$, is a complete metric space. \begin{remark} It follows from the definition of the metric and from the H\"older inequality (see e.g. \cite{Ru}) that, for all $A, B\in\mathcal G$ and $1\leq p\leq q\leq\infty$, we have $d_p(A,B)\leq d_q(A,B)$. \end{remark} \subsubsection{Families of cocycles} We are interested in classes of maps $A$ taking values in specific subgroups of $\GL(d,\mathbb R)$. In the greatest generality we consider subgroups that satisfy an accessibility-type condition. \begin{definition} We call $\mathcal{S}\subseteq \GL(d,\mathbb R)$ \textbf{accessible} if it is a non-empty closed subgroup of $\GL(d,\mathbb R)$ which acts transitively on the projective space $\mathbb RP^{d-1}$, that is, given $u,v\in \mathbb RP^{d-1}$, there is $R\in\mathcal{S}$ such that $R\,u=v$. \end{definition} \begin{example} The subgroups $\GL(d,\mathbb R)$, $\SL(d,\mathbb R)$, $\Sp(2q,\mathbb R)$, as well as $\GL(d,\mathbb C)$ and $\SL(d,\mathbb C)$, are accessible. \end{example} \begin{remark} In \cite[Definition 1.2]{BV2} the authors introduced a slightly different notion of accessibility. See \cite[Lemma 5.12]{BV2} for a relation between these concepts. \end{remark} The next result shows that accessibility allows us to reach any direction of the projective space by acting with uniformly bounded elements of the group. \begin{lemma}\label{rot} Let $\mathcal{S}$ be an accessible subgroup of $\GL(d,\mathbb R)$. There exists $K>0$ such that, for all $u,v\in\mathbb RP^{d-1}$, there is $R_{u,v}\in \mathcal{S}$, with $\|R_{u,v}^{\pm1}\|\leq K$, such that $R_{u,v} u=v$. \end{lemma} \begin{proof} Fix some $\epsilon>0$ and let $0<\delta<\epsilon$ be such that if $R_1,R_2\in U_\delta:=\{R\in \mathcal{S}\colon \|R\|<\delta\}$, then $R_2R_1^{-1}\in U_{\epsilon}$. The hypothesis on $\mathcal S$ implies that, for any $w\in\mathbb RP^{d-1}$, the evaluation map $w\colon \mathcal{S}\to\mathbb RP^{d-1}$ given by $A\mapsto A(w)$ is open, so that $U_\delta(w):=\{Rw: R\in U_\delta\}$ is an open subset of $\mathbb RP^{d-1}$. Due to the compactness of the projective space, one can write $$\mathbb RP^{d-1}=U_\delta(w_1)\cup\cdots\cup U_\delta(w_{m}),$$ for some $m\geq1$. Let $u,v\in\mathbb RP^{d-1}$ be given, with $u\in U_\delta(w_{i})$ and $v\in U_\delta(w_{j})$ for some $i,j\in\{1,\ldots,m\}$. Let $R_u, R_v\in U_\delta$ be such that $R_u w_i=u$ and $R_v w_j=v$. There exist $1\leq k\leq m$ and $\{R_{i}\}_{i\leq k}$, with $\|R_i\|<\epsilon$, such that $R_k\cdots R_1 w_i=w_j$. Then $R_{u,v}:=R_vR_{k}\cdots R_{1}R_u^{-1}$ satisfies $R_{u,v} u=v$ and $\|R_{u,v}\|\leq (m+2)\epsilon$. We just have to take $K=(m+2)\epsilon$. \end{proof}
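As a concrete illustration of Lemma~\ref{rot} (an example of ours, not part of the original argument): in the accessible group $\SL(2,\mathbb R)$ the constant $K$ can even be taken equal to $1$, since the rotations $\SO(2,\mathbb R)\subset\SL(2,\mathbb R)$ already act transitively on $\mathbb RP^{1}$ and have unit operator norm.
\begin{verbatim}
import numpy as np

# The rotation taking the direction u to the direction v lies in
# SO(2) < SL(2,R) and has operator norm 1.
def rotation_between(u, v):
    u, v = u/np.linalg.norm(u), v/np.linalg.norm(v)
    th = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
R = rotation_between(u, v)
print(np.linalg.det(R))            # 1.0: R belongs to SL(2,R)
print(R @ u / np.linalg.norm(u))   # the unit vector in the direction of v
print(np.linalg.norm(R, 2))        # operator norm 1
\end{verbatim}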
The next definition stresses the possibility of implementing some expansion along a given direction while simultaneously compensating with a contraction so that, ultimately, volume is preserved. We note that the expansion will be used in the sequel when we want to enlarge a certain Lyapunov exponent under a small perturbation. \begin{definition}\label{SC} Let $\mathcal{S}$ be a closed subgroup of $\GL(d,\mathbb R)$. We call $\mathcal{S}$ \textbf{saddle-conservative} if given any direction $e\in \mathbb{R}^d$ and $\delta>0$ there exists $A_\delta\in \mathcal{S}$ such that: \begin{enumerate} \item $A_\delta \in \SL(d,\mathbb R)$ and \item $A_\delta e =(1+\delta)e$. \end{enumerate} \end{definition} \begin{example} The groups $\GL(d,\mathbb R)$, $\SL(d,\mathbb R)$, $\Sp(2q,\mathbb R)$, as well as $\GL(d,\mathbb C)$ and $\SL(d,\mathbb C)$, display the saddle-conservative property. The special orthogonal group $\SO(d,\mathbb{R})$ is not saddle-conservative because, despite satisfying condition (1), it fails condition (2). \end{example} \subsubsection{Statement of the results} Denote by $\mathcal{T}_{\!I\!C}$ an accessible subgroup of $\mathcal{G}_{\!I\!C}$ and by $\mathcal{S}_{\!I\!C}$ a saddle-conservative closed subgroup of $\mathcal{T}_{\!I\!C}$. We present now our first result: \begin{maintheorem}\label{ops} There exists an $L^p$-residual subset $\mathcal R\subset \mathcal{T}_{\!I\!C}$, $1\leq p < \infty$, such that any $B\in \mathcal{R}$ has one-point spectrum. \end{maintheorem} Once we obtain an $L^p$-residual subset where one-point spectrum prevails, we ask whether it is possible to find $L^p$-open subsets where all Lyapunov exponents are equal. Clearly, this question is interesting only once we exclude certain contexts where the problem becomes easy to solve. That is the case when we deal with cocycles evolving on isometry subgroups (like $\SO(d,\mathbb{R})$) or, more generally, on compact subgroups, where we surely have one-point spectrum. In order to reach simple spectrum similarly to \cite[Theorem 4.4]{AC0}, we need to deal with subgroups displaying additional features. With this in mind we obtain: \begin{maintheorem}\label{simple} Let $T:X\to X$ be ergodic. For any $A\in \mathcal{S}_{\!I\!C}$, $1\leq p < \infty$, and $\epsilon>0$ there exists $B\in \mathcal{S}_{\!I\!C}$ with $d_p(A,B)<\epsilon$ such that $B$ has simple Lyapunov spectrum. \end{maintheorem} Theorems \ref{ops} and \ref{simple} are proved in \S\ref{proof ops} and \S\ref{proof simple}, respectively. \subsection{One-point spectrum is $L^p$-residual}\label{proof ops} The core of the argument to obtain a residual subset is to prove that a certain function related to the Lyapunov exponents of the cocycles is upper semicontinuous and that its continuity points are those with one-point spectrum. Once this is proved, we use the fact that the set of points of continuity of an upper semicontinuous function is always a residual subset (see e.g. \cite{K}). It was this idea that led Arbieto and Bochi \cite{AB} to upgrade the $L^p$-denseness result of Arnold and Cong \cite{AC0} for one-point spectrum cocycles to $L^p$-residuality. In ~\cite[Theorem 4.5]{AC0} the $L^p$-prevalence of one-point spectrum among cocycles evolving on $\GL(d,\mathbb R)$ is proved. Although we believe that the proof of Arnold and Cong can be readapted to accessible cocycles, here we obtain the proof of this result following a different approach, by reformulating the arguments developed in ~\cite{BV2}, where a strategy was presented for equalizing the Lyapunov exponents with small perturbations in the delicate $C^0$ topology, for a quite general class of cocycles.
One of the main purposes is to avoid the ergodicity condition on the dynamics of the base $T:X\to X$, which will be useful in some applications of our main results for discrete-time cocycles to dynamical cocycles (see \S\ref{dynamical}). In this section we start by recalling the $L^p$-upper semicontinuity of the entropy function from Arbieto and Bochi ~\cite{AB}, and some elementary facts on exterior powers and their relation to the Lyapunov exponents. We then revisit the strategy of Bochi and Viana ~\cite{BV2} and give (simplified) versions of some results adapted to our $L^p$ setting. We finish the section with the proof of Theorem~\ref{ops}. We inform the reader that our notation differs slightly from that of ~\cite{BV2}. \subsubsection{The upper semicontinuity of the entropy function}\label{ArBo} For $k=1,\ldots,d$ and $A\in\mathcal G_{\!I\!C} $ let $$\hat{\lambda}_k(A,x):= \lambda_1(A,x)+\ldots+\lambda_k(A,x)\quad\textrm{and}\quad\Lambda_k(A):=\int_X \hat\lambda_k(A,x)\,d\mu.$$ It was proved in ~\cite{AB} that $ A\mapsto \Lambda_k(A)$ is upper semicontinuous for all $k=1,\ldots,d$ with respect to the $L^p$-like topology, that is, for every $1\leq p \leq \infty$, $A\in\mathcal{G}_{\!I\!C}$ and $\varepsilon>0$ there exists $0<\delta<1$ such that, if $d_p(A,B)<\delta$, then $\Lambda_k(B)\leq \Lambda_k(A)+\varepsilon$. Moreover, $\Lambda_d$ is continuous on $\mathcal G_{\!I\!C}$. In particular, these results hold for the restriction of $\Lambda_k$ to the subsets $\mathcal S_{\!I\!C}$ and $\mathcal T_{\!I\!C}$ of $\mathcal G_{\!I\!C}$. \subsubsection{Exterior powers} The language of multilinear algebra is most appropriate when we want to deal with several Lyapunov exponents (say $k$) of a cocycle $A$, by considering the dual problem of studying the upper Lyapunov exponent of the $k^{th}$ exterior product of $A$. Let us now recall some basic definitions. For details on the multilinear algebra of operators see Arnold's book~\cite{A}. The \emph{$k^{th}$ exterior power} of $\mathbb{R}^d$, denoted by $\wedge^{k}(\mathbb{R}^d)$, is also a vector space, which satisfies ${\rm dim}(\wedge^{k}(\mathbb{R}^d))=\binom{d}{k}$. Given an orthonormal basis $\{e_{j}\}_{j=1}^d$ of $\mathbb{R}^d$, the family of exterior products $e_{j_{1}}\wedge e_{j_{2}}\wedge\ldots\wedge e_{j_{k}}$ for $j_{1}<\ldots<j_{k}$, with $j_{\alpha}\in \{1,\ldots,d\}$, constitutes an orthonormal basis of $\wedge^{k}(\mathbb{R}^d)$. Given a linear operator $A\colon \mathbb{R}^d\rightarrow \mathbb{R}^d$ we define the operator $\wedge^{k}(A)$, acting on the $k$-vector $u_{1}\wedge\ldots\wedge u_{k}$, by $$\begin{array}{cccc} \wedge^{k}(A)\colon & \wedge^{k}(\mathbb{R}^d) & \longrightarrow & \wedge^{k}(\mathbb{R}^d) \\ & u_{1}\wedge\ldots\wedge u_{k} & \longmapsto & A(u_{1})\wedge\ldots\wedge A(u_{k}). \end{array}$$ As we have already said, this operator will be very useful to prove our results, since we can recover the spectral and splitting information of the dynamics of $\wedge^{k}(A^n)$ from the one obtained by applying Oseledets' theorem to $A^n$. This information holds on the same full measure set, and with this approach we deduce our results.
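A small sketch (ours) makes this correspondence concrete: the matrix of $\wedge^{2}(A)$ in the basis above is the matrix of $2\times 2$ minors of $A$, and its eigenvalues are the pairwise products of the eigenvalues of $A$, so at the level of Lyapunov exponents the logarithms add in pairs.
\begin{verbatim}
import numpy as np
from itertools import combinations

# Matrix of the k-th exterior power of A in the basis e_{j1} ^ ... ^ e_{jk}:
# its entries are the k x k minors of A.
def wedge(A, k):
    idx = list(combinations(range(A.shape[0]), k))
    W = np.zeros((len(idx), len(idx)))
    for a, rows in enumerate(idx):
        for b, cols in enumerate(idx):
            W[a, b] = np.linalg.det(A[np.ix_(rows, cols)])
    return W

A = np.array([[2.0, 1.0, 0.0], [0.0, 0.5, 1.0], [1.0, 0.0, 1.0]])
W = wedge(A, 2)
ev = np.linalg.eigvals(A)
# eigenvalue moduli of wedge(A,2) versus pairwise products for A:
print(sorted(np.round(abs(np.linalg.eigvals(W)), 6)))
print(sorted(np.round([abs(ev[i]*ev[j])
                       for i, j in combinations(range(3), 2)], 6)))
\end{verbatim}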
Next, we present the multiplicative ergodic theorem for exterior powers (for a proof see~\cite[Theorem 5.3.1]{A}). \begin{lemma}\label{arnauld} The Lyapunov exponents $\lambda_{i}^{\wedge k}(x)$, for $i\in\left\{1,\ldots,q(k)\right\}$ with $q(k):=\binom{d}{k}$, repeated with multiplicity, of the $k^{th}$ exterior power operator $\wedge^{k}(A)$ at $x$ are the numbers given by the sums of the Lyapunov exponents of $A$ at $x$: $$\sum_{j=1}^{k}\lambda_{i_{j}}(x), \text { where }1\leq i_{1}< \ldots<i_{k}\leq d.$$ This nonincreasing sequence starts with $\lambda_{1}^{\wedge k}(x)=\lambda_{1}(x)+\lambda_{2}(x)+\ldots+\lambda_{k}(x)$ and ends with $\lambda_{q(k)}^{\wedge k}(x)=\lambda_{d+1-k}(x)+\lambda_{d+2-k}(x)+\ldots+\lambda_{d}(x)$. Moreover, the Oseledets splitting of $\wedge^{k}(A)$ associated to the exponents $\lambda_{i}^{\wedge k}(x)$ can be obtained from the Oseledets splitting of $A$ as follows: take an Oseledets basis $\{e_{1}(x),\ldots,e_{d}(x)\}$ of $\mathbb{R}_{x}^{d}$ such that $e_{i}(x)\in E_{x}^{\ell}$ for ${\rm dim}(E_{x}^{1})+\ldots+{\rm dim}(E_{x}^{\ell-1})<i\leq {\rm dim}(E_{x}^{1})+\ldots+{\rm dim}(E_{x}^{\ell})$. Then, the Oseledets space associated to $\lambda_{i}^{\wedge k}(x)$ is generated by the $k$-vectors: $$e_{i_{1}}\wedge \ldots\wedge e_{i_{k}}\text { such that } 1\leq i_{1}<\ldots<i_{k}\leq d \text { and } \sum_{j=1}^{k}\lambda_{i_{j}}(x)=\lambda_{i}^{\wedge k}(x).$$ \end{lemma} \subsubsection{Bochi-Viana's strategy revisited}\label{BVrevisited} The following result is the $L^p$ version of ~\cite[Proposition 7.1]{BV2}, which can be greatly simplified in the weak topologies that we are using. For the reader who is familiar with ~\cite{BV2}, we substantially simplify their proof because the \emph{third case} in the proof of ~\cite[Proposition 7.1]{BV2}, which deals with the concatenation of a large number of small $C^0$-perturbations in the absence of a certain type of non-dominance, can be solved with a single small $L^p$-perturbation. We can summarize by saying that a dominated splitting ceases to be an impediment to interchanging Oseledets directions by small $L^p$-perturbations. \begin{lemma}\label{rot3} Let $A\in \mathcal{T}_{\!I\!C}$, $1\leq p < \infty$, $\epsilon>0$, a nonperiodic point $y\in X$ and a nontrivial splitting $\mathbb{R}^d= E\oplus F$ over $y$ be given. Then, there exists $B\in \mathcal{T}_{\!I\!C}$, with $d_p(A,B)<\epsilon$, such that $B(y) u = v$ for some nonzero vectors $u\in E$ and $v\in A(y) F$. \end{lemma} \begin{proof} By Lemma~\ref{rot} there exists $K>0$ such that, for $\hat{u},\hat{v}\in\mathbb RP^{d-1}$ with $u=\alpha\hat{u}\in E$ and $\hat{v}\in F$, there is $R_{\hat u,\hat v}\in \mathcal{S}$, with $\|R_{\hat u,\hat v}^{\pm1}\|\leq K$, such that $R_{\hat u,\hat v}\hat u=\hat v$. Let $V_\epsilon$ be a small neighborhood of $y$ and define the following perturbation of $A$: $$B(x)=\left\{\begin{array}{lll} A(x),&&\textrm{if}\, x\notin V_\epsilon \\ \frac{1}{\|u\|}A(x) R_{\hat u,\hat v},&&\textrm{if}\, x\in V_\epsilon \end{array}\right..$$ It is clear that $d_p(A,B)<\epsilon$ if $V_\epsilon$ is sufficiently small. Moreover, $B(y) u\in A(y) F$. \end{proof} The following proposition is the adapted version of ~\cite[Proposition 7.2]{BV2}. Bearing in mind the aims we want to achieve, we enumerate the main differences between them: \begin{enumerate} \item First of all, we are using the $L^p$-like topology instead of the much more exigent $C^0$ topology.
As a consequence, interchanging Oseledets' directions is a simpler task (compare Lemma~\ref{rot3} with~\cite[Proposition 7.1]{BV2}); \item We observe that in \cite[Proposition 7.2]{BV2} the subset $\Gamma^*_p(A,m)$ of points without an $m$-dominated splitting of index $k$ is considered. In our setting a dominated splitting is no longer an obstruction to causing a decay of the Lyapunov exponents. For this reason we perform the perturbations in a full measure subset of $X$; \item In \cite[Proposition 7.2]{BV2} the change of Oseledets directions is performed using several perturbations. By contrast, due to Lemma~\ref{rot3}, in the present paper we only need a single perturbation, which is done, roughly speaking, at the half-time iterate: \end{enumerate}

\begin{proposition}\label{P1} Consider $A\in \mathcal{T}_{\!I\!C}$, $\epsilon>0$, $\delta>0$ and $k\in\{1,\ldots,d-1\}$. There exists a measurable function $N\colon X\rightarrow \mathbb{N}$ such that for $\mu$-a.e. $x\in X$ and every $n\geq N(x)$ there exists a linear map $B(T^{\frac{n}{2}}(x))$ (or $B(T^{\frac{n+1}{2}}(x))$ if $n$ is odd) such that: $$\frac{1}{n}\log\|\wedge^k(A^{\frac{n}{2}-1}(T^{\frac{n}{2}+1}(x))\cdot B(T^{\frac{n}{2}}(x))\cdot A^{\frac{n}{2}}(x))\|\leq \delta+\frac{\hat\lambda_{k-1}(A,x)+\hat\lambda_{k+1}(A,x)}{2}.$$ \end{proposition}

We notice that $\|B(T^{\frac{n}{2}}(x))-A(T^{\frac{n}{2}}(x))\|$ can be, in general, very large. However, this is not a problem because the whole cocycle $B$ will be equal to $A$ outside a small neighborhood, hence $d_p(A,B)$ will be arbitrarily small for $1\leq p < \infty$. Moreover, let us note that the function $N$ above depends only on the a.e. asymptotic estimates given by Oseledets' theorem. The following proposition is the adapted version of~\cite[Proposition 7.3 and Lemma 7.4]{BV2}, which completes the global picture of Proposition~\ref{P1}. We observe that its proof follows the same steps as in~\cite{BV2}.

\begin{proposition}\label{P2} Given $A\in \mathcal{T}_{\!I\!C}$, $1\leq p < \infty$, $\epsilon>0$, $\delta>0$ and $k\in\{1,\ldots,d-1\}$, there exists $B\in \mathcal{T}_{\!I\!C}$, with $d_p(A,B)<\epsilon$, such that $$\Lambda_k(B)<\delta+\frac{\Lambda_{k-1}(A)+\Lambda_{k+1}(A)}{2}.$$ \end{proposition}

The end of the proof of Theorem~\ref{ops} is now a direct consequence of the arguments described in~\cite[\S 4.3]{BV2} and~\cite{AB} and the results proved above. We present them now for the sake of completeness. For each $k=1,\ldots,d-1$ we define the \emph{discontinuity jump} by: $$J_k(A)=\int_X\frac{\lambda_k(A,x)-\lambda_{k+1}(A,x)}{2}\,d\mu.$$ The following result is Proposition~\ref{P2} rewritten.

\begin{proposition}\label{P3} Given $A\in \mathcal{T}_{\!I\!C}$, $1\leq p < \infty$, $\epsilon>0$, $\delta>0$ and $k\in\{1,\ldots,d-1\}$, there exists $B\in \mathcal{T}_{\!I\!C}$, with $d_p(A,B)<\epsilon$, such that $$ \Lambda_k(B)<\delta-J_k(A)+ \Lambda_{k}(A).$$ \end{proposition}

We are now in a position to finish the proof of Theorem~\ref{ops}:

\begin{proof}(of Theorem~\ref{ops}) Let $A\in \mathcal{T}_{\!I\!C}$ be a continuity point of the functions $\Lambda_k$ for all $k$. By Proposition~\ref{P3}, arbitrarily $L^p$-close to $A$ there are cocycles $B$ with $\Lambda_k(B)<\delta-J_k(A)+\Lambda_k(A)$ for every $\delta>0$; continuity of $\Lambda_k$ at $A$ then forces $J_k(A)=0$ for all $k$, i.e., $\lambda_k(A,x)=\lambda_{k+1}(A,x)$ for all $k$ and $\mu$-a.e. $x\in X$. Hence the cocycle $A$ has one-point spectrum for $\mu$-a.e. $x\in X$. Finally, we recall that the set of continuity points of an upper semicontinuous function (cf. \S\ref{ArBo}) is a residual subset.
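Before closing the proof, we illustrate numerically (a toy example of ours, not needed for the argument) the half-time exchange mechanism behind Propositions~\ref{P1}-\ref{P3}: for the constant cocycle $A=\mathrm{diag}(2,1/2)$ and $k=1$, one rotation inserted at the half-time iterate exchanges the two Oseledets directions, and the top exponent of the product drops from $\log 2$ to $(\hat\lambda_{0}+\hat\lambda_{2})/2=(\lambda_1+\lambda_2)/2=0$.

\begin{verbatim}
# Toy illustration (ours) of the half-time swap: A^{n/2} . R . A^{n/2},
# with R exchanging the expanding and the contracting directions, has
# subexponential norm growth.
import numpy as np

A = np.diag([2.0, 0.5])
R = np.array([[0.0, -1.0], [1.0, 0.0]])      # swaps the two directions
n = 200
half = np.linalg.matrix_power(A, n // 2)
prod = half @ R @ half
print(np.log(np.linalg.norm(prod, 2)) / n)   # ~0.0 instead of log 2
\end{verbatim}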
\end{proof}

\subsection{Simple spectrum is dense}\label{proof simple}

In this section we prove Theorem~\ref{simple} by borrowing the simple spectrum part of~\cite[\S 4]{AC0}. We start by establishing in Lemma~\ref{split spec} the adaptation of~\cite[Lemma 4.1]{AC0} with some adjustments that reflect our assumptions on the cocycle. This result allows us to split a one-point Lyapunov spectrum by an $L^p$-small perturbation of the cocycle. The remaining part of the proof of Theorem~\ref{simple} follows \emph{ipsis verbis}~\cite{AC0}. Throughout this section we will assume that $T:X\to X$ is ergodic.

\begin{lemma}\label{split spec} Assume that $A\in \mathcal{S}_{\!I\!C}$, with values in $\GL(d,\mathbb{R})$ and $d\geq2$, has one-point spectrum. Then, for any small $\epsilon >0$ and $1\leq p < \infty$, there exists $B\in\mathcal{S}_{\!I\!C}$, with $d_p(A,B)<\epsilon$, such that $B$ has at least two different Lyapunov exponents. \end{lemma}

\begin{proof} Consider $M>1$ and a Borel subset $V\subset X$ such that $\mu(V)>0$, $V\cap T(V)=\emptyset$ and $$\sup_{x\in V\cup T(V)}\|A^{\pm1}(x)\|\leq M,$$ and let $$k(x):=\min\{ n\geq 1: T^{-n}(x)\in T(V)\}.$$ Fix a unit vector $e\in \mathbb RP^{d-1}$ and define the following vector, which is the normalized image under the cocycle $A$ of the vector $e$, in the fiber corresponding to $x\in X$: $$v(x):=\left\{\begin{array}{lll} e,&& \textrm{if}\, x\in T(V)\\ \frac{A^{k(x)}(T^{-k(x)}(x))e}{\|A^{k(x)}(T^{-k(x)}(x))e\|},&& \textrm{otherwise} \end{array}\right., $$ and set $E(x)=\textrm{span}\{v(x)\}$. For each $u\in\mathbb RP^{d-1}$ fix some $R_u:=R_{u,e}$ given by Lemma \ref{rot}, with $\|R_{u}^{\pm1}\|\leq K$ and such that $R_{u}u=e$. For $x\in V$ define $q(x)\in\mathbb RP^{d-1}$ given by $$q(x)=\frac{A(x)v(x)}{\|A(x)v(x)\|}.$$ Define now the following perturbation of $A$ in $V$: $$C_1(x)=\left\{\begin{array}{lll} A(x),&&\textrm{if}\, x\notin V \,\textrm{or}\, q(x) = e\\ R_{q(x)}A(x),&&\textrm{if}\, x\in V \,\textrm{and}\, q(x)\neq e \end{array}\right.$$ Since for $x\notin V$, $C_1(x)=A(x)$, and for $x\in V$ we have \begin{equation*}\label{dist C1 A} \|C_1^{\pm 1}(x)-A^{\pm 1}(x)\|\leq \|A^{\pm1}(x)\|\cdot\|R_{q(x)}^{\pm 1}-\textrm{Id}\|, \end{equation*} it follows that $d_p(A,C_1)\leq \Delta_p(A,C_1)\leq 2MK\mu(V)^{1/p}$, which can be made smaller than any given $\epsilon>0$ just by considering $V$ with small enough $\mu$-measure. If $C_1$ has two or more distinct Lyapunov exponents we take $B=C_1$ and we are done. Let us consider now that $C_1$ has only one Lyapunov exponent $\lambda_{C_1}$. Then it must be equal to the unique Lyapunov exponent $\lambda_A$ for $A$ (and both have multiplicity $d$). Indeed, since $$\det A(x)=\det {C_1}(x)$$ for all $x\in X$, by the multiplicative ergodic theorem we have $$d\cdot\lambda_{C_1}=\int \log |\det {C_1}(x)|\,d\mu=\int \log |\det A(x)|\,d\mu=d\cdot\lambda_A.$$ Now, let $\delta\in(0,1)$. Since our group has the saddle-conservative property, we can find $A_\delta\in \SL(d,\mathbb{R})$ such that $A_\delta e=(1+\delta)e$.
We define now: $$C_2(x)=\left\{\begin{array}{lll} \textrm{Id},& & \textrm{if}\, x\notin T(V)\\ A_\delta, & & \textrm{if}\, x\in T(V) \end{array}\right..$$ Finally, set $$D(x)=C_2(x) C_1(x).$$ Since, for all $x\in X$, \begin{equation*}\label{inv subspaces} D(x)E(x)=C_1(x)E(x)=E(T(x)), \end{equation*} by Birkhoff's ergodic theorem we have, for any $\delta>0$, \begin{align}\lambda(D,x,v(x))&:=\lim_{n\to\infty}\frac1n\log\|D^n(x)v(x)\|\nonumber\\&=\lim_{n\to\infty}\frac1n\log\|(1+\delta)^{\sum_{j=0}^{n-1}\mathbbm{1}_{T(V)}(T^j(x))}C_1^n(x)v(x)\|\nonumber\\ & = \lambda(C_1,x,v(x)) + \log(1+\delta)\mu(V)\label{lyap exponent for Dflow}, \end{align} where we used that $\mu(T(V))=\mu(V)$. Let $\lambda_{D,1}> \lambda_{D,2}>\ldots>\lambda_{D,r_\delta}$ be the distinct Lyapunov exponents for $D$, with corresponding multiplicities $m_1, \ldots, m_{r_\delta}$. Since for all $x\in X$ $$ \det D(x)=\det C_1(x)=\det A(x)\label{dets}, $$ by the multiplicative ergodic theorem we also have $$ \sum_{i=1}^{r_\delta}\lambda_{D,i}\cdot m_{i}=d\cdot\lambda_A. $$ By \eqref{lyap exponent for Dflow}, for any $\delta>0$ the cocycle $D$ has a Lyapunov exponent equal to $\lambda_A+\log(1+\delta)\mu(V)$, so we must have $r_\delta\geq2$. Moreover, for all $\delta>0$, \begin{align*} \|D^{\pm 1}(x)-A^{\pm 1}(x)\|&\leq \|C_2^{\pm 1}(x)-\textrm{Id}\|\cdot\|A^{\pm 1}(x)\|\\ &\leq 2M \quad\textrm{for}\, x\in T(V),\\ \|D^{\pm 1}(x)-A^{\pm 1}(x)\|&\leq \|C_1^{\pm 1}(x)-A^{\pm 1}(x)\|\\ &\leq MK \quad\textrm{for}\, x\in V,\\ D(x)&=A(x)\quad \textrm{for}\, x\notin V\cup T(V), \end{align*} which implies $$d_p(A,D)\leq \Delta_p(A,D)\leq 2(2+K)M\mu(V)^{1/p}.$$ For any given $\epsilon>0$ we can consider $V$ such that $2(2+K)M\mu(V)^{1/p}<\epsilon$, and we just have to take $B=D$. \end{proof}

In the next lemma \cite[Lemma 4.3]{AC0} we see that, under a small perturbation, we can slightly shift the Lyapunov spectrum:

\begin{lemma}\label{lit change} Assume that $A\in \mathcal{S}_{\!I\!C}$ has Lyapunov exponents $\lambda_{A,1}>\ldots>\lambda_{A,r}$ with multiplicities $m_1,\ldots,m_r$. Then, for any $\epsilon, \delta\in(0,1)$ and Borel $U\subset X$ with $\mu(U)>0$, there exist $\epsilon_1\in(0,1)$ and $B\in \mathcal{S}_{\!I\!C}$, with $d_p(A,B)<\epsilon$, $1\leq p\leq \infty$, such that $B(x)=A(x)$, for $x\in X\setminus U$, and $B$ has Lyapunov exponents $\lambda_{A,1}+\epsilon_1\log(1+\delta)>\ldots>\lambda_{A,r}+\epsilon_1\log(1+\delta)$, with multiplicities $m_1,\ldots,m_r$. \end{lemma}

We are now in a position to present the proof of Theorem~\ref{simple}:

\begin{proof}(of Theorem~\ref{simple}) Let $\{E_1(x),\ldots,E_r(x)\}$ be the Oseledets splitting of $\mathbb{R}^d$ generated by $A\in\mathcal{S}_{\!I\!C}$ and let $\{A_1(x),\ldots,A_r(x)\}$ be the corresponding decomposition of $A(x)=\bigoplus_{i=1}^rA_i(x)$. The idea is to apply Lemma~\ref{split spec} and Lemma~\ref{lit change} (if necessary) on the sub-bundles $E_i$. We stress that the proofs of Lemmas~\ref{split spec} and~\ref{lit change} allow us to perturb the original cocycle on a set of small $\mu$-measure of our choice, and can be applied to each of the blocks $A_i$ separately, without influencing the other blocks. The procedure is to check whether ${\rm dim}(E_1(x))\geq2$ and, in that case, apply Lemma~\ref{split spec} to split this sub-bundle by a perturbation $B_1'$ of $A_1$ with at least two different Lyapunov exponents and, if necessary, combine it with Lemma~\ref{lit change} to get $B_1\in\mathcal{S}_{\!I\!C}$, with $d_p(A,B_1)<\epsilon/d$ and at least $r+1$ distinct Lyapunov exponents in its spectrum.
We continue this procedure and after at most $d-1$ steps we obtain $B\in\mathcal{S}_{\!I\!C}$ with $d_p(A,B)<\epsilon$ and with simple spectrum.\end{proof}

\section{The continuous-time case}\label{cc}

\subsection{Definitions and statement of the results}\label{cont results}

\subsubsection{Linear differential systems and Lyapunov exponents}\label{LDS}

Let $X$ be a compact Hausdorff space, $\mu$ a Borel regular measure and $\varphi^{t}:X\rightarrow{X}$ a one-parameter family of continuous maps for which $\mu$ is $\varphi^{t}$-invariant. A cocycle based on $\varphi^{t}$ is defined by a flow $\Phi^{t}(x)$, differentiable in the time parameter $t\in{\mathbb{R}}$, measurable in the space parameter $x\in{X}$, and taking values in $\GL(d,\mathbb{R})$. Together they form the linear skew-product flow: $$ \begin{array}{cccc} \Upsilon^{t}: & X\times{\mathbb{R}^{d}} & \longrightarrow & X\times{\mathbb{R}^{d}} \\ & (x,v) & \longmapsto & (\varphi^{t}(x),\Phi^{t}(x){v}) \end{array} $$ The flow $\Phi^{t}$ satisfies the so-called \emph{cocycle identity}: $\Phi^{t+s}(x)=\Phi^{s}(\varphi^{t}(x)){\Phi^{t}(x)}$, for all $t,s\in{\mathbb{R}}$ and $x\in{X}$. If we define a map $A\colon X\rightarrow{{\mathfrak {gl}}(d,\mathbb{R})}$ at a point $x\in{X}$ by: $$A(x)=\frac{d}{ds}\Phi^{s}(x)|_{s=0}$$ and along the orbit $\varphi^{t}(x)$ by: \begin{equation}\label{lvi} A(\varphi^{t}(x))=\frac{d}{ds}\Phi^{s}(x)|_{s=t} {[\Phi^{t}(x)]^{-1}}, \end{equation} then $\Phi^{t}(x)$ will be the solution of the linear variational equation (or equation of first variations): \begin{equation}\label{lve} \frac{d}{ds}{u(x,s)|_{s=t}}=A(\varphi^{t}(x)) u(x,t), \end{equation} and $\Phi^{t}(x)$ is also called the \emph{fundamental matrix} or the \emph{matriciant} of the system (\ref{lve}). Given a cocycle $\Phi^{t}$ we can induce the associated \emph{infinitesimal generator} $A$ by using~\eqref{lvi}, and given $A$ we can recover the cocycle by solving the linear variational equation~\eqref{lve}, from which we get $\Phi_{A}^{t}$. In view of this, we sometimes refer to $A$ as a \emph{linear differential system}. If, in addition, $A$ is continuous with respect to the space variable $x$, we call $A$ a \emph{continuous linear differential system}. Several types of linear differential systems are of interest: the ones with invertible matriciants, for all $x\in X$ and $t\in \mathbb{R}$, denoted by $\mathfrak{gl}(d,\mathbb{R})$; the \emph{traceless} ones, with volume-preserving matriciant for all $x\in X$ and $t\in \mathbb{R}$, which we denote by $\mathfrak{sl}(d,\mathbb{R})$; and also the systems with matriciant evolving in the symplectic group $\Sp(2d,\mathbb{R})$, denoted by $\mathfrak{sp}(2d,\mathbb{R})$. \medskip \begin{example} An illustrative example is the linear differential system associated to flows $X^t$ with $\|X(x)\|\not=0$, where $X(x)=\frac{d}{dt}X^t(x)|_{t=0}$ and $x\in X$. In this case we have $\Phi^{t}(x)\in{\GL(d,\mathbb{R})}$, and so the infinitesimal generator, given by relation (\ref{lvi}), belongs to $\mathfrak{gl}(d,\mathbb{R})$. Another example is the linear differential system associated to incompressible flows $X^t$ with $\|X(x)\|=1$ for any $x\in X$. In this case we have $\Phi^{t}(x)\in{\SL(d,\mathbb{R})}$, and so the infinitesimal generator belongs to $\mathfrak{sl}(d,\mathbb{R})$.
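To make the correspondence between generators and matriciants concrete, here is a minimal numerical sketch (ours; the generator below is hypothetical): we integrate the variational equation over the translation flow on the circle and check the cocycle identity.

\begin{verbatim}
# Minimal sketch (ours): recover the matriciant Phi_A^t by integrating
# du/dt = A(phi^t(x)) u with u(0) = Id, and check the cocycle identity
# Phi^{t+s}(x) = Phi^s(phi^t(x)) Phi^t(x).
import numpy as np
from scipy.integrate import solve_ivp

def A_gen(theta):                    # hypothetical generator over the
    return np.array([[0.0, 1.0],     # circle flow phi^t(th) = th + t
                     [-np.cos(theta), 0.0]])

def matriciant(theta0, t):
    rhs = lambda s, u: (A_gen(theta0 + s) @ u.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

t, s, th = 0.7, 1.3, 0.2
lhs = matriciant(th, t + s)
composed = matriciant(th + t, s) @ matriciant(th, t)
print(np.allclose(lhs, composed, atol=1e-6))       # True
\end{verbatim}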
\end{example} \medskip Consider the subset $\mathscr{G}_{\!I\!C}$ of maps $A\colon X\rightarrow\mathfrak{gl}(d,\mathbb{R})$ belonging to $L^1(\mu)$, that is: $$\int_X \|A(x)\|\,d\mu<\infty.$$ For such infinitesimal generators there is a unique, up to indistinguishability, linear differential system $\Phi_A^t$ satisfying, for $\mu$-a.e. $x$, \begin{equation}\label{solu}\Phi_A^t(x)=\textrm{Id}+\int_0^t A(\varphi^s(x))\Phi_A^s(x)\,ds.\end{equation} Under these conditions, the time-one solution satisfies the \emph{integrability condition} \begin{equation*}\label{IC} \int_X\log^+\|\Phi_A^{\pm1}(x)\|\,d\mu<\infty, \end{equation*} and, consequently, Oseledets' theorem guarantees that for $\mu$-a.e. $x\in X$ there exists a $\Phi_{A}^{t}$-invariant splitting, called \emph{Oseledets' splitting}, of the fiber $\mathbb{R}^{d}_{x}=E^{1}(x)\oplus \ldots \oplus E^{k(x)}(x)$ and real numbers, called \emph{Lyapunov exponents}, $\tilde{\lambda}_{1}(x)>\ldots>\tilde{\lambda}_{k(x)}(x)$, with $k(x)\leq d$, such that: \begin{equation*}\label{limit} \underset{t\rightarrow{\pm{\infty}}}{\lim}\frac{1}{t}\log{\|\Phi_{A}^{t}(x) v^{i}\|={\tilde\lambda}_{i}(x)}, \end{equation*} for any $v^{i}\in{E^{i}(x)\setminus\{\vec{0}\}}$ and $i=1,\ldots,k(x)$. If we count the exponents with multiplicities, then we have $\lambda_{1}(x)\geq \lambda_{2}(x)\geq\ldots\geq\lambda_{d}(x)$. Moreover, given any of these subspaces $E^{i}$ and $E^{j}$, the angle between them along the orbit behaves subexponentially, meaning that \begin{equation*}\label{angle} \lim_{t\rightarrow{\pm{\infty}}}\frac{1}{t}\log\sin(\measuredangle(E^{i}({\varphi^{t}(x)}),E^{j}({\varphi^{t}(x)})))=0. \end{equation*} If the flow $\varphi^{t}$ is ergodic, then the Lyapunov exponents and the dimensions of the associated subbundles are $\mu$-a.e. constant. For these results on linear differential systems see \cite{A} (in particular, Example 3.4.15). See also~\cite{JPS}. As before, we say that $A\in\mathscr{G}_{\!I\!C}$ has \emph{one-point (Lyapunov) spectrum} if all Lyapunov exponents are equal. If, moreover, the linear differential system $A$ takes values in $\mathfrak{sl}(d,\mathbb R)$, then $A$ has one-point spectrum if and only if all Lyapunov exponents are zero. On the other hand, we say that $A\in\mathscr{G}_{\!I\!C}$ has \emph{simple (Lyapunov) spectrum} if all Lyapunov exponents are different.

\subsubsection{Topologies on linear differential systems}\label{top}

Consider the set $\mathscr{G}$ of the measurable maps $A:X\to \mathfrak{gl}(d,\mathbb R)$, $d \geq2$, endowed with its Borel $\sigma$-algebra. For $A,B\in\mathscr{G}$ and $1\leq p\leq\infty$ set \begin{equation*} \|A\|_p:=\left\{\begin{array}{lll} \Big(\displaystyle\int_X \|A(x)\|^p d\mu\Big)^{1/p}, & &\textrm{if}\, 1\leq p < \infty \\ \displaystyle\esssup_{x\in X}\|A(x)\|, & & \textrm{if}\, p=\infty \\ \end{array} \right. \end{equation*} and \begin{equation}\label{metric} d_p(A,B)=\frac{\|A-B\|_p}{1+\|A-B\|_p},\end{equation} where $d_p(A,B)=1$ if $\|A-B\|_p=\infty$. Note that $A(x)\in \mathfrak{gl}(d,\mathbb R)$ does not need to be invertible. As in the discrete-time setting, the equality \eqref{metric} defines a metric on the space of infinitesimal generators, which is complete with respect to this metric. We refer to the metric/norm/topology induced by \eqref{metric} as the \emph{$L^p$ infinitesimal generator metric/norm/topology}.
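A minimal computational sketch (ours; the sampler and the two generators are hypothetical) of the distance \eqref{metric}, with the $L^p$ norm estimated by Monte Carlo integration over $\mu$: note how a perturbation supported on a set of small $\mu$-measure is $d_p$-small even when it is pointwise non-negligible.

\begin{verbatim}
# Sketch (ours) of d_p(A,B) = ||A-B||_p / (1 + ||A-B||_p), with the
# L^p norm estimated by Monte Carlo sampling of mu.
import numpy as np

def d_p(A, B, sample_mu, p=2, n=200_000):
    xs = sample_mu(n)                      # points distributed as mu
    diffs = [np.linalg.norm(A(x) - B(x), 2) for x in xs]
    lp = np.mean(np.power(diffs, p)) ** (1.0 / p)
    return lp / (1.0 + lp)

rng = np.random.default_rng(1)
A = lambda x: np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda x: A(x) + 5.0 * np.eye(2) * (x < 1e-4)  # big bump, tiny set
print(d_p(A, B, lambda n: rng.random(n)))          # small (~0.05)
\end{verbatim}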
\begin{remark}\label{metric rel2} It follows from the definition of the metric and from the H\"older inequality that, for all $A, B\in\mathscr{G}$ and $1\leq p\leq q\leq\infty$, we have $d_p(A,B)\leq d_q(A,B)$. \end{remark}

\begin{remark}\label{small dist implies ic} If $A\in\mathscr{G}_{\!I\!C}$ and $B\in\mathscr{G}$ with $d_p(A,B) < 1$, $1\leq p\leq\infty$, then $B\in\mathscr{G}_{\!I\!C}$; see~\cite{AC0}. \end{remark}

\subsubsection{Families of linear differential systems}

As we did in the discrete case, we are interested in elements $A$ taking values in specific subalgebras of $\mathfrak{gl}(d,\mathbb{R})$. In the greatest generality we consider subalgebras that satisfy an accessibility condition:

\begin{definition} We call a non-empty closed subalgebra $\mathscr{T}\subset \mathfrak{gl}(d,\mathbb R)$ \textbf{accessible} if its associated Lie subgroup acts transitively in the projective space $\mathbb RP^{d-1}$. \end{definition}

\begin{example} The subalgebras $\mathfrak{gl}(d,\mathbb R)$, $\mathfrak{sl}(d,\mathbb R)$, $\mathfrak{sp}(2d,\mathbb R)$ are accessible. \end{example}

\begin{lemma}\label{rot2} Let $\mathscr{T}$ be an accessible subalgebra of $\mathfrak{gl}(d,\mathbb R)$. Then, there exists $K>0$ such that for all $u,v\in\mathbb RP^{d-1}$ there is $\{\mathfrak{R}_{u,v}(t)\}_{t\in[0,1]}\subset \mathscr{T}$, with $\|\mathfrak{R}_{u,v}(t)\|\leq K$, such that $\Phi_{\mathfrak{R}_{u,v}}^1 u=v$, where $\Phi^t_{\mathfrak{R}_{u,v}}$ is the solution of the linear variational equation $\dot{u}(t)=\mathfrak{R}_{u,v}(t)\cdot u(t)$. \end{lemma}

\begin{proof} The proof is analogous to the one in Lemma~\ref{rot}. In order to comply with the continuous-time formalism we just have to consider a smooth isotopy on $\mathscr{T}$ from the identity to the rotation $R_{u,v}$ (which sends the direction $u$ into the direction $v$), given by $\zeta(t)$, with $\zeta(t)=\textrm{Id}$ for $t\leq 0$ and $\zeta(t)=R_{u,v}$ for $t\geq 1$. We consider the linear variational equation $$\dot{u}(t)=\left[\frac{d}{dt}\zeta(t)\cdot \zeta(t)^{-1}\right]\cdot u(t)$$ with initial condition $u(0)=\textrm{Id}$ and unique solution equal to $\zeta(t)$. Define $\mathfrak{R}_{u,v}(t)=\frac{d}{dt}\zeta(t)\cdot \zeta(t)^{-1}$. Clearly, $\mathfrak{R}_{u,v}(t)$ is bounded. Moreover, the solution of $\dot{u}(t)=\mathfrak{R}_{u,v}(t)\cdot u(t)$, denoted by $\Phi^t_{\mathfrak{R}_{u,v}}$, is such that $$\Phi^1_{\mathfrak{R}_{u,v}} u=\zeta(1) u=v.$$ \end{proof}

\begin{definition} We say that a closed Lie subalgebra $\mathscr{S}\subseteq\mathfrak{gl}(d,\mathbb R)$ is \textbf{saddle-conservative} if its associated Lie subgroup is saddle-conservative in the sense of Definition~\ref{SC}. \end{definition}

\begin{example} Analogously to the discrete-time case, the Lie algebras $\mathfrak{gl}(d,\mathbb R)$, $\mathfrak{sl}(d,\mathbb R)$, $\mathfrak{sp}(2d,\mathbb R)$ display the saddle-conservative property. The orthogonal Lie algebra and the special orthogonal Lie algebra do not display the saddle-conservative property. \end{example} \medskip

Denote by $\mathscr{T}_{\!I\!C}\subset \mathscr{G}_{\!I\!C}$ the maps $A\colon X\rightarrow \mathscr{T}\subset\mathfrak{gl}(d,\mathbb R)$ where $\mathscr{T}$ is an accessible subalgebra. Denote by $\mathscr{S}_{\!I\!C}\subset\mathscr{T}_{\!I\!C}$ the maps $A\colon X\rightarrow \mathscr{S}\subset\mathscr{T}$ where $\mathscr{S}$ is a saddle-conservative accessible subalgebra.
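The two perturbation primitives just introduced can be visualized numerically. The following sketch (ours, in $d=2$, with a hypothetical smooth step) builds the generator $\mathfrak{R}_{u,v}(t)=\dot\zeta(t)\zeta(t)^{-1}$ of a rotation isotopy, as in Lemma~\ref{rot2}, and the traceless generator of a saddle-type stretch, as in the saddle-conservative property; the time-one solutions send the direction $u$ to $v$ and multiply $e$ by $(1+\delta)$, respectively.

\begin{verbatim}
# Sketch (ours): generators obtained from isotopies.
import numpy as np
from scipy.integrate import solve_ivp

s  = lambda t: 3*t**2 - 2*t**3            # smooth step: s(0)=0, s(1)=1
ds = lambda t: 6*t - 6*t**2
J  = np.array([[0.0, -1.0], [1.0, 0.0]])  # infinitesimal rotation

# (a) Accessibility: rotate u = e_1 onto v = e_2 in time one.
theta = np.pi / 2
rot = solve_ivp(lambda t, w: ds(t)*theta*(J @ w), (0, 1),
                np.array([1.0, 0.0]), rtol=1e-10, atol=1e-12)
print(rot.y[:, -1])                       # ~ [0, 1] = v

# (b) Saddle-conservative stretch: multiply e_1 by (1+delta), det = 1.
delta = 0.25
a = lambda t: delta*ds(t) / (1 + delta*s(t))   # = d/dt log(1+eta(t))
H = lambda t: np.array([[a(t), 0.0], [0.0, -a(t)]])  # traceless
st = solve_ivp(lambda t, w: H(t) @ w, (0, 1),
               np.array([1.0, 0.0]), rtol=1e-10, atol=1e-12)
print(st.y[:, -1])                        # ~ [1.25, 0]
\end{verbatim}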
\subsubsection{Conservative perturbations}

Keeping the same notation as before, we recall the Os\-tro\-grad\-sky-Jacobi-Liouville formula: \begin{equation}\label{OJL} \exp\left({\int_{0}^{t}\text{Tr}\,A(\varphi^{s}(x))\,ds}\right)=\det \Phi^{t}_A(x), \end{equation} where $\text{Tr}(A)$ denotes the trace of the matrix $A$. Therefore, we may speak about conservative perturbations of systems $A$ evolving in $\mathfrak{gl}(d,\mathbb{R})$ along the orbit $\varphi^{t}(x)$ as $A+H$, where $H(\varphi^{t}(x))\in \mathfrak{sl}(d,\mathbb{R})$. Denote by $\Phi_A^t$ the solution of (\ref{lve}) and by $\Phi_{A+H}^t$ the solution of the perturbed system: \begin{equation*}\label{lve3} \frac{d}{ds}{u(x,s)|_{s=t}}=[A(\varphi^{t}(x))+H(\varphi^{t}(x))]\cdot u(x,t). \end{equation*} By a direct application of formula (\ref{OJL}) we obtain \begin{eqnarray*} \det(\Phi^t_{A+H}(x))&=&\exp\left({\int_{0}^{t}\text{Tr}\,A(\varphi^{s}(x))+\text{Tr}\,H(\varphi^{s}(x))\,ds}\right)\\ &=&\exp\left({\int_{0}^{t}\text{Tr}\,A(\varphi^{s}(x))\,ds}\right)\\ &=&\det(\Phi^{t}_A(x)), \end{eqnarray*} which allows us to conclude that the perturbation leaves the volume form invariant.

\subsubsection{Statement of the results}

We intend to obtain the continuous-time version of the discrete-time results treated in the first part of this paper. We start by establishing the existence of an $L^p$-residual subset of the accessible linear differential systems with one-point spectrum:

\begin{maintheorem}\label{ops2} There exists an $L^p$-residual subset $\mathcal R\subset \mathscr{T}_{\!I\!C}$, $1\leq p < \infty$, such that any $B\in \mathcal{R}$ has one-point spectrum. \end{maintheorem}

However, this residual subset contains no $L^p$-open subsets of the saddle-conservative accessible linear differential systems, since simple spectrum is a dense property:

\begin{maintheorem}\label{simple2} For any $A\in \mathscr{S}_{\!I\!C}$, $1\leq p < \infty$, over an ergodic flow and $\epsilon>0$, there exists $B\in \mathscr{S}_{\!I\!C}$ with $d_p(A,B)<\epsilon$ such that $B$ has simple Lyapunov spectrum. \end{maintheorem}

\subsection{The Arbieto and Bochi theorem for linear differential systems}

Let us consider the following function, where $\mathscr{L}$ is one of the subsets of linear differential systems $\mathscr{T}_{\!I\!C}$, $\mathscr{S}_{\!I\!C}$ or $\mathscr{G}_{\!I\!C}$: \begin{equation*}\label{entropy} \begin{array}{cccc} \Lambda_{k}\colon &\mathscr{L} & \longrightarrow & [0,\infty) \\ & A & \longmapsto & \int_{X}\lambda_{1}(\wedge^{k}(A),x)\, d\mu. \end{array} \end{equation*} With this function we compute the integrated \emph{largest} Lyapunov exponent of the $k^{th}$ exterior power operator. Let us denote $\hat\lambda_{k}(A,x)=\lambda_{1}(A,x)+\ldots+\lambda_{k}(A,x)$. By using Lemma~\ref{arnauld} we conclude that for $k=1,\ldots,d-1$ we have $\hat\lambda_{k}(A,x)=\lambda_{1}(\wedge^{k}(A),x)$, and therefore we obtain $\Lambda_{k}(A)=\Lambda_{1}(\wedge^{k}(A))$. In order to prove that $\Lambda_k$ is an upper semicontinuous function when we endow $\mathscr{L}$ with the $L^p$ infinitesimal generator topology (Proposition \ref{upper sc}), we give a preliminary result which allows us to control the difference between solutions in terms of the closeness of the respective infinitesimal generators. In what follows we use the same notation for the $L^1$-norm of the infinitesimal generators introduced in \S\ref{top} and for the usual $L^1$-norm $\|f\|_1$ of functions $f:X\to\mathbb R$, given by $\int_X |f(x)|\,d\mu$.
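Before proceeding, a quick numerical check (ours, with a hypothetical generator) of formula \eqref{OJL} stated above: the determinant of the matriciant equals the exponential of the integrated trace, so adding a traceless $H$ does not change it.

\begin{verbatim}
# Check (ours) of det Phi_A^t(x) = exp( int_0^t Tr A(phi^s(x)) ds ).
import numpy as np
from scipy.integrate import solve_ivp, quad

def A_gen(s):                               # hypothetical generator
    return np.array([[np.sin(s), 1.0], [0.0, 0.2]])

t = 2.0
sol = solve_ivp(lambda s, u: (A_gen(s) @ u.reshape(2, 2)).ravel(),
                (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
det_phi = np.linalg.det(sol.y[:, -1].reshape(2, 2))
tr_int = quad(lambda s: np.trace(A_gen(s)), 0.0, t)[0]
print(np.isclose(det_phi, np.exp(tr_int)))  # True
\end{verbatim}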
\begin{lemma}\label{cont solu} For $A,B\in\mathscr{G}_{\!I\!C}$ we have $$\left\|\log^+\|\Phi_A^t(x)\|-\log^+\|\Phi_B^t(x)\|\right\|_1\leq t\|A-B\|_1,\,\,\text{for all}\,\,\, t\in\mathbb R^+.$$ \end{lemma}

\begin{proof} From \eqref{solu}, Gronwall's lemma (see, e.g., \cite{A}) implies that, with $C=A,B$, for $\mu$-a.e. $x\in X$ and for all $t\in\mathbb R^+$ we have $$\log^+\|\Phi_{C}^t(x)\|\leq\int_{0}^t\|C(\varphi^s(x))\|\,ds,$$ and, consequently, \begin{align*} \left|\log^+\|\Phi_A^t(x)\|-\log^+\|\Phi_B^t(x)\|\right| &\leq \left|\int_0^t \|A(\varphi^s(x))\|-\|B(\varphi^s(x))\|\,ds\right|\\ &\leq \int_0^t \|A(\varphi^s(x))-B(\varphi^s(x))\|\,ds=:\alpha_t(x). \end{align*} By \cite[Lemma 2.2.5]{A}, $\alpha_t(x)\in L^1(X)$, and by the Tonelli-Fubini theorem, the change of variables theorem and the $\varphi^s$-invariance of $\mu$, we have for all $t\in\mathbb R^+$ \begin{align*}\label{pseudocont} \left\|\log^+\|\Phi_A^t(x)\|-\log^+\|\Phi_B^t(x)\|\right\|_1 &\leq\int_X \int_0^t \|A(\varphi^s(x))-B(\varphi^s(x))\|\,ds\,d\mu\\ &\leq \int_0^t \|A-B\|_1\,ds\\ &= t\|A-B\|_1. \end{align*} \end{proof}

Recall that, for any $A\in\mathscr{G}_{\!I\!C}$, we have \begin{equation}\label{exp via inf} \Lambda_k(A)=\underset{t\rightarrow{+{\infty}}}{\lim}\frac{1}{t}\int_X\log\|\wedge^k(\Phi_{A}^{t}(x))\|\,d\mu=\underset{n\in\mathbb{N}}{\inf}\,\frac{1}{n}\int_X\log\|\wedge^k(\Phi_{A}^{n}(x))\|\,d\mu. \end{equation}

\begin{proposition}\label{upper sc} For each $k=1,\ldots, d$, the function $\Lambda_k$ is upper semicontinuous when we endow $\mathscr{L}$ with the $L^p$ infinitesimal generator topology, $1\leq p\leq\infty$. Moreover, under these conditions $\Lambda_d$ is a continuous function. \end{proposition}

\begin{proof} Let $A\in\mathscr{G}_{\!I\!C}$, $k\in\{1,\ldots,d\}$ and $\epsilon>0$ be given. We start by assuming that \begin{equation}\label{hatlambda geq 0} \hat\lambda_k(A,x)\geq 0,\,\,\text{for}\,\, \mu\text{-a.e.} \,\,x\in X. \end{equation} By \eqref{exp via inf}, \eqref{hatlambda geq 0} and the subadditive ergodic theorem, it is possible to find $N\in\mathbb{N}$ large enough in order to have \begin{equation}\label{LBA} \frac{1}{N}\int_X\log^+\|\wedge^k(\Phi_{A}^{N}(x))\|\,d\mu<\Lambda_k(A)+\frac\epsilon2. \end{equation} We will see that we can find $\delta>0$ such that for any $B$ satisfying $d_p(A,B)<\delta$ we have that $B\in \mathscr{G}_{\!I\!C}$ (this follows from Remarks \ref{metric rel2} and \ref{small dist implies ic}) and $\Lambda_k(B)< \Lambda_k(A)+\epsilon$. Indeed, since $\|\wedge^k\Phi_{A,B}^N(x)\|\leq \|\Phi_{A,B}^N(x)\|^k$, from \eqref{exp via inf}, \eqref{LBA} and Lemma \ref{cont solu} we get \begin{align*} \Lambda_k(B) &\leq \frac{1}{N}\int_X\log^+\|\wedge^k(\Phi_{B}^{N}(x))\|\,d\mu\\ &\leq \frac{1}{N}\int_X\log^+\|\wedge^k(\Phi_{A}^{N}(x))\|\,d\mu+ \frac{1}{N}\int_X\left|\log^+\|\wedge^k(\Phi_{B}^{N}(x))\|-\log^+\|\wedge^k(\Phi_{A}^{N}(x))\|\right|\,d\mu\\ &\leq\Lambda_k(A)+\frac\epsilon2+\frac{k}N N\|A-B\|_1. \end{align*} If $\delta<\epsilon/({2k+\epsilon})$ then $d_p(A,B)<\delta$ implies $\|A-B\|_1\leq\|A-B\|_p<\epsilon/({2k})$, and the result follows. Let us prove now the general case. Again, let $A\in\mathscr{G}_{\!I\!C}$, $k\in\{1,\ldots,d\}$ and $\epsilon>0$ be given. For $\alpha>0$ we define the $\varphi^t$-invariant set $L_\alpha=\{x\in X: \hat\lambda_k(A,x)<-\alpha\}$.
Consider $\alpha$ large enough such that \begin{equation}\label{gen case 1} k\int_{L_\alpha}\log^+\|\Phi_A^1(x)\|\,d\mu < \frac\epsilon8\,\,\,\, \text{and} \,\,\,\, \int_{L_\alpha} \hat\lambda_k(A,x)\,d\mu>-\frac\epsilon8. \end{equation} Set $\beta\geq \alpha>0$, denote by $\textrm{Id}$ the identity $d\times d$ matrix and define $\tilde A(x) = A(x)+\beta\,\textrm{Id}$, $\tilde B(x) = B(x)+\beta\,\textrm{Id}$. Then $\hat\lambda_k(\tilde A,x)=\hat\lambda_k(A,x)+k\beta$, which is greater than or equal to zero for $x\in L_\alpha^C$. Moreover, if $d_p(A,B)$ is sufficiently small then so is $d_p(\tilde A,\tilde B)$, and by the previous case we have \begin{equation*}\int_{L_\alpha^C}\hat\lambda_k(\tilde B,x)\,d\mu\leq \int_{L_\alpha^C}\hat\lambda_k(\tilde A,x)\,d\mu +\frac\epsilon2, \end{equation*} which implies \begin{equation}\label{gen case 2}\int_{L_\alpha^C}\hat\lambda_k(B,x)\,d\mu\leq \int_{L_\alpha^C}\hat\lambda_k(A,x)\,d\mu +\frac\epsilon2. \end{equation} From Lemma \ref{cont solu}, if $d_p(A,B)$ is sufficiently small then $$\left\|\log^+\|\Phi_A^1(x)\|-\log^+\|\Phi_B^1(x)\|\right\|_1\leq \frac{\epsilon}{4k},$$ which, together with \eqref{gen case 1}, implies \begin{eqnarray} \int_{L_\alpha}\hat\lambda_k(B,x)\,d\mu&=& \inf_n\frac1n\int_{L_\alpha}\log^+\|\wedge^k\Phi_B^n(x)\|\,d\mu\nonumber\\ &\leq& k\int_{L_\alpha}\log^+\|\Phi_B^1(x)\|\,d\mu\nonumber\\ &\leq&k\int_{L_\alpha}\log^+\|\Phi_A^1(x)\|\,d\mu+k\int_{L_\alpha}\left|\log^+\|\Phi_A^1(x)\|-\log^+\|\Phi_B^1(x)\|\right|\,d\mu\nonumber\\ &\leq&\int_{L_\alpha} \hat\lambda_k(A,x)\,d\mu+\frac\epsilon2\label{gen case 4}. \end{eqnarray} The proof of the general case now follows from \eqref{gen case 2} and \eqref{gen case 4}. Finally, in order to prove the continuity of $\Lambda_d$ we just have to note that $$A\mapsto\tilde\Lambda_k(A):=\int_X\lambda_{d-k+1}(A,x)+\cdots+\lambda_{d}(A,x)\,d\mu=-\Lambda_k(-A)$$ is lower semicontinuous for each $k=1,\ldots,d$, so that $\Lambda_d=\tilde\Lambda_d$ is continuous. \end{proof}

\subsection{One-point spectrum is residual}\label{ops cont}

The proof of Theorem~\ref{ops2} is a straightforward application of the scheme described in \S\ref{BVrevisited} to prove Theorem~\ref{ops}. The only novelty is the perturbation toolbox which we develop in the sequel (Lemma~\ref{rot3cont}). We consider the perturbations within the \emph{continuous} linear differential systems because the estimates are more easily established. Once the perturbation framework is developed, the proof of Theorem~\ref{ops2} requires only a simple additional step.

\begin{proof}(of Theorem~\ref{ops2}) Let $A\in \mathscr{T}_{\!I\!C}$ be a continuity point, with respect to the $L^p$-topology, of the functions $\Lambda_k$, for all $k=1,\ldots,d$, defined in Proposition~\ref{upper sc}. \begin{enumerate} \item [Case 1:] $A$ is a continuous linear differential system. We proceed as in the proof of Theorem~\ref{ops}: we use the perturbation Lemma~\ref{rot3cont} to mix Oseledets directions, and so cause a decay of the Lyapunov exponents, and finally we use Proposition~\ref{upper sc} to complete the argument. \item [Case 2:] $A$ is not a continuous linear differential system. It follows from Lusin's theorem (see e.g.~\cite[\S 2 and \S 3]{Ru}) that the continuous linear differential systems, over flows on compact spaces $X$ and on manifolds like the Lie subgroups we are considering, are $L^p$-dense in the $L^p$ ones. \end{enumerate} Now, we take a sequence of continuous linear differential systems $A_n\in \mathscr{T}_{\!I\!C}$ converging to $A$ in the $L^p$-sense.
Since $A$ is a continuity point we must have $\underset{n\rightarrow \infty}{\lim} \Lambda_k(A_n)=\Lambda_k(A)$. As we did in Proposition~\ref{P3}, but this time in the flow setting, given $\epsilon_n\rightarrow 0$ and $\delta>0$, there exists $B_n\in \mathscr{T}_{\!I\!C}$, with $d_p(A_n,B_n)<\epsilon_n$, such that $$\Lambda_k(B_n)<\delta-J_k(A_n)+\Lambda_{k}(A_n),$$ where the jump is defined as in the discrete case by $$J_k(A_n)=\int_X\frac{\lambda_k(A_n,x)-\lambda_{k+1}(A_n,x)}{2}\,d\mu.$$ Taking limits we get: $$\underset{n\rightarrow \infty}{\lim}\Lambda_k(B_n)<\delta-\underset{n\rightarrow \infty}{\lim}J_k(A_n)+\Lambda_{k}(A).$$ Since $A$ is a continuity point of $\Lambda_k$, we obtain that $J_k(A_n)=0$ for all $k$ and all $n$ sufficiently large, i.e., $\lambda_k(A_n,x)=\lambda_{k+1}(A_n,x)$ for all $k$ and $\mu$-a.e. $x\in X$. Therefore, the linear differential system $A_n$ must have one-point spectrum for $\mu$-a.e. $x\in X$, and the same holds for $A$ because $\underset{n\rightarrow \infty}{\lim} \Lambda_k(A_n)=\Lambda_k(A)$. Once again we finish the proof by recalling that the set of continuity points of an upper semicontinuous function is a residual subset. \end{proof}

The next result is the basic perturbation tool which allows us to interchange Oseledets directions.

\begin{lemma}\label{rot3cont} Let there be given a continuous linear differential system $A$ evolving in a closed accessible Lie subalgebra $\mathscr{T}\subseteq\mathfrak{gl}(d,\mathbb R)$ and over a flow $\varphi^t\colon X\rightarrow X$, $\epsilon>0$, $1\leq p < \infty$ and a non-periodic point $x\in{X}$ (or periodic with period larger than $1$). There exists $r>0$ (depending on $\epsilon$) such that for all $\sigma\in(0,1)$, all $y\in B(x,\sigma r)$ (the ball transversal to $\varphi^t$ at $x$) and any continuous choice of a pair of vectors $u_y$ and $v_y$ in $ \mathbb{R}^{d}_{y}\setminus\{\vec0\}$: \begin{enumerate} \item there exists a continuous linear differential system $B\in \mathscr{T}$, with $d_p(A,B)<\epsilon$, such that $\Phi^{1}_{B}(y)u_y=\Phi^{1}_{A}(y) \mathbb{R}v_y$, where $\mathbb{R}v_y$ stands for the direction of the vector $v_y$; moreover, \item there exists a traceless system $H$, supported in the flowbox $\mathcal{F}:=\{\varphi^t(y)\colon t\in[0,1], y\in B(x, r)\}$, such that $\|H\|_p<\epsilon$, $B(y)=A(y)+H(y)$ for all $y\in B(x,\sigma r)$, and $B(z)=A(z)$ if $z\notin\mathcal{F}$. \end{enumerate}\end{lemma}

\begin{proof} We begin by taking $K:=\max_{z\in X}\|(\Phi_{A}^{t}(z))^{\pm 1}\|$ for $t\in[0,1]$. For a given small $r>0$ we take the closed ball centered at $x$ with radius $r$, transversal to the flow direction and denoted by $B(x,r)$. We fix $\sigma\in(0,1)$. Let $\eta\colon \mathbb{R} \rightarrow [0,1]$ be a $C^{\infty}$ function such that $\eta(t)=0$ for $t\leq 0$ and $\eta(t)=1$ for $t\geq 1$. Let also $\rho\colon \mathbb{R} \rightarrow [0,1]$ be a $C^{\infty}$ function such that $\rho(t)=0$ for $t\leq \sigma$ and $\rho(t)=1$ for $t\geq 1$. In what follows, for $y\in B(x,r)$ we are going to define a 1-parameter family of linear maps $\Psi^{t}(y)\colon \mathbb{R}^{d}_{y} \rightarrow \mathbb{R}^{d}_{\varphi^{t}(y)}$ for $t\in[0,1]$. For $t\in[0,1]$ we let $u_y^t=(1-\eta(t))u_y+\eta(t)v_y$ and, by the transitivity property, we choose a smooth family $\{\mathcal{R}^t_y\}_{t\in[0,1]}$ in the Lie subgroup associated to $\mathscr{T}$ such that $\mathcal{R}^t_y \,u_y=u_y^t$. Let $L>0$ be sufficiently large in order to get $\|\dot{\mathcal{R}}_y^{t}(\mathcal{R}^{t}_y)^{-1}\|<L$ for all $t\in[0,1]$ and $y\in B(x,r)$.
Finally, we normalize the volume by taking $\mathfrak{R}_y^t=\zeta(t,y)\mathcal{R}^t_y$ such that $\det(\mathfrak{R}_y^t)=1$ for all $t\in[0,1]$ and $y\in B(x,r)$. Now, we take $\kappa>0$ such that $\zeta(t,y)>\kappa$ and $\dot{\zeta}(t,y)=\frac{d\zeta (t,y)}{dt}<\kappa^{-1}$ for all $t\in[0,1]$ and $y\in B(x,r)$. Then, we consider the 1-parameter family of linear maps $\Psi^{t}(y)\colon \mathbb{R}^{d}_{y} \rightarrow \mathbb{R}^{d}_{\varphi^{t}(y)}$ where $\Psi^{t}(y)=\Phi_{A}^{t}(y) \mathfrak{R}^{t}_y$. In order to simplify the heavy notation we write $\mathfrak{R}^t=\mathfrak{R}^t_y$, $\mathcal{R}^t=\mathcal{R}^t_y$, $\Phi^t_A=\Phi^t_A(y)$, $\zeta=\zeta(t,y)$ and $\dot{\zeta}=\frac{d\zeta(t,y)}{dt}$. We take time derivatives and we obtain: \begin{eqnarray*} \dot{\Psi}^{t}(y)&=& \dot{\Phi}_{A}^{t}\mathfrak{R}^{t}+\Phi_{A}^{t}\dot{\mathfrak{R}}^t=A(\varphi^{t}(y))\Phi_{A}^{t}\mathfrak{R}^{t}+\Phi_{A}^{t}\dot{\zeta}{\mathcal{R}}^t+\Phi_{A}^{t}{\zeta}\dot{\mathcal{R}}^t=\\ &=& A(\varphi^{t}(y))\Psi^{t}(y)+[\Phi_{A}^{t}\dot{\zeta}\zeta^{-1}(\Phi^t_A)^{-1}+\Phi_{A}^{t}\zeta\dot{\mathcal{R}}^{t}(\Psi^{t}(y))^{-1}]\Psi^{t}(y)\\ &=& \left[A(\varphi^{t}(y))+H(\varphi^{t}(y))\right]\cdot \Psi^{t}(y). \end{eqnarray*} Hence, we define, for all $y\in B(x,r)$ and $t\in[0,1]$, the perturbation $H$ in the \emph{flowbox coordinates} $(t,y)$ by \begin{eqnarray*} H(\varphi^{t}(y))&=&\Phi_{A}^{t}\dot{\zeta}\zeta^{-1}(\Phi^t_A)^{-1}+\Phi_{A}^{t}\zeta\dot{\mathcal{R}}^{t}(\Psi^{t}(y))^{-1}\\ &=&\frac{\dot{\zeta}}{\zeta}\textrm{Id}+\Phi_{A}^{t}\zeta\dot{\mathcal{R}}^{t}(\Phi_{A}^{t} \mathfrak{R}^{t})^{-1}\\ &=&\frac{\dot{\zeta}}{\zeta}\textrm{Id}+\Phi_{A}^{t}\dot{\mathcal{R}}^{t}(\mathcal{R}^{t})^{-1}(\Phi_{A}^{t} )^{-1}. \end{eqnarray*} By Jacobi's formula on the derivative of the determinant we have \begin{eqnarray*} \frac{d(\det (\zeta\mathcal{R}^t))}{dt}&=&\text{Tr}\left(\text{adj} (\zeta\mathcal{R}^t)\frac{d(\zeta\mathcal{R}^t)}{dt}\right)=\text{Tr}\left(\det (\zeta\mathcal{R}^t)(\zeta\mathcal{R}^t)^{-1}\frac{d(\zeta\mathcal{R}^t)}{dt}\right)\\ &=&\text{Tr}(\zeta^{-1}(\mathcal{R}^t)^{-1}(\dot\zeta\mathcal{R}^t+\zeta\dot{\mathcal{R}}^t))=\text{Tr}\left(\frac{\dot{\zeta}}{\zeta}\textrm{Id}+(\mathcal{R}^t)^{-1}\dot{\mathcal{R}}^t\right)\\ &=&\text{Tr}\left(\frac{\dot{\zeta}}{\zeta}\textrm{Id}\right)+\text{Tr}[(\mathcal{R}^t)^{-1}\dot{\mathcal{R}}^t]. \end{eqnarray*} But we also have, for all $t\in[0,1]$ and $y\in B(x,r)$, $\det (\zeta\mathcal{R}^t)=1$ and so $$\text{Tr}\left(\frac{\dot{\zeta}}{\zeta}\textrm{Id}+(\mathcal{R}^t)^{-1}\dot{\mathcal{R}}^t\right)=\text{Tr}\left(\frac{\dot{\zeta}}{\zeta}\textrm{Id}+\dot{\mathcal{R}}^t (\mathcal{R}^t)^{-1}\right)=0.$$ Since the trace is invariant under any change of coordinates we obtain $\text{Tr}(H(\varphi^{t}(y)))=0$.
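As an aside, Jacobi's formula used above is easy to check numerically (our sketch, with a hypothetical matrix path $M(t)$):

\begin{verbatim}
# Numerical check (ours) of Jacobi's formula:
# d/dt det M(t) = Tr( adj(M(t)) dM/dt ).
import numpy as np

M  = lambda t: np.array([[1.0 + t, t**2], [np.sin(t), 2.0 - t]])
dM = lambda t: np.array([[1.0, 2*t], [np.cos(t), -1.0]])

t, h = 0.3, 1e-6
adj = np.linalg.det(M(t)) * np.linalg.inv(M(t))   # adjugate of M(t)
lhs = (np.linalg.det(M(t + h)) - np.linalg.det(M(t - h))) / (2*h)
print(np.isclose(lhs, np.trace(adj @ dM(t))))     # True
\end{verbatim}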
At this point, we consider the flowbox $\mathcal{F}:=\{\varphi^t(y)\colon t\in[0,1], y\in B(x,r)\}$ and we are able to define the continuous linear differential system \begin{equation}\label{B}B(z)= \left\{\begin{array}{ll} A(z), & \text{if } z\notin\mathcal{F} \\ A(z)+\left(1-\rho\left(\frac{\|x-y\|}{r}\right)\right)H(z), & \text{if } z=\varphi^t(y)\in\mathcal{F} \end{array}\right..\end{equation} In order to estimate $d_p(A,B)$ it suffices to compute the $L^p$ infinitesimal generator norm of $H$. For that we consider Rokhlin's theorem (see~\cite{R}) on the disintegration of the measure $\mu$ into a measure $\hat{\mu}$ on the transversal section and the length in the flow direction, say $\mu=\hat{\mu}\times dt$. Going back to the beginning of the proof, pick $r>0$ such that $$\hat{\mu}(B(x,r))<\left(\frac{\epsilon}{\kappa^{-2}+K^2L}\right)^p.$$ We have then \begin{eqnarray*} \|H\|_p&=&\left(\int_\mathcal{F} \|H(z)\|^p d\mu(z)\right)^{1/p}\\ &=&\left(\int_0^1\int_{B(x,r)} \|H(\varphi^t(y))\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &=&\left(\int_0^1\int_{B(x,r)} \left\|\frac{\dot{\zeta}(t,y)}{\zeta(t,y)}\textrm{Id}+\Phi_{A}^{t}(y)\dot{\mathcal{R}}^{t}(\mathcal{R}^{t})^{-1}(\Phi_{A}^{t}(y) )^{-1}\right\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &\overset{{\tiny Minkowski }}{\leq}&\left(\int_0^1\int_{B(x,r)} \left\|\frac{\dot{\zeta}(t,y)}{\zeta(t,y)}\textrm{Id}\right\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &~&+\left(\int_0^1\int_{B(x,r)}\left\|\Phi_{A}^{t}(y)\dot{\mathcal{R}}^{t}(\mathcal{R}^{t})^{-1}(\Phi_{A}^{t}(y) )^{-1}\right\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &\leq&(\kappa^{-2}+K^2L)\hat{\mu}(B(x,r))^{1/p}<\epsilon. \end{eqnarray*} Note that the perturbed system $B$ generates the linear flow $\Phi_{A+H}^{t}(y)$, which is the same as $\Psi^{t}$ by uniqueness of solutions with the same initial conditions; hence, given $u_y\in \mathbb{R}^d_{y}$, we have $$\Phi_{B}^{t}(y) u_y =\Psi^{t}(y) u_y=\Phi_{A}^{t}(y)\mathfrak{R}^t_y\,u_y=\zeta(t,y)\Phi_{A}^{t}(y)\mathcal{R}^t_y\,u_y=\zeta(t,y)\Phi_{A}^{t}(y) u^t_y.$$ To finish the proof, we take $t=1$ and obtain $$\Phi_{B}^{1}(y) u_y =\zeta(1,y)\Phi_{A}^{1}(y) u_y^1=\Phi_{A}^{1}(y) [\zeta(1,y)\,v_y].$$ \end{proof} \medskip

\begin{remark}\label{rot33cont} Using Lemma~\ref{rot3cont} we can also ``view'' the exchange of directions in $\mathbb{R}^{d}_{\varphi^{1}(y)}$ instead of in $\mathbb{R}^{d}_{y}$. Hence, for any two vectors $u_y^1$ and $v_y^1$ in $ \mathbb{R}^{d}_{\varphi^1(y)}\setminus\{\vec0\}$, defining $u^0_y:=\Phi_A^{-1}(\varphi^1(y))u_y^1$ and $v_y^0:=\Phi_A^{-1}(\varphi^1(y))v_y^1$, we get $\Phi^{1}_{A+H}(y)u_y^0=\Phi^{1}_{A}(y) \mathbb{R}v_y^0$, where $\mathbb{R}v_y^0$ stands for the direction of the vector $v_y^0$. Moreover, if the choice of the pair of vectors $u_y$ and $v_y$ in $ \mathbb{R}^{d}_{y}\setminus\{\vec0\}$ is only measurable, then the linear differential system $B\in \mathscr{T}$ satisfying (1) and (2) of Lemma~\ref{rot3cont} does not need to be continuous. \end{remark}

\subsection{Simple spectrum is dense}\label{simple cont}

In this section we obtain the continuous-time counterpart of \S\ref{proof simple}. For that we must develop a perturbation tool, in the language of differential equations, which plays the role of the cocycle $C_2$ in the proof of Lemma~\ref{split spec}.
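Recall that the role of $C_2$ was to multiply vectors along a fixed direction by $(1+\delta)$ on a set of measure $\mu(V)$, lifting one Lyapunov exponent by $\log(1+\delta)\mu(V)$ via Birkhoff's ergodic theorem. A toy numerical check of this mechanism (ours, over an irrational rotation of the circle):

\begin{verbatim}
# Toy check (ours): multiplying by (1+delta) on each visit to V raises
# the exponent by log(1+delta)*mu(V), by Birkhoff's ergodic theorem.
import numpy as np

alpha, delta = (np.sqrt(5.0) - 1.0) / 2.0, 0.5   # irrational rotation
V = (0.0, 0.2)                                   # mu(V) = 0.2
n, x, visits = 200_000, 0.1, 0
for _ in range(n):
    visits += (V[0] <= x < V[1])
    x = (x + alpha) % 1.0
print(visits / n * np.log(1.0 + delta))          # ~ 0.2*log(1.5) ~ 0.081
\end{verbatim}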
Such a tool is precisely what the next result provides.

\begin{lemma}\label{rot4} Let there be given a continuous linear differential system $A$ evolving in a closed accessible Lie subalgebra $\mathscr{S}\subseteq\mathfrak{gl}(d,\mathbb R)$ which displays the saddle-conservative property, over a flow $\varphi^t\colon X\rightarrow X$, $\epsilon>0$, $1\leq p < \infty$ and a non-periodic point $x\in{X}$ (or periodic with period larger than $1$). There exists $r>0$ such that for all $\sigma\in(0,1)$, all $y\in B(x,\sigma r)$, any $\delta>0$ and any continuous choice of directions $e_y\in \mathbb{R}^d_y$: \begin{enumerate} \item there exists a continuous linear differential system $B\in \mathscr{S}$, with $d_p(A,B)<\epsilon$, such that $\Phi^{1}_{B}(y)\, e_y=(1+\delta)\Phi^{1}_{A}(y)\, e_y$; moreover, \item there exists a traceless system $H$, supported in the flowbox $\mathcal{F}:=\{\varphi^t(y)\colon t\in[0,1], y\in B(x, r)\}$, such that $\|H\|_p<\epsilon$, $B(y)=A(y)+H(y)$ for all $y\in B(x,\sigma r)$, and $B(z)=A(z)$ if $z\notin\mathcal{F}$. \end{enumerate} \end{lemma}

\begin{proof} We will perform the continuous perturbations along time-one segments of the orbits of $y\in B(x,r)$, for some sufficiently thin flowbox. The construction is similar to the one in the proof of Lemma~\ref{rot3cont}. Take $K:=\max_{z\in X}\|(\Phi_{A}^t(z))^{\pm 1}\|$ for $t\in[0,1]$. Let $\mathcal{S}\subseteq \GL(d,\mathbb{R})$ be the saddle-conservative Lie subgroup associated to $\mathscr{S}$, $y\in B(x,r)$ and $e_y\in \mathbb{R}^d_y$ varying continuously with $y$. Fix $\delta>0$ and let $\eta\colon \mathbb{R} \rightarrow [0,\delta]$ be any $C^{\infty}$ function such that $\eta(t)=0$ for $t\leq 0$ and $\eta(t)=\delta$ for $t\geq 1$. Take a smooth family $\{\mathcal{E}^{t}_y\}_{t>0}\subset \mathcal{S}$ such that: \begin{enumerate} \item [(i)] $\mathcal{E}^{t}_y \in \SL(d,\mathbb R)$ and \item [(ii)] $\mathcal{E}_y^{t}\, e_y=(1+\eta(t))e_y$. \end{enumerate} Consider the $1$-parameter family of linear maps $\Psi^{t}(y)\colon \mathbb{R}^{d}_{y} \rightarrow \mathbb{R}^{d}_{\varphi^{t}(y)}$ where $\Psi^{t}(y)=\Phi_{A}^{t}(y) \mathcal{E}^{t}_y$. We take time derivatives and we obtain: \begin{eqnarray*} \dot{\Psi}^{t}(y)&=& \dot{\Phi}_{A}^{t}(y)\mathcal{E}_y^{t}+\Phi_{A}^{t}(y)\dot{\mathcal{E}}^t_y=A(\varphi^{t}(y))\Phi_{A}^{t}(y)\mathcal{E}^{t}_y+\Phi_{A}^{t}(y)\dot{\mathcal{E}}^t_y\\ &=& A(\varphi^{t}(y))\Phi_{A}^{t}(y)\mathcal{E}^{t}_y+\Phi_{A}^{t}(y)\dot{\mathcal{E}}^t_y (\mathcal{E}^{t}_y)^{-1} (\Phi_{A}^{t}(y))^{-1}\Phi_{A}^{t}(y)\mathcal{E}^{t}_y\\ &=& [A(\varphi^{t}(y))+\Phi_{A}^{t}(y)\dot{\mathcal{E}}_y^t (\mathcal{E}^{t}_y)^{-1} (\Phi_{A}^{t}(y))^{-1}]\Psi^{t}(y). \end{eqnarray*} The perturbation is then defined by: $$H(\varphi^t(y))=\Phi_{A}^{t}(y)\dot{\mathcal{E}}_y^t (\mathcal{E}^{t}_y)^{-1} (\Phi_{A}^{t}(y))^{-1}.$$ We can now define the continuous linear differential system $B$ as in \eqref{B}. It is now time to choose the thickness $r>0$. Let $L>0$ be such that $\|\dot{\mathcal{E}}_y^{t}(\mathcal{E}_y^{t})^{-1}\|\leq L$, for all $y\in B(x,r)$ and $t\in[0,1]$.
Finally, take $r>0$ such that: $$\hat{\mu}(B(x,r))<\left(\frac{\epsilon}{L K^2}\right)^p.$$ To estimate $d_p(A,B)\leq \|H\|_p$, we have \begin{eqnarray*} \|H\|_p&=&\left(\int_\mathcal{F} \|H(z)\|^p d\mu(z)\right)^{1/p}\\ &=&\left(\int_0^1\int_{B(x,r)} \|H(\varphi^t(y))\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &=&\left(\int_0^1\int_{B(x,r)} \left\|\Phi_{A}^{t}(y)\dot{\mathcal{E}}_y^t (\mathcal{E}_y^{t})^{-1} (\Phi_{A}^{t}(y))^{-1}\right\|^p d\hat{\mu}(y)dt\right)^{1/p}\\ &\leq&L K^2\hat{\mu}(B(x,r))^{1/p}<\epsilon. \end{eqnarray*} Finally, we observe that $$\Phi^{1}_{B}(y)\, e_y=\Psi^{1}(y)\,e_y=\Phi_{A}^{1}(y) \mathcal{E}^{1}_y \,e_y=\Phi_{A}^{1}(y) (1+\eta(1))\,e_y=(1+\delta)\Phi^{1}_{A}(y) e_y.$$ \end{proof}

The proof of Theorem~\ref{simple2}, which asserts the density of simple spectrum among continuous-time cocycles, follows by arguments similar to those in the proof of Theorem~\ref{simple}. Since Lemma~\ref{lit change}~\cite[Lemma 4.3]{AC0} holds trivially for continuous-time cocycles, we only need the flow version of Lemma~\ref{split spec}, which we write down for completeness:

\begin{lemma}\label{split spec flow} Assume that $A\in \mathscr{S}_{\!I\!C}$, over a flow $\varphi^t\colon X\rightarrow X$, has one-point spectrum and $d\geq2$. Then, for any small $\epsilon >0$ and $1\leq p < \infty$, there exists $B\in\mathscr{S}_{\!I\!C}$, with $\|A-B\|_p<\epsilon$, such that $B$ has at least two different Lyapunov exponents. \end{lemma}

\begin{proof} We will consider $A$ to be continuous, because we can always approximate, in the $L^p$-sense, the linear differential system $A$ by a continuous one. Consider a transversal section $\Sigma\subset X$ to the flow such that the time-one flowbox $V:=\varphi^{[0,1]}(\Sigma)$ satisfies $\mu(V)>0$ and $V\cap \varphi^1(V)=\varphi^1(\Sigma)$. Let $L_A:=\max_{x\in X}\|A(x)\|$ and $k(x):=\min\{ t>0: \varphi^{-t}(x)\in \varphi^1(\Sigma)\}$. For the sake of simplicity of presentation we assume that $\Sigma$ is a transversal closed ball $B(p,r)$. Fix a unit vector $e\in \mathbb RP^{d-1}$ and define the following \emph{vector field}, which is the normalized image under the cocycle associated to $A$ of the direction of the vector $e$, in the fiber corresponding to each $x\in X$: $$v(x):=\left\{\begin{array}{lll} e,&& \textrm{if}\, x\in \varphi^1(\Sigma)\\ \frac{\Phi_A^{k(x)}(\varphi^{-k(x)}(x))e}{\|\Phi_A^{k(x)}(\varphi^{-k(x)}(x))e\|},&& \textrm{otherwise} \end{array}\right., $$ and set $E(x)=\textrm{span}\{v(x)\}$. For $x\in \Sigma$ define the direction $q(x)\in\mathbb RP^{d-1}$ (the projectivization of $\mathbb{R}^d_x$) given by $$q(x)=\frac{\Phi^1_A(x)v(x)}{\|\Phi^1_A(x)v(x)\|}.$$ Let $H_{q(\cdot)}\colon X\rightarrow{{\mathfrak {sl}}(d,\mathbb{R})}$ be a linear differential system, supported in $V$ and constructed following the steps of Lemma \ref{rot3cont} and Remark~\ref{rot33cont}, such that, if $x\in \Sigma$ and $e\notin \langle q(x)\rangle$, we have: $$\Phi^1_{A+H_{q(x)}}(x)v(x)=\Phi^1_A(x)\mathbb{R}w(x),$$ where $w(x):=[\Phi^1_A(\varphi^1(x))]^{-1} e$. Clearly, $d_p(A,A+H_{q})$ can be made smaller than any given $\epsilon>0$ just by considering $V$ with small enough $\mu$-measure. If $A+H_{q}$ has two or more distinct Lyapunov exponents we take $B=A+H_{q}$ and we are done. Assume now that $A+H_{q}$ has only one Lyapunov exponent $\lambda_{A+H_{q}}$. Then it must be equal to the unique Lyapunov exponent $\lambda_A$ for $\Phi^1_A$ (and both have multiplicity $d$).
Indeed, by the Os\-tro\-grad\-sky-Jacobi-Liouville formula (\ref{OJL}) we get $$\det \Phi^t_A(x)=\det \Phi^t_{A+H_{q(x)}}(x)$$ for all $x\in X$, and by the multiplicative ergodic theorem we have $$d\cdot\lambda_{A+H_{q}}=\int \log |\det {\Phi^1_{A+H_{q(x)}}}(x)|\,d\mu=\int \log |\det \Phi^1_A(x)|\,d\mu=d\cdot\lambda_A.$$ Fix $\delta\in(0,1)$. Since our algebra has the saddle-conservative property, we let $J\colon X\rightarrow{{\mathfrak {sl}}(d,\mathbb{R})}$ be a linear differential system, supported in $\varphi^1{(V)}$ and constructed following the steps of Lemma \ref{rot4}, such that, for $x\in \varphi^1{(\Sigma)}$, we have $$\Phi^1_{A+J}(x)e=(1+\delta)\Phi^1_A(x)e.$$ Finally, define the continuous linear differential system, supported in $\varphi^{[0,2]}(V)$, by $$D(x)=A(x)+H_{q(x)}(x)+J(x).$$ Since, for all $x\in X$, \begin{equation*} \Phi^1_{D}(x)E(x)=\Phi^1_{A+H_{q(x)}}(x)E(x)=E(\varphi^1(x)), \end{equation*} by Birkhoff's ergodic theorem we have: \begin{align}\lambda(D,x,v(x))&:=\lim_{t\to\infty}\frac1t\log\|\Phi_{D}^t(x)v(x)\|\nonumber\\ &=\lim_{n\to\infty}\frac1n\log\|\Phi_{D}^n(x)v(x)\|\nonumber\\&=\lim_{n\to\infty}\frac1n\log\|(1+\delta)^{\sum_{j=0}^{n-1}\mathbbm{1}_{V}(\varphi^j(x))}\Phi_{A+H_{q(x)}}^n(x)v(x)\|\nonumber\\ & = \lambda(A+H_{q(x)},x,v(x)) + \log(1+\delta)\mu(V)\label{lyap exponent for D2flow}. \end{align} Let $\lambda_{D,1}> \lambda_{D,2}>\ldots>\lambda_{D,r_\delta}$ be the distinct Lyapunov exponents for $D$, with corresponding multiplicities $m_1, \ldots, m_{r_\delta}$. Since for all $x\in X$ $$ \det \Phi_{D}^1(x)=\det \Phi^1_{A+H_{q(x)}}(x)=\det \Phi^1_A(x), $$ by the multiplicative ergodic theorem we also have $$ \sum_{i=1}^{r_\delta}\lambda_{D,i}\cdot m_{i}=d\cdot\lambda_A. $$ By \eqref{lyap exponent for D2flow}, for any $\delta>0$ the linear differential system $D$ has a Lyapunov exponent equal to $\lambda_A+\log(1+\delta)\mu(V)$, so we must have $r_\delta\geq2$. Moreover, for all $\delta>0$ we have: \begin{itemize} \item $J$ is supported in $\varphi^1(V)$ and is bounded; \item $H_q$ is supported in $V$ and is bounded; and so \item $D(x)=A(x)$ for $x\notin V\cup \varphi^1(V)$ and $D$ is bounded, \end{itemize} which implies that $d_p(A,D)$ can be made as small as we want by decreasing $r>0$. We just have now to take $B=D$. \end{proof}

\section{Applications to discrete systems}\label{app}

\subsection{Dynamical cocycles}\label{dynamical}

We would now like to present an application to the so-called \emph{dy\-na\-mi\-cal cocycle}. In this case the base dynamics and the fiber dynamics are related: the fibered action is given by the tangent map, on the tangent bundle, of the action defined in the base. Of course, these systems are much more delicate than the ones studied along this paper, since the perturbations in the fiber have to be obtained as the effect of a perturbation in the base. Let us present briefly the setting we are interested in. From now on we let $M$ be a closed Riemannian surface and $\mu$ the Lebesgue measure arising from the area-form on $M$. Let $\text{Hom}_{\mu}(M)$ stand for the set of homeomorphisms of $M$ which keep the Lebesgue measure invariant and $\text{Diff}^1_\mu(M)$ the set of $\mu$-preserving diffeomorphisms of class $C^1$ on $M$. Finally, we let $\text{Hom}_{\mu}^p(M)$ denote the set of elements $f\in\text{Hom}_{\mu}(M)$ such that for $\mu$-a.e.
$x\in M$ the map $f$ has a well defined derivative $Df(x)$ which is $L^p$-integrable, i.e., $$\left(\int_M \|Df(x)\|^p\,d\mu\right)^{1/p}<\infty.$$ Moreover, we topologize $\text{Hom}_{\mu}^p(M)$ with the topology (called the $L^p$-topology) defined by the maximum of the $C^0$-topology (cf.~\cite{BS}) and the one analogous to that constructed in \S\ref{topologies}. Then, we take the $L^p$-completion of $\text{Hom}_{\mu}^p(M)$, which we still denote by $\text{Hom}_{\mu}^p(M)$. By Baire's category theorem $\text{Hom}_{\mu}^p(M)$ is a Baire space. Each map $f\in \text{Diff}^1_\mu(M)$ generates a linear (dynamical) cocycle $F_f\colon TM\to TM$ given by: $${F_f}(x,v)=(f(x), Df(x)v),$$ and the same holds for $f\in \text{Hom}_{\mu}^p(M)$, at least on a full measure subset $\hat{M}\subseteq M$. Since these maps preserve the Lebesgue measure, $Df(x)\in\SL(2,\mathbb{R})$. From now on we endow $\text{Hom}_{\mu}(M)$ with the $C^0$-topology, $\text{Hom}_{\mu}^p(M)$ with the $L^p$-topology and $\text{Diff}^1_\mu(M)$ with the $C^1$-Whitney topology. In~\cite{BS} it was proved that, $C^0$-densely, elements in $\text{Hom}_{\mu}(M)$ have one-point spectrum. On the other hand, in~\cite{B} it was proved that $C^1$-generic elements in $\text{Diff}^1_\mu(M)$ are Anosov or else have one-point spectrum. Here, we describe what behavior occurs in between:

\begin{maintheorem}\label{dc} There exists an $L^p$-residual subset $\mathcal{R}$ of $\text{Hom}_{\mu}^p(M)$, $1\leq p < \infty$, such that, for any $f\in \mathcal{R}$, $\mu$-a.e. $x\in M$ has all Lyapunov exponents equal to zero. \end{maintheorem}

Let us now see the highlights of the proof of the previous theorem. \medskip

\emph{(i) On the entropy function:} Given a set $\mathscr{T}$ of measurable, Lebesgue-measure-preserving maps endowed with a certain topology $\tau$, we consider the function that associates to each $f\in \mathscr{T}$ the integral over $M$ of its upper Lyapunov exponent with respect to the Lebesgue measure: $$ \begin{array}{cccc} \Lambda\colon & (\mathscr{T},\tau) & \longrightarrow & [0,\infty[ \\ & f & \longmapsto & \int_M \lambda_1(f,x)\,d\mu. \end{array} $$ It was proved in~\cite[\S 4]{BS} that when $\mathscr{T}=\text{Hom}_{\mu}(M)$ and $\tau$ is the $C^0$-topology, then $\Lambda$ cannot be upper semicontinuous. Moreover, in \cite[Proposition 2.1]{B} it was proved that when $\mathscr{T}=\text{Diff}_{\mu}^1(M)$ and $\tau$ is the $C^1$-topology, then $\Lambda$ is upper semicontinuous. When $\mathscr{T}=\text{Hom}_{\mu}^p(M)$ and $\tau$ is the $L^p$-topology, then $\Lambda$ is upper semicontinuous by the arguments described in~\cite{AB} which, we recall, do not require $f$ to be ergodic. \medskip

\emph{(ii) On the perturbations:} In~\cite[\S3.1]{B} the concept of \emph{realizable sequences} (in the $C^1$-sense) was developed, and in~\cite[\S 2.4]{BS} the concept of \emph{topological realizable sequences} (in the $C^0$-sense). Here, we need an $L^p$-version of them. Then, since we can rotate by any angle we like, on the action of $Df$, by making an arbitrarily small $L^p$-perturbation, uniform hyperbolicity cannot be an obstacle to lowering the Lyapunov exponent, as it is in Bochi's setting. Therefore, we can proceed as in \cite{BS} and obtain a map with arbitrarily small Lyapunov exponent near any map (even an Anosov one). Recall points (1), (2) and (3) in \S \ref{BVrevisited}.
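To fix ideas, here is a minimal numerical sketch (ours) of the dynamical cocycle defined above, for a hypothetical area-preserving map (the standard map, with an arbitrary parameter), estimating the upper Lyapunov exponent $\lambda_1(f,x)$ from the growth of tangent vectors:

\begin{verbatim}
# Sketch (ours): the dynamical cocycle F_f(x,v) = (f(x), Df(x)v) for
# an area-preserving map; det Df = 1, so Df(x) lies in SL(2,R).
import numpy as np

Kpar = 1.2                                 # hypothetical parameter

def f(x):                                  # standard map on the 2-torus
    p, q = x
    p2 = (p + Kpar * np.sin(q)) % (2 * np.pi)
    return np.array([p2, (q + p2) % (2 * np.pi)])

def Df(x):                                 # tangent map, det Df = 1
    q = x[1]
    return np.array([[1.0, Kpar * np.cos(q)],
                     [1.0, 1.0 + Kpar * np.cos(q)]])

x, v, acc, n = np.array([0.5, 1.0]), np.array([1.0, 0.0]), 0.0, 100_000
for _ in range(n):
    v = Df(x) @ v
    nv = np.linalg.norm(v)
    acc += np.log(nv)
    v /= nv
    x = f(x)
print(acc / n)           # estimate of the upper exponent lambda_1(f,x)
\end{verbatim}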
Once again we emphasize that the use of Bochi's strategy is crucial, because Arnold and Cong's arguments assume the ergodicity of the base map, and in our dynamical cocycle context the base dynamics changes and may well be non-ergodic\footnote{ We observe that, despite the fact that the Oxtoby-Ulam theorem (\cite{OU}) assures that $C^0$-generic volume-preserving maps are ergodic, the set of $C^0$-stably ergodic (and also $L^p$-stably ergodic) ones is empty.}. \medskip

\emph{(iii) End of the proof:} We pick a continuity point $f$ of the function $\Lambda\colon (\text{Hom}_{\mu}^p(M),L^p)\rightarrow [0,\infty[$. We claim that $\Lambda(f)=0$; otherwise, if $\Lambda(f)=\alpha>0$, then by (ii) we may consider $g\in \text{Hom}_{\mu}^p(M)$ arbitrarily $L^p$-close to $f$ and such that $\Lambda(g)=0$, which contradicts the fact that $f$ is a continuity point of $\Lambda$. Finally, we use (i) and the fact that the set of continuity points of an upper semicontinuous function is a residual subset. \medskip

\emph{(iv) A final remark:} Another strategy, which simplifies the previous argument considerably, requires assuming that $\text{Diff}_{\mu}^1(M)$ is $L^p$-dense in $\text{Hom}_{\mu}^p(M)$. First, we approximate by a $C^1$-diffeomorphism $f$, and then reason in the following way using Bochi's theorem: if $f$ has all its Lyapunov exponents equal to zero, we are done, arguing as before using (i). Otherwise, $f$ is Anosov (or in the $C^1$-boundary of the Anosov systems), and a small $L^p$-perturbation sends us to the interior of the non-Anosov ones (Anosov is no longer open w.r.t. the $L^p$-topology).

\subsection{Infinite dimensional discrete cocycles}\label{infinite}

We denote by $\mathscr{H}$ an infinite dimensional separable Hilbert space and by $\mathcal{C}(\mathscr{H})$ the set of compact linear operators acting on $\mathscr{H}$, endowed with the uniform operator norm. We fix a map $T:X\rightarrow{X}$ as before and $\mu$ a $T$-invariant Borel regular measure that is positive on non-empty open subsets. Given a family $(A_{x})_{x \in X}$ of operators in $\mathcal{C}(\mathscr{H})$ and a continuous vector bundle $\pi: X \times \mathscr{H} \rightarrow {X}$, we define the cocycle by $$ \begin{array}{cccc} F_A: & X\times{\mathscr{H}} & \longrightarrow & X\times{\mathscr{H}} \\ & (x,v) & \longmapsto & (T(x),A(x) v). \end{array} $$ It holds that $\pi \circ {F_A}=T\circ{\pi}$ and, for all $x\in X$, $F_{A}(x,\cdot):\mathscr{H}_x\rightarrow{\mathscr{H}_{T(x)}}$ is a linear operator. We let $C^0_I(X,\mathcal{C}(\mathscr{H}))$ stand for the continuous integrable cocycles evolving in $\mathcal{C}(\mathscr{H})$ and endowed with the $C^0$-topology. Let also $L^p_I(X,\mathcal{C}(\mathscr{H}))$ stand for the continuous integrable cocycles evolving in $\mathcal{C}(\mathscr{H})$ and endowed with the $L^p$-topology. These infinite dimensional cocycles display some properties similar to the ones in finite dimension. For instance, the existence of an asymptotic spectral decomposition with asymptotic uniform rates, like the ones given in Oseledets' theorem, also holds by an outstanding result of Ruelle (see~\cite{Ru}). Moreover, in~\cite{BeC} the Ma\~n\'e-Bochi-Viana dichotomy was obtained for $C^0_I(X,\mathcal{C}(\mathscr{H}))$ equipped with the $C^0$-topology. Here, we intend to get the $L^p$-version of~\cite{BeC} for $L^p_I(X,\mathcal{C}(\mathscr{H}))$ cocycles with the $L^p$-topology. We point out that such infinite dimensional systems have been the focus of attention (cf.
\cite{BeC,BeC2,LY,LY2}) not only because of their intrinsic interest but also due to their potential applications to partial differential equations (see~\cite[\S1.3 and \S2]{LY}). As expected, we drop the dichotomy in~\cite[Theorem 1.1]{BeC} and reach the one-point spectrum statement. \begin{maintheorem}\label{BeCLp} There exists an $L^{p}$-residual subset $\mathcal{R}$ of the set of integrable compact cocycles ${L_{I}^{p}(X,\mathcal{C}(\mathscr{H}))}$ such that, for $A\in{\mathcal{R}}$ and $\mu$-almost every $x\in{X}$, $$\underset{n\rightarrow{\infty}}{\text{lim}}({A(x)^{*}}^{n}A(x)^{n})^{\frac{1}{2n}}=[0],$$ where $[0]$ stands for the null operator. \end{maintheorem} The strategy of the proof of Theorem~\ref{BeCLp} is much like the one described in \S\ref{dynamical}, which follows the three steps (i), (ii) and (iii). Once again we are free to introduce rotations on the fiber $\mathscr{H}_{x}$ by a small $L^p$-perturbation, which is the key point for this kind of system. It is interesting to observe that the strategy of Arnold and Cong cannot be adapted directly to this setting. Actually, their argument is based on a \emph{finite} circular permutation of the fiber directions of a cocycle which already has simple spectrum (see~\cite[Theorem 4.5]{AC0}), and we cannot see how to implement this in the infinite dimensional context. Once again, our choice of Bochi and Viana's strategy is crucial to obtain our results. \bigskip \textbf{Acknowledgements:} The authors were partially supported by National Funds through FCT - ``Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia'', project PEst-OE/MAT/UI0212/2011. We would like to thank Borys Alvarez-Samaniego for some suggestions given.
\section{Introduction} Protoplanetary disks are the birthplace of planets. With a significant amount of substructure observed both in the dust and the gas \citep[e.g.][]{Andrews18}, it is highly likely that with the Atacama Large Millimeter/submillimeter Array (ALMA) we are witnessing first-hand the assembly of planetary systems. Relating these observed structures to the planet formation process, however, requires intimate knowledge of the underlying physical conditions, namely the gas temperature and densities, both of which strongly influence the pace of planet formation and the subsequent interaction of the protoplanets with the disk. Observations of multiple transitions of common molecular tracers offer an opportunity to measure these physical conditions. Although excitation analyses \citep[such as population diagram analyses;][]{Goldsmith99} are commonplace in studies of the ISM and earlier stages of star formation, application to protoplanetary disks is hampered by the low intensities of the molecular lines and the high spatial resolution required to spatially resolve the disk. Nonetheless, the significant improvement in sensitivity afforded by ALMA has allowed for multiple such analyses, such as \citet{Schwarz16}, \citet{Bergner18}, \citet{Loomis18b} and \citet{Teague18a}, which are demonstrating the power of such methods. \section{Extracting Spectroscopy-Ready Spectra} \label{sec:method} Although ALMA has undoubtedly revolutionised the study of planet formation and protoplanetary disks, the intrinsically weak emission of most molecular species due to their low column densities limits the type of spectral analyses possible. The development of new techniques to detect weak lines, such as matched filtering \citep{Loomis18a}, is essential to maximise the information we are able to extract from observations. As protoplanetary disks are observed to be predominantly azimuthally symmetric \citep[see the DSHARP survey, for example;][]{Andrews18}, and have a well defined dynamical structure, it is possible to leverage this to improve the quality of spectra extracted from the observations. Due to the rotation of the disk, emission lines will be Doppler shifted relative to the systemic velocity by \begin{equation} v_{\rm los}(r,\,\phi) = v_{\phi}(r) \cdot \cos \phi \cdot \sin i, \label{eq:vlos} \end{equation} \\ \noindent where $v_{\phi}$ is dominated by Keplerian rotation, $v_{\rm kep}$, $\phi$ is the polar angle at the disk midplane relative to the redshifted major-axis of the disk, and $i$ is the inclination of the disk. As the geometrical properties of the disk are readily measured from continuum observations, it is possible to infer $v_{\rm los}$ for each pixel and thus shift spatially resolved spectra back to the systemic velocity, as shown in Fig.~\ref{fig:first_moment_example}. With all the spectra aligned, it is then possible to stack spectra over a given region, for example around an annulus of constant radius, and thus improve the signal-to-noise ratio of the spectrum. \citet{Yen16} applied this technique to the disk of HD~163296, extracting detections of H$_2$CO which were not found with traditional imaging techniques, and significantly improving the significance of detection for both DCO$^+$ and N$_2$D$^+$. \begin{figure} \centering \includegraphics[width=\textwidth]{first_moment_and_spectra.pdf} \caption{A cartoon demonstrating how spectra are Doppler shifted due to the rotation of the disk.
The left panel shows a rotation map of a model disk, showing both the near and far sides of a flared emission surface. The panels on the right show in black the spectra extracted at three locations, which are able to be shifted back to the line center given Eqn.~\ref{eq:vlos}. Taken from \citet{Teague18a}.} \label{fig:first_moment_example} \end{figure} An additional advantage of this process is that the spectra can be super-sampled when stacking the multiple components. As $v_{\rm los}$ is not discretized, unlike the spectral resolution of the telescope, the shifted spectra will end up sampling the intrinsic line profile at a spectral resolution roughly a factor of $\sqrt{N}$ better, where $N$ is the number of independent spectra used in the stacking \citep{Teague18b}. It is important to note that this super-sampled spectrum will still contain any systematic effects found in the intrinsic spectrum, for example any broadening due to the spectral response function \citep[e.g.][]{Koch18}, and will therefore only be able to recover the true intrinsic spectrum if the original data were taken at a spectral resolution sufficient to Nyquist-sample the line profile. The Python package \texttt{GoFish} \citep{Teague19} provides the necessary functions to split a disk into annuli and then align the spectra given a $v_{\phi}$ profile. \section{Application to TW~Hya: Two Case Studies} \label{sec:TWHya} TW~Hya is the closest protoplanetary disk to Earth at a distance of $60.1 \pm 0.1$~pc \citep{BailerJones18}, and is therefore an object of intense study. With an inclination of $i \sim 7^{\circ}$, the geometry is exceptionally favourable for excitation analyses as confusion from radial gradients in temperature and densities is minimised. In the following section, we present two case studies of how physical properties of the planet forming disk were extracted through molecular excitation analyses. \subsection{Surface density perturbations traced with CS} \label{sec:TWHya:CS} TW~Hya is known to host a significant amount of gap and ring substructures, both in the mm~continuum \citep{Andrews16} and in the NIR scattered light \citep{vanBoekel17}, suggestive of embedded protoplanets. However, as the distribution of dust is strongly dependent on the gas pressure gradient, it is essential to measure the true gas density profile, using molecular emission to more accurately constrain the depth of the gap and thus the mass of any potential planet. \citet{Teague17} showed that the CS $J = 5-4$ transition exhibits a gap in its radial intensity profile, consistent in location with the $\approx 95$~au gap observed in the scattered NIR light. Thermo-chemical modelling of the line emission suggested that a surface density depletion of 55\% was the most likely scenario, requiring a planet of 12--$38~M_{\rm Earth}$. However, with only a single transition, breaking the degeneracy between temperature and column density was impossible, meaning that a chemical or excitation scenario could not be ruled out. With the addition of the CS $J = 3-2$ and $J = 7-6$ transitions, \citet{Teague18a} were able to perform an excitation analysis assuming local thermodynamic equilibrium (LTE) to place limits on $T_{\rm ex}$, finding a temperature of $\approx 40$~K in the inner disk, dropping to $\lesssim 20$~K at the disk edge at 200~au. Drops in the column density of CS were found to coincide with the previously claimed drop in total gas surface density.
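As a concrete illustration of the alignment step of \S\ref{sec:method}, the following minimal Python sketch (the array names are hypothetical placeholders; \texttt{GoFish} provides a full-featured implementation) shifts each spectrum in an annulus by its projected velocity from Eqn.~\ref{eq:vlos} and stacks the results:

\begin{verbatim}
import numpy as np

def shift_and_stack(velax, spectra, v_los):
    """Align annulus spectra to the systemic velocity and stack.

    velax   : (M,) velocity axis of the cube.
    spectra : (N, M) spectra extracted at N annulus pixels.
    v_los   : (N,) projected velocities, v_kep * cos(phi) * sin(i).
    """
    # Evaluate each spectrum at v + dv so that its line centre
    # moves back to the systemic velocity before averaging.
    shifted = [np.interp(velax, velax - dv, spec)
               for spec, dv in zip(spectra, v_los)]
    return np.mean(shifted, axis=0)
\end{verbatim}

Interpolating onto a velocity axis finer than the native channel spacing at this step yields the super-sampled spectra discussed above.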
Using the resampling technique described in \S\ref{sec:method}, high resolution spectra were extracted for annuli spanning the full radius of the disk, each with a width of $0.15^{\prime\prime}$ ($\approx 9$~au). With these spectra, a full non-LTE excitation analysis was performed using \texttt{RADEX} \citep{vanderTak07} with the collisional rates from \citet{Lique06}, assuming a thermal H$_2$ ortho-to-para ratio. Radial profiles of $T_{\rm ex}$ and $N({\rm CS})$ consistent with the LTE analysis were found over most of the disk, demonstrating that these transitions were thermalised and that CS emission arises from a relatively dense region of the disk. Interestingly, it was found that in the outer 20~au of the disk the $J = 7-6$ transition appeared to no longer be thermalised, requiring a collider density of $n({\rm H_2}) \sim 10^6~{\rm cm^{-3}}$. It was possible to extrapolate this to a gas surface density under the assumption that CS arises from a region close to the disk midplane \citep[as suggested by observations of the edge-on disk, the Flying Saucer;][]{Dutrey17}, finding a minimum $\Sigma_{\rm gas} \gtrsim 10^{-2}~{\rm g\,cm^{-2}}$ at 200~au. This value is almost two orders of magnitude larger than that from \citet{Bergin13}, who were modelling HD emission from the inner disk, but consistent with the models of \citet{vanBoekel17} who were able to reproduce scattered light out to the disk edge. This suggests that disks may contain more mass in their outer regions than previously thought, supporting a large reservoir of planet-building material at large radii. \subsection{Where does CN emission arise from?} \label{sec:TWHya:CN} Constraining the location of the CN emission is essential for upcoming observations of polarized CN emission which aim to trace the projected magnetic field strength and morphology. The structure of the magnetic field is believed to change as a function of height through the disk as the initially poloidal fields are dragged into a toroidal morphology at the disk midplane due to the large gas densities. Therefore, knowing the region traced by the observed emission is fundamental in interpreting such observations of polarized emission. It is currently under debate where CN arises in a protoplanetary disk. \citet{HilyBlant17} present an LTE analysis of the $N = 3-2$ transition in TW~Hya finding a low $T_{\rm ex}$ ranging between 17 and 27~K, consistent with a similar LTE analysis of the $N = 2-1$ transition in \citet{Teague16} \citep[and those from other disks;][]{Chapillon12}, suggesting an emission region closer to the disk midplane where $z/r \lesssim 0.25$. Conversely, \citet{Cazzoletti18} found that the emission morphology, a single bright ring, was best reproduced when CN was formed via vibrationally excited H$_2$, requiring high levels of FUV radiation and therefore favouring a higher emission surface of $z/r \sim 0.4$. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{CN_observations.pdf} \caption{CN emission from TW~Hya. The left panels show the integrated flux, while the right panels show the zeroth moment maps with the synthesized beam in the bottom left of each panel. The rows show the $N = 3-2$, $N = 2-1$ and $N = 1-0$ transitions, top to bottom.} \label{fig:CN_observations} \end{figure} Archival ALMA data now exist for CN, spanning the $N = 1-0$, $N = 2-1$ and $N = 3-2$ transitions, allowing for a full excitation analysis in order to place tighter limits on the emission region.
Fig.~\ref{fig:CN_observations} shows the three transitions and the multiple spatially and spectrally resolved hyperfine transitions detected (Loomis et al., in prep.). All three transitions display a similar emission morphology, a ring centered at 55~au, suggesting that this structure is due to a chemical effect rather than an excitation effect. Applying the LTE excitation analysis to the super-sampled spectra using the level structure from \citet{Kalugina15}, it was found that the weaker satellite hyperfine components could not be reproduced, most significantly for the $N = 3-2$ transition, and to a lesser extent for the two lower frequency transitions. This can be attributed to the fact that in all previous analyses, the Rayleigh-Jeans approximation was assumed to convert between flux density units and brightness temperature. However, at these frequencies $h\nu \lesssim kT$ rather than $h\nu \ll kT$, limiting the accuracy of this transformation. Using the full Planck law, it was found that large optical depths, $\tau \gg 1$, were needed to explain the integrated intensities, producing saturated line profiles inconsistent with the high spectral-resolution data. Furthermore, the measured linewidths for all three transitions suggest $T_{\rm kin} > T_{\rm ex}$, even after correcting for systematic effects \citep{Teague16}. Previous estimates of the non-thermal broadening in TW~Hya show that $v_{\rm turb} \, / \, c_s < 10^{-1}$, where $c_s$ is the local gas sound speed \citep{Teague16, Flaherty18}, insufficient to explain the discrepancy between these two temperatures. Combined with the inability of the LTE model to reproduce the emission, this strongly suggests that these transitions are sub-thermally excited, and thus require lower densities of $n({\rm H_2}) \lesssim 10^6~{\rm cm^{-3}}$ \citep{Shirley15}, suggesting an emission layer of $z \, / \, r \gtrsim 0.3$ \citep{HilyBlant17}. A non-LTE excitation analysis would provide tighter constraints on the local H$_2$ density and will be presented in future work (Teague et al., in prep.). \section{Summary} \label{sec:summary} We have demonstrated a new technique to extract high spectral resolution data from lower spectral resolution observations by exploiting the azimuthal symmetry of the protoplanetary disk and the dynamics of the gas, known to be dominated by Keplerian rotation \citep{Teague18b}. We have also shown two case studies of how observations of multiple transitions of molecular line emission allow us to begin to uncover the underlying physical structure of the planet formation environment. Lines of CS were able to confirm the existence of a surface density depletion in the disk of TW~Hya, likely opened by an unseen embedded planet \citep{Teague18a}. In addition, with high spectral resolution observations of CN we were able to demonstrate that the transitions were sub-thermally excited, and thus must originate from higher regions in the disk, unlike previous estimates (Teague et al., in prep.).
\section{\label{sec:intro}Introduction} Solving for the spectrum of Hamiltonians is a very important scientific problem with applications to the study of molecules (quantum chemistry), atomic physics, solid state physics, etc. Certain applications also require very high precision in the spectrum if one is to understand theoretical aspects of non-perturbative information, like those that appear when studying resurgent series \cite{Dunne:2014bca}. Novel methods that compute the spectrum of Hamiltonians to high precision are very useful in these applications. Recently, the numerical bootstrap has enjoyed renewed attention in its application to quantum mechanical systems, starting with \cite{Han:2020bkb}. In previous work, we demonstrated the efficiency of the numerical bootstrap in finding rigorous, precise bounds on the energies of eigenstates in one dimensional Schr\"{o}dinger problems \cite{Berenstein:2021dyf,Berenstein:2021loy,Berenstein:2022ygg}. The same setup for other 1-d problems has been studied in \cite{Bhattacharya:2021btd,Aikawa:2021eai,Tchoumakov:2021mnh,Hu:2022keu,Du:2021hfw,Khan:2022uyz,Blacker:2022szo}. The algorithmic approach (following the ideas of \cite{Lin:2020mme}) performs a search of possible solutions to the truncated bootstrap problem and gives a `Yes/No' answer to their validity. If a solution survives, one can increase the size of the truncation and keep searching more finely in the set of possible solutions. This search is in a space of many variables which can grow as the size of the truncated problem increases. This type of search is impractical except on search spaces of low dimension, $d_{\rm search}\leq 3$. In this letter we describe and implement a semidefinite programming algorithm to numerically find an arbitrary subset of the spectrum of a Hamiltonian which overcomes the problem of searching in a high dimensional search space. We implement it for problems in 1D with a polynomial potential. At each fixed value of the energy $E = \langle H\rangle$, the algorithm is a linear semidefinite program which may be solved in time polynomial in the size (depth) $K$ of the constraint matrices. One then scans only over $E$. \section{\label{sec:sdps}The bootstrap as an SDP} The quantum mechanical bootstrap works as follows. We start with a Hamiltonian $H$ with a point spectrum. For simplicity we will assume that the potential is polynomial and that the system is one dimensional, so that \begin{equation} H= p^2 +V(x). \end{equation} From this, we assume that we have an eigenstate of the Hamiltonian with energy $E$. The question of the bootstrap is to decide if $E$ is an allowed eigenvalue of the Hamiltonian or not. To do so we generate a recursion for the positional moments $\ex{x^n}$ from the two constraints \begin{equation}\label{eq:cnstrs} \ex{[H,\mathcal{O}]} = 0;\quad \ex{H\mathcal{O}} = \ex{H}\ex{\mathcal{O}} = E\ex{\mathcal{O}} \end{equation} which assume that the state is an eigenstate of energy $E$. Given a collection of such moments, any non-negative function $|\sum \alpha_i x^i|^2$ will have a non-negative expectation value. This is a unitarity constraint: it states that the probability density associated to the state with energy $E$ is non-negative. The constraint is a quadratic function of the $\alpha_i$ and gives rise to a positive semidefinite matrix $M \succeq 0$ computed from the positional moments. A solution is an allowed state if $E$ and the moments satisfy all the constraints and the positivity condition on $M$.
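As a minimal illustration of the constraints \eqref{eq:cnstrs}, take the harmonic oscillator $H = p^2 + x^2$ and $\mathcal{O} = xp$. A short computation gives $[H, xp] = 2i\left(x^2 - p^2\right)$, so the first constraint yields $\ex{p^2} = \ex{x^2}$, and combining this with $E = \ex{p^2} + \ex{x^2}$ fixes $\ex{x^2} = E/2$ in any eigenstate. Iterating with higher powers of $x$ and $p$ generates the recursion for the higher moments described below.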
More generally, beyond 1D, for any operator ${\cal O}$, the expectation value of the positive operator $\langle {\cal O}^\dagger{\cal O}\rangle\geq 0$ must be non-negative. This gives rise to a positive semidefinite matrix when we pick $\cal O$ from the span of a subset of basis operators. In 1D problems, the constraints on the expectation values appearing in $M$ defined above are generally strong enough to uniquely determine the solutions. If $E$ is a variable determined from the other ones, these latter constraints are nonlinear in the moments $x_n \equiv \ex{x^n}$. One may choose to omit the nonlinear constraints and be left with a linear problem; this is the route in \cite{PhysRevLett.108.200404,Lawrence:2021msm}, where one minimizes the value of the energy given some positivity constraints. The tradeoff is that one is only able to solve for the ground state in the absence of the nonlinear constraints. An alternative linearization approach to the one we take here is to apply a convex relaxation of the non-linear constraints in \eqref{eq:cnstrs}. Such a method has been applied in the study of the large $N$ bootstrap \cite{kazakovYM,kazakovMM} to relax non-linearities in the Yang-Mills (or matrix model) loop equations that arise from factorization. {\bf Fixed-energy recursion.} A simple way to linearize the problem is to fix the value of the energy $E$ and test if $E$ is an allowed value. At each fixed value of the energy the recursion is linear in the $x_n$. Consider an arbitrary potential of even degree $d$: \begin{gather*} V(x) = \sum_{n=1}^d a_nx^n \end{gather*} The recursion relates moments $x_n$ with $n \geq d$ to lower moments. For $m \geq 0$ it may be written \begin{multline} \label{eq:recursion} x_{d+m} = \frac{1}{2a_d(d+2m + 2)} \left[ 4(m+1)Ex_m \right. \\+ \left. m(m^2-1)x_{m-2} -2\sum_{n=1}^{d-1}(n+2m + 2)a_nx_{n+m}\right] \end{multline} Generically, initializing the recursion requires the energy as well as the first $d-1$ moments, with $x_0 = 1$. The basic object of interest in the bootstrap is the $K \times K$ Hankel matrix with elements $M^{(K)}_{ij} = x_{i+j}$, where $0 \leq i,j \leq K-1$. The unitarity constraint is that $M \succeq 0$; $M$ defines a covariance matrix which must be positive semidefinite. Before applying the recursion, we may write $M^{(K)}$ as a linear function of the first $2K-2$ moments: \begin{gather*} M^{(K)} = \sum_{m=0}^{2K-2}x_m\mathcal{B}_m \succeq 0 \end{gather*} where the matrices $\mathcal{B}_m$ define the Hankel structure: \begin{equation*} (\mathcal{B}_m)_{ij} = \begin{cases} 1, & \text{if } i+j = m,\ 0 \leq i,j \leq K-1,\\ 0, & \text{otherwise.} \end{cases} \end{equation*} The recursion \eqref{eq:recursion} relates the $x_m$ with $m \geq d$ to those with $m < d$; it thus defines another set of symmetric $K \times K$ matrices $F_n(E)$ by the equality \begin{gather*} \sum_{m=0}^{2K-2}x_m\mathcal{B}_m = M^{(K)} = \sum_{n=0}^{d-1}x_nF_n(E) \end{gather*} As $K \to \infty$, the Hankel matrix $M^{(K)}$ defined above will be positive definite only for $E$ in the spectrum of $H$: this has been shown in examples and is expected to be true, although no complete proof exists. For the purposes of this paper we will take that statement at face value. Finite $K$ is a truncation of an infinite set of constraints. We expect the Hankel matrix to be positive definite on some set $S_{K} \subset \mathbb{R}$, a union of disjoint intervals, which strictly contains the spectrum of $H$. Moreover, for $E$ in the spectrum the $x_n$ are uniquely determined by $E$.
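As a sanity check of \eqref{eq:recursion}, take $V(x) = x^2$, so that $d = 2$ and $a_2 = 1$. For $m = 0$ the recursion gives $x_2 = \frac{4E}{8}x_0 = \frac{E}{2}$, reproducing the value obtained above directly from the constraints, while $m = 2$ gives $x_4 = \frac{12E\,x_2 + 6x_0}{16} = \frac{3(E^2+1)}{8}$, which matches the known harmonic oscillator moments at the eigenvalues $E = 2n+1$.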
Numerical experiments \cite{Berenstein:2021loy,Lawrence:2021msm} have shown that the convergence to the eigenvalues (and the moments) is exponentially fast in the size of the truncation. Furthermore, $S_{K+1} \subset S_K$, etc. This same weak convergence property allows efficient search strategies in a bootstrapping algorithm. The main problem in previous explorations of the quantum mechanical bootstrap is that a search is done both in $E$ and in the moments. If there are many moments that are undetermined from the recursion, the search for solutions of the bootstrap equations and constraints is done in a high dimensional space and becomes very inefficient. Our goal then is to find an optimal value of the moments for fixed energy $E$ rather than doing a blind search. Moreover, if the problem fails to find a solution of the constraints, we want a numerical measure of how far we are from satisfying the constraints. Our new proposal addresses these issues, so that in the end one is left only with a scan over energies $E$. {\bf Optimization.} How do we test if a symmetric matrix $M^{(K)}$ is positive? If the matrix is Hermitian, then the condition of being positive definite is equivalent to its minimal eigenvalue being positive. We test positive definiteness by considering the minimal eigenvalue of $M^{(K)}$ as a function of the primal variables $x_i$. Define an optimization problem \begin{equation} \operatorname{maximize }\ \lambda_{\text{min}}\bigl( M^{(K)}(x_i;E)\bigr) \label{eq:maxprob} \end{equation} If the optimal value is negative, the energy value $E$ can be safely excluded from the set $S_K$. The goal is to solve this optimization problem for a range of energies and to thereby determine the set $S_K$. The algorithm proceeds by searching this set at depth $K + 1$, and iteratively converges to the spectrum (or a subset thereof). The problem \eqref{eq:maxprob} defines an objective function which is highly nonlinear in the primal $x_i$. However, the problem of eigenvalue extremization is well known to have an equivalent formulation as an SDP with linear objective \cite{boyd}. First, introduce a slack variable $t$ and write \begin{equation*} \maxprob{t}{\lambda_{\text{min}}(M(x_i))\geq t} \end{equation*} which is equivalent to \eqref{eq:maxprob}. It is convenient to introduce the matrix $M-t I$: the condition $\lambda_{\text{min}}(M(x_i)) \geq t$ holds precisely when this matrix is positive semidefinite, $M-t I \succeq 0$. This allows us to write a problem equivalent to \eqref{eq:maxprob} in SDP form: \begin{equation} \maxprob{t}{M(x_i) - tI \succeq 0}\label{eq:sdp} \end{equation} This is an SDP in linear matrix inequality (LMI) form with primal variables $\mathbf{x} = (t,x_1,...,x_{d-1})$ \footnote{In some SDP solvers, the algorithm must be written as a minimization problem. This is done by minimizing $-t$ instead.}. Notice that even if the energy is not allowed, the optimization problem will find a solution: a sufficiently large negative $t$ will always make it possible to satisfy the positive matrix constraint. We thus obtain, for the $K$ we are testing, a value of $t$ that is negative and an optimal value of the moment variables. The maximum $t$, which we label $t_{\rm max}$, is a measure of how close to success we are. As we scan over $E$ (at fixed $K$), $t_{\rm max}$ will depend continuously on $E$ and it is possible to estimate when it will become positive. It thus serves not only as a diagnostic of failure, but it also gives a way to scan intelligently in $E$.
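To make the construction concrete, the following is a minimal double-precision sketch of the SDP \eqref{eq:sdp} using the open-source \texttt{cvxpy} modeling package; the helper names are ours, this is not the production pipeline (which uses the arbitrary-precision solver described in \S\ref{sec:algo}), and the sketch is reliable only up to $K\sim 10$ before precision issues set in:

\begin{verbatim}
import numpy as np
import cvxpy as cp

def hankel_basis(K, m):
    """Constant matrix B_m with entries 1 where i + j = m."""
    B = np.zeros((K, K))
    for i in range(max(0, m - K + 1), min(K, m + 1)):
        B[i, m - i] = 1.0
    return B

def bootstrap_tmax(E, K, a):
    """Optimal slack t of the SDP at a fixed trial energy E.

    a: coefficients [a_1, ..., a_d] of V(x), with a_d > 0.
    A negative result excludes E from the allowed set S_K.
    """
    d = len(a)
    # Free moments x_1 .. x_{d-1}; x_0 = 1. All higher moments
    # are affine in these via the fixed-energy recursion.
    x = [cp.Constant(1.0)] + [cp.Variable() for _ in range(d - 1)]
    for m in range(2 * K - 1 - d):
        new = 4 * (m + 1) * E * x[m]
        if m >= 2:
            new = new + m * (m ** 2 - 1) * x[m - 2]
        for n in range(1, d):
            new = new - 2 * (n + 2 * m + 2) * a[n - 1] * x[n + m]
        x.append(new / (2 * a[d - 1] * (d + 2 * m + 2)))
    # Hankel matrix as a linear function of the moments.
    M = sum(x[m] * hankel_basis(K, m) for m in range(2 * K - 1))
    t = cp.Variable()
    Z = cp.Variable((K, K), PSD=True)  # enforces M - t*I >= 0
    prob = cp.Problem(cp.Maximize(t), [Z == M - t * np.eye(K)])
    prob.solve(solver=cp.SCS)
    return t.value
\end{verbatim}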
{\bf Problems on other domains.} For problems on the half line, the interval, or a circle, light modifications of the approach are needed. In the case of the circle, one uses periodic functions in the bootstrap (a trigonometric moment problem). The goal in that case is to find the band structure of the potential. For the half line and the interval, there are two main differences from problems on the real line: certain terms in the recursion are modified and one has two or more matrix positivity constraints to contend with. In \cite{Berenstein:2022ygg}, we showed how solving Schr\"{o}dinger problems on the half line requires adding anomalous terms to the recursion which depend on the boundary conditions $\psi(0),\psi'(0)$. One must include these terms, which generally modify the recursion \eqref{eq:recursion}. The same will be true on the interval, where each boundary will modify the recursion relations depending on the boundary conditions. On the half line, the other difference is due to the result of Stieltjes on the moment problem for measures on $\mathbb{R}_+$. Positive semidefiniteness is required for the matrix $M_{ij} = x_{i+j}$ as well as the matrix $M'_{ij} = x_{1 + i + j}$. To account for this, we simply make the replacement \begin{gather*} M(x_i) \mapsto \left[\begin{array}{cc} M^{(K)}(x_i) & 0 \\ 0 & M'^{(K)}(x_i) \end{array}\right] \succeq 0 \end{gather*} Positive semidefiniteness of the block matrix above is equivalent to positive semidefiniteness of its diagonal blocks. The rest of the algorithm is unchanged, though the size of the constraint matrices will double as they also reflect the block structure. On the interval $(0,1)$, the polynomial $(1-x)$ is also positive and there will be additional blocks required for solving the dynamics. For problems in higher dimensions, we expect that the constraints are not enough to determine recursively all the moments from a finite search space. We are currently investigating this issue. Conceptually, there is no obstacle to proceeding in these higher dimensional setups. The main issue will be understanding the optimal way to eliminate variables and how different truncation schemes might perform. \subsection{\label{sec:algo}The algorithm} With the SDP formulation, the bootstrap algorithm proceeds as follows. Given a potential $V$, take an initial set of energy values $S_0 = \{E_i\} \subset \mathbb{R}$. For each fixed value of the energy, solve the SDP \eqref{eq:sdp} at some initial depth $K_0$. Energies $E_i$ for which $t_{\rm max}$ is positive form the set $S_{K_0}$, which serves as the search set at depth $K' > K_0$. Iterating this procedure will result in a set of intervals within $S_0$. These intervals define sharp bounds on the exact spectrum of $H$, in the sense that the bounds are rigorous and can only shrink as $K$ increases. A persistent issue with the bootstrap is the rapid growth of the matrix elements. The magnitude of the largest matrix entries scales super-exponentially with $K$. For example, in the harmonic oscillator, $\ex{x^n} \sim \Gamma(n/2)$ in eigenstates. As a result, using single or double precision floats results in serious numerical error after $K \sim 10$. Similar issues were encountered in the conformal bootstrap program, which necessitated the use of an arbitrary-precision SDP solver \cite{simons-duffin}. We found the same to be necessary in order to obtain precision comparable to finite-element methods.
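In code, the deepening loop is only a few lines (a sketch reusing the hypothetical \texttt{bootstrap\_tmax} helper sketched earlier; a production version would exploit the continuity of $t_{\rm max}$ in $E$ to bisect toward the zero crossings rather than rescanning a fixed grid):

\begin{verbatim}
def refine_spectrum(energies, a, K0=10, Kmax=30, step=2):
    """Iteratively deepen the bootstrap, keeping only the
    trial energies whose optimal slack stays positive."""
    S = list(energies)
    for K in range(K0, Kmax + 1, step):
        S = [E for E in S if bootstrap_tmax(E, K, a) > 0]
    return S
\end{verbatim}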
To numerically solve the problem, we used SDPA-GMP \cite{sdpa}, a primal-dual interior-point SDP solver built on the GMP (GNU multiple precision) arithmetic library. For a given energy, the $F_n(E)$ were generated in Python and the SDP \eqref{eq:sdp} was fed into SDPA-GMP. The outputs defined a refined search space at the next depth. We worked with $\sim60$ digit (200 significant bits) precision. The main benefit of the SDP approach is that we can search a very high dimensional space very efficiently. In our previous work, we were constrained to potentials of degree $\leq4$ due to the brute-force nature of the algorithm. Now, potentials of essentially arbitrary degree can be solved in comparable time. \section{\label{sec:results}Results for an example} To show that this method is able to obtain high-precision results for excited states in a search space of large dimension, we considered as a simple example the degree 8 potential \begin{equation} \label{eq:testpot} V(x) = \frac{1}{2}x^2 - x^4 + \frac{1}{8}x^8. \end{equation} This has 8 primal variables (including $t$), although, since the potential is even, the number effectively reduces to 4 primal variables. We search over the energy range $[0,15]$, which we know to contain the first five energy levels. We started the search at matrices of size $K_0=10$ and terminated at $K=30$. At each depth, the algorithm requires us to look for the negative values of the objective function of \eqref{eq:sdp}. We can visualize the convergence by plotting $\log(|t^\star|)$, where $t^\star$ is the optimal value, versus the fixed energy $E$. Inverted `spikes' in this plot show the zero crossings. As the intervals of positive $t$ shrink with increasing $K$, two spikes seem to join around the exact value of the eigenstate energy, as shown in Fig. \ref{fig:inaction}. The structure is always a double spike around each allowed value: two spikes can become so close to each other that the plot can no longer distinguish them. \begin{figure}[!h] \centering \includegraphics[width = 8 cm]{in_action.pdf} \caption{The (log of the) objective function evaluated over a range of energies for the potential \eqref{eq:testpot}. Exact energies (computed in \textit{Mathematica} by FEM) shown as dashed lines. Results shown for $K = 12,14,18$.} \label{fig:inaction} \end{figure} The numerical estimates for the eigenenergies at $K = 30$ are shown in Table \ref{table:1}. \begin{table}[h!] \centering \begin{tabular}{c||c|c} \hline $n$ & Bootstrap & \textit{Mathematica} FEM \\ [0.5ex] \hline\hline 1 & 0.446987(6) & 0.44698(8) \\ 2 & 1.975515(7)& 1.9755(2)\\ 3 & 4.89758(7) & 4.8975(9) \\ 4 & 9.0514(4) & 9.0514(4) \\ 5 & 14.1008(2) & 14.100(8)\\ [1ex] \hline \end{tabular} \caption{Energies for the potential \eqref{eq:testpot} at $K = 30$, compared to the finite-element method (FEM) results. } \label{table:1} \end{table} This level of precision is beyond machine precision in \textit{Mathematica}, though its implementation of a FEM eigensolver works much faster for this class of 1d problems. {\bf Convergence.} The data from each depth $K$ is a set of valid energy intervals. It has been repeatedly observed that the widths of these intervals decrease exponentially in $K$. We find that result borne out again in Fig. \ref{fig:convergence}. \begin{figure}[!h] \centering \includegraphics[width = 8cm]{convergence.pdf} \caption{Width of allowed energy intervals vs.
$K$, on a logarithmic scale.} \label{fig:convergence} \end{figure} The convergence is exponential and uniform in slope across energy levels, at least asymptotically in $K$. In the regime of constant exponential decay of Fig. \ref{fig:convergence}, the approximate slope is $\approx-0.83$. Thus the average width of the allowed intervals decreases like $ \Bar{w}(K) \propto e^{-0.83K} $. Hence at $K' > K$, the ratio of widths goes like $e^{-0.83(K'-K)}$. Obtaining one more decimal digit of precision requires changing the size of the truncation to $K' = K + \log(10)/0.83 \approx K + 3$. This shows the power of the bootstrap approach: the number of significant digits scales approximately linearly with the depth $K$. {\bf Conclusion.} In this paper we proposed a method to solve for the energies of 1-dimensional Hamiltonian systems within the bootstrap approach. The method utilizes a semidefinite programming algorithm to find solutions of the (truncated) bootstrap equations. The method solves the problem of ``searches in a large dimensional space'' by considering the system at fixed energy (the guess) and extremizing over an additional slack variable as well as the other parameters of the original bootstrap equations. What we noticed was that once the energy is fixed, the recursion relations for the moments become linear. The search space is effectively reduced to one dimension. If the slack variable is positive at the optimal value, the positive definite constraint is satisfied and the energy $E$ is allowed. If the slack variable is negative, in principle one can use a Newton-Raphson method to find the next crossing of zero and thus search effectively in the energy parameter as well. The method is able to obtain high precision data on the eigenvalues and, in the example we studied, it is numerically seen that the method converges exponentially fast. It is clear that our method can be extended to solve problems in higher dimensions, where the size of the search space might grow with the truncation. Applying these techniques might be useful in the study of many-body problems in quantum chemistry and other areas, with the possibility of not only finding ground states of electrons (like in other optimization algorithms \cite{PhysRevLett.93.213001}), but also finding excited states. {\em Acknowledgements:} D.B. would like to thank R. Brower, A. Joseph and J. Yoon for discussions. D.B.'s research was supported in part by the International Centre for Theoretical Sciences (ICTS) while participating in the program - ICTS Nonperturbative and Numerical Approaches to Quantum Gravity, String Theory and Holography (code: ICTS/numstrings-2022/9). Research supported in part by the Department of Energy under Award No. DE-SC0019139.
\section[Introduction]{Introduction} Accurate prediction and cost-effective containment of epidemics in human and animal populations are fundamental problems in mathematical epidemiology~\cite{Pastor-Satorras2015a,Diekmann2000,Kiss2017}. In order to achieve these goals, it is indispensable to develop effective mathematical models describing the spread of disease in human and animal contact networks~\cite{Funk2010,Gross2008}. In this direction, we find a broad literature on modeling, analysis, and containment of epidemic processes in static contact networks. However, these works neglect an important factor: the temporality of the interactions~\cite{Isella2011a,Stehle2011,Sun2013a}, which can arise either independently of, or in response to, epidemic propagation. A framework for modeling temporal interactions in human and animal populations is temporal networks (i.e., time-varying networks), where individuals and interactions are modeled as nodes and edges, respectively, which can appear and disappear over time~\cite{Holme2015b,Masuda2016b,Holme2012}. Under this framework, the effect of temporal interactions on epidemic propagations has been investigated numerically and theoretically~\cite{Masuda2017,Masuda2013}. For containing epidemic processes on temporal networks, we find a plethora of heuristic approaches~\cite{Prakash2010,Lee2012} and analytical methods~\cite{Ogura2015c,Liu2014a}. Adaptive networks refer to the case in which changes in nodes or edges occur in response to the state of the dynamics taking place on the network~\cite{Masuda2016b,Gross2008,Gross2009,Sayama2013}. A common temporality of agent-agent interaction in epidemic dynamics arises from social distancing behavior~\cite{Aledort2007,Bootsma2007,Bell2006}, which lets the structure of contact networks change over time as a result of adaptation to the state of the epidemics. Several models of such adaptive networks have been proposed. For example, Gross et al.~proposed a rewiring mechanism where a healthy node actively avoids being adjacent to infected nodes~\cite{Gross2006}. Extensions of this model are found in~\cite{Gross2008,Zanette2008a,Marceau2010,Lagorio2011,Tunc2014}. Guo et al.~proposed an alternative model in which links connecting an infected node and a healthy node are deactivated~\cite{Guo2013}. As for the containment of epidemic processes on adaptive networks, various heuristic~\cite{Bu2013,Maharaj2012} and analytical~\cite{Ogura2015i,Ogura2016l} approaches have been proposed. However, in many studies, the effects of exogenous temporal factors and endogenous adaptive measures on epidemic processes have been separately examined, leaving unclear how their combination affects the dynamics of the spread. In this paper, we study epidemic processes and containment strategies in a temporal network model where the effect of exogenous factors and that of adaptive measures are simultaneously present. Our model is based on the activity-driven temporal network model~\cite{Perra2012}. In this model, a node is stochastically activated and connects to other nodes independently of the dynamics taking place in the network. In order to analyze the joint effect of exogenous factors and endogenous adaptations, we add a mechanism of social distancing to the standard activity-driven model. In other words, we allow an infected node to endogenously adapt to the state of the epidemics by 1)~decreasing its activation probability and 2)~refusing interactions with other activated nodes.
On top of this temporal network, we adopt the standard susceptible-infected-susceptible (SIS) model of epidemic dynamics~(see, e.g., \cite{Pastor-Satorras2015a}) and derive an analytical upper bound on the decay rate of the number of infected nodes over time. Based on this result, we then propose an efficient strategy for tuning the social distancing rates in order to suppress the number of infected nodes. Our work is related to \cite{Rizzo2014}, in which an infected individual is allowed to decrease its activation probability. However, in \cite{Rizzo2014}, the durations of temporal interactions are assumed to be sufficiently short compared with the time scale of the epidemic dynamics, leaving out the interesting case where the time scale of the network dynamics and that of the epidemic process are comparable. In addition, our results hold true for networks of any size, while the results in~\cite{Rizzo2014} require the networks to be sufficiently large. This paper is organized as follows. In \Cref{sec:prbSetting}, we introduce a model of epidemic processes on temporal and adaptive networks. In \Cref{sec:decayRate}, we derive an upper bound on the decay rate of the infection size. Based on this bound, in \Cref{sec:optimizaiton} we formulate and solve optimization problems for containing the spread of epidemics. The obtained theoretical results are numerically illustrated in \Cref{ref:numerical}. \section{Problem setting}\label{sec:prbSetting} In this section, we first describe the activity-driven network proposed in~\cite{Perra2012}. We then introduce an adaptive SIS (A-SIS) model on activity-driven networks, which allows nodes to adapt to the state of the nodes (i.e., susceptible or infected) in their neighborhoods. \subsection{Activity-driven networks}\label{sec:adsis} Throughout this paper, we let the set of nodes in a network be given by $\mathcal V = \{v_1, \dotsc, v_n \}$. The activity-driven network is a temporal network in discrete time and is defined as follows. \begin{figure}[tb] \centering \includegraphics[clip,trim={2cm 4.2cm 1.2cm 6.8cm},width=.65\linewidth]{fig1.pdf} \caption{Schematic of an activity-driven network. We set $n=10$ and $m=2$. Filled circles represent active nodes. Open circles represent inactive nodes. The time is denoted by $t$.} \label{fig:adn} \end{figure} \begin{definition}[\cite{Perra2012}]\label{eq:ADM} For each $i=1, \dotsc, n$, let $a_i$ be a positive constant less than or equal to $1$. We call $a_i$ the \emph{activity rate} of node $v_i$. Let $m$ be a positive integer less than or equal to $n-1$. The \emph{activity-driven network} is defined as an independent and identically distributed sequence of undirected graphs created by the following procedure (see \cref{fig:adn} for a schematic illustration): \begin{enumerate} \item At each time $t = 0, 1, 2, \dotsc$, each node $v_i$ becomes ``activated'' with probability~$a_i$ independently of other nodes. \item Each activated node, say, $v_i$, randomly and uniformly chooses $m$ other nodes independently of other activated nodes. For each chosen node, say, $v_j$, an edge $\{v_i, v_j\}$ is created. These edges are discarded at time $t+1$ (i.e., do not exist at time $t+1$). \item Steps 1 and 2 are repeated for each time $t \geq 0$, independently of past realizations. \end{enumerate} \end{definition} \begin{remark}\label{rmk:} We do not allow multiple edges between a pair of nodes.
In other words, even when a pair of activated nodes choose each other as their neighbors at a specific time, we assume that one and only one edge is spanned between those nodes. \end{remark} Although the activity-driven network is relatively simple, the model can reproduce an arbitrary degree distribution~\cite{Perra2012}. Several properties of activity-driven networks have been investigated, including structural properties~\cite{Perra2012,Starnini2013b}, steady-state properties of random walks~\cite{Perra2012a,Ribeiro2013}, and spreading dynamics~\cite{Perra2012,Rizzo2014,Speidel2016a}. However, the model does not allow nodes to adapt to the state of the epidemics and, therefore, is not suitable for discussing how social distancing affects the dynamics of the spread. In the next subsection, we extend the activity-driven network by incorporating social distancing behaviors of nodes. \subsection{Activity-driven A-SIS model} Building upon the activity-driven network described above, we consider the scenario where nodes change their neighborhoods in response to the state of the epidemics over the network~\cite{Ogura2015i}. Specifically, we propose the \emph{activity-driven adaptive-SIS model} (\emph{activity-driven A-SIS model} for short) as follows: \begin{figure}[tb] \centering \includegraphics[clip,trim={5.5cm 6.2cm 6.5cm 3.7cm},width=.55\linewidth]{fig2.pdf} \caption{Adaptation of nodes in the activity-driven A-SIS model. Filled and empty circles represent active and inactive nodes, respectively. (a) A susceptible node is activated with probability~$a_i$. (b) An infected node is activated with probability~$\chi_i a_i$. (c) An infected node ($v_j$) accepts an edge spanned from an activated node with probability~$\pi_j$.}\label{fig:adaptiveadn} \end{figure} \begin{definition}[Activity-driven A-SIS model]\label{defn:adasis} For each $i$, let $a_i, \chi_i, \pi_i \in (0, 1]$ be constants. We call $a_i$, $\chi_i$, and $\pi_i$ the \emph{activity rate}, \emph{adaptation factor}, and \emph{acceptance rate} of node $v_i$, respectively. Also, let $m \leq n-1$ be a positive integer, and $\beta, \delta \in (0, 1]$ be constants. We call $\beta$ and $\delta$ the \emph{infection rate} and \emph{recovery rate}, respectively. The \emph{activity-driven A-SIS model} is defined by the following procedures (see \cref{fig:adaptiveadn} for an illustration): \begin{enumerate} \item At the initial time $t=0$, each node is either \emph{susceptible} or \emph{infected}. \item\label{item:lessActivation} At each time $t = 0, 1, 2, \dotsc$, each node $v_i$ randomly becomes activated independently of other nodes with the following probability: \begin{equation} \Pr(\mbox{node $v_i$ becomes activated}) = \begin{cases} a_i,& \mbox{if $v_i$ is susceptible,} \\ \chi_ia_i,& \mbox{if $v_i$ is infected.} \end{cases} \end{equation} \item\label{item:cutting} Each activated node, say, $v_i$, randomly and uniformly chooses $m$ other nodes independently of other activated nodes. For each chosen node, say, $v_j$, an edge $\{v_i, v_j\}$ is created with the following probability: \begin{equation}\label{eq:acceptanceProb} \Pr(\mbox{$\{v_i, v_j\}$ is created}) = \begin{cases} 1, & \mbox{if $v_j$ is susceptible,} \\ \pi_j, & \mbox{if $v_j$ is infected. } \end{cases} \end{equation} These edges are discarded at time $t+1$ (i.e., do not exist at time $t+1$). As in \cref{rmk:}, we do not allow multiple edges between a pair of nodes. \item The states of nodes are updated according to the SIS model. 
In other words, if a node~$v_i$ is infected, it transitions to the susceptible state with probability~$\delta$. If $v_i$ is susceptible, each of its infected neighbors infects node $v_i$ with probability~$\beta$, independently of the other infected neighbors. \item Steps 2--4 are repeated for each time $t \geq 0$. \end{enumerate} \end{definition} Steps~\labelcref{item:cutting,item:lessActivation} in \cref{defn:adasis} model social distancing behavior by infected nodes. In Step~\labelcref{item:lessActivation}, an infected node decreases its activity rate to avoid infecting other nodes. Step~\labelcref{item:lessActivation} can also be regarded as modeling reduction of social activity by infected nodes due to sickness. In Step~\labelcref{item:cutting}, an infected node, say, $v_j$, establishes a connection with an activated node only with probability~$\pi_j$ (when $\pi_j < 1$) in order to avoid infecting other nodes. A susceptible node behaves in the same way as in the standard SIS model in the original activity-driven network. \section{Decay rate}\label{sec:decayRate} In order to quantify the persistence of epidemic infections in the activity-driven A-SIS model, in this section we introduce the concept of decay rate of the epidemics. A direct computation of the decay rate requires computing the eigenvalues of a matrix whose size grows exponentially with the number of nodes. To overcome this difficulty, we present an upper bound on the decay rate in terms of the eigenvalues of a $2\times 2$ matrix. \subsection{Definition} For each time $t$ and node $v_i$, define the random variable \begin{equation} x_i(t) = \begin{cases} 0, & \mbox{if $v_i$ is susceptible at time $t$,} \\ 1, & \mbox{if $v_i$ is infected at time $t$}. \end{cases} \end{equation} Define the vector $p(t) = [p_1(t)\ \cdots \ p_n(t)]^\top$ of the infection probabilities by \begin{equation}\label{eq:def:p_i} p_i(t) = \Pr(\mbox{$v_i$ is infected at time $t$}). \end{equation} In this paper, we measure the persistence of infection by the rate of convergence of the infection probabilities to the origin. \begin{definition} We define the \emph{decay rate} of the activity-driven A-SIS model by \begin{equation} \alpha = \sup_{x(0)}\limsup_{t\to\infty} \norm{p(t)}^{1/t}, \end{equation} where $\norm{\cdot}$ denotes the $\ell_1$ norm. \end{definition} The infection-free equilibrium, $x_1 = \cdots = x_n = 0$, is the unique absorbing state of the Markov process $\{x_1(t), \dotsc, x_n(t)\}_{t\geq 0}$. Moreover, the infection-free equilibrium is reachable from any other state by our assumption $\delta > 0$. This implies $\alpha < 1$. In fact, $\alpha$ is difficult to compute for large networks for the following reason. The Markov process~$\{x_1(t), \dotsc, x_n(t)\}_{t\geq 0}$ has $2^n$ states. Let $Q$ denote its $2^n \times 2^n$ transition probability matrix. Since the disease-free state is the unique absorbing state, it follows that \begin{equation} \alpha = \max \{ \abs{\lambda}: \mbox{$\lambda$ is an eigenvalue of $Q$,\ $\abs{\lambda}<1$} \}. \end{equation} Because the size of the matrix~$Q$ grows exponentially fast with respect to the number of the nodes, a direct computation of the decay rate is difficult for large networks.
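For intuition, one step of the activity-driven A-SIS dynamics is easy to simulate directly (a minimal Python sketch with hypothetical inputs; the state, activity, adaptation, and acceptance vectors follow \cref{defn:adasis}):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def asis_step(x, a, chi, pi, m, beta, delta):
    """One step of the activity-driven A-SIS model.
    x : (n,) array of states, 0 = susceptible, 1 = infected."""
    n = len(x)
    # Step 2: state-dependent activation.
    act = rng.random(n) < np.where(x == 1, chi * a, a)
    A = np.zeros((n, n), dtype=int)
    # Step 3: each activated node proposes m edges; infected
    # targets accept with probability pi_j.
    for i in np.flatnonzero(act):
        others = np.delete(np.arange(n), i)
        for j in rng.choice(others, size=m, replace=False):
            if x[j] == 0 or rng.random() < pi[j]:
                A[i, j] = A[j, i] = 1  # at most one edge per pair
    # Step 4: SIS update on the instantaneous graph.
    k = A @ x                          # infected neighbours
    infect = (x == 0) & (rng.random(n) < 1 - (1 - beta) ** k)
    recover = (x == 1) & (rng.random(n) < delta)
    return np.where(recover, 0, np.where(infect, 1, x))
\end{verbatim}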
\subsection{An upper bound} We start with the following proposition, which allows us to upper-bound the infection probabilities using a linear dynamics: \begin{proposition}\label{prop:dynamics} Let \begin{equation}\label{eq:def:barm} \bar m = m/(n-1) \end{equation} and, for all $i$, define the constants \begin{equation}\label{eq:def:phipsi} \phi_i = \bar m \chi_i a_i,\quad \psi_i = \bar m \pi_i a_i. \end{equation} Then, \begin{equation}\label{eq:upperDynamics} p_i(t+1) \leq (1-\delta) p_i(t) + \beta \sum_{j=1}^n[1 - (1-\psi_i)(1-\phi_j)] p_j(t) \end{equation} for all nodes $v_i$ and $t\geq 0$. \end{proposition} \begin{proof} By the definition of the A-SIS dynamics on the activity-driven network, the nodal states $x_1$, \dots, $x_n$ obey the following stochastic difference equation \begin{equation}\label{eq:originalDynamics} x_i(t+1) = x_i(t) - x_i(t) N_{\delta}^{(i)}(t) + (1-x_i(t)) \left[1-\prod_{j\neq i} \left(1-a_{ij}(t) x_j(t) N_{\beta}^{(ij)}(t)\right)\right], \end{equation} where \begin{equation} a_{ij}(t) = \begin{cases} 1,& \mbox{if an edge $\{v_i, v_j\}$ exists at time $t$,} \\ 0, & \mbox{otherwise,} \end{cases} \end{equation} and $\{N_{\delta}^{(i)}(t)\}_{t = 0}^\infty$ and $\{N_{\beta}^{(ij)}(t)\}_{t = 0}^\infty$ are independent and identically distributed random Bernoulli variables satisfying \begin{equation} N_{\delta}^{(i)}(t) = \begin{cases} 1, & \mbox{with probability~$\delta$}, \\ 0, & \mbox{with probability~$1-\delta$}, \end{cases} \end{equation} and \begin{equation} N_{\beta}^{(ij)}(t) = \begin{cases} 1, & \mbox{with probability~$\beta$}, \\ 0, & \mbox{with probability~$1-\beta$}. \end{cases} \end{equation} On the right-hand side of equation~\cref{eq:originalDynamics}, the second and third terms represent recovery and transmission events, respectively (a similar equation for the case of static networks can be found in~\cite{Chakrabarti2008}). By the Weierstrass product inequality, the third term on the right-hand side of \cref{eq:originalDynamics} is upper-bounded by $(1-x_i(t))\sum_{j=1}^n a_{ij}(t) x_{j}(t) N_{\beta}^{(ij)}(t)$. Since the expectation of $x_i(t)$ equals $p_i(t)$, taking the expectation in \cref{eq:originalDynamics} gives \begin{equation}\label{eq:p_i(t+1)<=...} p_i(t+1) \leq p_i(t) - \delta p_i(t) + \beta \sum_{j\neq i} E[(1-x_i(t))a_{ij}(t) x_j(t)], \end{equation} where $E[\cdot]$ denotes the expectation of a random variable. Now, assume $i\neq j$. By the definition of the variables $x_i$ and $a_{ij}$, it follows that \begin{equation}\label{eq:E[]=...} \begin{aligned} &E[(1-x_i(t)) a_{ij}(t) x_j(t)] \\ =& \Pr(\mbox{$v_i$ and $v_j$ are adjacent, $v_i$ is susceptible, and $v_j$ is infected at time $t$}) \\ =& \Pr(\mbox{$v_i$ and $v_j$ are adjacent at time $t$} \mid \Xi^t_{i, j}) \Pr(\Xi^t_{i, j}), \end{aligned} \end{equation} where the event $\Xi^t_{i, j}$ is defined by \begin{equation} \Xi^t_{i, j} = \mbox{``$v_i$ is susceptible and $v_j$ is infected at time $t$''}. 
\end{equation} If we further define the event \begin{equation} \Gamma_{i\to j} ^t = \mbox{``$v_i$ is activated and chooses $v_j$ as its neighbor at time $t$''}, \end{equation} then we obtain \begin{equation}\label{eq:adjProbability} \begin{aligned} &\Pr(\mbox{$v_i$ and $v_j$ are adjacent at time $t$} \mid \Xi^t_{i, j}) \\ =& \Pr(\Gamma_{i\to j}^t \mid \Xi^t_{i, j}) + \Pr(\Gamma^t_{ j\to i} \mid \Xi^t_{i, j}) - \Pr(\Gamma_{i\to j}^t \mid \Xi^t_{i, j}) \Pr(\Gamma^t_{ j\to i} \mid \Xi^t_{i, j}) \\ =& 1 - \bigl[1-\Pr(\Gamma_{i\to j}^t \mid \Xi^t_{i, j})\bigr] \bigl[1 - \Pr(\Gamma^t_{ j\to i} \mid \Xi^t_{i, j})\bigr]. \end{aligned} \end{equation} The event $\Gamma_{i\to j}^t$ occurs if and only if $v_i$ is activated, chooses $v_j$ as a potential neighbor, and actually connects to~$v_j$ (according to the probability given by equation~\cref{eq:acceptanceProb}). Therefore, equation~\cref{eq:def:phipsi} implies \begin{equation}\label{eq:psi_i} \Pr(\Gamma_{i\to j}^t \arrowvert \Xi^t_{i, j}) = \psi_i. \end{equation} Similarly, the event $\Gamma^t_{ j\to i}$ occurs if and only if $v_j$ is activated (with probability~$\chi_j a_j$, since $v_j$ is infected at time $t$) and chooses $v_i$ as one of its $m$ neighbors. Therefore, we have \begin{equation}\label{eq:phi_j} \Pr(\Gamma^t_{ j\to i} \arrowvert \Xi^t_{i, j}) = \phi_j. \end{equation} Hence, for $i\neq j$, combining equations \cref{eq:E[]=...,eq:adjProbability,eq:psi_i,eq:phi_j} yields \begin{equation}\label{eq:pre:upperDynamics} \begin{aligned} E[(1-x_i(t)) a_{ij}(t) x_j(t)] &= [1-(1-\psi_i)(1-\phi_j)]\Pr(\Xi_{i,j}^t) \\ &\leq [1-(1-\psi_i)(1-\phi_j)]p_j(t), \end{aligned} \end{equation} where we have used the trivial inequality~$\Pr(\Xi_{i,j}^t) \leq p_j(t)$. Moreover, inequality \cref{eq:pre:upperDynamics} trivially holds true also when $i= j$. Inequalities \cref{eq:pre:upperDynamics,eq:p_i(t+1)<=...} prove \cref{eq:upperDynamics}, as desired. \end{proof} Using Proposition~\ref{prop:dynamics}, we obtain the following theorem, which gives an explicit upper bound on the decay rate of the activity-driven A-SIS model. For a vector~$\xi \in \mathbb R^n$, introduce the notations \begin{equation} \av{\xi}_{\!a} = \frac{1}{n}\sum_{i=1}^n a_i\xi_i ,\quad \av{\xi}_{\!a^2} = \frac{1}{n}\sum_{i=1}^n a_i^2\xi_i. \end{equation} \begin{theorem}\label{thm:eqanalysis} Define \begin{equation}\label{eq:upperBound} {\alpha_{\rm u}} = 1 - \delta + \kappa \bar m n \beta, \end{equation} where \begin{equation}\label{defn:anglers} \kappa = \frac{\av{\chi}_{\!a} + \av{\pi}_{\!a} - \bar m\av{\chi\pi}_{\!a^2} + \sqrt{ {(\av{\chi}_{\!a} + \av{\pi}_{\!a} - \bar m\av{\chi\pi}_{\!a^2})}^2 + 4(\av{\chi\pi}_{\!a^2} - \av{\chi}_{\!a} \av{\pi}_{\!a}) }}{2}. \end{equation} Then, the decay rate $\alpha$ satisfies \begin{equation} \alpha \leq \alpha_{\rm u}. \end{equation} \end{theorem} \begin{proof} Inequality \cref{eq:upperDynamics} implies that there exists a nonnegative variable $\epsilon_i(t)$ such that \begin{equation}\label{eq:pDynamics+epsilon} p_i(t+1) = (1-\delta) p_i(t) + \beta \sum_{j=1}^n\left(1 - (1-\psi_i)(1-\phi_j)\right) p_j(t) - \epsilon_i(t) \end{equation} for all nodes $v_i$ and $t\geq 0$. Let us define the vectors $\epsilon(t) = [\epsilon_1(t)\ \cdots\ \epsilon_n(t)]^\top$, $\phi = [\phi_1\ \cdots \ \phi_n]^\top$, and $\psi = [\psi_1\ \cdots \ \psi_n]^\top$.
Equation~\cref{eq:pDynamics+epsilon} is rewritten as \begin{equation}\label{eq:originalDynamicsp} p(t+1) = \mathcal F p(t)- \epsilon(t), \end{equation} where \begin{equation}\label{eq:def:calF} \mathcal F = (1-\delta) I + \beta \left[\mathbbold{1}\onev^\top - (\mathbbold{1} - \psi)( \mathbbold{1} - \phi)^\top\right], \end{equation} $\mathbbold{1}$ denotes the $n$-dimensional vector whose entries are all one, and $I$ denotes the $n\times n$ identity matrix. Since $\mathcal F$ and $\epsilon(t)$ are nonnegative entrywise, equation~\cref{eq:originalDynamicsp} leads to $p(t) = \mathcal F^t p(0) - \sum_{\ell=0}^{t-1} \mathcal F^{t-1-\ell} \epsilon(\ell) \leq \mathcal F^t p(0)$. This inequality shows \begin{equation}\label{eq:r<=rho(F)} \alpha \leq \rho(\mathcal F), \end{equation} where $\rho(\cdot)$ denotes the spectral radius of a matrix. \begin{figure}[tb] \centering \includegraphics[width=.475\linewidth]{fig3.pdf} \caption{Characteristic equation \cref{eq:charEquation}} \label{fig:illustration} \end{figure} Now, we evaluate $\rho(\mathcal F)$. Equation~\cref{eq:def:calF} is rewritten as $\mathcal F = (1-\delta) I + \beta \mathcal A $, where $\mathcal A = \mathbbold{1} \mathbbold{1}^\top - (\mathbbold{1} - \psi)( \mathbbold{1} - \phi)^\top$. Since $\mathcal A$ is nonnegative entrywise and $1-\delta \geq 0$, we obtain \begin{equation}\label{eq:rho(F)=} \rho(\mathcal F) = 1 -\delta + \beta \rho(\mathcal A). \end{equation} Furthermore, as we prove in \cref{app:rhoA}, it holds that \begin{equation}\label{eq:rho(A)=rho(nB)} \rho(\mathcal A) = \rho (n\mathcal B), \end{equation} where \begin{gather}\label{eq:def:langphipsirnv} \mathcal B = \begin{bmatrix} 1 & 1 - \av \psi \\ -1+ \av \phi& -1+\av \phi + \av \psi - \av {\phi\psi} \end{bmatrix}, \\ \label{eq:def:langphipsirang} \langle\phi \rangle = \frac 1 n \sum_{i=1}^n \phi_i , \quad \langle\psi \rangle = \frac 1 n \sum_{i=1}^n \psi_i , \quad \langle\phi\psi \rangle = \frac 1 n \sum_{i=1}^n \phi_i\psi_i. \end{gather} As shown in \cref{fig:illustration}, matrix $\mathcal B$ has the characteristic equation \begin{equation}\label{eq:charEquation} (1-\lambda) \av{\phi\psi} = ( \lambda - \av \phi)( \lambda - \av \psi) \end{equation} having the roots \begin{equation}\label{eq:roots} \lambda = \frac{\av \phi + \av \psi - \av{\phi\psi} \pm \sqrt{ {(\av \phi + \av \psi - \av{\phi\psi})}^2 + 4(\av{\phi\psi} - \av \phi \av \psi) }}{2}. \end{equation} The roots are real because \begin{equation}\label{eq:suportRealness} {(\av \phi + \av \psi - \av{\phi\psi})}^2 + 4(\av{\phi\psi} - \av \phi \av \psi) \geq {(\av \psi - \av \phi)}^2 + \av{\phi\psi}^2 > 0, \end{equation} which follows from the trivial inequality~$4\av{\phi\psi} \geq 2\av \phi \av{\phi\psi} + 2\av \psi \av{\phi\psi}$. Therefore, by substituting equation~\cref{eq:def:phipsi} into equation~\cref{eq:roots}, we obtain $\rho(\mathcal B) = \kappa \bar m $. This equation and \cref{eq:r<=rho(F),eq:rho(A)=rho(nB),eq:rho(F)=} complete the proof of the \lcnamecref{thm:eqanalysis}. \end{proof} The following corollary shows that an epidemic will become extinct more quickly when the adaptation factor and acceptance rate are less correlated in a weighted sense. \begin{corollary}\label{cor:sensitivity} Let $(\chi, \pi)$ and $(\chi', \pi')$ be pairs of adaptation factors and acceptance rates of nodes, and denote the corresponding upper bounds on the decay rates by $\alpha_{\rm u}$ and $\alpha_{\rm u}'$, respectively.
If $\av{\chi}_{\!a} = \av{\chi'}_{\!a}$, $\av{\pi}_{\!a} = \av{\pi'}_{\!a}$, and $\av{\chi\pi}_{\!a^2} < \av{\chi'\pi'}_{\!a^2}$, then \begin{equation} \alpha_{\rm u} < \alpha_{\rm u}'. \end{equation} \end{corollary} \begin{proof} By the proof of \cref{thm:eqanalysis}, we have $\alpha_{\rm u} = 1 - \delta + \rho(\mathcal B) n\beta$. \cref{fig:illustration} implies that $\rho(\mathcal B)$ increases with $\langle \phi \psi \rangle$ when $\av \phi$ and $\av \psi$ are fixed. This proves the claim of the corollary because $\av{\chi\pi}_{\!a^2} = \langle \phi \psi \rangle/\bar m^2$, $\av{\chi}_{\!a} = \av \phi/\bar m$, and $\av{\pi}_{\!a} = \av \psi/\bar m$. \end{proof} As another corollary of \cref{thm:eqanalysis}, we also present an upper bound on the decay rate when nodes do not adapt to the infection states of the other nodes. \begin{corollary}\label{cor:SIS} Assume $\chi_i = \pi_i = 1$ for all $i$. Let \begin{equation} \kappa_0 = \frac{2\av{a} - \bar m\av{a^2} + \sqrt{ 4\av{a^2} - 4 \bar m \av{a} \av{a^2} +\bar m^2\av{a^2}^2 }}{2}. \end{equation} Then, the decay rate of the activity-driven SIS model is at most $1 - \delta + \kappa_0 \bar m n \beta$. \end{corollary} \begin{remark} If $m$ is sufficiently small compared with $n$ and, furthermore, $n$ is sufficiently large (as implicitly assumed in~\cite{Perra2012}), the upper bound in \cref{cor:SIS} reduces to $1 - \delta + (\av a + \sqrt{\av{a^2}})m\beta$, which coincides with the result in~\cite{Perra2012}. \end{remark} \section{Cost-optimal adaptations}\label{sec:optimizaiton} In this section, we study the problem of eradicating an epidemic outbreak by distributing resources to nodes in the activity-driven network. We consider the situation in which there is a budget that can be invested in strengthening the preventative behaviors of each node. We show that the optimal budget allocation is found using geometric programs, which can be efficiently solved in polynomial time. \subsection{Problem statement} We consider an optimal resource allocation problem in which we can tune the adaptation factors and acceptance rates of nodes. Assume that, to set the adaptation factor of node $v_i$ to $\chi_i$, we need to pay a cost~$f_i(\chi_i)$. Similarly, we need to pay a cost~$g_i(\pi_i)$ to set the acceptance rate of node $v_i$ to $\pi_i$. The total cost for tuning the parameters to the values $\chi_1$, \dots, $\chi_n$, $\pi_1$, \dots, $\pi_n$ equals \begin{equation} C = \sum_{i=1}^n (f_i(\chi_i) + g_i(\pi_i)). \end{equation} Throughout this section, we assume the following box constraints: \begin{equation}\label{eq:boxConstraints} 0 < \ubar \chi_i \leq \chi_i \leq \bar \chi_i ,\quad 0< \ubar \pi_i \leq \pi_i \leq \bar \pi_i. \end{equation} In this paper, we consider the following two types of optimal resource allocation problems. \begin{problem}[Cost-constrained optimal resource allocation]\label{prb:} Given a total budget $\bar C$, find the adaptation factors and acceptance rates that minimize $\alpha_{\rm u}$ while satisfying the budget constraint \begin{equation}\label{eq:budgetConstraint} C \leq \bar C. \end{equation} \end{problem} \begin{problem}[Performance-constrained optimal resource allocation]\label{prb:pc} Given a maximum tolerable decay rate $\bar \alpha$, find the adaptation factors and acceptance rates that minimize the total cost $C$ while satisfying the performance constraint \begin{equation}\label{eq:performanceConstraint} \alpha_{\rm u}\leq \bar \alpha.
\end{equation} \end{problem} \subsection{Cost-constrained optimal resource allocation} In this subsection, we show that \cref{prb:} can be transformed into a geometric program~\cite{Boyd2007}, which can be efficiently solved. Before stating our main results, we give a brief review of geometric programs. Let $x_1$, \dots, $x_n$ denote positive variables and define $x = (x_1, \dotsc, x_n)$. We say that a real function~$q(x)$ is a \emph{monomial} if there exist $c \geq 0$ and $a_1, \dotsc, a_n \in \mathbb{R}$ such that $q(x) = c x_{\mathstrut 1}^{a_{1}} \dotsm x_{\mathstrut n}^{a_n}$. Also, we say that a function~$r(x)$ is a \emph{posynomial} if it is a sum of monomials of~$x$ (we refer the reader to~\cite{Boyd2007} for more details). Given a collection of posynomials $r_0(x)$, \dots, $r_k(x)$ and monomials $q_1(x)$, \dots, $q_\ell(x)$, the optimization problem \begin{equation} \begin{aligned} \minimize\ \ \ \, & r_0(x) \\ \st\ \ & r_i(x)\leq 1,\quad i=1, \dotsc, k, \\ & q_j(x) = 1,\quad j=1, \dotsc, \ell, \end{aligned} \end{equation} is called a \emph{geometric program}. A constraint of the form $r(x)\leq 1$ with $r(x)$ being a posynomial is called a \emph{posynomial constraint}. Although geometric programs are not convex, they can be efficiently converted into equivalent convex optimization problems~\cite{Boyd2007}. We assume that the cost functions~$f_i$ and $g_i$ decrease with the adaptation factor~$\chi_i$ and acceptance rate~$\pi_i$, respectively. This assumption reflects the natural situation in which it is more costly to suppress $\chi_i$ and $\pi_i$ to a greater extent. We also expect diminishing returns with increasing investments~\cite{Reluga2010}. For a fixed $\epsilon > 0$, let $\Delta f_i(\chi_i) = f_i(\chi_i-\epsilon) - f_i(\chi_i)$ denote the cost for improving the adaptation factor from $\chi_i$ to~\mbox{$\chi_i -\epsilon$}. Then, diminishing returns imply that $\Delta f_i$ decreases with $\chi_i$, which implies the convexity of $f_i$. Therefore, we place the following assumption on the cost functions. \begin{assumption}\label{assm:} For all $i \in \{1, \dotsc, n\}$, decompose $f_i$ and $g_i$ into the differences of their positive and negative parts as follows: \begin{align} f_i &= f_i^+ - f_i^-, \\ g_i &= g_i^+ - g_i^-, \end{align} where $f_i^+ = \max(f_i, 0)$, $f_i^- = \max(-f_i, 0)$, $g_i^+ = \max(g_i, 0)$, and $g_i^- = \max(-g_i, 0)$. Then, $f_i^+$ and $g_i^+$ are posynomials, and $f_i^-$ and $g_i^-$ are constants. \end{assumption} \cref{assm:} allows us to use any cost functions that are convex on the log-log scale because any function convex on the log-log scale can be approximated by a posynomial with arbitrary accuracy~\cite[Section~8]{Boyd2007}.
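To make these definitions concrete, the following toy problem has exactly this form. This is a minimal sketch only: the CVXPY modeling package and the specific numbers are our own illustrative choices, not part of the analysis in this paper.
\begin{verbatim}
# A toy geometric program: posynomial objective and constraints,
# monomial inequalities; solved as a log-log convex problem.
# (A constraint "posynomial <= monomial" is equivalent to a
# posynomial constraint r(x) <= 1 after dividing through.)
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

objective = cp.Minimize(1 / (x * y * z))   # r_0(x): a posynomial
constraints = [
    4 * x * y * z + 2 * x * z <= 10,       # posynomial constraint
    x <= 2 * y,                            # monomial inequalities
    y <= 2 * x,
    z >= 1,
]
cp.Problem(objective, constraints).solve(gp=True)
print(x.value, y.value, z.value)
\end{verbatim}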
We now state our first main result in this section, which allows us to efficiently solve \cref{prb:} via geometric programming: \begin{theorem}\label{thm:bc} Let $\chi_i^\star$ and $\pi_i^\star$ be the solutions of the following optimization problem: \begin{subequations}\label{eq:opt} \begin{align} \minimize_{\tilde \lambda,\, {\chi_{i}},\, \pi_i,\,\zeta,\,\eta > 0}\ \ & 1/\tilde \lambda \\ \st\ \ \ &\mbox{\cref{eq:boxConstraints},} \\ & \bar m^2 \tilde \lambda \av{\chi \pi}_{\!a^2}\zeta\eta \leq 1, \label{eq:quadConstraint} \\ & \zeta^{-1} + \tilde \lambda + \bar m \av{\chi}_{\!a}\leq 1, \label{eq:zetaConstraint} \\ & \eta^{-1} + \tilde \lambda + \bar m \av{\pi }_{\!a}\leq 1, \label{eq:etaConstraint} \\ &\sum_{i=1}^n (f_i^+(\chi_i) + g_i^+(\pi_i)) \leq \bar C + \sum_{i=1}^n (f_{i}^- + g_{i}^-).\label{eq:tildeCost<barC} \end{align} \end{subequations} Then, the adaptation factor $\chi_i = \chi_i^\star$ and the acceptance rate $\pi_i = \pi_i^\star$ solve \cref{prb:}. Moreover, under \cref{assm:}, the optimization problem \cref{eq:opt} is a geometric program. \end{theorem} To prove this theorem, we show an alternative characterization of the decay rate in terms of inequalities. \begin{lemma}\label{lem:ineqanalysis} Let $\lambda > 0$. The upper bound $\alpha_{\rm u}$ satisfies \begin{equation}\label{eq:ineqChar} \alpha_{\rm u} \leq 1-\delta + \lambda n \beta \end{equation} if and only if \begin{align} (1-\lambda) \av{\phi\psi} &\leq ( \lambda - \av \phi)( \lambda - \av \psi), \label{eq:lambdaIneq1} \\ \av{\phi} &< \lambda, \label{eq:lambdaIneq2} \\ \av{\psi} &< \lambda. \label{eq:lambdaIneq3} \end{align} \end{lemma} \begin{proof} By the proof of \cref{thm:eqanalysis}, inequality~\cref{eq:ineqChar} holds true if and only if $\lambda \geq \rho(\mathcal B)$. \cref{fig:illustration} indicates that $\lambda \geq \rho(\mathcal B)$ is equivalent to conditions \cref{eq:lambdaIneq1,eq:lambdaIneq2,eq:lambdaIneq3}. \end{proof} We can now prove \cref{thm:bc}: \begin{proof}[Proof of \cref{thm:bc}] By \cref{lem:ineqanalysis}, the solutions of \cref{prb:} are given by those of the following optimization problem: \begin{subequations}\label{eq:optpre} \begin{align} \minimize_{\lambda,\, {\chi_{i}},\, \pi_i > 0}\ \ \ \, & 1-\delta + \lambda n \beta \\ \st\ \ &\mbox{\cref{eq:lambdaIneq1,eq:lambdaIneq2,eq:lambdaIneq3,eq:boxConstraints,eq:budgetConstraint}}. \end{align} \end{subequations} Define the auxiliary variables $\zeta = {1}/{(\lambda - \av \phi)}$ and $\eta = {1}/{(\lambda - \av \psi)}$. Then, conditions~\cref{eq:lambdaIneq1,eq:lambdaIneq2,eq:lambdaIneq3} hold true if and only if $(1-\lambda)\av{\phi\psi}\zeta\eta \leq 1$, $\zeta > 0$, and $\eta > 0$. Therefore, the optimization problem~\cref{eq:optpre} is equivalent to the following optimization problem: \begin{subequations}\label{eq:opt1} \begin{align} \minimize_{\lambda,\, {\chi_{i}},\, \pi_i,\,\zeta,\,\eta > 0}\ \ & \lambda \\ \st\ \ \ &\mbox{\cref{eq:boxConstraints,eq:budgetConstraint}}, \\& (1-\lambda) \av{\phi\psi} \zeta\eta \leq 1, \\ & \zeta^{-1} - \lambda + \av{\phi}= 0, \label{eq:pre:zetaConst} \\ & \eta^{-1} - \lambda + \av{\psi} = 0, \label{eq:pre:etaConst} \end{align} \end{subequations} where we minimize $\lambda$ instead of $1-\delta + \lambda n \beta$.
We claim that the optimal value of the objective function is equal to the one in the following optimization problem: \begin{subequations}\label{eq:opt2} \begin{align} \minimize_{\lambda,\, {\chi_{i}},\, \pi_i,\,\zeta,\,\eta > 0}\ \ & \lambda \\ \st\ \ \ &\mbox{\cref{eq:boxConstraints,eq:budgetConstraint}}, \\ &(1-\lambda) \av{\phi\psi} \zeta\eta \leq 1, \label{eq:pre:quadConstraint} \\ & \zeta^{-1} - \lambda + \av{\phi} \leq 0, \label{eq:prepre:zetaConst} \\ & \eta^{-1} - \lambda + \av{\psi} \leq 0. \label{eq:prepre:etaConst} \end{align} \end{subequations} Let $\lambda_1^\star$ and $\lambda_2^\star$ be the optimal values of the objective functions in problems \cref{eq:opt1} and \cref{eq:opt2}, respectively. We have $\lambda_1^\star \geq \lambda_2^\star$ because the constraints in problem \cref{eq:opt1} are stricter than those in \cref{eq:opt2}. Let us show $\lambda_1^\star \leq \lambda_2^\star$. Assume that the optimal value~$\lambda_2^\star$ in problem~\cref{eq:opt2} is attained by the parameters $(\lambda, \chi_i, \pi_i, \zeta, \eta) = (\lambda_2^\star, \chi_i^\star, \pi_i^\star, \zeta^\star, \eta^\star)$. Since the left-hand sides of constraints \cref{eq:prepre:etaConst,eq:prepre:zetaConst} decrease with~$\zeta$ and $\eta$, there exist nonnegative constants $\Delta \zeta$ and $\Delta \eta$ such that $\zeta = \zeta^\star - \Delta \zeta$ and $\eta = \eta^\star - \Delta \eta$ satisfy the equality constraints \cref{eq:pre:etaConst,eq:pre:zetaConst}. Moreover, since the left-hand side of the constraint~\cref{eq:pre:quadConstraint} increases with~$\zeta$ and~$\eta$, the new set of parameters $(\lambda, \chi_i, \pi_i, \zeta, \eta) = (\lambda_2^\star, \chi_i^\star, \pi_i^\star, \zeta^\star - \Delta \zeta, \eta^\star - \Delta \eta)$ still satisfies \cref{eq:pre:quadConstraint}. Furthermore, these changes of parameters do not affect the feasibility of the box constraints \cref{eq:boxConstraints} and the budget constraint~\cref{eq:budgetConstraint} because the constraints are independent of the values of $\zeta$ and $\eta$. Therefore, we have shown the existence of parameters achieving $\lambda = \lambda_2^\star$ but still satisfying the constraints in the optimization problem~\cref{eq:opt1}. This shows $\lambda_1^\star \leq \lambda_2^\star$, as desired. Now, by rewriting the optimization problem \cref{eq:opt2} in terms of the variable $\tilde \lambda = 1-\lambda$ and substituting \cref{eq:def:phipsi} in \cref{eq:opt2}, we obtain the optimization problem~\cref{eq:opt}. Notice that minimizing $\lambda$ is equivalent to maximizing $\tilde \lambda = 1-\lambda$, which is equivalent to minimizing $1/\tilde \lambda$. Let us finally show that \cref{eq:opt} is a geometric program. The objective function, $1/\tilde \lambda$, is a posynomial in $\tilde \lambda$. The constraints \cref{eq:quadConstraint,eq:etaConstraint,eq:zetaConstraint,eq:boxConstraints} are posynomial constraints. Finally, \cref{assm:} guarantees that constraint~\cref{eq:tildeCost<barC} is a posynomial constraint as well. This completes the proof of the \lcnamecref{thm:bc}. \end{proof} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth,trim={0in 4.5in 0in 0in},clip]{fig4.pdf} \caption{Comparison between the numerically obtained decay rates $\alpha$ and their upper bounds $\alpha_{\rm u}$ in \labelcref{case:uniform} when $\beta = 0.8$.
(a) $m=2$, (b) $m=10$, and (c) $m=50$.} \label{fig:analysisUniformComparison} \centering \includegraphics[width=1\linewidth,trim={0in 0in 0in 4.45in},clip]{fig4.pdf} \caption{Discrepancy between the true decay rates and their upper bounds in \labelcref{case:uniform}. (a)~$m=2$, (b) $m=10$, and (c) $m=50$.} \label{fig:analysisUniformErrors} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth,trim={0in 4.5in 0in 0in},clip]{fig5.pdf} \caption{Comparison between the numerically obtained decay rates $\alpha$ and their upper bounds $\alpha_{\rm u}$ in \labelcref{case:power} when $\beta = 0.8$. (a) $m=2$, (b) $m=10$, and (c) $m=50$.} \label{fig:analysisPowerComparison} \centering \includegraphics[width=1\linewidth,trim={0in 0in 0in 4.45in},clip]{fig5.pdf} \caption{Discrepancy between the true decay rates and their upper bounds in \labelcref{case:power}. (a)~$m=2$, (b) $m=10$, and (c) $m=50$.} \label{fig:analysisPowerErrors} \end{figure} \subsection{Performance-constrained optimal resource allocation} In the same way as in the previous subsection, we can efficiently solve \cref{prb:pc} via geometric programming: \begin{theorem}\label{thm:pc} Let $\chi_i^\star$ and $\pi_i^\star$ be the solutions of the following optimization problem: \begin{subequations} \begin{align} \minimize_{\tilde \lambda,\, {\chi_{i}},\, \pi_i,\,\zeta,\,\eta > 0}\ \ & \sum_{i=1}^n (f_i^+(\chi_i) + g_i^+(\pi_i)) \\ \st\ \ \ & \mbox{\upshape\cref{eq:quadConstraint,eq:etaConstraint,eq:zetaConstraint,eq:boxConstraints}}, \\ & \frac{\beta n + 1 - \delta - \bar \alpha}{\beta n}\tilde \lambda^{-1} \leq 1.\label{eq:tildeperformance<bargamma} \end{align} \end{subequations} Then, the adaptation factor $\chi_i = \chi_i^\star$ and the acceptance rate $\pi_i = \pi_i^\star$ solve \cref{prb:pc}. Moreover, under \cref{assm:}, the optimization problem is a geometric program. \end{theorem} \begin{proof} Constraint~\cref{eq:tildeperformance<bargamma} is equivalent to the performance constraint~\cref{eq:performanceConstraint}. The rest of the proof is almost the same as the proof of \cref{thm:bc} and is omitted. \end{proof} \section{Numerical simulations}\label{ref:numerical} In this section, we illustrate the theoretical results obtained in the previous sections by numerical simulations. \begin{figure}[tb] \centering \includegraphics[width=.55\linewidth]{fig6.pdf} \caption{Cost function $f_i(\chi_i)$ for $p=0.01$, $1$, $10$, and $100$ when $\myubar{$\chi$}{1.5pt}_i=0.5$.} \label{fig:costFunctions} \end{figure} \subsection{Accuracy of the upper bound} We first illustrate the accuracy of the upper bound \cref{eq:upperBound} on the decay rate. We use an activity-driven network with $n=250$ nodes and study the following two cases: \begin{enumerate}[labelindent=\parindent, leftmargin=*, label=Case \arabic*., ref=Case \arabic*] \item\label{case:uniform} Activity rates following a uniform distribution over $[0, 10^{-2}]$; \item\label{case:power} Activity rates following a probability distribution $F(a)$ that is proportional to $a^{-2.8}$ in the interval~$[10^{-3}, 1]$ and equal to zero elsewhere~\cite{Perra2012}. \end{enumerate} We assume that both the adaptation factors~$\chi_i$ and acceptance rates~$\pi_i$ follow a uniform distribution over $[0, 1]$. For various values of the infection rate $\beta$, the recovery rate~$\delta$, and $m$, we compute the decay rate~$\alpha$ from numerical simulations of the model, as well as its upper bound~$\alpha_{\rm u}$. To compute the decay rate~$\alpha$, we use Monte Carlo simulations.
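The upper bound, by contrast, can be evaluated directly from \cref{eq:upperBound,defn:anglers}. The following Python sketch illustrates this for \labelcref{case:uniform}-style parameters; the numerical values are illustrative assumptions, and we take $\bar m = m/n$, consistent with the remark after \cref{cor:SIS}.
\begin{verbatim}
# A sketch of evaluating the upper bound alpha_u (eq:upperBound and
# defn:anglers); all parameter values here are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, m, beta, delta = 250, 10, 0.8, 0.5
a = rng.uniform(1e-4, 1e-2, n)      # activity rates (Case 1-like)
chi = rng.uniform(0.0, 1.0, n)      # adaptation factors
pi = rng.uniform(0.0, 1.0, n)       # acceptance rates
mbar = m / n                        # assumed: \bar m = m/n

avg = lambda x, w: np.mean(w * x)   # weighted averages <.>_a, <.>_{a^2}
s = avg(chi, a) + avg(pi, a) - mbar * avg(chi * pi, a**2)
kappa = 0.5 * (s + np.sqrt(s**2
        + 4 * (avg(chi * pi, a**2) - avg(chi, a) * avg(pi, a))))
alpha_u = 1 - delta + kappa * mbar * n * beta
print("upper bound on the decay rate:", alpha_u)
\end{verbatim}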
For each triple $(\beta, \delta, m)$, we run \num[group-separator={,}]{10000} simulations of the activity-driven A-SIS model with all nodes being infected at time $t=0$. From these simulations, we estimate the probability vector~$p(t)$ for each $t=0$, $1$, $2$, $\dotsc$ until the norm $\lVert p(t) \rVert$ falls below $0.1$. We then estimate the decay rate by $\alpha = \max_{t}\lVert p(t) \rVert^{1/t}$, i.e., $\log\alpha = \max_{t}t^{-1}\log \lVert p(t) \rVert$. In \cref{fig:analysisUniformComparison}, we let $\beta = 0.8$ and compare the decay rates and their upper bounds in \labelcref{case:uniform} for various values of $\delta$ and $m$. We confirm that $\alpha_{\rm u}$ bounds the numerically obtained decay rates. The discrepancy $\alpha_{\rm u}-\alpha$ increases with $m$. To further examine how the discrepancy depends on the parameters, we present the discrepancy for various values of $\beta$, $\delta$, and $m$ in \cref{fig:analysisUniformErrors}. Besides the aforementioned dependence of the discrepancy on $m$, we also see that the discrepancy tends to be large when $\beta$ is large. We observe the same trend in the case of the power-law distribution of the activity rate (\labelcref{case:power}; shown in \cref{fig:analysisPowerComparison,fig:analysisPowerErrors}). \subsection{Optimal resource distribution} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{fig7.pdf} \caption{Optimal investments in adaptation factors (circles) and acceptance rates (squares) in \labelcref{case:uniform}. Left column: $m=2$, middle column: $m=10$, right column: $m=50$. Top row: $\myubar{$\ap$}{.25pt}_i = 0.2$, middle row: $\myubar{$\ap$}{.25pt}_i = 0.7$, bottom row: $\myubar{$\ap$}{.25pt}_i = 0.9$.} \label{fig:bc:random} \end{figure} We numerically illustrate our framework to solve the optimal resource allocation problems developed in \Cref{sec:optimizaiton}. We assume $\bar \chi_i = 1$ and $\bar \pi_i = 1$ in the box constraints~\cref{eq:boxConstraints}. We use the following cost functions (similar to the ones in~\cite{Preciado2014}): \begin{equation}\label{eq:costFunctions} f_i(\chi_i) = (1-\ubar \chi_i)\frac{\chi_i^{-p} - 1}{{\ubar \chi}_i^{-p} - 1},\quad g_i(\pi_i) = (1-\ubar \pi_i)\frac{\pi_i^{-q} - 1}{{\ubar \pi}_i^{-q} - 1}. \end{equation} These cost functions satisfy \cref{assm:}. Parameters $p, q>0$ tune the shape of the cost functions as illustrated in \cref{fig:costFunctions}. Because the cost functions are normalized as $f_i(\ubar \chi_i)=1-\ubar \chi_i$, $f_i(\bar \chi_i)=f_i(1)=0$, $g_i(\ubar \pi_i)=1-\ubar \pi_i$, and $g_i(\bar \pi_i)=g_i(1)=0$, the maximum adaptation $(\chi_i, \pi_i) = (\ubar \chi_i, \ubar \pi_i)$ ($1\leq i \leq n$) in the network is achieved with the budget \begin{equation} C_{\max} = 2n - \sum_{i=1}^n (\ubar \chi_i + \ubar \pi_i). \end{equation} As in our previous simulations, we consider the activity-driven A-SIS model over a network with $n=250$ nodes. We let $m$ be either $2$, $10$, or $50$. Let $\ubar \chi_i = 0.8$. We let the value of $\ubar \pi_i$ be either $0.2$, $0.7$, or $0.9$, and use $p=q=0.01$ for the cost functions~\cref{eq:costFunctions}. We use the fixed budget $\bar C = C_{\max}/4$. For each pair $(m, \ubar \pi_i)$, we determine the adaptation factors and acceptance rates for the cost-constrained optimal resource allocation problem (\cref{prb:}) by solving the geometric program shown in \cref{thm:bc}.
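As an illustration of how \cref{thm:bc} can be set up in practice, the sketch below encodes the optimization problem \cref{eq:opt} with the cost functions \cref{eq:costFunctions} in CVXPY. The package choice and the network parameters are assumptions made for this example, not part of our pipeline; note that $f_i^+(\chi_i) = c_i\chi_i^{-p}$ and $f_i^- = c_i$ with the constant $c_i = (1-\ubar\chi_i)/(\ubar\chi_i^{-p}-1)$, and analogously for $g_i$.
\begin{verbatim}
# A minimal sketch of the cost-constrained GP of Theorem thm:bc;
# the network size and parameter draws are illustrative.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m, p, q = 50, 10, 0.01, 0.01
a = rng.uniform(1e-4, 1e-2, n)           # activity rates
mbar = m / n
chi_lb, pi_lb = np.full(n, 0.8), np.full(n, 0.2)

chi = cp.Variable(n, pos=True)
pi = cp.Variable(n, pos=True)
lam_t = cp.Variable(pos=True)            # \tilde\lambda = 1 - \lambda
zeta = cp.Variable(pos=True)
eta = cp.Variable(pos=True)

avg_chi_a = cp.sum(cp.multiply(a, chi)) / n
avg_pi_a = cp.sum(cp.multiply(a, pi)) / n
avg_chipi_a2 = cp.sum(cp.multiply(a**2, cp.multiply(chi, pi))) / n

cf = (1 - chi_lb) / (chi_lb ** (-p) - 1)     # constants c_i for f_i
cg = (1 - pi_lb) / (pi_lb ** (-q) - 1)       # constants for g_i
cost_plus = (cp.sum(cp.multiply(cf, chi ** (-p)))
             + cp.sum(cp.multiply(cg, pi ** (-q))))
C_bar = (2 * n - np.sum(chi_lb) - np.sum(pi_lb)) / 4   # C_max / 4

constraints = [
    chi_lb <= chi, chi <= 1, pi_lb <= pi, pi <= 1,     # box constraints
    mbar**2 * lam_t * avg_chipi_a2 * zeta * eta <= 1,  # eq:quadConstraint
    zeta ** (-1) + lam_t + mbar * avg_chi_a <= 1,      # eq:zetaConstraint
    eta ** (-1) + lam_t + mbar * avg_pi_a <= 1,        # eq:etaConstraint
    cost_plus <= C_bar + np.sum(cf) + np.sum(cg),      # budget constraint
]
cp.Problem(cp.Minimize(lam_t ** (-1)), constraints).solve(gp=True)
print("optimal tilde-lambda:", lam_t.value)
\end{verbatim}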
\begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{fig8.pdf} \caption{Optimal investments in adaptation factors (red circles) and acceptance rates (blue squares) in \labelcref{case:power}. Left column: $m = 2$, middle column: $m = 10$, right column: $m = 50$. Top row: $\myubar{$\ap$}{.25pt}_i = 0.2$, middle row: $\myubar{$\ap$}{.25pt}_i = 0.7$, bottom row: $\myubar{$\ap$}{.25pt}_i = 0.9$.}\label{fig:bcpower} \end{figure} The optimal investments in the adaptation factors and acceptance rates (i.e., $f_i(\chi_i)$ and $g_i(\pi_i)$) are shown in \cref{fig:bc:random} for \labelcref{case:uniform}. We see that the smaller the lower limit of the acceptance rate $\ubar \pi_i$, the more we should invest in decreasing the acceptance rates. We also find that, in the case of $m=2$ and $10$, the resulting investment is almost independent of the activity rates of nodes. This trend disappears for larger values of~$m$. In the case of $m=50$, the optimal solution disproportionately invests in the nodes having high activity rates. This result reflects the structure of the optimization problem~\cref{eq:opt} for the following reason. If $m \ll n$, then $\bar m \ll 1$. Therefore, the set of constraints in the optimization problem~\cref{eq:opt} approximately reduces to the set of constraints~\cref{eq:boxConstraints}, \begin{gather} \zeta^{-1} + \tilde \lambda \leq 1, \\ \eta^{-1} + \tilde \lambda \leq 1, \end{gather} and \cref{eq:tildeCost<barC}. These four constraints do not involve activity rates, which leads to optimal adaptation factors and acceptance rates that are independent of the activity rates. On the other hand, for a large $m$, constraints~\cref{eq:quadConstraint,eq:zetaConstraint,eq:etaConstraint} involving the weighted sums~$\av{\chi}_{\!a}$, $\av{\pi}_{\!a}$, and $\av{\chi \pi}_{\!a^2}$ become tighter, rendering investments in high-activity nodes more effective. The optimal investments for \labelcref{case:power} are shown in \cref{fig:bcpower}. As in \labelcref{case:uniform}, the smaller $\ubar \pi_i$ is, the more the optimal solution invests in decreasing the acceptance rates. However, the dependence of the optimal solution on the value of $m$ is not as strong as in \labelcref{case:uniform}. \section{Conclusion} In this paper, we have studied epidemic processes taking place in temporal and adaptive networks. Based on the activity-driven network model, we have proposed the activity-driven A-SIS model, where infected individuals adaptively decrease their activity and reduce connectivity with other nodes to prevent the spread of the infection. In order to avoid the computational complexity arising from the model, we have first derived a linear dynamics that upper-bounds the infection probabilities of the nodes. We have then derived an upper bound on the decay rate of the expected number of infected nodes in the network in terms of the eigenvalues of a $2\times 2$ matrix. Then, we have shown that a small correlation between the two different adaptation mechanisms is desirable for suppressing epidemic infection. Furthermore, we have proposed an efficient algorithm to optimally tune the adaptation factors and acceptance rates in order to suppress the number of infected nodes in networks. We have illustrated our results by numerical simulations.
\section{Introduction}\label{sec:intro} It is essential to calibrate the true redshift distribution of galaxies in a photometric survey if the survey is to be utilized to its full potential. One application of survey data that requires a detailed understanding of the distribution of galaxies is weak gravitational lensing tomography. The shearing of the shapes of distant galaxies via weak gravitational lensing is a powerful cosmological probe that can be used to study the distribution of dark matter, the nature of dark energy, the formation of large scale structures in the universe, as well as fundamental properties of elementary particles and potential modifications to the general theory of relativity (recent studies include \cite{2008ARNPS..58...99H}, \cite{2009MNRAS.395..197T}, \cite{2009A&A...497..677K}, \cite{2009PhRvD..79b3520I}). Cosmic shear measurements statistically examine minute distortions in the orientations of high redshift galaxies, whose shapes have been sheared by intervening dark matter structures. Although weak gravitational lensing provides only an integrated measure of the intervening density field, using source populations at different redshifts permits some degree of three dimensional reconstruction, known as tomography. The distortions are small (at the 1\% level) and the intrinsic orientation of the source galaxies is unknown, thus large galaxy samples are required to map the density field and probe the growth of density fluctuations with precision. Existing cosmic shear measurements have already constrained the amplitude of the dark matter fluctuations at the 10\% level \cite{2004MNRAS.347..895H} \cite{2004ApJ...605...29R} \cite{2005MNRAS.359.1277M} \cite{2006A&A...452...51S} and there are many exciting galaxy survey proposals that will increase the available number of source galaxies by two orders of magnitude including DES, DUNE, Euclid, LSST, PanStarrs, SNAP, and Vista. Because these large galaxy surveys will have photometric rather than spectroscopic redshift identifications, the community has carefully attended to fine tuning the calibration of the photometric redshifts, minimizing biases and catastrophic errors \cite{2008MNRAS.386..781M}, \cite{2008MNRAS.390..118L}, \cite{2009MNRAS.398.2012F}, \cite{2009AJ....138...95X}, \cite{2009arXiv0908.4085G}. Unlike experiments that use the galaxy positions to directly trace the underlying dark matter distribution, such as baryon acoustic oscillation studies, weak lensing analyses do not require a precise redshift identification for each individual source galaxy. It is sufficient to accurately determine the redshift distribution of the sources. However, lensing measurements are extremely sensitive both to error and bias in the source distribution \cite{2002PhRvD..65f3001H}. Attaining an accurate source distribution will be crucial if weak lensing measurements are to be competitive with other cosmological probes in constraining the cosmological parameters. Another example where calibration of the true distribution of galaxies may be essential is in using the abundances and clustering of different galaxy populations to connect galaxies at late times to their potential progenitors at early times (as in e.g. \cite{2009ApJ...696..620C}). Such studies also utilize the luminosity functions of galaxies in each redshift slice, and sometimes divide these into different rest-frame color bins. 
To avoid potential systematics in inferences made about galaxy evolution, it will be necessary to know if some fraction of the population in a given photometric redshift bin is actually living at a different redshift, especially if there is an asymmetry in such errors that depends on color. Recently, an alternative approach to attaining an accurate source redshift distribution has been proposed in \cite{2008ApJ...684...88N}. This method is similar to cross-correlation techniques used in \cite{1985MNRAS.212..657P} and \cite{2006ApJ...644...54M}, and the idea has also been studied theoretically in \cite{2006ApJ...651...14S} and \cite{2009arXiv0902.2782B}. A similar technique was used in \cite{2009A&A...493.1197E} to check the redshift distribution for interlopers. Similar in spirit, the analysis of \cite{2009arXiv0910.2704Q} uses close angular pairs of photometric galaxies to constrain the photometric errors without the use of a spectroscopic sample. The cross-correlation method determines the photometric redshift distribution by utilizing the cross correlation of the galaxies in the photometric sample with an overlapping spectroscopic sample that traces the same underlying density field. One advantage of this approach is that the spectroscopic sample used to calibrate the photometric redshift distribution can be composed of bright, rare objects such as quasars or Luminous Red Galaxies (LRGs) whose spectra are relatively easy to obtain, and indeed may already exist in legacy data. Spectra could also be obtained for emission line galaxies (ELGs), which are easy to follow up but may not represent a fair subsample. Another advantage is that catastrophic redshift errors in the photometry do not systematically bias the redshift distribution estimate; they merely contribute to the noise. The cross-correlation method makes use of two observables, the line-of-sight projected angular cross-correlation between the photometric and spectroscopic samples $w_{ps}(\theta)$, and the three dimensional autocorrelation function of the spectroscopic sample $\xi_{ss}(r)$. By postulating a simple proportionality between the autocorrelation function of the spectroscopic objects and the three dimensional cross-correlation function between the two samples $\xi_{ps}(r) \propto \xi_{ss}(r)$, it is potentially possible to infer a very accurate redshift distribution for the photometric sample. This assumption is guaranteed to be valid if the spectroscopic sample is a sub-sample of the photometric population, but may be problematic if the two sets of tracers have different bias functions with respect to the dark matter. In this paper we develop a pipeline to apply the cross-correlation method. In section \ref{sec:theory} we review the theory and explain how the method works. We highlight its strengths and examine potential drawbacks and systematic effects. As a proof of concept, in section \ref{sec:sims} we use the halo model to populate N-body simulations with mock photometric and spectroscopic galaxy data to quantify the properties of this redshift distribution estimator. We examine the extent to which different bias functions interfere with the reconstruction of the true distribution of photometric galaxies. In section \ref{sec:disc} we discuss inherent tradeoffs, outline outstanding theoretical questions, and draw our conclusions. We leave a more detailed discussion of error propagation to the appendix.
\section{Theory}\label{sec:theory} The goal of this paper is to demonstrate via numerical simulations that two spatially overlapping samples, one photometric and one spectroscopic, can be combined to infer the redshift distribution of the photometric sample to very high accuracy. The redshift distribution is \begin{eqnarray} \phi_p(z)=\frac{dN_p}{dz\,d\Omega}\,\left[ \int_0^\infty \frac{dN_p}{dz\,d\Omega} \, dz \right]^{-1} \end{eqnarray} where $\frac{dN_p}{dz\,d\Omega}$ is the number of photometric galaxies per unit redshift, per steradian, and the quantity in brackets is the total number of galaxies (per steradian) in the sample, ensuring that $\phi_p(z)$ integrates to one. If a survey is divided into a number of redshift bins $z_i$, then $\phi_p(z_i)$ gives the fraction of the total number of galaxies that live in the $i^{th}$ bin. Suppose we observe the angular cross-correlation function between all the photometric galaxies and the spectroscopic galaxies in a particular bin $z_i$. This angular cross correlation function is related to the photometric redshift distribution that we are attempting to calibrate. \begin{eqnarray} w_{ps}(\theta,z_i)=\int_0^\infty \xi_{ps}(r(z,z_i,\theta))\,\phi_p(z) \, dz \end{eqnarray} Here $\xi_{ps}(r)$ is the three dimensional cross correlation function between the entire photometric sample and the spectroscopic galaxies that live in bin $i$, which is not observable because the redshifts of the photo-z sample are not known to sufficient accuracy to measure it. The key assumption of the cross correlation method is that $\xi_{ps}(r)\propto\xi_{ss}(r)$, where $\xi_{ss}(r)$, the 3D autocorrelation function of the spectroscopic calibrators, is observable. This is a reasonable assumption because on large (linear) scales, both $\xi_{ps}(r)$ and $\xi_{ss}(r)$ are related to the underlying dark matter power spectrum $\Delta_{\rm lin}^2(k)$ as \begin{eqnarray} \xi_{ss}(r) \approx \int_0^\infty \frac{dk}{k} b_s^2\Delta_{\rm lin}^2(k) j_0(kr) \\ \xi_{ps}(r) \approx \int_0^\infty \frac{dk}{k} b_s b_p\Delta_{\rm lin}^2(k) j_0(kr) \end{eqnarray} where $b_s$ and $b_p$ are the linear biases of the spectroscopic and photometric samples and $j_0$ is a spherical Bessel function. In asserting the proportionality, it is implicitly assumed that these biases are scale independent and that they evolve similarly with redshift. The former assumption is valid unless the correlation functions are being measured on scales smaller than $\sim 1$ Mpc/$h$, while the latter assumption may in fact present some difficulty unless the spectroscopic objects are a fair subsample of the photometric population. This is because in real life, the photometric sample may be apparent magnitude limited. Thus, the population of galaxies being examined at high redshift may be systematically brighter, rarer, more biased objects than those at low redshift, so the bias can be expected to evolve in a way that will be difficult to reliably calibrate. One principal goal of this paper is to examine the extent to which this impacts the cross correlation method, particularly whether the systematic biases involved are substantial compared to the resolution and accuracy of the photometric distribution that is recovered. To be very specific, on linear scales the relationship between the cross correlation function and the (observable) autocorrelation function of the spectroscopic sample is given in equation \ref{eqn:xiprop} (further corrections are required for translinear scales).
\begin{eqnarray}\label{eqn:xiprop} \xi_{ps}(r)=\frac{b_p}{b_s}\xi_{ss}(r) \end{eqnarray} Thus we may write \begin{eqnarray}\label{eqn:wpsdef} w_{ps}(\theta,z_i)=\int_0^\infty \frac{b_p(z)}{b_s(z)}\xi_{ss}(r(z,z_i,\theta))\,\phi_p(z) \, dz \end{eqnarray} Since $b_s(z)$ can be fit with the spectroscopic data, this means that we can only invert the relation to solve for the product $b_p(z)\phi_p(z)$ in terms of observable quantities. This degeneracy cannot be resolved by measuring the angular correlation function of the photometric sample. We can express $w_{pp}$ in terms of $\xi_{ss}$ \begin{eqnarray} &w_{pp}(\theta)= \hspace{7cm} \nonumber \\ &\int dz_1 \int dz_2\, \phi_p(z_1) \phi_p(z_2)\, \left(\frac{b_p(z_1) b_p(z_2)}{b_s(z_1) b_s(z_2)} \right) \xi_{ss}(\theta,z_1,z_2) \end{eqnarray} Changing variables to a central redshift and a $\Delta z=z_1-z_2$, and taking note that $\xi_{ss}$ vanishes for large $\Delta z$, we find that \begin{eqnarray} w_{pp}(\theta) \propto \int dz \, \phi_p^2(z) \left( \frac{b_p^2(z)}{b_s^2(z)}\right) \xi_{ss}(\theta,z) \end{eqnarray} In terms of known observables, this relation can be inverted to determine the product $b_p^2(z)\phi_p^2(z)$. Unfortunately, this quantity has the same direction of degeneracy as the product $b_p(z)\phi_p(z)$. Thus the observable $w_{pp}(\theta)$ can only be used to improve the accuracy to which the product is determined; it cannot be used to break the degeneracy between the large scale bias and the selection function. In order to obtain $\phi_p(z)$ it will be necessary either to appeal to some model of the bias evolution or to find another observable that can be used to break the degeneracy. This is significant because estimators of e.g. the mean redshift of a sample will be affected by assuming a functional form for the bias that is incorrect. If the method in fact recovers the product $b(z)\phi(z)$, and we divide out an assumed bias $b_{\rm est}(z)$, then our estimator of the mean redshift becomes \begin{eqnarray} \bar{z}_{\rm est}=\int_0^\infty z\, \frac{b_{\rm true}(z)}{b_{\rm est}(z)}\, \phi(z) \, dz \end{eqnarray} while the true $\bar{z}$ is \begin{eqnarray} \bar{z}_{\rm true}=\int_0^\infty z\, \phi(z)\, dz \end{eqnarray} It is not unreasonable to suspect that the bias may not be particularly smooth in its transitions if the sample of galaxies accessible to a photometric survey shifts abruptly as some redshift threshold is crossed. One example of a very rapidly changing bias function can be seen in table 2 of \cite{2009MNRAS.397.1862P} where the bias of a sample of LRG galaxies is computed in several photometric bins. At a redshift of $z\sim0.35$, the bias jumps dramatically from 1.77 to 2.36, and back down again (somewhat more smoothly) to 1.9 at higher $z$. As a quick illustrative example, we take the two functional forms of $\phi_p(z)$ used later in this paper (see sec. \ref{sec:mocks}) and compute the error in $\bar{z}$ that occurs if we assume a smooth transition in the galaxy bias from 1.7 to 1.9 in $b_{\rm est}$, but allow the true bias $b_{\rm true}$ to jump to 2.3 in between. We find that the fractional error in the mean redshift is 6\% and 11\% if the interval is $0<z<1$ and the jump is placed at $z=0.8$. If the jump is placed at $z=0.3$, the fractional error is 0.05\% and 0.2\%, which is somewhat less significant, since there is substantially less volume at $z=0.3$.
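The following sketch reproduces this style of estimate numerically. The toy $\phi(z)$, the smooth form assumed for $b_{\rm est}(z)$, and the width of the jump in $b_{\rm true}(z)$ are illustrative assumptions, so the number it prints will not exactly match those quoted above.
\begin{verbatim}
# A minimal numerical check of the mean-redshift bias from an
# incorrect bias model, following the z-bar estimator above.
import numpy as np

z = np.linspace(0.0, 1.0, 2001)
dz = z[1] - z[0]
integrate = lambda f: np.sum(f) * dz       # simple Riemann sum

# toy double-Gaussian phi(z), normalized to integrate to one
phi = (np.exp(-(z - 0.0)**2 / (2 * 0.15**2)) / 0.15
       + np.exp(-(z - 0.8)**2 / (2 * 0.16**2)) / 0.16)
phi /= integrate(phi)

b_est = 1.7 + 0.2 * z                      # assumed smooth bias, 1.7 -> 1.9
b_true = np.where(np.abs(z - 0.8) < 0.05,  # assumed jump to 2.3 near z = 0.8
                  2.3, b_est)

zbar_true = integrate(z * phi)
zbar_est = integrate(z * (b_true / b_est) * phi)
print("fractional error:", abs(zbar_est - zbar_true) / zbar_true)
\end{verbatim}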
\section{Tests with mock data}\label{sec:sims} \subsection{Simulations and mock galaxy catalogs}\label{sec:mocks} We use the halo model to populate N-body cold dark matter simulations with mock galaxies to test the cross-correlation method. The simulations compute the evolution of large scale structure in a periodic, cubical box of side $1$ Gpc$/h$ using a Tree-PM code \cite{2002ApJS..143..241W}. There are $1024^3$ dark matter particles of mass $6 \times 10^{10} M_\odot/h$. The randomly generated Gaussian initial conditions are evolved from a starting redshift of $z=75$. The Plummer softening is $35$ kpc$/h$ (comoving). Halo catalogs are constructed from this simulation using a Friends of Friends algorithm \cite{1985ApJ...292..371D} with a linking length of b = 0.168 in units of the mean inter-particle spacing. There are approximately 7.5 million halos of mass greater than $5 \times 10^{11} M_\odot/h$ resolved with $\sim 8$ or more dark matter particles. The galaxy catalogs are constructed by populating these halos according to the halo-model prescription. It is assumed that very small halos host no galaxies. Halos that cross a mass threshold $M_{\rm min}$ are assumed to host one central galaxy. Halos of much higher mass are assumed to have formed through mergers of smaller halos and will host both central and satellite objects. The mean number of galaxies in a halo of mass $M$ is given by \begin{eqnarray}\label{eqn:hod} \left<N_{\rm gal}(M)\right>=\Theta(M-M_{\rm min}) \left(1+\frac{M-M_{\rm min}}{10\,M_{\rm min}}\right) \end{eqnarray} The position of the central galaxy is assumed to be at the halo center defined by the position of the most gravitationally bound dark matter particle. The number of satellite galaxies in any particular halo is drawn from a Poisson distribution, and the satellites are assumed to trace the dark matter. We use this prescription to populate the simulation with several different galaxy samples. One population, used as the mock photometric catalog, has a relatively low value of $M_{\rm min}=5\times10^{11} M_\odot/h$; these are fairly common galaxies with a low value of the large-scale bias $b_p$ with respect to the dark matter. Although we know the true redshifts of these objects from the simulations, we do not use any redshift information for the photometric sample when testing the reconstruction algorithm developed here. The other populations we have created are mock spectroscopic samples, with values of $M_{\rm min}= 1\times10^{12} M_\odot/h$ and $ 7\times 10^{12} M_\odot/h$, which generate much rarer mock galaxies with higher biases $b_s$ with respect to the underlying dark matter. For some tests of the cross-correlation method, we also use a fair subsample of 50\% or 80\% of the mock photometric population to define the spectroscopic sample. To test the cross-correlation method, we also need to impose both a selection function and an observation window on the photometric sample. For convenience, we choose to perform the reconstruction in bins of comoving distance $\chi_i$ rather than redshift bins $z_i$, but the method can be used with either choice of bins. We place the observer in the center of one face of our simulation cube, and assume that (s)he observes a conical volume with a 12 degree opening angle. The cone stretches the length of the box ($1$ Gpc$/h$) along the line of sight.
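As a concrete illustration of this prescription, the sketch below draws halo occupation numbers from eqn.~\ref{eqn:hod}. The power-law halo masses are a stand-in assumption for the Friends of Friends catalog, which we do not reproduce here.
\begin{verbatim}
# A sketch of the halo-occupation sampling in eqn:hod:
# one central above M_min, plus Poisson satellites with mean
# (M - M_min) / (10 M_min); halo masses here are a toy power law.
import numpy as np

rng = np.random.default_rng(1)
M_min = 5e11                                # threshold [M_sun/h]
M = 1e11 * (1.0 + rng.pareto(1.5, 200000))  # toy halo masses

hosts = M >= M_min                          # halos above threshold
mean_sat = np.where(hosts, (M - M_min) / (10.0 * M_min), 0.0)
n_sat = rng.poisson(mean_sat)               # Poisson satellite counts
n_gal = hosts.astype(int) + n_sat           # central + satellites
print("halos:", M.size, " occupied:", hosts.sum(),
      " galaxies:", n_gal.sum())
\end{verbatim}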
We adopt a toy model for the selection function, the sum of two Gaussians, which defines the fraction of galaxies ``detected'' $N_{\rm keep}(\chi)/N(\chi)$ at each of $70$ slices in comoving distance $\chi$ through the box. \begin{eqnarray}\label{eqn:sf} \frac{N_{\rm keep}(\chi)}{N(\chi)}\propto \hspace{5.5 cm} \nonumber \\ \frac{1}{\sqrt{2\pi\sigma_1^2}} \, e^{-(\chi-\chi_1)^2/2\sigma_1^2} + \frac{1}{\sqrt{2\pi\sigma_2^2}} \, e^{-(\chi-\chi_2)^2/2\sigma_2^2} \end{eqnarray} We have normalized this quantity so that the maximum value is 1, and we refer to it as the ``detection fraction'' later in the paper. We present two models for comparison in this paper, $[\chi_1, \sigma_1,\chi_2, \sigma_2]=[0,0.15,0.8,0.16]$ and $[0.3,0.07,0.7,0.10]$. When the number of galaxies in a slice is small, it matters for reasons of cosmic variance that we apply the selection function before imposing the survey geometry. The shape of the final photometric redshift distribution is affected both by the selection function and the geometry of the mock observation. This is illustrated graphically in figure \ref{fig:phicartoon}, which shows the first of the two selection functions. \begin{figure} \begin{center} \resizebox{3.5 in}{!}{\includegraphics{fig1.jpg}} \end{center} \caption{The final photometric redshift distribution is affected by the selection function and also the survey geometry.} \label{fig:phicartoon} \end{figure} Once we have mock photometric and spectroscopic samples in place, we compute in each bin $\chi_i$ the autocorrelation function of the spectroscopic sample $\xi_{ss}(r,\chi_i)$, and the angular cross correlation between the spectro-z objects in the $i^{th}$ bin and the entire photometric sample $w_{ps}(\theta,\chi_i)$. We have used the algorithm of Landy and Szalay \cite{1993ApJ...412...64L} to compute the correlation functions. To compute $w_{ps}(\theta,\chi_i)$ we opt to measure the 3D correlation function $\xi_{ps}(r,\chi_i)$ and perform the integral of eqn. \ref{eqn:wpsdef} numerically, because the mock observation volume is small, which renders direct calculation of $w_{ps}(\theta,\chi_i)$ noisy. This will not be a problem for surveys that cover a large fraction of the sky. We are eager to test how sensitive the reconstruction of the photometric distribution function $\phi_p(\chi)$ may be to evolution of the bias $b_p$ that is not accounted for. We addressed this question analytically in section \ref{sec:theory}, but it is useful to examine the issue with simulations to see if the systematic biases that occur are significant compared to the error bars on the reconstructed distribution. Evolution of the bias can occur because of evolution in the underlying large scale structure, and it can occur because the population of photometric objects detected by an instrument evolves as a function of redshift, as expected in a magnitude limited sample. Our simulation volume is made of a single time-slice, so we cannot examine the effects of the first mechanism at present, but we expect the effects of the second mechanism to dominate the bias evolution. We have devised a simple method inspired by the results of \cite{2006MNRAS.371.1173V} and \cite{2009ApJ...696..620C} to introduce this type of bias evolution into our mock galaxy sample. Whereas before we applied the selection function in eqn.~\ref{eqn:sf} randomly to the galaxies in each slice, now we choose to keep preferentially those galaxies that live in the largest halos.
We assume that the brightest galaxies live in the biggest halos and that central galaxies are brighter than satellite galaxies at a given redshift. First, we rank order the halos in the slice. We determine the number of galaxies to be kept from eqn. \ref{eqn:sf}, and place one in the center of each halo beginning with the halo of highest mass. If the number of galaxies exceeds the number of halos in the slice, we begin placing satellite galaxies. We again start with the largest halo and continue populating each halo with satellites until we have placed all the galaxies. This procedure generates a mock catalog of photometric galaxies whose clustering strength will depend strongly on the selection function. The effect of this procedure on the large-scale bias is somewhat complicated. In the regime where only satellites in the lowest mass halos are being eliminated, if the detection fraction from eqn. \ref{eqn:sf} is high, the bias will be close to the bias of the whole population. However, as the detection fraction gets lower, the sample will be more highly biased with respect to the dark matter because only satellites in the largest halos are being kept, thus weighting the largest halos more heavily than the smaller halos in the clustering measurement. However, if the detection fraction is so low that all of the satellites are eliminated and the population consists only of centrals, the large-scale bias will be lower than for the whole population, because more weight from the largest halos has been discarded than from the smaller halos. In this analysis we are principally in the second regime, which means the large-scale bias will tend to follow the shape of the selection function. This is demonstrated in fig. \ref{fig:bevol}, which plots the correlation function measurement of the mock photometric sample in each conical slice. The top panel shows the result if the galaxies are eliminated randomly, and the bottom panel shows the results from the procedure we just described. The inset shows the variation in the value of $\xi_{pp}(r)$ along the line of sight (where we have picked a particular scale, indicated with a vertical line). On the top we show that compared to the last slice (marked with blue squares), the value of $\xi_{pp}(r)$ varies by some tens of percent, and is a relatively flat function of the line-of-sight distance. The bottom inset, on the other hand, clearly reflects the shape of the redshift distribution function used to create the sample (plotted later in the bottom panel of fig.~\ref{fig:realist}), and shows variations in the normalization of $\xi_{pp}(r)$ of up to 150\%. \begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig2.png}} \end{center} \caption{The 3D correlation function of the photometric sample in each slice along the line of sight. The closest and farthest slices plotted are labeled with symbols. The inset shows the relative variation among the curves at a single scale. In the top panel there is much less variation among the curves than in the bottom panel. The inset on the bottom clearly reflects the shape of the selection function (and hence the bias) used to create the sample. } \label{fig:bevol} \end{figure} We emphasize that we do not expect this method to quantitatively capture the bias evolution in a real galaxy survey, but we expect it to be qualitatively similar, and as long as the bias evolves significantly, it will be useful in testing the magnitude of the effect on the reconstruction.
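A sketch of this ordered selection for a single slice is given below; the array layout is our own, with the galaxy count per halo taken from eqn.~\ref{eqn:hod} and the number of galaxies to keep from eqn.~\ref{eqn:sf}.
\begin{verbatim}
# A sketch of the ordered (bias-inducing) selection: keep n_keep
# galaxies per slice, filling halos from the most massive down,
# centrals first and then satellites.
import numpy as np

def ordered_select(halo_mass, halo_ngal, n_keep):
    """halo_mass: halo masses in the slice; halo_ngal: galaxies
    hosted by each halo; n_keep: number of galaxies to retain."""
    order = np.argsort(halo_mass)[::-1]    # rank order the halos
    kept = np.zeros(len(halo_mass), dtype=int)
    # first pass: one central per halo, largest halos first
    for i in order:
        if n_keep == 0:
            return kept
        if halo_ngal[i] > 0:
            kept[i] = 1
            n_keep -= 1
    # second pass: fill each halo with its satellites in turn,
    # again starting from the largest halo
    for i in order:
        add = min(halo_ngal[i] - kept[i], n_keep)
        kept[i] += add
        n_keep -= add
    return kept
\end{verbatim}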
We will refer to this method of applying the selection function as the Ordered Selection Function (OSF), and the simple random method as the Random Selection Function (RSF). By comparing the reconstruction in the two cases, we determine how important it is to account for evolution in the bias. \subsection{Pipeline} Once the cross correlation $w_{ps}(\theta,\chi_i)$ and autocorrelation function $\xi_{ss}(r,\chi_i)$ have been measured, a procedure is needed to perform the inversion of the integral of eqn. \ref{eqn:wpsdef} to obtain the line of sight distribution of the photometric sample in some bins $\chi_j$, and to obtain error bars on those measurements. For a single angular scale $\theta$ and bin $i$ in spectroscopic redshift we approximate the integral in eqn.~\ref{eqn:wpsdef} as a discrete sum over $N$ redshift bins (ignoring bias evolution for the moment). \begin{eqnarray}\label{eqn:wpssum} w_{ps}(\theta,\chi_i)= \int_0^\infty \xi_{ss}(r(\chi,\chi_i,\theta))\,\phi_p(\chi) \, d\chi \nonumber \\ \approx \sum_{j=0}^{N-1} X_{ij} \phi_{p,j} \label{eqn:ximat} \end{eqnarray} \begin{eqnarray} &X_{ij}=\frac{b_p}{b_s}\xi_{ss}\left(\chi_i,r=\sqrt{\left(\theta \chi_i \right)^2+\left(\chi_j-\chi_i\right)^2}\right) \end{eqnarray} The observation will be performed in multiple angular bins $\theta$. The values of $r$ in $X_{ij}$ will not be at the exact positions where we have made the measurement of $\xi_{ss}$ (in our case in 60 bins between 5 and 50 comoving Mpc/h). If $r$ falls within the domain of our data, we interpolate $\xi_{ss}$ with a spline; if it is larger than the biggest scale we measure, the value of $\xi_{ss}$ is extrapolated using a fit of the form $r^{-2}$ to a subset of data points (larger than 15 Mpc/h) measured in the volume. If the value of $r$ is less than the minimum value of $r$ measured, we reject the entire $\theta$ bin, since we do not trust the method for $\xi_{ss}$ measured on very small scales. Collecting all the observed $w_{ps}$ values in a single vector ${\bm w}$ (one element for each unique combination of $\chi_i$ and $\theta$), the solution for $\phi_p(\chi_j)$ will be obtained by inverting the relation \begin{eqnarray}\label{eqn:linear} {\bm w}={\bf X} \cdot {\bm \phi} \end{eqnarray} Here, ${\bf X}$ is a matrix whose elements are given by eqn. \ref{eqn:ximat}, and whose dimension is \begin{eqnarray} {\rm (\#\ spec\ bins\ } i\ \times\ {\rm \#\ theta\ bins)} \times N \nonumber \end{eqnarray} $\bm \phi$ will be a vector of length $N$, and $\bm w$ will be a vector of length (\# spec bins $i \times$ \# theta bins). It is significant that measurements at different values of $\theta$ can mix in the inversion; since they are correlated, it is important that they do so. We do not know the pre-factor $b_p/b_s$, but as long as it is assumed to be constant (as in the special case where the spectroscopic sample is a {\it fair} subsample of the photometric population) this is not relevant. The reconstructed $\phi_p(\chi_j)$ will not be correctly normalized after the matrix inversion because there is no information in eqn. \ref{eqn:wpssum} about the total number of galaxies in the photometric sample. The normalization of $\phi_p(\chi_j)$ will be set by requiring that it integrate to 1. In principle, solving equation \ref{eqn:linear} for $\phi_p(\chi_j)$ is a simple matter of inverting a matrix, but in practice doing so is numerically unstable and furthermore errors in the observables ${\bm w}$ and ${\bf X}$ must be accounted for in the solution.
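The assembly of ${\bf X}$ (up to the constant prefactor $b_p/b_s$) might look as follows. This is a sketch under the description above: the spline and the $r^{-2}$ tail fit follow the text, while for brevity a single measured $\xi_{ss}(r)$ is used for every spectroscopic bin and the rejection of $\theta$ bins probing scales below the measured minimum is omitted.
\begin{verbatim}
# A sketch of building X_ij (eqn:ximat) from a measured xi_ss,
# interpolating with a spline in range and extrapolating with an
# r^-2 power law fit to the large-scale (> 15 Mpc/h) points.
import numpy as np
from scipy.interpolate import CubicSpline

def build_X(chi_spec, chi_phot, theta, r_meas, xi_meas):
    """r_meas must be sorted; one theta bin at a time."""
    spline = CubicSpline(r_meas, xi_meas)
    fit = r_meas > 15.0                          # large-scale tail
    amp = np.mean(xi_meas[fit] * r_meas[fit]**2) # xi ~ amp * r^-2
    X = np.zeros((len(chi_spec), len(chi_phot)))
    for i, ci in enumerate(chi_spec):
        r = np.sqrt((theta * ci)**2 + (chi_phot - ci)**2)
        inside = r <= r_meas.max()
        X[i, inside] = spline(r[inside])
        X[i, ~inside] = amp / r[~inside]**2      # extrapolated tail
    return X
\end{verbatim}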
Since ${\bm w}$ is a projection through the same dark matter structures traced by ${\bf X}$, there will be non-negligible correlations between the errors in the two observables. To extract $\phi_p(\chi)$, we write down the expression for the $\chi^2$ statistic, and minimize it. Typically we are attempting to reconstruct $\phi_p(\chi_j)$ in as many bins $N$ as possible. There is often insufficient information in the correlation functions to completely constrain $\phi_p$ in the chosen number of bins, thus it becomes necessary to stabilize against solutions that have highly anti-correlated adjacent bins. Since it is reasonable to assume that the true distribution will not be highly oscillatory, we adopt a smoothness prior to regularize the solution, and add it to our expression for $\chi^2$ below. \begin{eqnarray}\label{eqn:chisq} \chi^2=({\bm w}-{\bf X}{\bm \phi})^T {\bf C}^{-1}_{\bm \phi}({\bm w}-{\bf X}{\bm \phi}) + \lambda \,{\bm \phi}^T {\bm B}^T {\bm B} {\bm \phi} \end{eqnarray} Here ${\bf C}_{\bm \phi}$ is the covariance matrix incorporating errors in both ${\bm w}$ and ${\bf X}$, and the subscript denotes that it is an explicit function of ${\bm \phi}$. For purposes of the present analysis, however, we will assume that the spectroscopic correlation function is perfectly determined, and we will propagate errors from $w_{ps}$ only; specifically, we take ${\bf C}^{-1}_{\bm \phi}={\bf C}^{-1}$, the inverse covariance matrix of $w_{ps}(\theta,\chi_i)$. This is not a good approximation as we will soon show, and we refer the reader to appendix \ref{app:error} for a description of the full error propagation from ${\bf C}^{-1}_{\bm \phi}$ into errors on $\phi_p(\chi_j)$. The reader should consider all errors reported on $\phi_p(\chi_j)$ as lower limits on the total error. The regularization scheme we have adopted is described in detail in section 18.5 of \cite{2002nrc..book.....P}. The intention is to add a term to $\chi^2$ that gets large when neighboring points have widely different values. Minimization of $\chi^2$ will then tend to solutions that do not have anti-correlated neighboring points. The matrix ${\bm B}$ is given by \begin{eqnarray} {\bm B}=\left( {\begin{array}{ccccccccc} -1 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & \ddots & & & & \vdots \\ 0 &\cdots & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\ 0 & \cdots & 0 & 0 & 0 & 0 & 0 & -1 & 1 \\ \end{array} } \right) \end{eqnarray} Note that ${\bm B}$ has one fewer row than column. The factor $\lambda$ should be chosen such that the first and second terms in $\chi^2$ contribute roughly equal weight. This can be approximately arranged if $\lambda$ is taken to be \begin{eqnarray} \lambda=\frac{{\rm Tr}({\bm X}^T {\bm C}^{-1}{\bm X})}{{\rm Tr}({\bm B}^T{\bm B})} \end{eqnarray} but in practice, the weight in $\chi^2$ will also depend on the solution and the values of $w_{ps}$. We find that the quality of the reconstruction depends on how well the two terms in eqn. \ref{eqn:chisq} are balanced. Since this depends on the answer, we opt to refine the value of $\lambda$ and re-compute the solution iteratively as follows. \begin{itemize} \item Compute a tolerance parameter defined from the two terms in $\chi^2$ as ${\rm tol}=1-({\rm term\ 1}/{\rm term\ 2})$. \item If ${\rm tol} >0$, we increase $\lambda$ by a factor of 10; if ${\rm tol} <0$, we decrease it by a factor of 10.
\item We recompute the solution and the tolerance parameter ${\rm tol}$. \item If the absolute value of ${\rm tol}$ has decreased, we repeat the refinement of $\lambda$; otherwise we exit and keep the previous solution. \end{itemize} In practice, the procedure usually requires only one refinement of $\lambda$. We observe that changing the algorithm to have smaller steps in $\lambda$ (e.g. a factor of 2 rather than 10) does not improve the solution, and occasionally over-smooths it. We emphasize that while we have identified an algorithm that works, we have not optimized the application of a smoothness prior. Since the solution is moderately sensitive to the smoothing, care should be taken to understand the properties of the smoothing before applying this technique to real data. We have not studied the effects of different smoothing algorithms because it is likely that the need for a smoothness prior will be eliminated in future analyses by using the photometric probability distributions as a prior instead. Since we have no mock photometric redshift probabilities, examining that technique is beyond the scope of this paper, but will be the subject of further research. To minimize $\chi^2$ we take the derivative with respect to ${\bm \phi}$ and set it equal to zero. Taking ${\bm C}^{-1}$ to be independent of ${\bm \phi}$ and noting that it is symmetric we find \begin{eqnarray} -2{\bm w}^T {\bm C}^{-1} {\bm X} + 2 {\bm \phi}^T {\bm X}^T {\bm C}^{-1} {\bm X} +2\lambda {\bm \phi}^T{\bm B}^T{\bm B} = 0 \hspace{0.5cm} \label{eqn:minchi2} \end{eqnarray} \begin{eqnarray} {\bm X}^T {\bm C}^{-1}{\bm w} = \left[{\bm X}^T {\bm C}^{-1} {\bm X} +\lambda {\bm B}^T{\bm B}\right] {\bm \phi} \\ \label{eqn:answer} {\bm \phi}= \left[{\bm X}^T {\bm C}^{-1} {\bm X} +\lambda {\bm B}^T{\bm B}\right]^{-1} {\bm X}^T {\bm C}^{-1}{\bm w} \end{eqnarray} The covariance matrix of the recovered ${\bm \phi}$ is given by \begin{eqnarray}\label{eqn:phierrs} {\rm Cov}[\bm \phi] \propto \left[{\bm X}^T {\bm C}^{-1} {\bm X}+\lambda {\bm B}^T{\bm B}\right]^{-1} \end{eqnarray} To recover the constant of proportionality, we will need to rescale the elements of the covariance matrix by the square of the factor used to rescale $\bm \phi$ (since we renormalize such that $\phi(\chi)$ integrates to 1). We caution that all matrix multiplications above are finite sum approximations to integral quantities, so care must be taken to ensure that $\phi$ and its error bars come out to scale. For clarity, let us refer to $w_{ps}(z_i)$ as a function of a continuous variable $z_i$ (which will label the spectroscopic bins). $\xi(z_i,\chi_i)$ will be a function of both $z_i$ and another continuous variable $\chi_i$ (which will label the bins in which $\phi$ is reconstructed). We are considering a single value of $\theta$, and $z$ and $\chi$ can represent either redshifts or comoving distances; we simply use both letters to distinguish them. Ignoring regularization, the continuous expression for $\chi^2$ is then \begin{eqnarray}\label{eqn:contin} \chi^2=\int dz_1 dz_2\left[ \left(w_{ps}(z_1) - \int d\chi_1 \phi(\chi_1) \xi(z_1,\chi_1) \right) \right. \nonumber \\ \left. C^{-1}(z_1,z_2) \left(w_{ps}(z_2) - \int d\chi_2 \phi(\chi_2) \xi(z_2,\chi_2) \right) \right] \hspace{0.2cm} \end{eqnarray} To minimize $\chi^2$ we must set the functional derivative $\delta \chi^2/\delta \phi(\chi)=0$. We leave the details to the reader, but note that in eqn.~\ref{eqn:minchi2}, the factors of $\Delta z$ that correspond to integrals over $z$ cancel but the factor of $\Delta\chi$ does not.
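Putting the pieces together, the following is a minimal sketch of the regularized solution in eqn.~\ref{eqn:answer} together with a simplified version of the iterative refinement of $\lambda$ described above; the capped iteration count, the stopping rule, and the final normalization step are our own simplifications.
\begin{verbatim}
# A sketch of the regularized inversion with iterative refinement
# of the smoothing weight lambda; w, X, Cinv are measured inputs.
import numpy as np

def reconstruct_phi(w, X, Cinv, dchi=1.0):
    n = X.shape[1]
    # first-difference operator B (one fewer row than column)
    B = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    A, b, R = X.T @ Cinv @ X, X.T @ Cinv @ w, B.T @ B
    lam = np.trace(A) / np.trace(R)          # initial weight balance

    def solve(lam):
        phi = np.linalg.solve(A + lam * R, b)
        r = w - X @ phi
        term1, term2 = r @ Cinv @ r, lam * phi @ R @ phi
        return phi, 1.0 - term1 / term2      # tolerance parameter

    phi, tol = solve(lam)
    for _ in range(10):                      # usually one step suffices
        lam_new = lam * 10.0 if tol > 0 else lam / 10.0
        phi_new, tol_new = solve(lam_new)
        if abs(tol_new) >= abs(tol):
            break                            # keep the previous solution
        phi, tol, lam = phi_new, tol_new, lam_new
    return phi / (phi.sum() * dchi)          # normalize: integral = 1
\end{verbatim}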
\subsection{Redshift Distribution Reconstruction} In this section we present a series of reconstructions that examine various aspects of the problem and identify the potential difficulties in applying this method. We begin in fig. \ref{fig:one} with the simplest case and gradually add complexity. The heavy solid line shows the theoretical redshift distribution (selection function + geometry) that has been applied to the mock photometric catalog of galaxies with $M_{\rm min}=5 \times 10^{11}$ in eqn.~\ref{eqn:hod}. This redshift distribution corresponds to the selection function shown on the left of fig. \ref{fig:phicartoon}. We have chosen galaxies randomly (the RSF method described in section \ref{sec:mocks}) in applying the selection function. The resulting catalog has 54,000 galaxies. We have measured the detection fraction in each slice for this particular realization, and plot it with a thin wavy line in fig. \ref{fig:one}. The spectroscopic calibrating sample is taken to be a different population of galaxies with $M_{\rm min}=7 \times 10^{12}$, yielding around 5600 galaxies, i.e. a highly biased tracer of the density field compared to the photometric sample. We assume that both observables are measured with perfect accuracy. Since we have assumed perfect knowledge of the spectroscopic correlation function, we opt to measure it in the entire light cone, and use the result in each of the $i$ spectroscopic bins: $\xi_{ss}(r,\chi_i)=\xi_{\rm whole \, volume}(r)$. This is only justified because there is no evolution in the light cone, and the calibrators are not affected by the selection function, which changes along the line of sight. We perform the reconstruction by computing the matrices ${\bm X}$, ${\bm w}$, ${\bm C}^{-1}$, and ${\bm B}$ and using them in eqn.~\ref{eqn:answer}. The result is plotted in fig. \ref{fig:one} with points. Notice that the reconstruction follows the values of this realization rather than the theoretical redshift distribution used to create the sample. \begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig3.png}} \end{center} \caption{The photometric distribution $\phi_p(\chi)$ and its reconstruction. The x-axis is the comoving line-of-sight distance from the observer, who is located at the origin. The thick solid line shows the theoretical value of the distribution, the thin wavy line shows the particular realization in this simulation volume, and the points show the reconstructed solution.} \label{fig:one} \end{figure} Fig. \ref{fig:realist} reveals a more realistic picture. The lines and points in fig. \ref{fig:realist} are all the same as in fig. \ref{fig:one}. We now measure the autocorrelation of the spectroscopic sample $\xi_{ss}(r,\chi_i)$ in each of the conic sections, and use those measurements to perform the reconstruction. The top and bottom panels show two different choices of selection function, whose parameters are given in the caption. The top is difficult to reconstruct because it peaks at $\chi=0$, where there is no volume in the mock observation, and the bottom is a challenge because the feature coincides with the bin spacing, which is difficult to reconstruct in the presence of the smoothness prior. We have drastically decreased the number of bins, so that there are a reasonable number of calibrators in most of the bins. The normalization criterion, which comes from the condition that $\phi_p$ integrate to 1, becomes more sensitive to noise in the reconstruction when the number of bins is decreased.
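As an aside, the normalization step described after eqn.~\ref{eqn:phierrs}, in which $\phi$ is rescaled to integrate to 1 and the covariance matrix is rescaled by the square of the same factor, is a one-line operation. A minimal sketch, assuming equal-width reconstruction bins of width {\tt dchi}:

\begin{verbatim}
import numpy as np

def normalize_phi(phi, cov, dchi):
    # Finite-sum approximation to the condition int phi dchi = 1; the
    # covariance picks up the square of the rescaling factor.
    s = np.sum(phi) * dchi
    return phi / s, cov / s**2
\end{verbatim}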
In this plot and the plots that follow we set the normalization by hand so that we can illustrate other points; we set the maximum value of the reconstruction (points) equal to the maximum value of the theoretical input $\phi$ (thick solid line). While the reconstructions in fig. \ref{fig:realist} show the correct trends, they are not of sufficiently high quality to recover the bimodal behavior in the reconstructed selection function. We now briefly discuss the error bars in fig. \ref{fig:realist} obtained from eqn.~\ref{eqn:phierrs}. The ${\rm Cov}[\phi]$ matrix is not diagonal; we conservatively report $1/\sqrt{{\rm Cov}^{-1}[\phi]_{i,i}}$ as the error on the $i^{th}$ bin. These error bars represent a lower bound on the error because we have only propagated error from $w_{ps}$ and not from $\xi_{ss}$. The matrix ${\bm C}^{-1}$ in eqn.~\ref{eqn:phierrs} is the inverse covariance matrix of the cross-correlation measurement ${\bm w}$, and with sufficient volume can be estimated by dividing the observation into a number of bins and bootstrapping. Here, however, we opt to follow \cite{1980lssu.book.....P} and assume that on these scales the errors are dominated by Poisson noise. In this case, ${\bm C}$ is diagonal, with elements given by the inverse of the pair count, \begin{eqnarray} {\bm C}_{ii}=\frac{1}{n_{\rm pairs}}, \qquad n_{\rm pairs}=\bar{n}_s \bar{\Sigma}_p \chi_s^2 \Delta\chi_s \, \Omega \sin(\theta)\Delta\theta \end{eqnarray} We use survey parameters appropriate to large upcoming missions. We take $\bar{n}_s=1\times 10^5$ galaxies per $(Gpc/h)^3$, $\bar{\Sigma}_p=100$ photometric galaxies per square arcmin, and $\Omega=20,000$ square degrees on the sky. $\Delta\chi_s$ is the width of the bin of spectroscopic data. The resulting error bars in fig.~\ref{fig:realist} and those that follow display a surprising trend. Even though the number of pairs is dramatically larger for bins at greater comoving distances, the reconstruction of the photometric distribution is less accurate in the most distant bins. The key to understanding this trend is that for a fixed angular scale $\theta$, the physical scale being probed in distant bins is much larger than at nearby distances. Since the correlation function is a steeply dropping function of separation, the impact of shrinking values in $\bm X$ of eqn.~\ref{eqn:phierrs} dominates over the growing number of pairs in $\bm C^{-1}$. By measuring $w_{ps}$ on smaller angular scales the errors in the farthest bins can be improved, but there is a limit to how far this can be pushed because measurements at small angular scales will send the near field into the trans-linear and 1-halo regime, for which there are corrections to the underlying assumption that $\xi_{ps} \propto \xi_{ss}$. \begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig4.png}} \end{center} \caption{The photometric distribution $\phi_p(\chi)$ and its reconstruction. The top and bottom panels show two different selection function choices. Each is the sum of two Gaussians with $[\chi_1,\sigma_1,\chi_2,\sigma_2] = [0.0,0.15,0.8,0.16]$ on the top and $[0.3,0.07,0.7,0.10]$ on the bottom. The errors come from Poisson error in the cross-correlation measurement. The spectroscopic sample is made of rare objects with $M_{\rm min}=7 \times 10^{12}$ in eqn.~\ref{eqn:hod}.
There are very few objects in each conic section, so the correlation functions are poorly measured, and this reconstruction does not capture the bimodal behavior.} \label{fig:realist} \end{figure} In figure \ref{fig:three} we switch to a larger calibration sample with $M_{\rm min}=1 \times 10^{12}$. We see that the reconstruction now captures the bimodal behavior, but both the resolution (bin spacing) and errors are modest, and the smoothing is still evident in the last bin. Notice that the error bars have increased: this is because this calibration sample has lower bias, so the elements of $\bm X$ in the covariance matrix of $\phi(\chi)$ (eqn.~\ref{eqn:phierrs}) are all smaller. This apparent advantage of the rarer sample is an illusion, however. Had we propagated the error from $\xi_{ss}$, it would contribute larger errors to figure \ref{fig:realist} than to \ref{fig:three}, because the measurement is noisier for the smaller sample. In moving to the larger sample, the number of spectra required has increased by an order of magnitude, and is now the same size as the mock photometric catalog (after the selection function has been applied). In summary, comparison of fig. \ref{fig:one}, fig. \ref{fig:realist}, and fig. \ref{fig:three} demonstrates that $\xi_{ss}(\chi)$ must be reasonably well determined for the shape of the reconstructed photometric distribution to be well captured. It is worth noting that we have used no priors in the determination of $\xi_{ss}(\chi)$; improving and applying theoretical priors may yield significantly better results with fewer spectra. \begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig5.png}} \end{center} \caption{The photometric distribution $\phi_p(\chi)$ and its reconstruction. Now the spectroscopic population has $M_{\rm min}=1 \times 10^{12}$ and is roughly the same size as the photometric one, though it is more biased and composed of different galaxies. Because the correlation function has been measured well, the reconstruction recovers the bimodal behavior.} \label{fig:three} \end{figure} The fact that the more numerous calibrators generated such improved results suggested to us that perhaps the reconstruction with the rarer calibrators could be improved by simply fitting the spectroscopic observations with a 2-parameter power law and performing the reconstruction with the fit. The result was visually similar to fig. \ref{fig:realist}; namely, the bimodal behavior in the distribution is lost. This approach appears to discard too much of the information contained in $\xi_{ss}(\chi)$. We add another layer of complexity in fig. \ref{fig:four}. We apply our selection function in such a way that the bias of the remaining objects will evolve along the line of sight (the OSF method described in section \ref{sec:mocks}). We continue using the less rare calibrating sample with halo model $M_{\rm min}=1\times 10^{12}$ for this illustrative proof of concept. Although it is not very statistically significant, the eye can pick out a trend in the reconstruction: the reconstructed distribution is suppressed in areas of low detection fraction. Since the bias is tracking the detection fraction, the areas of low detection are enhanced less than the areas of high detection, and thus they appear suppressed when we normalize to the highest point. At this level of accuracy, it is unlikely that this systematic will dominate the error in tomographic analyses; however, for much larger surveys it could be a significant concern.
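For reference, the Poisson error model used for the error bars above is simple to evaluate numerically. A minimal sketch, assuming the survey parameters quoted earlier, with angles in radians and distances in Gpc/h:

\begin{verbatim}
import numpy as np

SR_PER_DEG2 = (np.pi / 180.0) ** 2       # steradians per square degree
SR_PER_ARCMIN2 = SR_PER_DEG2 / 3600.0    # steradians per square arcmin

def poisson_variance(chi_s, dchi_s, theta, dtheta,
                     n_s=1.0e5,                       # spectra per (Gpc/h)^3
                     sigma_p=100.0 / SR_PER_ARCMIN2,  # photometric gals per sr
                     omega=2.0e4 * SR_PER_DEG2):      # survey area in sr
    # Diagonal covariance C_ii = 1/n_pairs, following the pair-count
    # expression given in the text.
    n_pairs = (n_s * sigma_p * chi_s**2 * dchi_s
               * omega * np.sin(theta) * dtheta)
    return 1.0 / n_pairs
\end{verbatim}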
\begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig6.png}} \end{center} \caption{The photometric distribution $\phi_p(\chi)$ and its reconstruction. Here the selection function applied to the photometric sample causes the bias to evolve as a function of comoving distance from the observer. The calibrators are the $M_{\rm min}=1 \times 10^{12}$ sample, whose bias does not evolve. The result is a systematic shift of marginal significance in areas of low detection fraction.} \label{fig:four} \end{figure} In fig. \ref{fig:five} we show how the situation is altered if a fair subsample of the photometric population is used instead of a rarer, biased tracer population. This is a special case because the bias of the calibrators will evolve with redshift identically to the photometric sample. We show two different subsamples in this plot, 50\% and 80\% of the photometric catalog, and we test the reconstruction for the distribution on the right in figs. \ref{fig:realist}, \ref{fig:three}, and \ref{fig:four}. We see that for a survey of this volume, 50\% is too small to capture the bimodal behavior in the reconstruction, but with 80\% the reconstruction works well. Indeed, the systematic offset in figure \ref{fig:four} is remedied and the reconstruction follows the realization to much better accuracy. For larger surveys the fraction needed will be smaller than indicated here, because they will be able to measure the spectroscopic correlation functions with greater accuracy. For surveys large enough not to be limited by noisy spectroscopic correlation functions, it may be necessary to calibrate with a fair subsample of galaxies to avoid the systematic bias in the redshift distribution that comes from evolution in the bias of the photometric sample. \begin{figure} \begin{center} \resizebox{3.7 in}{!}{\includegraphics{fig7.png}} \end{center} \caption{A fair subsample is used to reconstruct $\phi_p(\chi)$ instead of the $M_{\rm min}=1 \times 10^{12}$ sample. The bias of the subsample evolves identically to the bias of the photometric sample. The reconstruction is studied only for the distribution on the right of figs. \ref{fig:realist}, \ref{fig:three}, and \ref{fig:four}. The top panel shows a subsample of 50\%, which is not sufficient to capture the bimodal behavior; the bottom shows 80\%, which does well and shows no systematic offset due to evolving bias. } \label{fig:five} \end{figure} \section{Discussion}\label{sec:disc} We have outlined the theory behind the cross-correlation method for calibrating the redshift distribution of objects with photometric redshifts, and developed a pipeline that can be used to apply the method to survey data. We have created mock observations to test the pipeline, and have succeeded in reconstructing the redshift distribution of the mock photometric galaxies using the angular cross-correlation of these galaxies with an overlapping spectroscopic sample (whose redshifts are known). We have not used any redshift information about the photometric sample. We have demonstrated the validity of the method, and have also identified the aspects that are likely to be the limiting factors in relying upon this method to provide accurate redshift distribution information.
These limiting factors are 1) that the spectroscopic sample must be binned along the line of sight, causing its correlation functions to be noisy and interfering with the reconstruction, and 2) that the bias evolution of the photometric sample cannot be disentangled from the redshift distribution that is reconstructed, which may force the follow-up of a fair subsample of galaxies. Improved theoretical priors could yield large dividends in mitigating these factors. The analysis has revealed a number of trade-offs that exist in the application of this method. For a given set of calibrators, there is a trade-off between the resolution (bin spacing) of the redshift distribution reconstruction and the error bars on any individual point. Thus if a population of catastrophic outliers were discovered, for example, it would be interesting to investigate whether it is more important to know how many there are, or at exactly which redshift they lie. We also find a trade-off between the number of calibrators and the quality of the reconstruction. As the number of available spectra is decreased, the spectroscopic redshift bins need to widen to maintain equivalent signal strength. This in turn affects how finely the reconstructed redshift distribution can be sampled. Bimodal behavior may be lost, and without sufficient priors (such as the smoothness prior we implemented here), false bimodal behavior may appear in the form of anti-correlated adjacent points. Widening the bins to compensate for fewer spectra also introduces another difficulty. Recall that the error in the angular cross-correlation function goes as $\delta w_{ps} \sim 1/ \sqrt{n_{\rm pairs}}$. This means that \begin{eqnarray} \frac{\delta w_{ps}}{w_{ps}}=\frac{1}{\sqrt{n_{\rm pairs}}} \left[ \int_0^\infty \xi_{ps} \phi_p d\chi\right]^{-1} \end{eqnarray} For the purposes of this order of magnitude argument, suppose $\phi_{p}$ were constant; then integrating to 1 requires $\phi_{p}(\chi) \propto 1/\Delta\chi_{\rm rb}$. The subscript rb is used to indicate the width of the reconstruction bin (not the spectroscopic bins). When we remove $\phi_p$ from the integral, the remaining integral is the projected correlation function $w_p(r_p)$, so that \begin{eqnarray} \frac{\delta w_{ps}}{w_{ps}}=\frac{1}{\phi_p \sqrt{n_{\rm pairs}}} \left[ \int_0^\infty \xi_{ps} d\chi\right]^{-1} \nonumber \\ \sim \frac{\Delta\chi_{\rm rb}}{w_p(r_p) \sqrt{n_{\rm pairs}}} \end{eqnarray} If the reconstruction bin is widened by a factor of two, the number density of photometric galaxies will have to be increased by a factor of 4 to preserve the same accuracy in the cross-correlation measurement. Therefore in this method there is also a trade-off between the number of spectra and the number of photometric galaxies that the survey can afford. In this analysis we have not made any use of the photometric redshifts, which, although not sufficiently accurate to determine the true redshift distribution of the population, still contain significant information about it. Modern techniques such as in \cite{2009arXiv0908.4085G} have made it possible to assign a probability distribution for the redshift of each individual galaxy in a photometric survey, rather than a single best estimate and error bar. We propose that combining these probabilities for all the galaxies in a given redshift bin constitutes a reasonably powerful prior that can take the place of the smoothing we have introduced in this analysis.
This is fortunate because, although the analysis cannot be performed without it, the smoothing criterion frequently degrades the solution. Another detail that we leave for a future study is that the spectroscopic sample need not be composed of a single rare population; it is quite conceivable to target more heavily certain regions of redshift space that are known to be problematic. It may also be possible to use less rare tracers in regions with smaller volume. We leave such optimization to future work. There are a few important factors that we have neglected in this analysis. As pointed out by \cite{2009arXiv0902.2782B}, weak gravitational lensing will induce correlations between the positions of calibrator galaxies in the foreground and photometric galaxies in the background, and vice versa. This will need to be carefully controlled for the method to reliably calibrate redshift distributions. We also have not mentioned the complication of the integral constraint, which, as discussed in e.g. \cite{2007APh....26..351H}, can lead to very significant errors when the volume and the scales in the correlation function are of comparable size. This may be as significant an issue as evolution in the large scale bias of the photometric population, though it may be mitigated if integral constraint errors are correlated between photometric and spectroscopic samples. Fortunately, improved estimators exist (in \cite{2007MNRAS.376.1702P} for example) and should certainly be incorporated into the pipeline. Having now demonstrated that the method is viable, we expect that with further refinement the cross-correlation method, applied in conjunction with direct follow-up surveys, may significantly reduce the number of spectra that are required to calibrate photometric redshift distributions to the desired accuracy. There are a few other advantages that we have not touched upon in detail. As we have shown, the cross-correlation method can be used to calibrate the redshift distribution all the way along the line of sight, and as such is uniquely suited to the detection and calibration of catastrophic redshift errors, even if these errors are so rare that they are missed by conventional follow-up. Also, the reconstruction should not be adversely affected by redshift deserts, regions where no spectroscopic redshifts are available. We are optimistic that this technique will provide a useful complementary approach to conventional calibration techniques, and every effort should be made to refine the method further. \section*{Acknowledgments} Many thanks to Martin White, for the use of his simulations and for pointing the way out of a number of tight corners. Conversations and e-mails with Rachel Mandelbaum, Jeff Newman, Nikhil Padmanabhan, Doug Rudd, William Schulz, David Shih, and many others were also incredibly helpful. A.E. Schulz is supported by the Corning Glassworks Foundation Fellowship at the Institute for Advanced Study. \bibliographystyle{apj}
\section{Introduction} \textbf{Context and Scope:} The US healthcare system is a complex system governed and managed by state and federal agencies. Managed Care is a health delivery system utilised by Medicaid to manage the cost, utilisation and quality of healthcare. The Managed Care system uses contract agreements between Medicaid agencies and Managed Care Organisations (MCOs) for providing these services. Some states even utilise this system beyond traditional managed care for initiatives such as care improvement for chronic \& complex conditions, payment initiatives, etc. Contracts run the gamut from computer support to janitorial services to direct client services. HHS posts all notifications of new \textbf{Request for Proposal (RFP)}/solicitation releases, Requests for Application and Open Enrolments. RFPs are bid requests consisting of functional and non-functional requirements for different services. They also outline model contracts and the expected format of the proposals. The requirements are posed as questions/queries, which are answered by each proposal/response to the RFP. The procurement of these contracts depends entirely upon the scores obtained by each response under the predefined evaluation criteria. A contract is generally awarded to the best scoring respondent(s). A typical RFP bid consists of the RFP advertisement, the RFP itself, a model contract, proposals/responses from bidding entities (such as MCOs) and scoring sheets for all the submissions. RFPs and supporting documents are publicly available information. MCOs typically utilise historical submissions to understand the requirements and respond better, improving their chances of winning a bid. Every RFP response (and its related documents) typically runs into several hundred pages, which are spread across different websites and data stores. Manual exploration of historical bids is a time consuming and iterative process. Given the changing healthcare landscape and the limited time-frame and resources to draft new responses, the current process is not comprehensive enough to extract insights and derive competitive advantage. \textbf{Challenges:} Apart from being an industry specific problem statement, our work also poses the unique challenge of scoring entire documents. Most relevant efforts towards automatic scoring have dealt with short answers \cite{leacock2003c,ramachandran2015identifying} and essays \cite{attali2006automated,taghipour2016neural}. Our work deals with much larger sequence lengths, and a larger feature space to capture. Another difference with the relevant literature is that RFPs are written by experts over multiple iterations, as opposed to students writing essays for evaluation. As such, this removes the need to check for superficial grammatical errors. Instead, there is a need to identify which aspects of the text enhance scores (Enablers) and which diminish them (Disablers). \textbf{Our Solution:} In this paper, we propose an automated framework using interpretable natural language processing techniques to analyse RFP responses. The framework comprises two components: a Text Processing Module and an Interpretable Scoring Model. RFP responses usually do not follow any standard template/formatting and are available in Portable Document Format (PDF). Moreover, to understand the content and extract insights, the text needs to be extracted at the most granular level (usually section or question level).
These issues complicate the text extraction process and thus motivate the development of a Text Processing Module. We have developed a generic Text Processing Module that extracts text from different response formats. The extracted text is then analysed using our Interpretable Scoring Model. The scoring model enables us to identify terms/phrases and other auxiliary features which impact the section/question score positively or negatively. We term positively impacting features enablers and negatively impacting ones disablers. The framework also provides insights about auxiliary features which latently impact overall scoring, and it provides a single portal/platform for accessing historical bid responses across bidders and states. The major contributions of this work are as follows. We have: \begin{itemize} \item Built a generic PDF parser to extract section-level content from RFP PDF documents. \item Generated a real-world document-level scoring dataset that can be used to further NLP research. \item Proposed an interpretable deep-learning based regression model to automatically score RFP documents. \item Addressed the novel problem of identifying enablers and disablers from RFP documents for effective writing purposes. \end{itemize} \section{Related Work} Even though there are research studies on effective management of the RFP pipeline, processing RFP data is a rarely addressed problem. US Patent US6356909B1 presented a web-based system for management of the RFP process. The system handles an end-to-end pipeline of tasks: generating RFP forms, responding to RFPs, and reviewing and presenting the results. RFPs are widely used in contract-based software development projects. Saito et al. \cite{saito} proposed a simple evaluation model to check whether the user requirements of a software project were accurately captured in an RFP. However, the model is mainly focused on non-functional requirements like response time and security issues in the RFP. Our work is similar to Automated Essay Scoring (AES) when it comes to evaluating a draft. Various works \cite{crossley2018assessing,inproceedings} proposed deep learning approaches using CNNs and LSTMs to predict the grade/score of the text in essay grading problems. The most prominent dataset for automated essay scoring is the Automated Student Assessment Prize (ASAP) dataset \footnote{https://www.kaggle.com/c/asap-aes}. AES systems using the ASAP dataset often model scoring as a regression problem \cite{kumar2019get,goenka2020esas} and then convert the output into categorical variables, owing to the narrow range of scoring values within the dataset. Our dataset has much larger sequence lengths compared to essays, as well as a wider scoring range (0-100), which makes it a more challenging problem to solve. Identifying positively/negatively contributing phrases is analogous to sentiment analysis. However, sentiment analysis tools either use predefined dictionaries of positive words such as \textit{good, better}, etc., and negative words such as \textit{not, worse} for polarity detection \cite{feldman2013techniques}, or learn language semantics from huge corpora to understand subjectivity for polarity detection. Instead, in this paper we generate such dictionaries of positively and negatively impacting words/phrases from RFP responses, which are usually devoid of subjectivity.
In some cases, essays that include domain-specific terms receive better scores \cite{burstein2005advanced}; however, such approaches do not consider negatively contributing terms. We perform thorough quantitative experiments to model the scoring system, as well as qualitatively assess enablers and disablers using expert human evaluators. \section{Dataset} For the purpose of this work we prepared a dataset consisting of 1300 RFP responses spread across multiple years and states, with a mean word count of 5k words per response. The text processing module, which operates on the pool of RFP response and scoring PDFs, has two components: \begin{itemize} \item Text Extraction from PDF Documents \item Processing and Storage of the extracted text \end{itemize} \begin{figure*}[] \begin{center} \includegraphics[width=0.9\linewidth]{text.png} \caption{Text Processing Flow} \label{fig:text_processing_flow} \end{center} \end{figure*} \subsection{Text Extraction} Text extraction is the first and most important part of the overall pipeline. This step takes PDF documents as input and generates extracted text and auxiliary features as output. PDF is a versatile format which can handle a variety of inputs. A PDF document can be generated using text processing solutions like \LaTeX\ or MS Word, or could originate from a scan, fax or images. Usually the PDFs from text processing solutions maintain text and other attributes in native form, while other sources lead to the document being an image-based replica. Thus, we can broadly segregate PDFs into two categories: \begin{itemize} \item Text-Based PDFs \item Image-Based PDFs \end{itemize} Depending upon the type of PDF (text or image based), an appropriate PDF parsing technique is applied. For text-based (searchable) PDFs, XML stream parsing works, while OCR-based methods are required for image-based PDF documents. For searchable PDF documents, we explored and used several XML stream parsing libraries such as pdfminer \footnote{https://www.researchgate.net/publication/267448343\_PDFMiner\_-\_Python\_PDF\_Parser} and Camelot \footnote{https://github.com/camelot-dev/camelot}. For extracting content from image-based PDF documents we utilised a proprietary OCR model. Since there is no standardisation when it comes to RFP responses and other related documents, we performed exploratory analysis to identify common patterns across templates. This led to the creation of reusable sub-components, which were used to prepare multiple custom parsers. Each parser focused on the following details: \begin{itemize} \item Patterns for headers and footers, \item The start and end pages of each section, evaluated using headings and content tables, \item The locations of headings and subheadings, identified by leveraging details such as font size gradients, position on the page, etc. \end{itemize} The overall Text Processing and Storage flow, with details of the scoring sheet module and response module, is outlined in figure \ref{fig:text_processing_flow}. \subsubsection{Scoring Sheet Module} Every RFP bid includes a scoring sheet. This document usually contains tabular data with details related to different sections/questions and bidders, along with scores from different evaluators. The scoring sheet module first tries to identify the template of the input document. Based on predefined rules, a specific parser is then selected to extract the required information.
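A schematic version of this template-matching dispatch is sketched below in Python. The detection rules and parser names here are hypothetical placeholders for illustration, not the actual production rules.

\begin{verbatim}
import re

# Hypothetical registry mapping template-detection rules to parsers.
# The patterns and parser names are illustrative placeholders.
PARSER_REGISTRY = [
    (re.compile(r"Evaluation\s+Score\s+Sheet", re.I), "scoring_sheet_parser_v1"),
    (re.compile(r"Proposal\s+Response\s+Form", re.I), "response_parser_v1"),
]

def select_parser(first_page_text):
    # Return the first parser whose rule matches the document text.
    for pattern, parser_name in PARSER_REGISTRY:
        if pattern.search(first_page_text):
            return parser_name
    return None  # no match: a custom parser must be developed
\end{verbatim}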
The scoring information is normalised to maintain a consistent evaluation scale (the score ranges vary across RFP bids). \subsubsection{RFP Response Module} The RFP Response Module takes a PDF document as input and tries to identify whether any of the existing parsers can be applied to extract information. If not, we develop a custom parser to handle the document. For the purposes of this paper, we developed a total of 4 parsers to handle 42 templates. These 4 parsers were used for extracting text from proposal documents and scoring sheets for both searchable and scanned PDFs. Each parser was designed to extract details such as sections, question-level information, and a mapping of raw text to the relevant section and question. We also extract details such as the presence of infographics, headers, footers, tables, images and references. Even though RFPs and related documents are publicly available, certain portions of these documents are redacted to avoid exposure of confidential information. We therefore also identified the percentage of redacted content per section as one of the auxiliary features. \subsection{Processing and Storage of the Extracted Text} The extracted metadata from scoring sheets \& RFP responses are cleaned and processed using standard text processing techniques. White noise in the text and non-dictionary words (due to OCR conversion issues or otherwise) are removed. The following preprocessing steps were applied to the extracted text: \begin{itemize} \item Removal of extra white spaces, newlines, stop words and special characters \item Masking of sections, tables, URLs, emails, and date-time strings using regular expressions. \end{itemize} Various auxiliary features were derived from infographics, such as the number of figures/graphics, number of tables, number of words per response, etc. These features are then cleaned and processed for use as auxiliary features in downstream tasks. We also derived additional features such as \textit{average word length, parts-of-speech tags, MCO details, degree of lexical richness, percentage of redaction, etc.} These auxiliary features are explained in detail in Section 5.0.1. The final dataset, along with scores, actual response texts and derived auxiliary features, is stored in relevant databases. Since we had a non-standard schema for the extracted information, we made use of MongoDB \footnote{https://www.mongodb.com}, a NoSQL database. \begin{figure*}[] \begin{center} \includegraphics[width=0.9\linewidth]{sample_doc_text.png} \caption{(a) A sample page from one of the submitted responses; (b) the text extracted from the given page} \label{fig:sample_doc} \end{center} \end{figure*} \begin{figure*}[] \begin{center} \includegraphics[width=0.9\linewidth]{enablers_new.png} \caption{Enablers and disablers identified using the fine-tuned Bi-LSTM model with the Exclusion-Inclusion method} \label{fig:ei_enabler} \end{center} \end{figure*} \section{Problem Statement} Given a dataset $D$ of question--answer pairs $Q \rightarrow A$ with a one-to-one mapping between every question and answer, a score $S$ is assigned to every question--answer pair. The goal is to predict the score based on features extracted from the text (of the answer/response) and other auxiliary information. We treat this as a regression problem, since we have a wide scoring range (0-100). \section{Model Building} We experimented with traditional machine learning models as well as more sophisticated deep learning approaches.
This was done to ensure a comprehensive study of how different models and features impact the final setup. We also performed different ablation experiments to better understand the impact. For both approaches, we used a set of auxiliary features apart from textual features. A brief overview of the auxiliary features is as follows: \subsubsection{Auxiliary Features} \begin{itemize} \item Number of words: Several research projects have shown that higher-rated essays, in general, contain more words \cite{carlson1985relationship,ferris1994lexical}. \item Domain ID: Domain IDs are a broad-level categorization of the responses. One response can have multiple domains. With subject matter expertise and background knowledge we created 10 domain IDs, e.g. Quality Management, Compliance, Technology, etc. When fitting the model, the domain ID feature was fed in as an auxiliary feature using one-hot encoding. \item Part-of-speech tags: POS-tag-based n-grams capture context very well. Each response word was tagged with its corresponding part of speech (e.g., verb, noun, preposition). \item Average word length: Word length can be used to indicate the sophistication of a writer \cite{hiebert2011beyond,reppen1995variation}, and studies have shown that higher-rated essays tend to use longer words. \item Lexical richness: The lexical richness is defined as the ratio of the number of unique tokens present in the text to the total number of tokens present. A writer with a larger vocabulary is generally more proficient and hence is better graded than a writer using a limited vocabulary \cite{reppen1995variation}. \item Count of sections, figures, URLs, emails: During preprocessing, sections, figures, URLs and emails were replaced with the tokens \textit{SECTION}, \textit{FIGURE}, \textit{URL}, and \textit{EMAIL}. We make use of the counts of these masked tokens to capture the amount of detail present in the text. \item Doc2Vec features: Doc2Vec \cite{le2014distributed} features are the vector representation of a document. We extracted these 300-dimensional features for each response from a fine-tuned, pre-trained Doc2Vec model and treated them as auxiliary variables. \end{itemize} \subsection{Approach 1: Random Forest} We trained a random forest regression model on the processed text and auxiliary features as explanatory variables, with the normalized score as the dependent variable. We used a bag-of-words approach to transform the textual data into usable form. We also experimented with \textit{tf-idf} based features but could not achieve any significant improvement. We fine-tuned the model for mean absolute error and adjusted $R^2$ to achieve the best possible outcomes, using grid search for hyper-parameter optimisation. We used k-fold (in this exercise, 5-fold) cross-validation to ensure that a stable model had been identified, addressing bias-variance trade-off issues. \subsection{Approach 2: Deep Learning Based Approaches} In approach 1, the main drawback was under-utilisation of the textual features. The bag-of-words feature set has its limitations, as it fails to capture context and key linguistic features. To tackle this issue, we experimented with deep learning based NLP models as well. \begin{figure*}[] \begin{center} \includegraphics[width=0.9\linewidth]{Model_rfp.png} \caption{Bi-LSTM based scoring approach} \label{fig:bilstm_model} \end{center} \end{figure*} The model setup is as follows.
Textual input is handled using one or more Bi-LSTM layer(s) with $b$ hidden units and an $e$-dimensional embedding layer. The Bi-LSTM layer is followed by a Global Average Pooling/Flatten layer, a Dropout layer with dropout rate $r$, and Batch Normalization, respectively. For the auxiliary input, we use one dense layer of size $d$ and merge its output with the last output of the text input branch. After merging, we again use one Dropout layer with rate $r$ and a Batch Normalization layer. Finally, one dense layer of size 1 with linear activation makes the final prediction of the score. We tried different optimizers (e.g. Adam, Nadam, SGD) with learning rate $l$ and the default learning decay rate. The high level model architecture is shown in figure \ref{fig:bilstm_model}. We observed that the model's validation loss converged in about 15 epochs with the default learning rate. The textual input handles variable sequence lengths. This is done in order to handle the large variation in the length of responses: we observed sequence lengths ranging from 900 words to 0.1 million words. This ability to handle variable input lengths helps us capture the complete response without the need to truncate even a single word. This is also important from the interpretability perspective, i.e. the identification of enablers and disablers. The large variation in input sequence lengths restricted us to a small batch size of 4. We also make use of an attention mechanism \cite{bahdanau2014neural} to capture the distinguished influence of the words on the output prediction. The hyper-parameter space we explored for fitting the model is $b \in \{32, 64, 128, 256, 512\}, e \in \{32, 64, 128, 256, 512, 1024\}, r \in [0, 1), l \in [1e-7, 1e-2], d \in \{32, 64, 128, 256, 512, 1024\}.$ We tried different intermediate activation functions, e.g. ReLU, tanh, sigmoid, etc. We created one custom activation layer specific to our problem: as our output score is always between 0 and 100, we clipped each intermediate activation to lie between 0 and 100. This custom activation function is defined as $f: z \rightarrow a$ such that, \[ f(z) = \begin{cases} 0 & z\leq 0 \\ z & 0\leq z\leq 100 \\ 100 & 100\leq z \end{cases} \] This is very similar to the ReLU activation, with the additional constraint of clipping the maximum value. While calculating the model loss we also clipped the final prediction of the model $\hat{y}$ to be between 0 and 100, thus limiting the L1 loss value. The use of the custom loss function reduced the test loss significantly. Note that only one of the custom loss function and the custom activation function is needed, as both lead to a similar impact. Along with random initialization of the embedding matrix, we tried pre-trained embeddings from BERT \cite{devlin2018bert}, GloVe \cite{pennington2014glove} and Word2vec \cite{rong2014word2vec}.
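Returning to the custom activation and clipped loss discussed above, both are one-line operations. The following is a minimal sketch in TensorFlow/Keras syntax; the specific deep learning framework is our assumption and is not specified in the text.

\begin{verbatim}
import tensorflow as tf

def clipped_relu_100(z):
    # Custom activation: ReLU additionally clipped at 100, matching
    # the piecewise definition of f(z) above.
    return tf.clip_by_value(z, 0.0, 100.0)

def clipped_mae(y_true, y_pred):
    # L1 loss with the prediction clipped to the valid score range.
    return tf.reduce_mean(tf.abs(y_true - tf.clip_by_value(y_pred, 0.0, 100.0)))
\end{verbatim}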
\begin{table}[h] \begin{tabular}{l c} \hline \textbf{Model Setup} & \textbf{MAE} \\ \hline \noalign{\smallskip} \hline Random Forest & 14 \\ Random Forest + Auxiliary & 12.3 \\\hline Bi-LSTM + Attention & 8.7 \\ Bi-LSTM + Attention + Auxiliary & 8.0 \\ Bi-LSTM + Attention + Auxiliary + BERT-Embeddings & 7.4 \\ \hline \end{tabular} \caption{Modelling approaches with the corresponding mean absolute error} \label{tab:mae} \end{table} \begin{center} \begin{table*}[h] \begin{tabular}{c c} \hline \textbf{Enablers} & \textbf{Disablers} \\ \noalign{\smallskip} \hline ['death', 'understand', 'social determinant', & ['knowledgeable', 'housing', 'transportation', \\ 'leave', 'learn', & 'department', 'utilization', \\ 'previous', 'approach', 'shelter', 'conduct', 'pharmacy'] &'peer', 'reside', 'social support', 'circumstance', 'symptom'] \\ \hline \end{tabular} \caption{Enablers and disablers identified using the fine-tuned Random Forest model with SHAP} \label{tab:enablers} \end{table*} \end{center} \section{Scoring Results} Table \ref{tab:mae} shows the results on our dataset for each of the approaches explained in the previous section. The results highlight the clear effectiveness of embeddings as compared to bag-of-words features. The deep learning approach outperforms the Random Forest model significantly, signifying that it is important to model the temporal and sequential features within the RFP responses. We also notice that handcrafted auxiliary features play an important role in scoring. The addition of handcrafted features improves performance within all the deep learning models, which highlights the impact of non-textual features on the overall scoring of such documents. Our final best performing model is the Bi-LSTM model which uses BERT embeddings augmented with auxiliary features. We would also like to highlight the effectiveness of simple attention mechanisms over complex modifications: we did not observe any significant improvement using the more complex choices. \section{Calculation of Enablers and Disablers} Apart from scoring the answers, it is also important to understand which attributes contribute to a better score. This enables us to suggest better writing practices and in turn achieve higher scores. We define \textit{Enablers} as terms which positively contribute to the score and \textit{Disablers} as terms which have a negative impact on the overall score. \subsection{For Random Forest} For the historical training data, out-of-bag predictions were used as benchmark predictions. LIME \cite{ribeiro2016should} and SHAP \cite{lundberg2017unified} values were extracted for each response and were rank-ordered based on magnitude and direction to identify key enablers and disablers for each historical response. Table \ref{tab:enablers} shows enablers and disablers for the sample text in Figure \ref{fig:sample_doc}(b). \subsection{For Deep Learning Models} SHAP and LIME can be used for deep learning models as well. The limitation of these methods is their focus on word-level importance. To provide better context and identify phrase-level importance, we use the Exclusion and Inclusion (EI) method \cite{eipaper}. This method calculates the effects by strategically excluding phrases one at a time and comparing the output score for each setting (amongst all those possible). Despite the large number of combinations, this algorithm efficiently parallelises the calculations for different n-grams. The EI method has two steps for the calculation of enablers and disablers in the regression setting.
In the first step, it calculates which words (or phrases) are not important and \textit{excludes} them by masking those words. The unimportant words are neither enablers nor disablers. In the second step, with only the important words remaining, it calculates the effects (either positive or negative), i.e. the \textit{inclusion} step. The effect of the words (or phrases) is evaluated through the following metric: \begin{equation} EI(phr_i) = \frac{\hat{y}_{in} - \hat{y}_{ex}}{\hat{y}_{in}} * 100 \end{equation} We calculate the percentage change in the model output with respect to the exclusion and inclusion of phrases. Here, $\hat{y}_{in}$ and $\hat{y}_{ex}$ are the predicted outputs including and excluding the phrase $i$, respectively. If the EI score is positive then the phrase has a positive effect on the output, and if negative it has a negative effect. Figure \ref{fig:ei_enabler} shows the identified enabler and disabler phrases on the sample page shown in Figure \ref{fig:sample_doc}. The phrases marked in green contribute positively towards the score (Enablers) while the phrases marked in red contribute negatively (Disablers). \subsection{Quality of Enablers and Disablers} Domain knowledge is essential for the identification of terms that have a negative/positive impact on the document score. We performed a qualitative analysis of our results by having Subject Matter Experts (SMEs) perform a human evaluation of enabler and disabler terms. We performed this exercise on a subset of 100 sample documents. For these documents, we asked the human evaluators to highlight important words and phrases (both enablers and disablers) which are likely to help them write better answers. Based on this, we calculate the agreement percentage of useful phrases for each document. We call this metric Phrase Quality (PQ). \begin{table}[h] \begin{tabular}{c c c} \hline \textbf{Model} & \textbf{Method} & \textbf{PQ Agreement} \\ \noalign{\smallskip} \hline Random Forest & SHAP & 0.73 \\ Bi-LSTM & SHAP & 0.78 \\ Bi-LSTM & Exclusion-Inclusion & 0.85 \\ \hline \end{tabular} \caption{Phrase Quality agreement} \label{tab:pq} \end{table} Our method shows improved average performance over traditional methods like SHAP, which only model importance at the word level and do not take phrases into account. Phrases capture more context and provide better insight towards writing better RFPs. \section{Conclusion and Future Work} We introduced a new problem statement to NLP researchers: automatic scoring of Request for Proposal (RFP) responses for the insurance industry. Using a generic PDF parser, we collected data for 1300 RFP responses across multiple states in the US and preprocessed it for analysis by natural language processing pipelines. We built a scoring system using deep learning approaches and introduced an interpretable system for the identification of enabler and disabler words and phrases. These interpretations assist experts in writing better RFP responses. Future work includes building a multimodal system that can model the aesthetic as well as the content qualities of proposal documents. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The perturbative expansion in the strong coupling $\alpha_s$ is the main approach to predictions in quantum chromodynamics (QCD) at sufficiently high energies. However, the expansion parameter, $\alpha_s$, is not a physical observable of the theory. Its definition carries a dependence on conventions related to the renormalization procedure, such as the renormalization scale and renormalization scheme. Physical observables should, of course, be independent of any such conventions. This requirement leads, in the case of the renormalization scale, to well-defined Renormalization Group Equations (RGE) that must be satisfied by physical quantities. The situation regarding the renormalization scheme is more complicated, and perturbative computations are most often performed in conventional schemes such as ${\overline{\rm MS}}$~\cite{bbdm78}. In this work we discuss a new definition of the QCD coupling, which we denote $\hat \alpha_s$, recently introduced in Ref.~\cite{BJM16}, and its applications to the QCD description of inclusive hadronic $\tau$ decays. The running of this new coupling is renormalization scheme independent, i.e. only scheme-independent coefficients intervene in its $\beta$ function. The scheme dependence of $\hat \alpha_s$ is parametrised by a single continuous parameter $C$. The evolution of $\hat \alpha_s$ with respect to this new parameter is governed by the same $\beta$ function that governs the scale evolution. We refer to the coupling $\hat \alpha_s$ as the $C$-scheme coupling. An important aspect is the fact that perturbative expansions in $\alpha_s$ are divergent series that are assumed to be asymptotic expansions to a ``true'' value, which is unknown~\cite{Renormalons}.\footnote{F. Dyson formulated the first form of this reasoning in 1952, in the context of Quantum Electrodynamics~\cite{FD52}.} In this spirit, different schemes correspond to different asymptotic expansions to the same scheme-invariant physical quantity, and should be interpreted as such. One can then use the parameter $C$ to interpolate between perturbative series with larger or smaller coupling values, and exploit this dependence in order to optimize the predictions for observables of the theory. The idea of exploiting the scheme dependence in order to optimize the series differs from the approach of other celebrated methods used for the optimisation of perturbative predictions. In methods such as Brodsky-Lepage-Mackenzie (BLM)~\cite{Brodsky:1982gc} or the Principle of Maximum Conformality~\cite{Brodsky:2012rj,Mojaza:2012mf}, the idea is to obtain a scheme-independent result through a well defined algorithm for setting the renormalization scale, regardless of the intermediate scheme used for the perturbative calculation (which most often is ${\overline{\rm MS}}$). The ``effective charge'' method~\cite{Grunberg:1982fw}, on the other hand, involves a process dependent definition of the coupling. In the procedure described here, one defines a process-independent class of schemes, parametrised by the parameter $C$. The optimal value of $C$ must be set independently for each process considered. We begin with the scale running of the QCD coupling, which we write as \begin{equation} \label{bfun} -\,Q\,\frac{{\rm d}a_Q}{{\rm d}Q} \,\equiv\, \beta(a_Q) \,=\, \beta_1\,a_Q^2 + \beta_2\,a_Q^3 + \beta_3\,a_Q^4 + \cdots \end{equation} We will work with $a_Q \equiv \alpha_s(Q)/\pi$, with $Q$ being a physically relevant scale.
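As a purely numerical aside, the scale running defined above is straightforward to integrate directly. The sketch below truncates the $\beta$ function at two loops and assumes the $n_f=3$ values $\beta_1=9/2$ and $\beta_2=8$ in this convention; these inputs, and Python/SciPy as the tool, are our choices for illustration.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

BETA1, BETA2 = 4.5, 8.0   # n_f = 3 values in the a = alpha_s/pi convention

def run_coupling(a_start, q_start, q_end):
    # Integrate -Q da/dQ = beta1 a^2 + beta2 a^3 in the variable t = ln Q.
    rhs = lambda t, a: -(BETA1 * a**2 + BETA2 * a**3)
    sol = solve_ivp(rhs, [np.log(q_start), np.log(q_end)], [a_start],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Example: evolve alpha_s(M_tau) = 0.316 from 1.777 GeV up to 5 GeV.
print(np.pi * run_coupling(0.316 / np.pi, 1.777, 5.0))
\end{verbatim}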
Since the recent five-loop computation of Ref.~\cite{bck16}, the first five coefficients of the QCD $\beta$-function are known analytically. The coefficients $\beta_1$ and $\beta_2$ are scheme independent. Let us consider a scheme transformation to a new coupling $a'$, which, perturbatively, takes the general form \begin{equation} \label{ap} a' \,\equiv\, a + c_1\,a^2 + c_2\,a^3 + c_3\,a^4 + \cdots \, \end{equation} The QCD scale $\Lambda$ is also different in the two schemes and obeys the relation \begin{equation} \label{Lambdap} \Lambda' \,=\, \Lambda\,{\rm e}^{c_1/\beta_1}. \end{equation} The shift in $\Lambda$ depends only on a single constant~\cite{cg79}, namely the coefficient $c_1$ of \eq{ap}. This fact motivates the definition of the new coupling $\hat a_Q$, which is scheme invariant except for shifts in $\Lambda$, parametrised by a parameter $C$ as \begin{eqnarray} \label{ahat} \frac{1}{\hat a_Q} + \frac{\beta_2}{\beta_1} \ln\hat a_Q \,&\equiv&\, \beta_1 \Big( \ln\frac{Q}{\Lambda} + \frac{C}{2} \Big) \nonumber \\ && \hspace{-18mm} \,=\, \frac{1}{a_Q} + \frac{\beta_1}{2}\,C + \frac{\beta_2}{\beta_1}\ln a_Q - \beta_1 \!\int\limits_0^{a_Q}\, \frac{{\rm d}a}{\tilde\beta(a)}, \end{eqnarray} where \begin{equation} \frac{1}{\tilde\beta(a)} \,\equiv\, \frac{1}{\beta(a)} - \frac{1}{\beta_1 a^2} + \frac{\beta_2}{\beta_1^2 a} \end{equation} is free of singularities in the limit $a\to 0$, and we have used the scale-invariant form of $\Lambda$. The coupling $\hat a_Q$ is a function of the parameter $C$, but we do not make this dependence explicit, to keep the notation simple. The definition of \eq{ahat} should be interpreted in perturbation theory in an iterative sense, which allows one to deduce the corresponding coefficients $c_i$ of \eq{ap} (their explicit expressions are given in \cite{BJM16} using ${\overline{\rm MS}}$ as the input scheme). One should remark that a combination similar to~\eqn{ahat}, but without the logarithmic term on the left-hand side, was already discussed in Refs.~\cite{byz92,ben93}. However, without this term, an unwelcome logarithm of $a_Q$ remains in the perturbative relation between the couplings $\hat a_Q$ and $a_Q$. This non-analytic term is avoided by the construction of Eq.~\eqn{ahat}. From the definition of the new coupling $\hat a_Q$ we can derive its $\beta$ function, which reads \begin{equation} \label{betahat} -\,Q\,\frac{{\rm d}\hat a_Q}{{\rm d}Q} \,\equiv\, \hat\beta(\hat a_Q) \,=\, \frac{\beta_1 \hat a_Q^2}{\left(1 - \sfrac{\beta_2}{\beta_1}\, \hat a_Q\right)} . \end{equation} The function $\hat \beta$ takes a simple form and is scheme independent, since only the coefficients $\beta_1$ and $\beta_2$ intervene. The evolution with the parameter $C$ obeys an analogous equation \begin{equation} -2 \frac{{\rm d}\hat a_Q}{{\rm d}C} =\, \frac{\beta_1 \hat a_Q^2}{\left(1 - \sfrac{\beta_2}{\beta_1}\, \hat a_Q\right)}. \end{equation} Therefore, there is a complete analogy between the coupling evolution with respect to the scale and with respect to the scheme parameter $C$. The dependence of $\hat a_Q$ on $C$ is displayed in Fig.~\ref{fig1}, using ${\overline{\rm MS}}$ as the input scheme and setting the scale to the $\tau$ mass, $M_\tau$. The new coupling becomes smaller for larger values of $C$, and perturbativity breaks down for values below roughly $C=-2$. Therefore, we restrict our analysis to $C\geq -2$.
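For orientation, the implicit definition in Eq.~\eqn{ahat} is easy to solve numerically. If the $\beta$ function is truncated at two loops, the residual integral is analytic, $\beta_1 \int_0^{a}{\rm d}a'/\tilde\beta(a') = (\beta_2/\beta_1)\,\ln\!\left(1+(\beta_2/\beta_1)\,a\right)$, and a one-dimensional root finder does the rest. The sketch below is a two-loop illustration only, again assuming the $n_f=3$ values $\beta_1=9/2$ and $\beta_2=8$.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

BETA1, BETA2 = 4.5, 8.0        # n_f = 3 values for a = alpha_s/pi
R = BETA2 / BETA1

def a_hat(a, C):
    # Right-hand side of the defining relation, using the two-loop
    # closed form for the residual integral over 1/beta-tilde.
    rhs = 1.0 / a + 0.5 * BETA1 * C + R * (np.log(a) - np.log(1.0 + R * a))
    # Solve 1/ahat + R*ln(ahat) = rhs for ahat.
    return brentq(lambda x: 1.0 / x + R * np.log(x) - rhs, 1e-6, 1.0)

# Example: MSbar input alpha_s(M_tau) = 0.316, scheme parameter C = -0.882.
print(np.pi * a_hat(0.316 / np.pi, -0.882))
\end{verbatim}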
\begin{figure} \includegraphics[width=0.47\textwidth]{fig1.pdf} \caption{The coupling $\hat a(M_\tau)$ according to Eq.~\eqn{ahat} as a function of $C$, and for the ${\overline{\rm MS}}$ input value $\alpha_s(M_\tau)=0.316(10)$. The yellow band corresponds to the $\alpha_s$ uncertainty.\label{fig1}} \end{figure} \section{\boldmath Application to $\tau$ decays} As a phenomenological application of the $C$-scheme coupling, we focus here on the perturbative expansion of the total $\tau$ hadronic width. The chief observable is the ratio $R_\tau$ of the total hadronic branching fraction to the electron branching fraction. It is conventionally decomposed as \begin{equation} R_\tau \,=\, 3\, S_{\rm EW} (|V_{ud}|^2 + |V_{us}|^2)\, ( 1 + \delta^{(0)} + \cdots), \end{equation} where $S_{\rm EW}$ is an electroweak correction, and $V_{ud}$ and $V_{us}$ are CKM matrix elements. Perturbative QCD is encoded in $\delta^{(0)}$ (see Refs.~\cite{bnp92,bj08} for details), and the ellipsis indicates further small sub-leading corrections. The calculation of $\delta^{(0)}$ is performed from a contour integral of the so-called Adler function in the complex energy plane, exploiting analyticity properties, which allows one to avoid the low-energy region where perturbative QCD is not valid. In doing so, one must adopt a procedure to deal with the renormalization scale. The scale logarithms can be summed either before or after performing the contour integration. The first choice, where the integrals are performed over the running QCD coupling, is called Contour Improved Perturbation Theory (CIPT), while the second, where the coupling is evaluated at a fixed scale and the integrals are performed over the logarithms, is called Fixed Order Perturbation Theory (FOPT). Analytic results for the coefficients of the Adler function are available up to five loops, or $\alpha_s^4$ \cite{bck08}. Here we consider an estimate for the yet unknown fifth-order coefficient of the Adler function, namely $c_{5,1}=283$ \cite{bj08}. In FOPT, the perturbative series of $\delta^{(0)}(a_Q)$ in terms of the ${\overline{\rm MS}}$ coupling $a_Q$ is given by \cite{bck08,bj08} \begin{align} \label{del0} \delta_{\rm FO}^{(0)}(a_Q) = a_Q + 5.202\,a_Q^2 + 26.37\,a_Q^3 + 127.1\,a_Q^4 +\cdots \end{align} In the $C$-scheme coupling $\hat a_Q$, the expansion for $\delta_{\rm FO}^{(0)}$ is \begin{align} \label{del0ah} &\delta_{\rm FO}^{(0)}(\hat a_Q) = \hat a_Q + (5.202 + 2.25 C)\,\hat a_Q^2 \nonumber \\ & \hspace{0.2cm} + (27.68 + 27.41 C + 5.063 C^2)\,\hat a_Q^3 \nonumber\\ & \hspace{0.2cm} + (148.4 + 235.5 C + 101.5 C^2 + 11.39 C^3)\,\hat a_Q^4\nonumber\\ & \hspace{0.2cm} + \cdots \end{align} \begin{figure} \includegraphics[height=3.7cm]{fig3.pdf} \caption{$\delta_{\rm FO}^{(0)}(\hat a_Q)$ of Eq.~\eqn{del0ah} as a function of $C$. The yellow band arises from either removing or doubling the fifth-order term. In the red dots, the ${\cal O}(\hat a^5)$ term vanishes, and the ${\cal O}(\hat a^4)$ term is taken as the uncertainty. For further explanation, see the text. \label{fig3}} \end{figure} {\noindent In Fig.~\ref{fig3}, we display $\delta_{\rm FO}^{(0)}(\hat a_Q)$ as a function of $C$. Assuming $c_{5,1}=283$, the yellow band corresponds to removing or doubling the ${\cal O}(\hat a^5)$ term. A plateau is found for $C\approx -1$. Taking $c_{5,1}=566$ and then doubling the ${\cal O}(\hat a^5)$ term results in the blue curve, which does not show this stability. Hence, this scenario is disfavoured.
At the red dots, which lie at $C=-0.882$ and $C=-1.629$, the ${\cal O}(\hat a^5)$ correction vanishes, and the ${\cal O}(\hat a^4)$ term is taken as the uncertainty, in the spirit of asymptotic series. The point to the right has a substantially smaller error, and yields} \begin{equation} \label{del0oa5zero} \delta_{\rm FO}^{(0)}(\hat a_{M_\tau},C=-0.882) \,=\, 0.2047 \pm 0.0034 \pm 0.0133 \,. \end{equation} The second error covers the uncertainty of $\alpha_s(M_\tau)$. For comparison, the direct ${\overline{\rm MS}}$ prediction of Eq.~\eqn{del0} is \begin{equation} \label{del0MSb} \delta_{\rm FO}^{(0)}(a_{M_\tau}) \,=\, 0.1991 \pm 0.0061 \pm 0.0119\,\,\,\,\, ({\overline{\rm MS}}) \,. \end{equation} This value is somewhat lower, but within $1\,\sigma$ of the higher-order uncertainty. In CIPT, contour integrals over the running coupling have to be computed, and hence the result cannot be given in analytical form. The general behaviour is very similar to FOPT, with the exception that now also for $c_{5,1}=566$ a zero of the ${\cal O}(\hat a^5)$ term is found. Employing the value of $C$ which leads to the smaller uncertainty, one finds \begin{equation} \label{del0CIoa5z} \delta_{\rm CI}^{(0)}(\hat a_{M_\tau},C=-1.246) \,=\, 0.1840 \pm 0.0062 \pm 0.0084 \,. \end{equation} As has been discussed many times in the past (see e.g.~\cite{bj08}), the CIPT prediction lies substantially below the FOPT results. On the other hand, the parametric $\alpha_s$ uncertainty in CIPT turns out to be smaller. \section{Higher-order terms} The behaviour of the series at higher orders is not known exactly. However, realistic models of the Adler function can be constructed in the Borel plane, in which the singularities of the function, namely its renormalon content, are partially known~\cite{Renormalons}. In Ref.~\cite{bj08} (see also Ref.~\cite{BBJ14}), models of the Adler function were constructed using the leading renormalons, which largely dominate the higher-order behaviour of the perturbative series. The model is matched to the exactly known coefficients in order to fully reproduce QCD for terms up to $a_Q^4$. This allows for a complete reconstruction of the series, to arbitrarily high orders in the coupling, and, moreover, one is able to obtain the ``true'' value of the asymptotic series by means of the Borel sum. In fact, the series is not strictly Borel summable, because infra-red renormalons obstruct integration on the positive real axis. The ``true'' value has, therefore, an inherent ambiguity that stems from the prescription adopted to circumvent the singularities along the contour of integration. This ambiguity is related to non-perturbative physics~\cite{Renormalons, bj08}. Here we perform a preliminary investigation of the behaviour of $\delta^{(0)}$ at higher orders using the $C$-scheme coupling. The Adler function coefficients for terms higher than $a_Q^5$ are obtained in the ${\overline{\rm MS}}$ scheme from the central model of Ref.~\cite{bj08}. The series can then be translated to the $C$-scheme by means of the perturbative relation between the couplings $a_Q$ and $\hat a_Q$~\cite{BJM16}. Fig.~\ref{Delta0} shows four different series that should approach the same Borel summed result, shown as a horizontal band. The four series use as input the coefficients exactly known in QCD, with the addition of the estimate $c_{5,1}=283$.
One observes that the optimised version of $\delta_{\rm FO}^{(0)}$ (filled circles) approaches the Borel sum of the series faster than the ${\overline{\rm MS}}$ result (empty circles). Of course, because the optimised series has a larger coupling (see Fig.~\ref{fig1}), asymptoticity sets in earlier and the divergent character is already clearly visible around the 10th order. The FOPT result with $C=0.7$ shows that smaller couplings do not necessarily lead to a better approximation at lower orders, requiring many more terms to give a good approximation to the Borel summed result. Finally, the optimal CIPT series does not give a good approximation to the Borel summed result (this is also the case in the ${\overline{\rm MS}}$ scheme~\cite{bj08}). Unfortunately, the use of the $C$-scheme coupling does not bring the CIPT prediction closer to FOPT. The $C$-scheme FOPT, on the other hand, is in excellent agreement with the central Borel model, which suggests that FOPT should be the favoured expansion. \begin{figure} \includegraphics[width=1\columnwidth,angle=0]{PertSeries_HigherOrders.pdf} \caption{Four series for $\delta^{(0)}$ with higher-order coefficients from the central model of Ref.~\cite{bj08}. In all cases $\alpha_s(M_\tau)=0.316$, which corresponds to the central value of the present world average~\cite{PDG16}. The optimised FOPT (filled circles) and CIPT (filled squares) series can be compared with the FOPT ${\overline{\rm MS}}$ results (empty circles) and FOPT for $C=0.7$ (triangles). The shaded band gives the Borel summed result, the ``true'' value of the series, with its associated ambiguity~\cite{BJM16}.\label{Delta0}}\vspace{-0.5cm} \end{figure} \vspace{-0.5cm} \section*{Acknowledgements} \vspace{-0.3cm} It is a pleasure to thank the organisers of this very fruitful meeting. DB is supported by the S\~ao Paulo Research Foundation (FAPESP) grant 2015/20689-9, and by CNPq grant 305431/2015-3. The work of MJ and RM has been supported in part by MINECO Grant number CICYT-FEDER-FPA2014-55613-P, by the Severo Ochoa excellence program of MINECO, Grant SO-2012-0234, and Secretaria d'Universitats i Recerca del Departament d'Economia i Coneixement de la Generalitat de Catalunya under Grant 2014 SGR~1450. \bibliographystyle{elsarticle-num}
\section{Introduction} \textbf{ The gedanken experiment of Maxwell's demon has led to studies concerning the foundations of thermodynamics and statistical mechanics~\cite{Leff2002}. The demon measures fluctuations of a system's observable and converts the information gain into work via feedback control~\cite{Sagawa2008}. Recent developments have elucidated the relationship between the acquired information and the entropy production and generalized the second law of thermodynamics and the fluctuation theorems~\cite{Parrondo2015,Toyabe2010, Koski2014,Vidrighin2016}. Here we extend the scope to a system subject to quantum fluctuations by exploiting techniques in superconducting circuit quantum electrodynamics~\cite{Blais2004}. We implement Maxwell's demon equipped with coherent control and quantum nondemolition projective measurements on a superconducting qubit, where we verify the generalized integral fluctuation theorems~\cite{Funo2015,Funo2013} and demonstrate the information-to-work conversion. This reveals the potential of superconducting circuits as a versatile platform for investigating quantum information thermodynamics under feedback control, which is closely linked to quantum error correction~\cite{Terhal2015} for computation~\cite{Kelly2015} and metrology~\cite{Unden2016}. }\\ The fluctuation theorem is valid in systems far from equilibrium and can be regarded as a generalization of the second law of thermodynamics and the fluctuation-dissipation theorem~\cite{Jarzynski2010,Campisi2011}. In particular, the generalized integral fluctuation theorem, which incorporates the information content on an equal footing with the entropy production, bridges information theory and statistical mechanics~\cite{Sagawa2010}, and has been extended to quantum systems~\cite{Morikuni2011,Funo2013}. Experimentally, Maxwell's demons were implemented in classical systems using colloidal particles~\cite{Toyabe2010}, a single electron box~\cite{Koski2014}, and a photodetector~\cite{Vidrighin2016}. More recently, the integral quantum fluctuation theorem in the absence of feedback control was tested with a trapped ion~\cite{An2014}. Maxwell's demon and the generalized second law in a quantum system were studied in spin ensembles with nuclear magnetic resonance~\cite{Camati2016}. However, experimental demonstrations of the fluctuation theorems that directly address the statistics of single quantum trajectories under feedback control are still elusive. Toward this goal, recent progress in superconducting quantum circuits offers quantum non-demolition~(QND) projective measurement of a qubit~\cite{Blais2004} and improved coherence times~\cite{Oliver2013}, which together enable high-fidelity feedback operations. For example, stabilization of Rabi oscillations using coherent feedback~\cite{Vijay2012,Campagne-Ibarcq2013}, fast initialization of a qubit~\cite{Riste2012}, and deterministic generation of an entangled state between two qubits~\cite{riste2013deterministic} have been achieved. Here we verify the generalized integral fluctuation theorem under feedback control by using a superconducting transmon qubit as a quantum system and taking statistics over repeated single-shot measurements on individual quantum trajectories. Note that Naghiloo {\it et al.}~recently reported a related experiment with continuous weak measurement and feedback~\cite{Naghiloo2017}. We investigate the role of absolute irreversibility associated with the projective measurements as well~\cite{Funo2015}.
\begin{figure}[tb] \begin{center} \includegraphics[width=7.8cm]{Fig1.png} \caption{ Maxwell's demon and absolute irreversibility. (a)~Concept of the experiment. The system initially prepared in a canonical distribution $\hat{\rho}_\mathrm{ini}$ evolves in time. A projective measurement by the demon disrupts the evolution, projecting the system onto a quantum state. The demon gains the stochastic Shannon entropy $I_\mathrm{Sh}$ and converts it into work $W$ via a feedback operation $\hat{U}$. To achieve the ultimate bound of the extracted work $\langle W \rangle =k_\mathrm{B} T \langle I_\mathrm{Sh} \rangle$, the final state distribution $\hat{\rho}_\mathrm{fin}$ of the system has to be the same as $\hat{\rho}_\mathrm{ini}$. However, an unoptimized feedback operation prevents it and introduces absolute irreversibility, quantified by the probability $\lambda_\mathrm{fb}$, limiting the amount of extractable work~[Eq.(\ref{eq:2nd_law})]. The time-reversed reference process starts from $\hat{\rho}_\mathrm{r}$~$(= \hat{\rho}_\mathrm{ini})$. (b)~Schematic of the feedback-controlled system in the experiment. (c)~Qubit-resonator coupled system. A superconducting transmon qubit fabricated on a sapphire substrate is placed at the center of an aluminum cavity resonator. In the qubit measurement, the ground and excited states are distinguished by the phase of a microwave readout pulse reflected by the resonator. } \label{fig:(schem)FB_8} \end{center} \end{figure} The theorem is formulated by considering a pair of processes, the original (forward) process and its time-reversed reference process~[Fig.~\ref{fig:(schem)FB_8}(a)]. The initial state of each process is set to be the canonical distribution at temperature $T$. If we ignore the relaxation of the qubit, the fluctuation theorem reads~\cite{Funo2015,suppl} \begin{equation}\label{eq:JEFBabs} \langle \mathrm{e}^{-\sigma - I_\mathrm{Sh}} \rangle = 1 -\lambda_\mathrm{fb}, \end{equation} where $I_\mathrm{Sh}$ is the stochastic Shannon entropy of the initial state of the qubit, $\sigma = - \beta (W + \Delta F)$ is the entropy production, $\beta$ is the inverse temperature $1/(k_\mathrm{B} T)$ of the qubit, $W$ is the work extracted from the qubit, and $\Delta F$ is the change in the equilibrium free energy of the system. The constant $\lambda_\mathrm{fb}$ on the right-hand side of Eq.(\ref{eq:JEFBabs}) denotes the total probability of those events in the time-reversed process whose counterparts in the original process do not exist. Such events, called absolutely irreversible events, involve a formal divergence of the entropy production and should therefore be treated separately~\cite{Funo2015,suppl}. Here, the absolute irreversibility is caused by the projective measurement that restricts possible forward events. Below, we focus on the case with $\Delta F = 0$, i.e., the process with the same system Hamiltonian at the beginning and the end, for simplicity of discussion. In the experiment [Fig.~\ref{fig:160508CDK36_649_JEFB_3RO_ctSwp}(a)], we evaluate the work $W=E(x) - E(z)$ extracted from the system by employing the two-point measurement protocol~(TPM), in which QND projective measurements in the energy eigenbasis (with outcomes $x$ and $z$) are applied respectively to the initial and final states of the system~\cite{Campisi2011}. A positive amount of work~($W > 0$) corresponds to the energy deterministically extracted from the system via the stimulated emission of a single photon induced by the $\pi$-pulse.
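Before turning to the experimental pulse sequence, it is instructive to check Eq.(\ref{eq:JEFBabs}) on an idealized version of this protocol: error-free QND projective readout, no relaxation, and a perfect $\pi$-pulse. In this limit the final outcome is always $z=\mathrm{g}$, the absolutely irreversible events are the time-reversed trajectories ending in the excited state, and $\lambda_\mathrm{fb}$ equals the thermal excited-state population $p_\mathrm{e}$. A minimal Monte Carlo sketch (all parameters illustrative):
\begin{verbatim}
# Idealized demon: sample x from the thermal state, flip e -> g with
# a pi-pulse (extracting W = E), and average exp(beta*W - I_Sh);
# the result converges to 1 - p_e, i.e. 1 - lambda_fb.
import numpy as np

rng = np.random.default_rng(1)
beta_E = 1.5                          # E/(k_B T), illustrative
p_e = 1.0 / (1.0 + np.exp(beta_E))    # thermal excited population

x = rng.random(200_000) < p_e         # True = excited outcome
W = np.where(x, 1.0, 0.0)             # extracted work in units of E
I_Sh = -np.log(np.where(x, p_e, 1.0 - p_e))

print(np.exp(beta_E*W - I_Sh).mean(), 1.0 - p_e)   # ~ equal
\end{verbatim}
Relaxation and readout imperfections modify this ideal result, which is what the master-equation analysis referred to below accounts for.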
Depending on the measurement outcome $x$ for the feedback control, the feedback operation does or does not flip the state of the qubit with a $\pi$-pulse. The probability $p(x)$ of the state $x$ being found gives $I_\mathrm{Sh} = -\ln p(x)$. \begin{figure*}[tb] \begin{center} \includegraphics[width=114.4mm]{Fig2.png} \caption{ Generalized integral fluctuation theorem under feedback control. (a) Pulse sequence used in the experiment. The qubit is initialized with a projective measurement and postselection, followed by a resonant pulse excitation which prepares a superposition as an input. The two-point measurement protocol (TPM) consists of two quantum nondemolition projective readout pulses. Depending on the outcome $x$ of the first readout ($x={\rm g}$ or ${\rm e}$ corresponding to the ground or the excited state of the qubit), a $\pi$-pulse for the feedback control is or is not applied. The $\pi$-pulse flips the qubit state to the ground state and extracts energy. The second readout with outcome $z$ completes the protocol. See the Supplementary Information~\cite{suppl} for details. (b)~Experimentally obtained statistical average $\langle \mathrm{e} ^{\beta W - I_\mathrm{Sh}} \rangle$ vs.\ the inverse initial qubit temperature $1/T$ (blue circles). The red solid (black dashed) curve is the theoretical value of the probability $1 -\lambda_\mathrm{fb}$ in the presence (absence) of absolute irreversibility. The green dashed curve is obtained by a master equation taking into account the qubit relaxation during the pulse sequence. } \label{fig:160508CDK36_649_JEFB_3RO_ctSwp} \end{center} \end{figure*} In Fig.~\ref{fig:160508CDK36_649_JEFB_3RO_ctSwp}(b) we compare the experimentally obtained statistical average $\langle \mathrm{e} ^{\beta W - I_\mathrm{Sh}} \rangle$ with the theoretical value of $1-\lambda _\mathrm{fb}$~\cite{suppl}. Depending on the effective temperature of the qubit initial state, the probability of the absolutely irreversible events varies. The excellent agreement confirms the generalized integral fluctuation theorem under feedback control. Furthermore, the relation in Eq.(\ref{eq:JEFBabs}) is confirmed to hold for any initial effective temperature of the qubit, even at negative temperatures. The smaller the inverse temperature $\beta$, the larger the contribution of absolute irreversibility. Next, we investigate the effects of imperfect projection in the readout. With a weak readout pulse, the state of the qubit is not completely projected. It also provides a smaller information gain for the feedback control. To evaluate the influence of the weak measurement, we add two more readout pulses to the pulse sequence [Fig.~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(a)]. The TPM again starts with a projective readout with outcome $x$, but now the feedback control is performed based on the subsequent variable-strength measurement with outcome $k$ ($=\mathrm{g}$ or $\mathrm{e}$). Then, to project the qubit state before the feedback control, we apply another strong measurement to obtain outcome $y$ ($=\mathrm{g}$ or $\mathrm{e}$). Using these measurement outcomes, we calculate the stochastic QC-mutual information $I_\mathrm{QC} = \ln p(y|k) - \ln p(x)$~\cite{Funo2013}. Here, QC indicates that the measured system is quantum and the measurement output is classical~\cite{Sagawa2008}, and $p(y|k)$ is the probability of outcome $y$ being obtained conditioned on the preceding measurement outcome $k$.
The first term in $I_\mathrm{QC}$ quantifies the correction to $I_\mathrm{Sh}$ because of the imperfect projection. If the measurement for the feedback control is a QND projective measurement and there is no relaxation of the qubit, $p(y|k)$ becomes unity and $I_\mathrm{QC}$ reduces to $I_\mathrm{Sh}$. On the other hand, for the measurement with imperfect projection, the absolute irreversibility disappears because such a measurement no longer restricts the possible forward events. Therefore, we obtain $\lambda_\mathrm{fb}=0$. In this case, the generalized integral fluctuation theorem is reformulated as~\cite{Funo2013,suppl} \begin{equation}\label{eq:JE5RO} \langle \mathrm{e}^{\beta W - I_\mathrm{QC}} \rangle = 1. \end{equation} \begin{figure*}[tb] \begin{center} \includegraphics[width=158.5mm]{Fig3.png} \caption{ Effects of the feedback error on the fluctuation theorem and the second law of thermodynamics. (a)~Pulse sequence. Two readout pulses are added to the sequence in Fig.~\ref{fig:160508CDK36_649_JEFB_3RO_ctSwp}(a). The outcome $k$ ($=\mathrm{g}$ or $\mathrm{e}$) obtained by the readout with a variable pulse amplitude is used for the feedback control. The feedback error probability $\epsilon_{\mathrm{fb}}$ is a function of the measurement strength. The subsequent readout with outcome $y$ projects the qubit state before the feedback control. See Ref.~\cite{suppl} for details. (b)~Experimentally determined $\langle \mathrm{e}^{\beta W - I_\mathrm{QC}} \rangle$ (blue circles) and $\langle \mathrm{e}^{\beta W} \rangle$ (red squares) vs.\ $\epsilon_{\mathrm{fb}}$. (c)~$\langle I_\mathrm{QC} \rangle$ (blue circles) and $\langle \beta W \rangle$ (red squares) vs.\ $\epsilon_{\mathrm{fb}}$. The black dotted line represents the Shannon entropy $\langle I_\mathrm{Sh} \rangle$ of the qubit initial state, which is prepared at the effective temperature $T=0.14$~K with the excited state occupancy of 0.097. Line-connected dots in (b) and (c) show the simulated results incorporating the effect of qubit relaxation~\cite{suppl}. Inset in~(c): Information-to-work conversion efficiency $\eta$ (circles) and the simulated result (line-connected dots). The efficiency $\eta$ in the gray zone is inaccessible due to the absolute irreversibility. } \label{fig:160508CDK36_651_JEFB_5RO_roSwp} \end{center} \end{figure*} Figure \ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(b) plots the statistical averages, $\langle \mathrm{e} ^{\beta W - I_\mathrm{QC}} \rangle$ and $\langle \mathrm{e} ^{\beta W} \rangle$, evaluated from the measurement outcomes of the pulse sequence shown in Fig.~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(a). By changing the amplitude of the readout pulse measuring $k$, it is possible to continuously vary the post-measurement state from the projected state to a weakly disturbed state. Accordingly, the feedback error probability $\epsilon_{\mathrm{fb}}$ increases with decreasing readout pulse amplitude. (See the Supplementary Information~\cite{suppl} for details.) We see that $\langle {\mathrm{e}}^{\beta W -I_{\mathrm{QC}}} \rangle $ (blue circles), which involves the information gain due to the measurement, is almost unity regardless of the feedback error probability. The small deviation from unity is understood as the effect of the qubit relaxation during the TPM (blue curve)~\cite{Pekola2015}. In contrast, the value $\langle \mathrm{e}^{\beta W} \rangle$, which discards the information used in the feedback operation, clearly deviates from unity.
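A classical caricature of the weak-measurement protocol already shows this behaviour exactly: keep the qubit in its projected state $x$, let the feedback bit $k$ be a copy of $x$ flipped with error probability $\epsilon_\mathrm{fb}$, and let the projective check return $y=x$ (no relaxation). In this model $\langle \mathrm{e}^{\beta W - I_\mathrm{QC}}\rangle=1$ for any $\epsilon_\mathrm{fb}$, while $\langle \mathrm{e}^{\beta W}\rangle = 2[(1-\epsilon_\mathrm{fb})p_\mathrm{g}+\epsilon_\mathrm{fb}\,p_\mathrm{e}]$ deviates from unity, approaching it only as $\epsilon_\mathrm{fb}\to 1/2$. A minimal sketch (parameters illustrative):
\begin{verbatim}
# Noisy-feedback caricature of Eq. (2): x is the projected state, k a
# binary readout of x with error probability eps, and y = x.
import numpy as np

rng = np.random.default_rng(2)
beta_E, eps, N = 1.5, 0.2, 400_000
p_e = 1.0 / (1.0 + np.exp(beta_E))

x = rng.random(N) < p_e                        # True = excited
k = x ^ (rng.random(N) < eps)                  # noisy feedback bit
W = np.where(k, np.where(x, 1.0, -1.0), 0.0)   # pi-pulse iff k = e

p_x = np.where(x, p_e, 1.0 - p_e)
p_k = np.where(k, p_e*(1-eps) + (1-p_e)*eps,
                  p_e*eps + (1-p_e)*(1-eps))
p_y_k = np.where(x == k, 1-eps, eps) * p_x / p_k   # p(y|k), y = x
I_QC = np.log(p_y_k) - np.log(p_x)

print(np.exp(beta_E*W - I_QC).mean())   # -> 1 for any eps
print(np.exp(beta_E*W).mean())          # generally != 1
\end{verbatim}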
For the weaker readout amplitude, however, the information gain becomes smaller, and thus $\langle \mathrm{e}^{\beta W} \rangle$ approaches unity. The situation corresponds to the integral fluctuation theorem in the absence of feedback control. Figure~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(c) depicts the statistical averages $\langle I_\mathrm{QC} \rangle$ and $\langle \beta W \rangle$ as a function of the feedback error probability $\epsilon_{\mathrm{fb}}$. The QC-mutual information $\langle I_\mathrm{QC} \rangle$ (blue circles) decreases to zero with increasing $\epsilon_{\mathrm{fb}}$. Even for $\epsilon_{\mathrm{fb}}=0$, there remains a difference between $\langle I_\mathrm{QC} \rangle$ and $\langle I_\mathrm{Sh} \rangle$ (black dotted line) due to the qubit relaxation between the two readouts for $k$ and $y$. The difference between $\langle I_\mathrm{QC} \rangle$ and $\langle \beta W \rangle$ in the limit of $\epsilon_{\mathrm{fb}} \rightarrow 0$ corresponds to $\ln(1- \lambda _\mathrm{fb})$ in this feedback protocol. The conversion efficiency from the QC-mutual information $\langle I_\mathrm{QC} \rangle$ to the work $\langle W \rangle$ is defined as \begin{equation} \eta=\frac{ \langle W \rangle}{k _\mathrm{B} T \langle I _\mathrm{QC} \rangle}, \end{equation} where we omit the contribution from the free-energy change by considering $\Delta F = 0$. As shown in the inset of Fig.~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(c), $\eta$ becomes larger for stronger measurement and reaches the maximum value of 0.65. The main limiting factor of the efficiency in the present experiment is the contribution $k_\mathrm{B}T \ln (1 - \lambda_\mathrm{fb})$ in the generalized second law of thermodynamics~\cite{suppl} \begin{equation} \langle W \rangle \leq k_\mathrm{B} T \langle I _\mathrm{Sh} \rangle + k_\mathrm{B} T \ln (1 -\lambda_\mathrm{fb}), \label{eq:2nd_law} \end{equation} which is derived from the fluctuation theorem Eq.(\ref{eq:JEFBabs}). The result in the inset of Fig.~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp}(c) indicates that our feedback scheme achieves the equality condition in Eq.(\ref{eq:2nd_law}) and is optimal in this sense. We have successfully implemented Maxwell's demon and verified the generalized integral fluctuation theorem in a single qubit. In the present work, the measurement outcome obtained by the demon was analyzed in terms of the Shannon and the QC-mutual information. On the other hand, the effect of coherence can be investigated in a similar setup~\cite{Elouard2017}. By implementing the memory of the demon with a qubit~\cite{Quan2006}, or a quantum resonator as demonstrated recently~\cite{Cottet2017}, one can characterize the energy cost for the measurement~\cite{Sagawa2009} or study feedback schemes maintaining the coherence between the system and the memory to improve the energy efficiency of the feedback. Superconducting quantum circuits further allow us to extend the study of information thermodynamics to larger and more complex quantum systems. This will lead to an estimate of the lower bound on the thermodynamic cost for quantum information processing. \section*{Methods} The transmon qubit has the resonant frequency $\omega_\mathrm{q}/2\pi = 6.6296$~GHz, the energy relaxation time $T_1=24$~$\mu$s, and the phase relaxation time $T_2^\ast = 16$~$\mu$s at the base temperature $\sim$10~mK of a dilution refrigerator.
The cavity has the resonant frequency $\omega _\mathrm{cav}/2\pi = 10.6180$~GHz, largely detuned from the qubit, and the relaxation time $1/\kappa = 0.076$~$\mu$s. The coupling strength between the qubit and the resonator is estimated to be $g/2\pi = 0.14$~GHz. The pulse sequences for the experiments in Figs.~\ref{fig:160508CDK36_649_JEFB_3RO_ctSwp} and~\ref{fig:160508CDK36_651_JEFB_5RO_roSwp} take about 2.5~$\mu$s and 4~$\mu$s, respectively. Each readout pulse has a width of 500~ns. The qubit excitation pulse and the feedback control pulse are both 20-ns wide. See~\cite{suppl} for details. We take the statistics of the outcomes by repeating the pulse sequence about $8\times 10^4$ times, with a repetition interval of 300~$\mu$s, which is much longer than the qubit relaxation time. \section*{Acknowledgements} The authors acknowledge T. Sagawa for useful discussions and W. D. Oliver for providing the transmon qubit. This work was partly supported by JSPS KAKENHI (Grant No.~26220601), NICT, and JST ERATO (Grant No.~JPMJER1601). Y.Mu.\ was supported by JSPS through the Program for Leading Graduate Schools (MERIT) and JSPS Fellowship (Grant No.~JP15J00410). K.F.\ acknowledges support from the National Science Foundation of China (grants~11375012, 11534002). \section*{Author contributions} Y.Ma., K.F.\ and Y.Mu.\ designed the experiments. Y.Ma.\ conducted the experiments. S.K.\ and Y.T.\ assisted in setting up the measurement system. K.F., Y.Mu.\ and M.U.\ provided theoretical supports. A.N., Y.T.\ and R.Y.\ participated in discussions on the analysis. Y.Ma.\ and Y.N.\ wrote the manuscript with feedback from all authors. M.U.\ and Y.N.\ supervised the project.
\section{Introduction} A tricritical point (or line) is where a first-order phase transition terminates in the phase diagram. It is important to locate the tricritical point in various phenomenological models, since the first-order phase transition of the early universe may produce the baryon asymmetry \cite{Kuzmin:1985mm,Cohen:1990py,Cohen:1990it} or black holes \cite{Kapusta:2007dn}. The tricritical point has been investigated in the context of the electroweak phase transition \cite{Kajantie:1996mn,Kajantie:1996qd,Csikor:1998eu,Rummukainen:1998as,Aoki:1999fi} and also in the context of the QCD chiral phase transition at finite temperature and density.\cite{Casalbuoni:2006rs} There are several methods to analyze a tricritical point. Among these methods, the most reliable is numerical simulation. This approach can solve the problem of the infrared singularities from the transverse gauge bosons, which are incalculable in perturbation theory.\cite{Arnold:1992rz} \ On the other hand, several authors have applied alternative methods to analyze a tricritical point, for example the $\varepsilon$-expansion techniques \cite{Gleiser:1992ch,Arnold:1993bq} or the auxiliary mass method \cite{Ogure:1998xu}. In this paper, we consider another analytic method to analyze a tricritical point by means of the ring-improved one-loop finite temperature effective potential. To locate the tricritical point, we first expand the effective potential up to third order in the high temperature expansion. Then we expand the effective potential up to sixth order in the order parameter expansion around the tricritical point, where the order parameter is small compared to the critical temperature. We apply this method to the U($N_f$)$\times$U($N_f$) sigma model.\footnote{Pisarski and Wilczek have discussed this model as a low energy effective theory of QCD and have shown that if $N_f\geq 3$, the restoration of the chiral symmetry at finite temperature should be first-order.\cite{Pisarski:1983ms} \ Following this argument, several authors have examined the strength of the first-order phase transition in the U($N_f$)$\times$U($N_f$) sigma model from the point of view of electroweak baryogenesis.\cite{Appelquist:1995en,Khlebnikov:1995qb,Kikukawa:2007zk}\ Although they have introduced the U(1)$_A$ breaking terms, we do not include these terms, since in the following analysis we consider the large $N_f$ region where the U(1)$_A$ breaking terms are irrelevant for the critical behavior.}\ In our approximation, the tricritical line in the space of the coupling constants of the sigma model can be evaluated to the lowest order in the coupling constants, and an analytic relation between the tree-level masses of the scalar bosons follows. \footnote{We locate the tricritical point in the diagram of $T$ vs.\ coupling constants, not of $T$ vs.\ baryonic chemical potential.} The technique of expanding the effective potential in the order parameter up to sixth order has been applied to study the tricritical point of QCD at finite temperature and density.\cite{Barducci:1989wi}\ Their calculation was done by using the two-loop Cornwall, Jackiw and Tomboulis effective potential.\cite{Cornwall:1974vz} \ Although their calculation contains the numerical evaluation of the effective potential, our technique for the sigma model is purely analytic, and we can derive a relation between the tree-level masses of the scalar bosons on the tricritical line. This paper is organized as follows.
In section~\ref{sec:model}, we describe the U($N_f$)$\times$U($N_f$) sigma model with Yukawa interaction and the effective masses (field-dependent masses) of the sigma model. In section~\ref{sec:eff}, we describe the ring-improved one-loop finite temperature effective potential and the $T$-dependent effective masses of the sigma model. In section~\ref{sec:cep}, we analyze the tricritical line of the sigma model and examine the validity of our approximations. Section~\ref{sec:sd} is devoted to a summary and discussion. \section{U($N_f$)$\times$U($N_f$) sigma model with Yukawa interaction} \label{sec:model} In this paper, we consider the following Lagrangian: \begin{eqnarray} \mathcal{L}=\tr |\partial _\mu \Phi|^2+i\bar{\psi}_L^a\Slash{\partial}\psi_L^a+i\bar{\psi}_R^a\Slash{\partial}\psi_R^a-(y\bar{\psi}^a_L\Phi^{ab}\psi^b_R+h.c.)+\mathcal{L}_\Phi, \label{eq:LlsmKIN} \end{eqnarray} where $\mathcal{L}_\Phi$ is the scalar potential term, which is written as \begin{align} \mathcal{L}_\Phi &= - m_{\Phi}^{2} \tr\Phi ^\dagger \Phi - \frac{\lambda _1}{2} (\tr\Phi ^\dagger \Phi)^2 - \frac{\lambda _2}{2}\tr (\Phi ^\dagger \Phi )^2 . \label{eq:Llsm} \end{align} The field $\Phi(x)$ is an $N_f \times N_f$ complex matrix, and $\psi_{L}^a$ ($\psi_{R}^a$) are the left- (right-)handed Weyl spinors of $N_f$ flavors $(a=1,2,...,N_f)$, which transform under the chiral symmetry as \begin{equation} \Phi \rightarrow g_L \Phi g_{R}^{-1} ,\quad \psi_{L(R)}\rightarrow g_{L(R)}\psi_{L(R)},\quad g_L, g_R \in \text{U($N_f$)}. \end{equation} For stability, the quartic couplings should satisfy the following conditions at tree-level: $\lambda_2>0$, $\lambda_1+\lambda_2/N_f>0$. The vacuum expectation value (VEV) of the scalar field $\Phi(x)$ is assumed as \begin{equation} \langle \Phi \rangle = \frac{\phi_0}{\sqrt{2N_f}} \leavevmode\hbox{\small1\normalsize\kern-.33em1}, \label{VEV} \end{equation} where $\leavevmode\hbox{\small1\normalsize\kern-.33em1}$\ is the $N_f \times N_f$ unit matrix. At tree-level, $\phi_0$ is determined by the potential, \begin{equation} \label{eq:eff-potential-tree} V_0(\phi)=\frac{1}{2}m_\Phi^2\phi^2+\frac{1}{8}\left( \lambda_1+\frac{\lambda_2}{N_f} \right)\phi^4 . \end{equation} For $m_{\Phi }^2 < 0$, it is given by \begin{equation} \phi_0=\sqrt{\frac{-2m_\Phi^2}{\lambda_1+\lambda_2/N_f}}. \end{equation} The scalar field $\Phi$ is parameterized around the VEV as follows: \begin{equation} \Phi(x) = \frac{ \phi + h + i \eta }{\sqrt{2N_f}}\, \leavevmode\hbox{\small1\normalsize\kern-.33em1} + \sum_{\alpha=1}^{N_f^2-1} ( \xi^\alpha + i \pi^\alpha ) T^\alpha , \end{equation} where $T^\alpha$ $(\alpha=1,\cdots, N_f^2-1)$ are the generators of SU($N_f$) satisfying the normalization ${\rm Tr} (T^\alpha T^\beta)=\delta^{\alpha \beta}/2$. The fields $h, \eta, \xi^\alpha, \pi^\alpha, \psi^a$ acquire masses at tree-level as summarized in Table~\ref{tab:lsmm}, where, for notational simplicity, we use the following abbreviations: \begin{eqnarray} a_h&=&\frac{3}{2}(\lambda_1+\lambda_2/N_f), \\ a_\xi &=&\frac{1}{2}(\lambda_1+3\lambda_2/N_f), \\ a_\eta&=&a_\pi=\frac{1}{2}(\lambda_1+\lambda_2/N_f), \end{eqnarray} and \begin{eqnarray} b_h &=& a_h - a_\pi = (\lambda_1+\lambda_2/N_f), \\ b_\xi &=& a_\xi - a_\pi=(\lambda_2/N_f). \end{eqnarray} The effective masses and the degrees of freedom of each field, which we use for the effective potential, are also summarized in Table~\ref{tab:lsmm}.
\begin{table}[t] \caption{The effective masses, the tree-level masses and the degrees of freedom of the fields in the U($N_f$)$\times$U($N_f$) linear sigma model with Yukawa interaction.} \begin{center} \begin{tabular}{cccc} \hline \hline field & $m_i^2(\phi)$ & $m_i^2(\phi_0)$ & $n_i$ \\ \hline h & $m_\Phi^2+a_h \phi^2$ & $b_h \phi_0^2$ & 1 \\ $\xi$ & $m_\Phi^2+a_\xi \phi^2$ & $b_\xi \phi_0^2$ & $N_f^2-1$ \\ $\eta$ & $m_\Phi^2+a_\eta \phi^2$ & 0 & 1 \\ $\pi$ & $m_\Phi^2+a_\pi \phi^2$ & 0 & $N_f^2-1$ \\ \hline $\psi$ & $\frac{1}{2N_f}y^2\phi^2$& $\frac{1}{2N_f}y^2\phi_0^2$ & $-4N_f$ \\ \hline \end{tabular} \label{tab:lsmm} \end{center} \end{table} \section{Effective potential} \label{sec:eff} In this section, we describe the ring-improved one-loop finite temperature effective potential in the U($N_f$)$\times$U($N_f$) sigma model with Yukawa interaction.\cite{Arnold:1992rz,Dolan:1973qd,Carrington:1991hz}\ At tree-level, the potential is given by Eq.(\ref{eq:eff-potential-tree}). The one-loop contribution at zero temperature, $V_1^{(0)}$, is given by \begin{align} V_1^{(0)}(\phi) =&\sum_{i=s,f}n_i\frac{m_i^4(\phi)}{64\pi^2} \left[\ln\frac{m_i^2(\phi)}{\mu^2}-\frac{3}{2}\right] \label{eq:1loop0} , \end{align} in the $\overline{\rm MS}$ scheme, where $i$ runs over all of the scalar bosons: $s=\{h, \eta, \xi^\alpha, \pi^\alpha \}$ and the fermions: $f=\{\psi^a\}$. $n_i$ and $m_i(\phi)$ are the number of degrees of freedom and the effective masses depending on $\phi$, respectively. The one-loop contribution at finite temperature, $V_1^{(T)}$, is given by \begin{align} V_1^{(T)}(\phi,T) =\frac{T^4}{2\pi^2}\left(\sum_{i=s}n_iJ_{B}[m_i^2(\phi)/T^2]+\sum_{i=f}n_iJ_{F}[m_i^2(\phi)/T^2]\right) , \end{align} where $J_B$ and $J_F$ are defined by \begin{eqnarray} J_B(a)&=&\int_{0}^{\infty}dx~x^2 \ln\left(1-e^{-\sqrt{x^2+a}}\right), \nonumber \\ J_F(a)&=&\int_{0}^{\infty}dx~x^2 \ln\left(1+e^{-\sqrt{x^2+a}}\right). \end{eqnarray} In the high temperature limit where $m(\phi)/T \lesssim 1$, $J_B$ and $J_F$ can be expanded as follows: \begin{align} J_B(m^2/T^2)= &-\frac{\pi^4}{45}+\frac{\pi^2}{12}\frac{m^2}{T^2}-\frac{\pi}{6}\left(\frac{m^2}{T^2}\right)^{3/2} \notag \\ &-\frac{1}{32}\frac{m^4}{T^4}\ln\frac{m^2}{a_bT^2}+\mathcal{O}\left(\frac{m^6}{T^6}\right), \\ J_F(m^2/T^2)= &-\frac{7\pi^4}{360}-\frac{\pi^2}{24}\frac{m^2}{T^2} \notag \\ &-\frac{1}{32}\frac{m^4}{T^4}\ln\frac{m^2}{a_fT^2}+\mathcal{O}\left(\frac{m^6}{T^6}\right), \end{align} where $a_b=16\pi^2\exp(3/2-2\gamma_E)$($\ln a_b\approx 5.4076$), $a_f=\pi^2\exp(3/2-2\gamma_E)$($\ln a_f\approx 2.6351$). One can include the contribution of ring diagrams, $V_{\ring}(\phi,T)$, by replacing $m_i^2(\phi)$ in $V_1^{(0)}$ and $V_1^{(T)}$ with the $T$-dependent effective masses $\mathcal{M}_i^2(\phi,T)\equiv m_i^2(\phi)+\Pi_i$, where $\Pi_i$ is the self-energy of a field $i$ in the IR limit, where the Matsubara frequency and the momentum of the external line go to zero, and in the leading order of $m_i(\phi)/T$. For all of the scalar bosons of the sigma model, the self-energies are given by \begin{align} \Pi_s&=\Pi^{(s)}+\Pi^{(f)}, \notag\\ \Pi^{(s)}&=\frac{1}{12}[(N_f^2+1)\lambda_1+2N_f\lambda_2]T^2,\notag\\ \Pi^{(f)}&=\frac{1}{12}y^2T^2, \label{eq:self_s} \end{align} corresponding to the contributions from the scalar bosons themselves and the fermions, respectively. Since fermions do not receive corrections from the ring diagrams, $\Pi_\psi=0$.
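The accuracy of this high temperature expansion, on which the analysis below relies, is easy to quantify numerically. A minimal sketch comparing the integral definition of $J_B$ with its expansion (the values of $a=m^2/T^2$ are illustrative):
\begin{verbatim}
# J_B(a), a = m^2/T^2: integral definition vs high-T expansion.
import numpy as np
from scipy.integrate import quad

LN_AB = 5.4076     # ln a_b quoted in the text

def J_B(a):
    f = lambda x: x*x*np.log(1.0 - np.exp(-np.sqrt(x*x + a)))
    return quad(f, 0.0, 50.0)[0]

def J_B_highT(a):
    return (-np.pi**4/45 + np.pi**2/12*a
            - np.pi/6*a**1.5 - a*a/32*(np.log(a) - LN_AB))

for a in (0.1, 0.5, 1.0):
    print(a, J_B(a), J_B_highT(a))
\end{verbatim}
The agreement is close for $m^2/T^2\lesssim 1$ and degrades as $m/T$ grows, consistent with the validity condition stated above.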
The $T$-dependent effective masses for scalar bosons and fermions are given by \begin{align} {\cal M}_{s,i}^2(\phi,T)&=m_{\Phi}^2+a_i\phi^2+\left[(N_f^2+1)\lambda_1+2N_f\lambda_2+y^2\right]\frac{T^2}{12} \nonumber \\ &\equiv m_{\Phi}^2+a_i\phi^2+b\frac{T^2}{12}, \nonumber \\ {\cal M}_{f}^2(\phi,T)&=m^2_{\psi}(\phi). \end{align} Altogether, the one-loop ring-improved effective potential reads \begin{align} V(\phi) =&V_0(\phi) +V_1^{(0)}(\phi) +V_1^{(T)}(\phi,T)+ V_{\ring}(\phi,T) \notag \\ =&V_0(\phi)+\sum_{i=s,f}n_i\frac{\mathcal{M}_i^4(\phi,T)}{64\pi^2} \left[\ln\frac{\mathcal{M}_i^2(\phi,T)}{\mu^2}-\frac{3}{2}\right] \notag \\ +&\frac{T^4}{2\pi^2}\left(\sum_{i=s}n_iJ_B[\mathcal{M}_i^2(\phi,T)/T^2]+\sum_{i=f}n_iJ_F[\mathcal{M}_i^2(\phi,T)/T^2]\right). \end{align} A comment on the validity of the ring-improved perturbation theory is in order. By inspecting the higher-order diagrams for the scalar field self-energies, one can see that the non-ring diagrams are suppressed with respect to the ring diagrams at least by the following factors in the symmetric phase, \cite{Quiros:1999jp} \begin{eqnarray} \beta_{\lambda_1+\lambda_2/N_f}&\equiv& \frac{1}{4\pi}\left(\lambda_1+\frac{\lambda_2}{N_f}\right)\frac{T}{m_{\rm eff}} , \label{ring1}\\ \beta_{\lambda_2/N_f}&\equiv& \frac{1}{4\pi}N_f^2\frac{\lambda_2}{N_f}\frac{T}{m_{\rm eff}}, \label{ring2} \end{eqnarray} where $m_{\rm eff}^2=m_{\Phi}^2+bT^2/12$. It is sufficient to consider these conditions only in the symmetric phase for our analysis, since $\phi=0$ at the tricritical line. Therefore, in order to guarantee the validity of the ring-improved perturbation theory at the tricritical line, it is required that $\beta_{\lambda_1+\lambda_2/N_f},\beta_{\lambda_2/N_f}\ll 1$, while the perturbative expansion at zero temperature is valid for $N_f^2(\lambda_1+\lambda_2/N_f)/(4\pi)^2\ll 1$ and $N_f\lambda_2/(4\pi)^2 \ll 1$. \section{Analysis of the tricritical line} \label{sec:cep} At the critical temperature of the first-order phase transition, the effective potential satisfies the following conditions, \begin{eqnarray} \frac{\partial V}{\partial \phi}(\phi_c,T_c)=0,\hspace{.5cm} V(\phi_c,T_c)=V(0,T_c). \label{cond-1st} \end{eqnarray} A tricritical line is where $\phi_c/T_c$ is equal to zero for a finite value of $T_c$. To locate a tricritical line, we take the following approach. In the parameter region near a tricritical line, $\phi_c/T_c$ takes a very small but nonzero value. In this region, we may solve Eq.(\ref{cond-1st}) by expanding the effective potential up to sixth order in $\phi_c/T_c$. Then we identify a tricritical line by taking the limit as $\phi_c/T_c$ goes to zero. We follow this procedure by means of the ring-improved one-loop finite temperature effective potential, which is expanded to ${\cal O}({\cal M}^3/T^3)$ in the high temperature limit. This potential is written as follows: \begin{align} V_3(\phi,T) &\equiv T^4\left[\frac{1}{2}\left(\frac{m_{\rm eff}^2}{T^2}\right)\frac{\phi^2}{T^2}+\frac{1}{8}\left(\lambda_1+\frac{\lambda_2}{N_f}\right)\frac{\phi^4}{T^4}-\frac{1}{12\pi}\sum_{i=s}n_i \frac{{\cal M}_{i}^3(\phi,T)}{T^3}\right]. \label{effpot_pt3} \end{align} The high temperature expansion is valid if ${\cal M}_i/T\lesssim 1$. We evaluate this condition at the tricritical line later.
Besides the high temperature expansion, we expand the ${\cal M}^3_i/T^3$ terms up to ${\cal O}(\phi^6/T^6)$, \begin{eqnarray} \frac{{\cal M}_i^3(\phi,T)}{T^3} &=&\frac{m_{\rm eff}^3}{T^3}\Biggl[1+\frac{3}{2}\frac{a_iT^2}{m_{\rm eff}^2}\frac{\phi^2}{T^2} \nonumber\\ &&+\frac{3}{8}\left(\frac{a_iT^2}{m_{\rm eff}^2}\right)^2\frac{\phi^4}{T^4}-\frac{1}{16}\left(\frac{a_iT^2}{m_{\rm eff}^2}\right)^3\frac{\phi^6}{T^6}+{\cal O}\left(\frac{\phi^8}{T^8}\right)\Biggr]. \label{exp_mt3_b} \end{eqnarray} This expansion is valid if $\phi/T\sim0$, $m_{\rm eff}^2=m_{\Phi}^2+bT^2/12>0$ and $a_iT^2/m_{\rm eff}^2\lesssim1$. We also evaluate this condition at the tricritical line later. Using this expansion, Eq.(\ref{effpot_pt3}) is written as follows: \begin{eqnarray} V_3(\phi,T)/T^4 &=&c_0+c_2\frac{\phi^2}{T^2}+c_4\frac{\phi^4}{T^4}+c_6\frac{\phi^6}{T^6}+{\cal O}\left(\frac{\phi^8}{T^8}\right) \label{calv2} \\ c_2&=&\frac{1}{2}\left(\frac{m_{\rm eff}^2}{T^2}-\frac{1}{4\pi}\sum_{i=s}n_ia_i\frac{m_{\rm eff}}{T}\right), \nonumber \\ c_4&=&\frac{1}{8}\left(\lambda_1+\frac{\lambda_2}{N_f}-\frac{1}{4\pi}\sum_{i=s}n_ia_i^2\frac{T}{m_{\rm eff}}\right), \nonumber \\ c_6&=&\frac{1}{192\pi}\sum_{i=s}n_ia_i^3 \frac{T^3}{m_{\rm eff}^3}. \nonumber \end{eqnarray} Then Eq.(\ref{cond-1st}) is solved by \begin{eqnarray} \frac{\phi_c^2}{T_c^2}=\sqrt{\frac{c_2}{c_6}}=\frac{-c_4}{2c_6} \end{eqnarray} for $c_2 \ge 0$, $c_4 \le 0$ and $c_6 > 0$. The tricritical line is identified in the limit as $\phi_c/T_c \searrow 0$, which means that $c_2\searrow 0,\ c_4 \nearrow 0$.\footnote{This result is not changed if we include the term $c_8\phi^8/T^8$, as long as $c_6, c_8>0$.}\ There are two temperatures which satisfy $c_2=0$, namely, \begin{eqnarray} T_1^2=\frac{-12m_\Phi^2}{b},\hspace{1cm}T_2^2=\frac{-m_\Phi^2}{(b/12-b_s^2/16\pi^2)}, \end{eqnarray} where $b_s\equiv\sum_{i=s} n_ia_i$. From Eq.(\ref{ring1}) and Eq.(\ref{ring2}), we can see that the ring-improved perturbation theory breaks down at $T=T_1$ for $\phi=0$, since ${\cal M}_i(0,T_1)=0$. Therefore, $T_1$ is just an artifact of our analysis, and we identify $T_2$ as the critical temperature. This gives the following relation for the coupling constants. \begin{eqnarray} \sum_{i=s}n_ia_i^2-\left(\lambda_1+\frac{\lambda_2}{N_f}\right)\sum_{i=s}n_ia_i=0. \label{relation} \end{eqnarray} By means of the following relations between the coupling constants and the tree-level masses of the fields (see Table~\ref{tab:lsmm}): \begin{align} \sum_{i=s}n_ia_i&=(N_f^2+1)\frac{m_h^2(\phi_0)}{\phi_0^2}+(N_f^2-1)\frac{m_{\xi}^2(\phi_0)}{\phi_0^2}, \label{b1} \\ \sum_{i=s}n_ia_i^2&=\left(\frac{N_f^2}{2}+2\right)\frac{m_h^4(\phi_0)}{\phi_0^4}+(N_f^2-1)\frac{m_h^2(\phi_0)m_{\xi}^2(\phi_0)}{\phi_0^4}+(N_f^2-1)\frac{m_{\xi}^4(\phi_0)}{\phi_0^4}, \label{b2} \end{align} we can solve Eq.(\ref{relation}) for the tree-level mass of the field $h$ as follows, \begin{eqnarray} m_{h,cep}(\phi_0)=2^{\frac{1}{4}}\left(\frac{N_f^2-1}{N_f^2-2}\right)^{\frac{1}{4}}m_{\xi}(\phi_0). \label{mh0} \end{eqnarray} The chiral phase transition of the sigma model is first-order when $m_h< m_{h,cep}$. The relation Eq.(\ref{mh0}) means that $m_{h,cep}$ becomes smaller for smaller $m_\xi$ and approaches a finite value for large $N_f$. Note that $m_\Phi^2$ cancels out in the above calculation. This means that this result is independent of the stationary condition of the zero temperature symmetry breaking.\footnote{The truncation of the logarithmic terms does not mean we analyze the tricritical line with the tree-level potential.
These terms are just {\it suppressed} as ${\cal O}({\cal M}^4/T^4)$, and we can consistently neglect them in our analysis up to ${\cal O}({\cal M}^3/T^3)$. At lower temperature, the effective potential should be modified, for example, to solve the stationary condition at zero temperature. Our claim that the tricritical line is not affected by $m_\Phi$ is meaningful in this sense.}\ Also note that the ${\cal O}(y^2)$ contribution from the fermions to the tricritical line cancels out, and the possible leading contribution is ${\cal O}(y^4)$. \subsubsection*{The validity of the perturbation theory} Although the above argument assumed the validity of perturbation theory, it is necessary to examine it. We discuss here the validity of the zero temperature perturbation theory, the high temperature expansion, the order parameter expansion and the ring-improved perturbation theory at the tricritical line. The conditions for the validity of these approximations are summarized as follows: \ \noindent zero temperature perturbation theory: \begin{eqnarray} \frac{b_s}{4\pi}\lesssim1,\hspace{1cm} \frac{1}{4\pi}n_\psi y^2\lesssim1. \label{zero} \end{eqnarray} high temperature expansion: \begin{eqnarray} \frac{\mathcal{M}_{i=s,f}}{T}\lesssim1. \label{highT} \end{eqnarray} order parameter expansion: \begin{eqnarray} \frac{a_i}{m_\Phi^2/T^2+b/12}\lesssim1. \label{order} \end{eqnarray} ring-improved perturbation theory: \begin{eqnarray} \beta_{\lambda_1+\lambda_2/N_f}&=& \frac{1}{4\pi}\left(\lambda_1+\frac{\lambda_2}{N_f}\right)\frac{T}{m_{\rm eff}}\ll1 , \label{ring1-2}\\ \beta_{\lambda_2/N_f}&=& \frac{1}{4\pi}N_f^2\frac{\lambda_2}{N_f}\frac{T}{m_{\rm eff}}\ll1. \label{ring2-2} \end{eqnarray} To justify our calculation of the tricritical line, all of these conditions must be satisfied in the limit as $\phi_c/T_c$ goes to zero. First, we examine the condition for the high temperature expansion, Eq.(\ref{highT}). Since $T=T_2$ at the tricritical line, for scalar bosons, it follows \begin{eqnarray} \frac{\mathcal{M}_{s,i}}{T_c}\to \frac{b_s}{4\pi} \ \ \ \ (\phi_c/T_c\to0) . \label{c2-0} \end{eqnarray} Therefore, if the zero temperature perturbation theory is valid, i.e.\ $b_s/4\pi\lesssim 1$, it follows that $\mathcal{M}_i/T_c\lesssim 1$ and the high temperature expansion is also valid. The high temperature expansion for the fermions is valid in the limit as $\phi_c/T_c$ goes to zero, since ${\cal M}_\psi/T_c$ is proportional to $\phi_c/T_c$. Next, we examine the condition for the order parameter expansion, Eq.(\ref{order}). At the tricritical line, this factor is evaluated as \begin{eqnarray} \frac{a_i}{m_\Phi^2/T^2+b/12}\to a_i\frac{16\pi^2}{b_s^2}\ \ \ \ (\phi_c/T_c\to0) \end{eqnarray} Therefore, in order to satisfy the condition Eq.(\ref{order}), we must have $N_f\gg1$, $a_i\ll1$ and $b_s/4\pi\lesssim 1$. Finally, we examine the conditions for the ring-improved perturbation theory. In the limit as $\phi_c/T_c$ goes to zero, \begin{eqnarray} \frac{T_c}{\mathcal{M}_i}&\to&\frac{4\pi}{b_s} \nonumber \\ &\sim&\frac{1}{N_f^2}\frac{4\pi}{(\lambda_1+\lambda_2/N_f)+\lambda_2/N_f} \ \ \ \ \ \ (N_f\gg1) . \end{eqnarray} Then \begin{eqnarray} \beta_{\lambda_1+\lambda_2/N_f} &\sim&\frac{1}{N_f^2}\frac{\lambda_1+\lambda_2/N_f}{(\lambda_1+\lambda_2/N_f)+\lambda_2/N_f} \ \ \ \ \ \ (N_f\gg1) , \label{ring_validity_1} \\ \beta_{\lambda_2/N_f} &\sim&\frac{\lambda_2/N_f}{(\lambda_1+\lambda_2/N_f)+\lambda_2/N_f} \ \ \ \ \ \ (N_f\gg1) .
\label{ring_validity_2} \end{eqnarray} Therefore it follows that the ring-improved perturbation theory is valid if $N_f\gg1$ and $(\lambda_1+\lambda_2/N_f)\gg \lambda_2/N_f$. Concerning this evaluation, we note that at the tricritical line, the rhs of Eq.(\ref{ring_validity_2}) is evaluated as $\beta_{\lambda_2/N_f}\sim 0.4$. Since this value is not $\ll {\cal O}(1)$, but $\lesssim {\cal O}(1)$, we need a more precise argument to confirm our result.\footnote{In our evaluation, $\beta_{\lambda_2/N_f}$ is divided by $4\pi$ compared to the ordinary criterion for the ring improvement~\cite{Quiros:1999jp}, because we should take into account the loop factor in our argument. In \cite{Quiros:1999jp} the loop factor is neglected, for example, as $m_{\rm eff}^2\sim\lambda^2T^2$. This corresponds to neglecting the factor $1/4\pi$ in Eq.(\ref{c2-0}) in our case. We also keep the numerical factor which is neglected in the ordinary argument, so our argument is not so crude.} These results are summarized as follows. Our analysis is valid if $a_i\ll1$, $N_f\gg1$ and $(\lambda_1+\lambda_2/N_f)\gg \lambda_2/N_f$. This parameter region seems to be small, but there is a region where our evaluation is reliable. The parameter region where $N_f\gg1$, $a_i\ll1$ and $b_s/4\pi\lesssim 1$ is just the region which can be analyzed with the large $N_f$ expansion. \section{Summary and Discussion} \label{sec:sd} We have studied the tricritical line of the U($N_f$)$\times$U($N_f$) sigma model with Yukawa interaction in an analytic approach by means of the ring-improved one-loop finite temperature effective potential. To evaluate the tricritical line analytically, we have expanded the effective potential up to ${\cal O}(M^3/T^3)$ in the high temperature expansion, and up to ${\cal O}(\phi^6/T^6)$ in the order parameter expansion. In this approximation, we have shown that the tricritical line can be examined to the lowest order in the coupling constants, and an analytic relation between the tree-level masses of the scalar bosons follows. This analysis is valid in the parameter region where $a_i\ll1$, $N_f\gg1$ and $(\lambda_1+\lambda_2/N_f)\gg \lambda_2/N_f$. We can examine the contribution of gauge bosons by means of our technique. However, in this case, since the parameter to evaluate the validity of the ring improvement for the transverse modes of the gauge bosons, $\beta_{gT}=g^2T/M^2_{gT}$, diverges for small values of $\phi/T$, we cannot perform a reliable analysis near the tricritical line.\footnote{We find there is no tricritical line in this case because of the $\phi^3$ term from the transverse modes of the gauge bosons.}\ As a prescription, we could add the magnetic mass term $\sim g^2T$ to the transverse modes, which reduces the singularity of $\beta_{gT}$. However, in this case, we find that the coefficient of the order parameter expansion is proportional to $1/g^2$, and therefore the perturbative expansion is not valid after all. We need another prescription to take care of this problem. Although we have expanded the potential up to ${\cal O}(M^3/T^3)$ in the high temperature limit, we can improve this approximation by including the terms of ${\cal O}(M^4/T^4)$. Since these terms contain the $\log(T^2/\mu^2)$ terms, we cannot solve the conditions of the first-order phase transition analytically for the critical temperature. Therefore we need another prescription to derive the analytic relation between the coupling constants on the tricritical line up to next-to-leading order.
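As an elementary cross-check of the main result, the step from Eq.(\ref{relation}) to Eq.(\ref{mh0}) can be reproduced symbolically in terms of $b_h=\lambda_1+\lambda_2/N_f$ and $b_\xi=\lambda_2/N_f$. A minimal sketch:
\begin{verbatim}
# Check Eq. (relation) => Eq. (mh0).  With L = b_h and M = b_xi:
#   a_h = 3L/2, a_xi = (L + 2M)/2, a_eta = a_pi = L/2.
import sympy as sp

Nf, L, M = sp.symbols('N_f L M', positive=True)
a = [sp.Rational(3, 2)*L, (L + 2*M)/2, L/2, L/2]
n = [1, Nf**2 - 1, 1, Nf**2 - 1]

S1 = sum(ni*ai for ni, ai in zip(n, a))      # sum_i n_i a_i
S2 = sum(ni*ai**2 for ni, ai in zip(n, a))   # sum_i n_i a_i^2
rel = sp.expand(S2 - L*S1)                   # lhs of Eq. (relation)

print(sp.simplify(sp.solve(rel, L**2)[0] / M**2))
# -> 2*(N_f**2 - 1)/(N_f**2 - 2), i.e. (m_h/m_xi)^4 on the
# tricritical line, reproducing Eq. (mh0).
\end{verbatim}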
By means of our technique, we might study many phenomenological issues, such as the QCD chiral phase transition, the electroweak phase transition, or other problems in cosmology. However, in order to address these issues, we need to consider further problems, such as the effect of finite density or the infrared singularities of the gauge bosons mentioned above. It is also interesting to compare our analysis to the results of numerical simulations, and to examine the non-perturbative features of the tricritical line. \subsection*{Acknowledgments} The author would like to thank Y.~Kikukawa, M.~Kohda for valuable discussions.
\section{Introduction} Our aim in the present paper is threefold: \begin{enumerate} \item To generalise the concept of duality (introduced in \cite{QCR} for ordinary difference equations) to lattice equations; \item To use duality to derive the three-dimensional (3D) lattice equation \begin{equation} \label{DAKP} \begin{split} 0&= a_{{1}} \left(\tau_{{k-1,l+1,m+1}}\tau_{{k+1,l,m}}\tau_{{k+1,l,m+1}}\tau_{{k+1,l+1,m}} -\tau_{{k,l,m+1}}\tau_{{k,l+1,m}}\tau_{{k,l+1,m+1}}\tau_{{k+2,l,m}} \right)\\ &\ \ \ + a_{{2}} \left(\tau_{{k,l+1,m}}\tau_{{k,l+1,m+1}}\tau_{{k+1,l-1,m+1}}\tau_{{k+1,l+1,m}} -\tau_{{k,l,m+1}}\tau_{{k,l+2,m}}\tau_{{k+1,l,m}}\tau_{{k+1,l,m+1}} \right)\\ &\ \ \ + a_{{3}} \left( \tau_{{k,l,m+1}}\tau_{{k,l+1,m+1}}\tau_{{k+1,l,m+1}}\tau_{{k+1,l+1,m-1}} -\tau_{{k,l,m+2}}\tau_{{k,l+1,m}}\tau_{{k+1,l,m}}\tau_{{k+1,l+1,m}} \right)\\ &\ \ \ + a_{{4}} \left( \tau_{{k,l,m}}\tau_{{k,l+1,m+1}}\tau_{{k+1,l,m+1}}\tau_{{k+1,l+1, m}}-\tau_{{k,l,m+1}}\tau_{{k,l+1,m}}\tau_{{k+1,l,m}}\tau_{{k+1,l+1,m+1}} \right). \end{split} \end{equation} \item To provide conservation laws for equation (\ref{DAKP}), to present reductions to two-dimensional integrable systems, and to support our conjecture that equation (\ref{DAKP}) admits $N$-soliton solutions and its reductions have the Laurent property and vanishing algebraic entropy. \end{enumerate} Most of the currently known integrable 3D lattice equations are related to discretizations of the three continuous 3D Kadomtsev-Petviashvili equations called AKP, BKP and CKP. The lattice AKP equation, \begin{equation} \label{AKP} A\tau_{k+1,l,m}\tau_{k,l+1,m+1}+ B\tau_{k,l+1,m}\tau_{k+1,l,m+1}+ C\tau_{k,l,m+1}\tau_{k+1,l+1,m}=0, \end{equation} was first derived by Hirota \cite{Hir}, and is also called the Hirota-Miwa equation \cite{Miw}. The more general lattice BKP equation (also called the Miwa equation), \begin{equation} \label{BKP} A\tau_{k+1,l,m}\tau_{k,l+1,m+1}+ B\tau_{k,l+1,m}\tau_{k+1,l,m+1}+ C\tau_{k,l,m+1}\tau_{k+1,l+1,m}+ D\tau_{k,l,m}\tau_{k+1,l+1,m+1}=0, \end{equation} was first found by Miwa in \cite{Miw}. The lattice CKP equation, \begin{align} &(\tau_{k,l,m}\tau_{k+1,l+1,m+1}+\tau_{k+1,l,m}\tau_{k,l+1,m+1}-\tau_{k,l+1,m}\tau_{k+1,l,m+1}-\tau_{k,l,m+1}\tau_{k+1,l+1,m})^2\nonumber\\ =&4(\tau_{k,l,m}\tau_{k+1,l,m+1}-\tau_{k,l+1,m}\tau_{k,l,m+1}) (\tau_{k+1,l,m}\tau_{k+1,l+1,m+1}-\tau_{k+1,l+1,m}\tau_{k+1,l,m+1}) \label{CKP} \end{align} was first derived by Kashaev as a 3D lattice model associated with the local Yang-Baxter relation \cite{Kas}, and later was independently found by King and Schief \cite{KS} as a superposition principle for the continuous CKP equation. This equation is also formulated as a hyperdeterminant in \cite{TW}. The AKP equation is a bilinear equation on a six-point octahedral stencil ($A_3$ lattice). Equations of this type have been classified with respect to multi-dimensional consistency in \cite{ABS}. The lattice BKP and CKP equations are both defined on an 8-point cubic stencil. However, whereas lattice BKP is bilinear, the lattice CKP is quartic and nonlinear. A nonlinear form of the AKP equation (quartic and defined on a 10-point stencil) was given in \cite[equation 5.5]{GRPSW}. A quintic nonlinear non-potential lattice AKP equation was given in \cite[equation 3.19]{FuNi}. This equation is defined on a 10-point stencil \cite[Figure 3]{FuNi}. A quadrilinear 3D lattice equation related to the lattice BKP equation, defined on a 14-point stencil ($D_3$ lattice), is presented in \cite[Equation 24]{KS}.
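All of these bilinear equations can be probed with elementary computer algebra. As an illustration, a minimal sketch (not taken from the literature) verifying the standard 1-soliton ansatz $\tau_{k,l,m}=1+c\,x^ky^lz^m$ for the lattice AKP equation (\ref{AKP}):
\begin{verbatim}
# 1-soliton test for lattice AKP: substitute tau = 1 + c*X with
# X = x^k y^l z^m; a shift in k, l or m multiplies X by x, y or z.
import sympy as sp

c, x, y, z, A, B, C, X = sp.symbols('c x y z A B C X')
tau = lambda i, j, m: 1 + c * x**i * y**j * z**m * X

E = (A*tau(1, 0, 0)*tau(0, 1, 1) + B*tau(0, 1, 0)*tau(1, 0, 1)
     + C*tau(0, 0, 1)*tau(1, 1, 0))
for cf in sp.Poly(sp.expand(E), X).all_coeffs():
    print(sp.factor(cf))
# X^2: (A + B + C)*c^2*x*y*z,   X^0: A + B + C,
# X^1: c*(A*(x + y*z) + B*(y + x*z) + C*(z + x*y)),
# so the ansatz solves AKP iff A + B + C = 0 and the X^1 bracket
# (the dispersion relation) vanishes.
\end{verbatim}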
Our equation (\ref{DAKP}), which we will obtain as a dual to the AKP equation (\ref{AKP}), is a quadrilinear equation defined on the 14-point stencil depicted in Figure \ref{STC}. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[rotate=0] \fill \foreach \p in {(0, 0), (0., 1.20), (0., 2.40), (-.6, -.6), (-1.2, -1.2), (1.3, -.3), (2.6, -.6), (.7, .30), (-.6, .60), (-1.9, .90), (1.3, .90), (1.9, 1.50), (.7, -.90), (.7, -2.10)} {\p circle (3pt)}; \draw[thick] (-1.2, -1.2)--(0, 0)--(0., 2.40) (-1.9, 0.90)--(0.7, 0.30)--(1.9, 1.50) (0, 0)--(1.3, -0.3)--(1.3, 0.90)--(0., 1.20)--(-0.6, 0.60)--(-0.6, -0.6)--(0.7, -0.90)--(1.3, -0.3)--(2.6, -0.6) (0.7, -2.10)--(0.7, 0.30) ; \end{tikzpicture} \caption{\label{STC} The 14-point stencil of equation (\ref{DAKP}).} \end{center} \end{figure} To our knowledge equation (\ref{DAKP}) is new. Given that the number of known integrable 3D lattice equations is quite small, cf. \cite[Sections 3.9-3.10]{HJN}, any possible addition to this number would seem worth pursuing. The idea of duality for ordinary difference equations is as follows: given an ordinary difference equation (O$\Delta$E), $E=E(u_n,u_{n+1},\ldots,u_{n+d})=0$, with an integral, $K[n]=K(u_n,u_{n+1},\ldots,u_{n+d-1})$, the difference of the integral with its upshifted version factorises as $K[n+1] - K[n] = E \Lambda$. The quantity $\Lambda$ is called an integrating factor. The equation $\Lambda=0$ is a dual equation to the equation $E=0$; both equations share the same integral. If $E=0$ has several integrals $K_i$, then a linear combination of them gives rise to a dual with parameters: \[ \sum_i a_i K_i[n+1] - \sum_i a_i K_i[n] = E \left( \sum_i a_i \Lambda_i \right). \] In \cite{QCR} duals to $(d-1,-1)$-periodic reductions of the modified Korteweg-de Vries (mKdV) lattice equation are shown to be integrable maps, namely level-set-dependent mKdV maps. In \cite{DTKQ} a novel hierarchy of maps is found by applying the concept of duality to the linear equation $u_{n}=u_{n+d}$, and $\lfloor \frac{d-1}{2}\rfloor$ integrals are provided explicitly. The integrability of these maps is established in \cite{HKQ}. We note that dual equations are not necessarily integrable; examples exist where the dual is not integrable \cite{DKQ}. In \cite{JV}, the authors study several integrable 4th order maps and integrable maps that are dual to them. Given a 2D lattice equation, $E=E(u_{k,l},\ldots,u_{k+d,l+e})=0$, instead of considering differences of integrals we now consider conservation laws: \[ P[k+1,l] - P[k,l] + Q[k,l+1] - Q[k,l] = E \Lambda. \] Here the quantity $\Lambda$ is called the characteristic of the conservation law. Again the equation $\Lambda=0$ or a linear combination, $\sum_i a_i \Lambda_i =0$, can be viewed as the dual equation to $E=0$. The situation for 3D lattice equations is similar. The structure of the paper is as follows. In section \ref{sdAKP} we present a 3D lattice equation which is dual to the lattice AKP equation, and we provide a matrix formula which simultaneously captures four conservation laws for the AKP equation as well as three conservation laws for the dual AKP equation. In section \ref{scqds} we show that these conservation laws give rise to quotients-difference formulations, in the same way that Rutishauser's quotient-difference (QD) algorithm \cite{Rut} is a quotient-difference formulation of the discrete-time Toda equation \cite{Hir2}.
In section \ref{stnss} we provide 1-soliton and 2-soliton solutions, and we provide evidence to support a conjectured form for the general $N$-soliton solution. In section \ref{sLp} we provide evidence to support our conjecture that 2-periodic reductions to ordinary difference equations have the Laurent property. In section \ref{DG} we provide details of calculations which indicate that 2-periodic reductions of the dual AKP equation have quadratic growth. In section \ref{srt2dle} we show that reductions of the dual AKP equation (\ref{DAKP}) to 2D lattice equations include the higher analogue of the discrete time Toda (HADT) equation and its corresponding quotient-quotient-difference (QQD) system \cite{SNK}, the discrete hungry Lotka-Volterra system, discrete hungry QD, as well as the hungry forms of HADT and QQD introduced in \cite{CCHT}. \section{Derivation of a dual to the lattice AKP equation, and a matrix conservation law} \label{sdAKP} Seven characteristics of conservation laws for the lattice AKP equation can be obtained from the results in \cite{MQ}. We choose to only consider the parameter-independent ones and we set all arbitrary functions equal to 1. We will denote shifts in $k$ using tildes, shifts in $l$ by hats, and shifts in $m$ by dots, e.g. $\underset{\dot{ }}{\hat{\tilde{\tau}}}=\tau_{k+1,l+1,m-1}$. The four characteristics (denoted $\Lambda_1,\Lambda_2,\Lambda_3,\Lambda_7$ in \cite{MQ}) are given by \[ \W=\left(\dfrac{\ut{\hat{\dot{\tau}}}}{\hat{\dot{\tau}}\hat{\tau}\dot{\tau}}- \dfrac{\tilde{\tilde{\tau}}}{\tilde{\tau}\tilde{\hat{\tau}}\tilde{\dot{\tau}}},\ \dfrac{\uh{\tilde{\dot{\tau}}}}{\tilde{\dot{\tau}}\tilde{\tau}\dot{\tau}}- \dfrac{\hat{\hat{\tau}}}{\hat{\tau}\hat{\dot{\tau}}\tilde{\hat{\tau}}},\ \dfrac{\ud{\tilde{\hat{\tau}}}}{\tilde{\hat{\tau}}\tilde{\tau}\hat{\tau}}- \dfrac{\dot{\dot{\tau}}}{\dot{\tau}\tilde{\dot{\tau}}\hat{\dot{\tau}}},\ \dfrac{\tau}{\tilde{\tau}\hat{\tau}\dot{\tau}}- \dfrac{\tilde{\hat{\dot{\tau}}}}{\tilde{\hat{\tau}}\hat{\dot{\tau}}\tilde{\dot{\tau}}} \right).
\] One can now check the following matrix conservation law \begin{equation} \label{CONS} \tilde{\P}-\P+\hat{\Q}-\Q+\dot{\R}-\R=\V^T\W, \end{equation} where $\P,\Q,\R$ are the $3\times4$ matrices \[ \P= \begin{pmatrix} -\frac{\tilde{\tau}\ut{\hat{\dot{\tau}}}}{\hat{\tau}\dot{\tau}} & 0 & 0 & -\frac{\tau\hat{\dot{\tau}}}{\hat{\tau}\dot{\tau}} \\[2mm] -\frac{\tilde{\tau}\ut{\hat{\tau}}}{\hat{\tau}\tau} & 0 & \frac{\dot{\tau}\ud{\hat{\tau}}}{\hat{\tau}\tau} & 0 \\[2mm] -\frac{\tilde{\tau}\ut{\dot{\tau}}}{\dot{\tau}\tau} & \frac{\hat{\tau}\uh{\dot{\tau}}}{\dot{\tau}\tau} & 0 & 0 \end{pmatrix},\quad \Q=\begin{pmatrix} 0&-\frac{\hat{\tau}\uh{\tilde{\tau}}}{\tilde{\tau}\tau} & \frac{\dot{\tau}\ud{\tilde{\tau}}}{\tilde{\tau}\tau} & 0 \\[2mm] 0 & -\frac{\hat{\tau}\uh{\tilde{\dot{\tau}}}}{\tilde{\tau}\dot{\tau}} & 0 & -\frac{\tau \dot{\tilde{\tau}}}{\tilde{\tau}\dot{\tau}} \\[2mm] \frac{\tilde{\tau}\ut{\dot{\tau}}}{\dot{\tau}\tau} & -\frac{\hat{\tau}\uh{\dot{\tau}}}{\dot{\tau}\tau} & 0 & 0 \end{pmatrix},\quad \R=\begin{pmatrix} 0 & \frac{\hat{\tau}\uh{\tilde{\tau}}}{\tilde{\tau}\tau} & -\frac{\dot{\tau}\ud{\tilde{\tau}}}{\tilde{\tau}\tau} & 0 \\[2mm] \frac{\tilde{\tau}\ut{\hat{\tau}}}{\hat{\tau}\tau} & 0 & -\frac{\dot{\tau}\ud{\hat{\tau}}}{\hat{\tau}\tau} & 0 \\[2mm] 0 & 0 & -\frac{\dot{\tau}\ud{\tilde{\hat{\tau}}}}{\tilde{\tau}\hat{\tau}} & -\frac{\tilde{\hat{\tau}}\tau}{\tilde{\tau}\hat{\tau}} \end{pmatrix}, \] and $\V^T$ denotes the transpose of \[ \V=\left( \hat{\dot{\tau}}\tilde{\tau},\ \tilde{\dot{\tau}}\hat{\tau},\ \hat{\tilde{\tau}}\dot{\tau}\right). \] Denoting two vectors of coefficients by $\v=\left( A , B , C \right)$ and $\w=\left(a_1, a_2, a_3, a_4 \right)$, we have that $\v\V^T=0$ represents the AKP equation (\ref{AKP}) and the equation $\W\w^T=0$ is equivalent to equation (\ref{DAKP}). Hence, pre-multiplying (\ref{CONS}) with $\v$ gives four conservation laws for the lattice AKP equation, and post-multiplying (\ref{CONS}) with $\w^T$ yields three conservation laws for equation (\ref{DAKP}). Thus the lattice AKP equation and equation (\ref{DAKP}) are dual to each other. \section{Corresponding quotients-difference systems} \label{scqds} The lattice AKP equation and the dual AKP equation can each be written as a system of one difference equation combined with a number of quotient equations. Let us introduce variables \[ p=\frac{\tau\hat{\dot{\tau}}}{\hat{\tau}\dot{\tau}},\qquad q=\frac{\tau \dot{\tilde{\tau}}}{\tilde{\tau}\dot{\tau}},\qquad v=\frac{\tilde{\hat{\tau}}\tau}{\tilde{\tau}\hat{\tau}}. \] Note that $p=-P_{14}$, $q=-Q_{24}$ and $v=-R_{34}$. Hence, the fourth conservation law for the AKP equation can be written as the difference equation \begin{equation} \label{DEAKP} A(\tilde{p}-p)+B(\hat{q}-q)+C(\dot{v}-v)=0. \end{equation} Taking logarithms, we find \begin{align*} \ln(p)&=c+\hat{\dot{c}}-\hat{c}-\dot{c}=(1-\hat{S})(1-\dot{S})c \\ \ln(q)&=c+\tilde{\dot{c}}-\tilde{c}-\dot{c}=(1-\tilde{S})(1-\dot{S})c \\ \ln(v)&=c+\tilde{\hat{c}}-\tilde{c}-\hat{c}=(1-\tilde{S})(1-\hat{S})c \end{align*} where $c=\ln(\tau)$, capital $\tilde{S}$ denotes the shift operator in $k$ (and similarly $\hat{S}$ and $\dot{S}$ represent shifts in $l$ resp. $m$), and $1$ is the identity. This gives $(1-\tilde{S})\ln(p)=(1-\hat{S})\ln(q)=(1-\dot{S})\ln(v)$, which can be written in quotient form, \begin{equation}\label{QQ} \frac{\tilde{p}}{p}=\frac{\hat{q}}{q}=\frac{\dot{v}}{v}. 
\end{equation} As (\ref{QQ}) contains only two independent equations, the system of equations for $p,q,v$ defined by (\ref{DEAKP}) and (\ref{QQ}) can be referred to as a QQD-system. Similarly, we can write the dual AKP equation in variables \begin{equation} \label{uzwv} u=\frac{\tilde{\tau}\ut{\dot{\tau}}}{\tau\dot{\tau}},\quad z=\frac{\hat{\tau}\uh{\dot{\tau}}}{\tau\dot{\tau}},\quad w=\frac{\dot{\tau}\ud{\hat{\tilde{\tau}}}}{\tilde{\tau}\hat{\tau}}, \end{equation} and the variable $v$ introduced above. We have $u=-P_{31}=Q_{31}$, $z=P_{32}=-Q_{32}$, and $w=-R_{33}$. The third conservation law becomes \begin{equation} \label{D} a_1(\hat{u}-\tilde{u})+a_2(\tilde{z}-\hat{z})+a_3(w-\dot{w})+a_4(v-\dot{v})=0. \end{equation} Taking logarithms we find \[ \ln(u)=\frac{(\tilde{S}-1)(\tilde{S}-\dot{S})}{\tilde{S}}c,\quad \ln(z)=\frac{(\hat{S}-1)(\hat{S}-\dot{S})}{\hat{S}}c,\quad \ln(w)=\frac{(\tilde{S}-\dot{S})(\hat{S}-\tilde{S})}{\dot{S}}c. \] One can now derive quotient equations which are either ratios of quadratic terms \[ \frac{\hat{\hat{\tilde{u}}}\hat{\tilde{u}}}{\dot{\hat{\tilde{u}}}\hat{\tilde{u}}}= \frac{\hat{\tilde{\tilde{z}}}\dot{\hat{z}}}{\dot{\hat{\tilde{z}}}\hat{\tilde{z}}}= \frac{\dot{\hat{\tilde{w}}}\dot{w}}{\dot{\tilde{w}}\dot{\hat{w}}}= \frac{\hat{\tilde{v}}\dot{\dot{v}}}{\dot{\tilde{v}}\dot{\hat{v}}}, \] or ratios of linear terms \begin{equation} \label{Q3} \frac{\tilde{\hat{u}}}{\tilde{u}}=\frac{\tilde{v}}{\dot{v}},\quad \frac{\tilde{\hat{u}}}{\dot{\tilde{u}}}=\frac{\dot{\tilde{w}}}{\dot{w}},\quad \frac{\tilde{\hat{z}}}{\hat{z}}=\frac{\hat{v}}{\dot{v}},\quad \frac{\tilde{\hat{z}}}{\dot{\hat{z}}}=\frac{\dot{\hat{w}}}{\dot{w}}, \end{equation} of which only three are independent. In the sequel, we will refer to the system of quotient and difference equations (\ref{D}) and (\ref{Q3}) as the Q$^3$D-system. \section{The $N$-soliton solution} \label{stnss} \subsection*{1-soliton} Equation (\ref{DAKP}) admits the 1-soliton solution $\tau_{k,l,m}=1+c_1 x_1^ky_1^lz_1^m$ with dispersion relation $Q_1=0$, where \begin{align} Q_i= &y_iz_i \left( x_i-1 \right) \left( x_i-y_i \right) \left( x_i-z_i \right) a_{{1}}+x_iz_i \left( y_i-1 \right) \left( y_i-x_i \right) \left( y_i-z_i \right) a_{{2}}\notag\\ &+x_iy_i \left( z_i-1 \right) \left( z_i-x_i \right) \left( z_i-y_i \right) a_{{3}}+x_iy_iz_i \left( x_i-1 \right) \left( y_i-1 \right) \left( z_i-1 \right) a_{{4}}. \end{align} In the sequel we will use the following notation, $x_{ij}=x_ix_j$, $c_{ij}=c_ic_j$, and if $Q_i=Q(x_i,y_i,z_i)$ then $Q_{ij}=Q(x_{ij},y_{ij},z_{ij})$. \subsection*{2-soliton} Equation (\ref{DAKP}) admits the 2-soliton solution \[ \tau_{k,l,m}=1+c_1x_1^ky_1^lz_1^m+c_2x_2^ky_2^lz_2^m+c_1c_2 R_{12}x_{12}^ky_{12}^lz_{12}^m \] where $Q_1=Q_2=0$, \[ R_{ij}= \frac{a_1S^{ij}_1+a_2S^{ij}_2+a_3S^{ij}_3+a_4S^{ij}_4}{Q_{ij}}, \] with \begin{align*} S^{ij}_1=\bigg( & \left( x_i-x_j \right) \left( x_jy_i-y_jx_i \right) \left( x_{ij}-z_{ij} \right) + \left( x_i-x_j \right) \left( x_jz_i-z_jx_i \right) \left( x_{ij}-y_{ij} \right)\\ &+ \left( x_jz_i-z_jx_i \right) \left( x_jy_i-y_jx_i \right) \left( 1-x_{ij} \right) \bigg)y_{ij}z_{ij},\\ S^{ij}_4=\bigg( & \left(1 - x_{ij} \right) \left( y_i-y_j \right) \left( z_i-z_j \right) + \left( x_i-x_j \right) \left(1 - y_{ij} \right) \left( z_i-z_j \right) \\ &+ \left( x_i-x_j \right) \left( y_i-y_j \right) \left(1 - z_{ij} \right) \bigg)x_{ij}y_{ij}z_{ij}, \end{align*} and $S^{ij}_k$ for $k=2$, resp.
$k=3$, are obtained from $S^{ij}_1$ by interchanging the symbols $x$ and $y$, respectively $x$ and $z$. This has been checked by direct computation, using a Groebner basis in Maple \cite{MAP}. \subsection*{$N$-soliton} Let $P(N)$ denote the powerset of the string $12\ldots N$, e.g. we write \[ P(3)=\{\varepsilon ,1,2,3,12,23,13,123\}, \] where $\varepsilon$ is the empty string, and let $P_2(S)$ be the subset of the powerset of a string $S$ containing all 2-letter substrings, e.g. \[ P_2(123)=\{12,23,13\}. \] \begin{conjecture} Equation (\ref{DAKP}) admits the following $N$-soliton solution: \[ \tau_{k,l,m}=\sum_{w \in P(N)} \Big(\prod_{v\in P_2(w)}R_v\Big) c_w x_w^k y_w^l z_w^m,\qquad \text{with } Q_i=0,\ i\in\{1,2,\ldots,N\}. \] \end{conjecture} Note that in the above formula $c_\varepsilon=x_\varepsilon=\cdots=1$ is understood. The formula can be computationally checked as follows: Taking particular values for $a_1,a_2,a_3$ and $a_4$, one can find rational points $p_i=(x_i,y_i,z_i)\in\mathbb{Q}^3$ such that $Q_i=0$. Using $N\in\mathbb{N}$ points, one substitutes the $N$-soliton solution, which contains $N$ arbitrary constants $c_1,\ldots,c_N$, into the equation for fixed points $(k,l,m)\in\mathbb{Z}^3$. For example, taking $(a_1,a_2,a_3,a_4)=(1,2,3,2)$ the following points \begin{align*} &p_1=(2, 4, 2/3),\ &&p_2=(6, 21, -14),\ &&p_3=(7, 14, -6),\\ &p_4=(8, 15, -40/9),\ &&p_5=(14, 80, -560),\ &&p_6=(18, 120, -15/2) \end{align*} satisfy $Q_i=0$. Taking $k=-2,l=1,m=3$ one needs to verify that \begin{equation}\label{equ} \begin{split} &\tau_{{-3,2,4}}\tau_{{-1,1,3}}\tau_{{-1,1,4}}\tau_{{-1,2,3}}+2\,\tau_{{-2,1,3}}\tau_{{-2 ,2,4}}\tau_{{-1,1,4}}\tau_{{-1,2,3}}-\tau_{{-2,1,4}}\tau_{{-2,2,3}}\tau_{{-2,2,4}}\tau_{ {0,1,3}}\\ &-2\,\tau_{{-2,1,4}}\tau_{{-2,2,3}}\tau_{{-1,1,3}}\tau_{{-1,2,4}}+3\,\tau_{{-2 ,1,4}}\tau_{{-2,2,4}}\tau_{{-1,1,4}}\tau_{{-1,2,2}}-2\,\tau_{{-2,1,4}}\tau_{{-2,3,3}} \tau_{{-1,1,3}}\tau_{{-1,1,4}}\\ &-3\,\tau_{{-2,1,5}}\tau_{{-2,2,3}}\tau_{{-1,1,3}}\tau_{{-1 ,2,3}}+2\,\tau_{{-2,2,3}}\tau_{{-2,2,4}}\tau_{{-1,0,4}}\tau_{{-1,2,3}} \end{split} \end{equation} vanishes. 
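The membership of these points on the dispersion variety is elementary to re-check. As an illustration (not part of the Maple computation reported above), the following Python sketch verifies in exact rational arithmetic that all six points satisfy $Q_i=0$:
\begin{verbatim}
# Exact-arithmetic check that the six rational points p_i satisfy the
# dispersion relation Q_i = 0 for (a1, a2, a3, a4) = (1, 2, 3, 2).
from fractions import Fraction as F

def Q(x, y, z, a1, a2, a3, a4):
    # The dispersion polynomial Q_i defined above.
    return (y*z*(x - 1)*(x - y)*(x - z)*a1
            + x*z*(y - 1)*(y - x)*(y - z)*a2
            + x*y*(z - 1)*(z - x)*(z - y)*a3
            + x*y*z*(x - 1)*(y - 1)*(z - 1)*a4)

points = [(F(2), F(4), F(2, 3)), (F(6), F(21), F(-14)),
          (F(7), F(14), F(-6)), (F(8), F(15), F(-40, 9)),
          (F(14), F(80), F(-560)), (F(18), F(120), F(-15, 2))]

assert all(Q(x, y, z, 1, 2, 3, 2) == 0 for x, y, z in points)
\end{verbatim}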
Using the above 6 points $p_i$ the value of the 6-soliton solution at $(k,l,m)=(-3,2,4)$ is {\small \begin{align*} \tau_{{-3,2,4}}&=1+{\frac {32\,c_{{1}}}{81}}+{\frac {235298\,c_{{2}}}{3 }}+{\frac {5184\,c_{{3}}}{7}}+{\frac {125000\,c_{{4}}}{729}}-{\frac { 9973408256.10^{8}\,c_{{1}}c_{{2}}c_{{3}}c_{{4}}}{2850829229061}} +{ \frac {17537436614656.10^{14}\,c_{{1}}c_{{4}}c_{{5}}}{ 304882184692881}}\\ &-{\frac {286643773308928.10^{13}\,c_{{2}}c_{{4}} c_{{5}}}{5379614362287}} -{\frac {31023435087872.10^{14}\,c_{{3}} c_{{4}}c_{{5}}}{1827893357279451}}-{\frac {8355684882055168.10^{7}\,c_ {{1}}c_{{2}}c_{{5}}}{3557331}}-{\frac {244.10^{6}\,c_{{1}}c_{{4}}}{2735937}}\\ &+{\frac {419082155327488.10^{10}\,c_{{ 1}}c_{{3}}c_{{5}}}{79592065203}}-{\frac {25192657019901837312.10^{7}\, c_{{2}}c_{{3}}c_{{5}}}{414577637413}}+{\frac {15625\,c_{{6}}}{2}}+ 229376.10^{6}\,c_{{5}}+{\frac {430515.10^{5}\,c_{{3}}c_{{6}}}{313747}}\\ &+{ \frac {1220703125\,c_{{4}}c_{{6}}}{34992}}-{\frac {6272.10^{13}\, c_{{5}}c_{{6}}}{339}}+{\frac {5.10^{5}\,c_{{1}}c_{{6}}}{1539}}-{\frac { 1838265625\,c_{{2}}c_{{6}}}{422}}+{\frac { 4563788408614224681500672.10^{17}\,c_{{1}}c_{{2}}c_{{3}}c_{{4 }}c_{{5}}}{3335745327609453768757318923}} \end{align*}} {\small \begin{align*} \phantom{\tau_{{-3,2,4}}} &-{\frac {1075648.10^{6}\,c_{{2 }}c_{{3}}c_{{4}}}{253204479}}+{\frac {9415684768.10^{6}\,c_{{1}}c_{{2}} c_{{4}}}{1595051271}}-{\frac {74176.10^{8}\,c_{{1}}c_{{3}}c_{{4}}}{ 10556764911}}+{\frac {359696691200\,c_{{1}}c_{{2}}c_{{3}}}{13161}}-{\frac {323830284288.10^{6}\, c_{{2}}c_{{5}}}{367}}\\ &+{ \frac {68956750243187158016.10^{25}\,c_{{1}}c_{{2}}c_ {{3}}c_{{4}}c_{{5}}c_{{6}}}{13520106051588549460696520569492377}}+{ \frac {203190312540790063104.10^{17}\,c_{{1}}c_{{2}}c_{{3}}c_ {{5}}c_{{6}}}{22721904043520272815643}}-{\frac {270945647.10^{21}\,c_{{2}} c_{{4}}c_{{5}}c_{{6}}}{3405295891327671}}\\ &+{\frac { 2558749998443072.10^{22}\,c_{{1}}c_{{2}}c_{{4}}c_{{5}}c_ {{6}}}{87172894850121902722923}}+{\frac { 512857367787136.10^{25}\,c_{{1}}c_{{3}}c_{{4}}c_{{5}} c_{{6}}}{17456412275319361592913150717}}-{\frac {5065859375.10^{8} \,c_{{1}}c_{{3}}c_{{4}}c_{{6}}}{1419494280227793}}-{ \frac {8.10^{6}\,c_{{3}}c_{{4}}}{22869}}\\ &+{\frac { 1197498441289024.10^{22}\,c_{{2}}c_{{3}}c_{{4}}c_{{5}}c_ {{6}}}{3344920733568156174088032717}}+{\frac {486525134375.10^{10}\, c_{{1}}c_{{2}}c_{{3}}c_{{4}}c_{{6}}}{3851564365825970013}}+{\frac { 433061888.10^{7}\,c_{{1}}c_{{5}}}{29079}}-{\frac { 5190429687500\,c_{{3}}c_{{4}}c_{{6}}}{3075034347}}\\ &-{\frac {3785197879296.10^{7}\,c_{{3}}c_{{5}}}{ 1802479}}-{\frac {152276992.10^{13}\,c_{{4}}c_{{5}}}{226286703}}- {\frac {23683072.10^{14}\,c_{{1}}c_{{5}}c_{{6}}}{187297839}} +{\frac {1353499793462147416064.10^{14}\,c_{{1} }c_{{2}}c_{{4}}c_{{5}}}{7248099679897056849}}\\ &-{\frac {24118045.10^{5}\,c_{{1}}c_{{2}}c_{{6}}}{324729}} +{\frac {34110994995.10^{5}\,c_{{2}}c_{{3}}c_{{6}}}{5926981771}}-{\frac {297851562500\,c_{{1}}c_{{4}}c_{{6}}}{155948409}}+{\frac { 287875.10^{18}\,c_{{4}}c_{{5}}c_{{6}}}{2036580327}}+{\frac { 308710976\,c_{{1}}c_{{2}}}{243}}\\ &+{\frac { 143614501953125\,c_{{2}}c_{{4}}c_{{6}}}{358705908}}-{\frac { 8504.10^{8}\,c_{{1}}c_{{3}}c_{{6}}}{38590881}}-{\frac { 4427367168.10^{13}\,c_{{2}}c_{{5}}c_{{6}}}{8750381}}-{\frac {972800\,c_{{1}}c_{{3}}}{861}}-{\frac {224599144.10^{8}\,c_{{1}}c_{{2}}c_{{3 }}c_{{6}}}{5926981771}}\\ &-{\frac {1916804736\,c_{{ 2}}c_{{3}}}{4387}}-{\frac {117649.10^{6}\,c_{{2}}c_{{4}}}{425007}}+{\frac { 929743819565759987712.10^{10}\,c_{{1}}c_{{2}}c_{{3}}c_{{5}}}{ 148833371831267}}+{ \frac 
{5239147585536.10^{14}\,c_{{3}}c_{{5}}c_{{6}}}{ 1304163853181}}\\ &-{\frac { 16971308117950201856.10^{17}\,c_{{1}}c_{{3}}c_{{4}}c_{{5}}}{ 302920719026139857929671}}+{\frac {8342573976349835264.10^{14}\, c_{{2}}c_{{3}}c_{{4}}c_{{5}}}{825274865866379324583}}-{\frac { 183176122990592.10^{17}\,c_{{1}}c_{{3}}c_{{5}}c_{{6}}}{ 172763889794740251}}\\ &-{\frac {66307976.10^{22}\,c_{{1}}c_{{4}}c_{{5}}c_{{6}}}{52134853582482651}}+{\frac {4984888671875.10^{4}\,c_{{2}}c_{{3}}c_{{4}}c_{{6}}}{342087606876807}}-{\frac { 52304285961241509888.10^{14}\,c_{{2}}c_{{3}}c_{{5}}c_{{6}}}{ 63292211820390732077}}\\ &+{\frac { 8906247704.10^{22}\,c_{{3}}c_{{4}}c_{{5}}c_{{6}}}{ 105336010499942922777}}-{\frac {574687791015625.10^{4}\,c_{{1}}c_{{2}}c_{{4}}c_{{6}}}{6394560545439}}-{\frac {228475758493696.10^{14}\,c_{{1}}c_{{2}}c_{{5}}c_{{6}}}{1611531417627}}, \end{align*} } \noindent and we obtain similar expressions for the values of the 6-soliton solution at the other 13 lattice points of the stencil, cf. Figure \ref{STC}. Substituting these expressions into (\ref{equ}) gives 0. This has also been checked for other values of $a_i$, other points $p_j$, and other values of $k,l,m$. We have also performed another computational verification, this time of the 3-soliton solution. Starting with expressions for $p_1,p_2,p_3$ of the form $p_i=b_ix+c_i$, where $b_i,c_i\in\mathbb{Q}$ are randomly chosen and $x$ is a parameter, we have solved the linear system $Q_{12}=Q_{13}=Q_{23}=0$ for $a_1,a_2,a_3,a_4$, and verified the solution for a range of values for $k,l,m$. In Figure \ref{sol} we have plotted two cross sections of a three-soliton solution. \begin{figure}[H] \begin{center} \includegraphics[width=6.6cm]{solm0}\hspace{1.5cm} \includegraphics[width=6.6cm]{solm50} \caption{\label{sol} Two cross sections, $m=0$ resp. $m=50$, of the function $u$ defined in (\ref{uzwv}) where $\tau$ is the three-soliton solution of the dual AKP equation with $(a_1,a_2,a_3,a_4)=(1,2,3,2)$ and $p_1=(\frac 15, \frac{12}{13}, \frac{18}{65})$, $p_2=(\frac{11}{31}, \frac{15}{32}, \frac{495}{496})$, $p_3=(2, 4, \frac 23)$, and $c_1=c_2=c_3=1$.} \end{center} \end{figure} \section{Laurent property} \label{sLp} Consider an ordinary difference equation of order $d$, \begin{equation}\label{IM1} \tau_n=\frac{P(\tau_{n-d},\dots,\tau_{n-1})}{Q(\tau_{n-d},\dots,\tau_{n-1})}, \end{equation} where $P$ is a polynomial and $Q$ is a monomial. Let ${\cal R}$ be the ring of coefficients. From a set of $d$ initial values $U=\{\tau_{k}\}_{0\leq k< d}$, one finds $\tau_{n}$ as rational functions of the initial values, given by \begin{align}\label{IM2} \tau_{n}=\frac{p_{n}(\tau_{0},\dots,\tau_{d-1})}{q_{n}(\tau_{0},\dots,\tau_{d-1})}, \end{align} with greatest common divisor gcd$(p_{n},q_{n})=1$. By definition, if $q_{n}\in{\cal R}[U]$ is a monomial for all $n\geq0$, then (\ref{IM1}) has the Laurent property. The first examples of recurrences with the Laurent property were discovered by Michael Somos in the 1980s \cite{Gal}. Since then many more have been found \cite{ACH,EZ,FZ,HK,LP}, and the Laurent property is a central feature of cluster algebras \cite{fziv, fst}. In \cite[Definition 2.11]{Mas} the author defines the Laurent property for discrete bilinear equations. The idea is that a lattice equation has the Laurent property if all {\em good} initial value problems have the Laurent property. The author points out that not all well-posed, cf. \cite{vdK}, initial value problems are good.
Certainly, the initial value problems obtained from (doubly periodic) reductions given below, see (\ref{LAKP}), are good. In \cite{HHKQ} a more specific Laurent property was introduced, where the terms are Laurent polynomials in some of the variables but polynomial in others. The form of (\ref{IM1}) guarantees that all components $q_{n}$ are monomials for $0\leq n \leq d$. Suppose these monomials depend on a subset of the initial values $V\subset U$, specified by a set of subscripts $I\subset \{1,\ldots,d\}$. The following conditions guarantee that $q_{n}$ is a monomial $\in {\cal R}[V]$ for all $n\geq0$, cf. \cite[Theorem 2]{HHKQ}. \begin{theorem} \label{THM} Suppose that $q_{d}$ is a monomial in ${\cal R}[V]$. If $p_{d}$ is coprime to $p_{d+k}$ for all $k =1,\dots,d$, and $q_{m}\in {\cal R}[V]$ is a monomial for $d+1\leq m \leq 2d$, then (\ref{IM1}) has the following Laurent property: all iterates are Laurent polynomials in the variables from $V$ and they are polynomial in the remaining variables from $W=U\setminus V$. \end{theorem} Introducing the variable $n=z_1k+z_2l+z_3m$, where we take $z_1,z_2,z_3$ to be non-negative integers such that gcd$(z_1,z_2,z_3)=1$, and performing a reduction $\tau_{k,l,m}\rightarrow \tau_n$, one obtains the ordinary difference equation \begin{equation} \label{LAKP} \begin{split} 0&=a_{{1}} \left( \tau_{{n+z_{{1}}}}\tau_{{n+z_{{1}}+z_{{2}}}}\tau_{{n+z_{{1}}+z_{{3}}}}\tau_{{n-z_{{1}}+z_{{2}}+z_{{3}}}} -\tau_{{n+2\,z_{{1}}}}\tau_{{n+z_{{2}}}}\tau_{{n+z_{{3}}}}\tau_{{n+z_{{2}}+z_{{3}}}} \right)\\ &\ \ \ +a_{{2}} \left( \tau_{{n+z_{{2}}}}\tau_{{n+z_{{1}}+z_{{2}}}}\tau_{{n+z_{{2}}+z_{{3}}}}\tau_{{n+z_{{1}}-z_{{2}}+z_{{3}}}} -\tau_{{n+z_{{1}}}}\tau_{{n+2\,z_{{2}}}}\tau_{{n+z_{{3}}}}\tau_{{n+z_{{1}}+z_{{3}}}}\right)\\ &\ \ \ +a_{{3}} \left(\tau_{{n+z_{{3}}}}\tau_{{n+z_{{1}}+z_{{3}}}}\tau_{{n+z_{{2}}+z_{{3}}}}\tau_{{n+z_{{1}}+z_{{2}}-z_{{3}}}} -\tau_{{n+z_{{1}}}}\tau_{{n+z_{{2}}}}\tau_{{n+2\,z_{{3}}}}\tau_{{n+z_{{1}}+z_{{2}}}}\right)\\ &\ \ \ +a_{{4}} \left( \tau_{{n}}\tau_{{n+z_{{1}}+z_{{2}}}}\tau_{{n+z_{{1}}+z_{{3}}}}\tau_{{n+z_{{2}}+z_{{3}}}} -\tau_{{n+z_{{1}}}}\tau_{{n+z_{{2}}}}\tau_{{n+z_{{3}}}}\tau_{{n+z_{{1}}+z_{{2}}+z_{{3}}}} \right), \end{split} \end{equation} which has order \[ d=\max(2z_1,2z_2,2z_3,z_1+z_2+z_3)-\min(0,z_1+z_2-z_3,z_1+z_3-z_2,z_2+z_3-z_1). \] \begin{conjecture} The iterates $\tau_n$ are Laurent polynomials in the initial values $\tau_{i}$, with $i=p,p+1,\ldots,d-p-1$ where \[ p=\min(z_1,z_2,z_3)-\min(0,z_1+z_2-z_3,z_1+z_3-z_2,z_2+z_3-z_1), \] and polynomial in the others, $\tau_0,\tau_1,\ldots,\tau_{p-1},\tau_{d-p},\ldots,\tau_{d-2},\tau_{d-1}$. \end{conjecture} This conjecture has been proven, using Theorem \ref{THM} and \cite{MAP}, for $z_1=z_2=1$, $1\leq z_3\leq 20$, and for $z_1=1,z_2=2,z_3=3$; some but not all of the conditions of Theorem \ref{THM} have been verified for all co-prime $z_1<z_2<z_3\leq 10$. \section{Degree growth} \label{DG} Given an ordinary difference equation of the form (\ref{IM1}) one can define an integer sequence $\{d^p_n\}_{n=0}^\infty$ where $d^p_n$ denotes the degree of the polynomial $p_n$ defined by (\ref{IM2}). According to the {\em degree growth conjecture} \cite{FV,HV} we have \begin{itemize} \item growth is linear in $n$ $\implies$ equation is linearizable. \item growth is polynomial in $n$ $\implies$ equation is integrable. \item growth is exponential in $n$ $\implies$ equation is non-integrable.
\end{itemize} \begin{conjecture} For all positive integers $z_1,z_2,z_3$ such that gcd$(z_1,z_2,z_3)=1$ equation (\ref{LAKP}) has quadratic growth. \end{conjecture} We have verified the following. Choosing (randomly) rational values for the coefficients $a_i$, starting with rational initial values $\tau_0,\ldots,\tau_{d-2}$ and letting $\tau_{d-1}=a+bx$, where $a,b$ are rational values and $x$ a parameter, we calculated up to 150 iterates, stopping when the degree (in $x$) exceeded 250. Taking the second difference of the degree sequence yielded a periodic sequence in almost all cases with $1\leq z_1\leq 4$, $1\leq z_2 \leq z_3 \leq 7$. In two cases more iterations were required. Raising the maximal degree to 500, for $z=(1,1,7)$ we calculated 370 iterations and found that the period of the second difference is 259; for $z=(3,7,7)$ we calculated 354 iterations and found that the period of the second difference is 240. Curiously, the leading order terms are all of the form $(M^{z_1}_{z_2,z_3})^{-1} n^2$ with \begin{align*} M^1&=\left[ \begin {array}{ccccccc} 2&4&15&40&85&156&259 \\ \noalign{\medskip}4&7&12&25&60&94&172\\ \noalign{\medskip}15&12&16& 24&40&76&150\\ \noalign{\medskip}40&25&24&29&40&60&108 \\ \noalign{\medskip}85&60&40&40&46&60&82\\ \noalign{\medskip}156&94& 76&60&60&67&84\\ \noalign{\medskip}259&172&150&108&82&84&92 \end {array} \right],\qquad M^2=\left[ \begin {array}{ccccccc} 4&7&12&25&60&94&172 \\ \noalign{\medskip}7&x&15&x&40&x&154\\ \noalign{\medskip}12&15&28&25 &60&55&132\\ \noalign{\medskip}25&x&25&x&40&x&76\\ \noalign{\medskip} 60&40&60&40&84&60&140\\ \noalign{\medskip}94&x&55&x&60&x&82 \\ \noalign{\medskip}172&154&132&76&140&82&172\end {array} \right], \\ M^3&=\left[ \begin {array}{ccccccc} 15&12&16&24&40&76&150 \\ \noalign{\medskip}12&15&28&25&60&55&132\\ \noalign{\medskip}16&28&x &40&40&x&77\\ \noalign{\medskip}24&25&40&69&60&55&168 \\ \noalign{\medskip}40&60&40&60&114&76&76\\ \noalign{\medskip}76&55&x &55&76&x&108\\ \noalign{\medskip}150&132&77&168&76&108&240\end {array} \right],\qquad M^4=\left[ \begin {array}{ccccccc} 40&25&24&29&40&60&108 \\ \noalign{\medskip}25&x&25&x&40&x&76\\ \noalign{\medskip}24&25&40&69 &60&55&168\\ \noalign{\medskip}29&x&69&x&85&x&77\\ \noalign{\medskip} 40&40&60&85&136&94&132\\ \noalign{\medskip}60&x&55&x&94&x&150 \\ \noalign{\medskip}108&76&168&77&132&150&296\end {array} \right], \end{align*} where $x$ indicates that gcd$(z_1,z_2,z_3)>1$. \section{Reductions to 2D integrable lattice equations} \label{srt2dle} We give some reductions to integrable 2D lattice equations known in the literature. \begin{itemize} \item Setting $\dot{ }=\hat{ }$, $u=e$, $v=\tilde{q}$, $z=w=0$ and $a_1+a_4=0$ the Q$^3$D-system reduces to Rutishauser's QD-algorithm \[ \tilde{e}+\tilde{q}=\hat{e}+\tilde{\hat{q}},\qquad \hat{e}\hat{q}=e\tilde{q}.
\] \item Taking $z=a_2=0$ and $a_1=a_3=-a_4=1$ and introducing variables $i=k-l$, $j=3l+m$, $\tau_{k,l,m}=\Delta_i^j$, equation (\ref{DAKP}) reduces to the higher analogue of the discrete-time Toda (HADT) equation \cite[Equation (3.18)]{SNK}, \begin{align*} &\Delta_{{i+1}}^{{j}} \left( \Delta_{{i-2}}^{{j+4}}\Delta_{{i+1}}^{{j+1}}\Delta_{{i}}^{{j+3}}-\Delta_{{i}}^{{j+2}}\Delta_{{i-1}}^{{j+3}}\Delta_{{i}}^{{j+3}}+\Delta_{{i-1}}^{{j+3}}\Delta_{{i}}^{{j+1}}\Delta_{{i}}^{{j+4}} \right)\\ &=\Delta_{{i-1}}^{{j+4}} \left( -\Delta_{{i+2}}^{{j}}\Delta_{{i}}^{{j+1}}\Delta_{{i-1}}^{{j+3}}+\Delta_{{i}}^{{j+2}}\Delta_{{i}}^{{j+1}}\Delta_{{i+1}}^{{j+1}}-\Delta_{{i}}^{{j}}\Delta_{{i+1}}^{{j+1}}\Delta_{{i}}^{{j+3}} \right), \end{align*} and the Q$^3$D-system reduces to the QQD-system \cite[Equation (1.4)]{SNK}, \begin{eqnarray*} u_{{i,3+j}}+v_{{i+1,j+1}}+w_{{i+1,j}}&=&u_{{i+2,j}}+v_{{i+1,j}}+w_{{i+1,j+1}}\\ u_{{i,3+j}}v_{{i,j+1}}&=&u_{{i+1,j}}v_{{i+1,j}}\\ u_{{i,3+j}}w_{{i,j+1}}&=&u_{{i+1,j+1}}w_{{i+1,j+1}}. \end{eqnarray*} \item By introducing some special bi-orthogonal polynomials, in \cite{CCHT} the so-called discrete hungry quotient-difference (dhQD) algorithm and a system related to the QD-type discrete hungry Lotka-Volterra (QD-type dhLV) system have been derived, as well as hungry forms of the HADT-equation (hHADT) and the QQD scheme (hQQD). These systems are all reductions of the Q$^3$D system, or of the dual to the AKP equation (\ref{DAKP}). Setting $z=w=0$, $\tilde{u}=q$, $v=e$ and introducing $i=k$, $j=pl+m$ we get QD-type dhLV \cite[Equations (6,7)]{CCHT}, \begin{eqnarray*} e_{{i,j}}+q_{{i,j}}&=&e_{{i,j+1}}+q_{{i-1,j+p}}\\ e_{{i,j+1}}q_{{i,j+p}}&=&e_{{i+1,j}}q_{{i,j}}. \end{eqnarray*} Setting $z=w=0$, $\tilde{u}=q$, $v=e$ and introducing $i=k$, $j=l+pm$ we get dhQD \cite[Equations (9,10)]{CCHT}, \begin{eqnarray*} e_{{i,j}}+q_{{i,j}}&=&e_{{i,j+p}}+q_{{i-1,j+1}}\\ e_{{i,j+p}}q_{{i,j+1}}&=&e_{{i+1,j}}q_{{i,j}}. \end{eqnarray*} With $z=0$ the reduction $i=k-l$, $j=(p+2)l+pm$ yields hQQD \cite[Equation (23)]{CCHT}, \begin{eqnarray*} u_{{i,j+p+2}}+v_{{i+1,j+p}}+w_{{i+1,j}}&=&u_{{i+2,j}}+v_{{i+1,j}}+w_{{i+1,j+p}}\\ u_{{i,j+p+2}}v_{{i,j+p}}&=&u_{{i+1,j}}v_{{i+1,j}}\\ u_{{i,j+2}}w_{{i,j}}&=&u_{{i+1,j}}w_{{i+1,j}}. \end{eqnarray*} Performing the same reduction on equation (\ref{DAKP}), with $a_2=0$, gives the hHADT equation \cite[Equation (18)]{CCHT}, \begin{align*} &\left( \Delta_{{i-2}}^{{j+2p+2}}\Delta_{{i+1}}^{{j+p}}\Delta_{{i}}^{{j+p+2}} -\Delta_{{i}}^{{j+2p}}\Delta_{{i-1}}^{{j+p+2}}\Delta_{{i}}^{{j+p+2}} +\Delta_{{i}}^{{j+2p+2}}\Delta_{{i}}^{{j+p}}\Delta_{{i-1}}^{{j+p+2}} \right) \Delta_{{i+1}}^{{j}}\\ &= \left(\Delta_{{i+2}}^{{j}}\Delta_{{i}}^{{j+p}}\Delta_{{i-1}}^{{j+p+2}} -\Delta_{{i}}^{{j+2}}\Delta_{{i}}^{{j+p}}\Delta_{{i+1}}^{{j+p}} +\Delta_{{i}}^{{j}}\Delta_{{i+1}}^{{j+p}}\Delta_{{i}}^{{j+p+2}} \right) \Delta_{{i-1}}^{{j+2p+2}}. \end{align*} \end{itemize} \section{Conclusion} In this paper we have generalized the concept of duality introduced in \cite{QCR} for ordinary difference equations (O$\Delta$Es) to the realm of lattice equations (P$\Delta$Es). The dual AKP equation \eqref{DAKP} and the AKP equation \eqref{AKP} are dual to each other. Generally speaking, dual equations to integrable equations do not need to be integrable themselves; the only thing that is guaranteed is the existence of integrals (for O$\Delta$Es), or conservation laws (for P$\Delta$Es).
However, our equation \eqref{DAKP} unifies a number of known (hierarchies of) integrable 2D lattice equations, which arise as reductions. Together with the support we have provided for our conjectures (that equation \eqref{DAKP} admits an $N$-soliton solution, and that its reductions have the Laurent property and zero algebraic entropy), we believe it is a new {\em integrable} 3D lattice equation. \subsection*{Acknowledgements} This research was supported by the Australian Research Council [DP140100383], by the NSF of China [No. 11371241, 11631007], and by two La Trobe University China Strategy Implementation Grants.
\section{Conclusion} \label{sec:con} In this paper, we study practical shilling attacks. We analyze the limitations of existing works and design a new framework PC-Attack\xspace that transfers knowledge learned from other RS data to attack the victim RS using an incomplete target dataset. Experimental results demonstrate the superiority of PC-Attack\xspace. In the future, we plan to introduce more self-supervised learning tasks so that PC-Attack\xspace can obtain more supervision signals and better capture the transferable RS knowledge. \section{Experiments} \label{sec:exp} \begin{table}[t] \caption{Statistics of datasets} \vspace{-11pt} \centering \label{tab:statistics} \scalebox{0.85}{ \begin{tabular}{ccccc} \hline Dataset & \#Users & \#Items & \#Interactions & Sparsity \\ \hline FilmTrust & 780 & 721 & 28,799 & 94.88\% \\ Automotive & 2,928 & 1,835 & 20,473 & 99.62\% \\ T \& HI & 1,208 & 8,491 & 28,396 & 99.72\% \\ Yelp & 2,762 & 10,477 & 119,237 & 99.59\% \\ \hline \end{tabular} } \vspace{-10pt} \end{table} \begin{table*}[t] \caption{Attack performance (HR@50) of different attack methods against different victim RS models. PC-Attack$^*$\xspace indicates that the complete target data is used. Best results are shown in bold.} \vspace{-10pt} \label{tab:main_exp} \centering \resizebox{0.82\linewidth}{!}{ \begin{tabular}{c|c|ccccccccc} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Victim RS} & \multicolumn{9}{c}{Attack Method (HR@50)} \\ \cline{3-11} & & RevAdv & TrialAttack & Leg-UP & AUSH & Bandwagon & Random & Segment & PC-Attack$^*$ & PC-Attack \\ \hline \multirow{7}{*}{Automotive} & CDAE & 0.1643 & \textbf{0.2504} & 0.2237 & 0.1949 & 0.2114 & 0.2247 & 0.1949 & 0.1348 & 0.1693 \\ & ItemAE & 0.2916 & 0.3128 & 0.3105 & 0.2926 & 0.3232 & 0.3183 & 0.2926 & \textbf{0.4908} & 0.3787 \\ & LightGCN & 0.1002 & 0.1228 & 0.1149 & 0.1361 & 0.1462 & 0.1222 & 0.1361 & 0.1406 & \textbf{0.1716} \\ & NCF & 0.7012 & 0.7744 & 0.7712 & 0.7359 & 0.7689 & 0.7462 & 0.7359 & \textbf{0.8574} & 0.6484 \\ & NGCF & 0.0866 & 0.0960 & 0.1398 & 0.1416 & 0.1362 & 0.1387 & \textbf{0.1416} & 0.0864 & 0.1056 \\ & VAE & 0.0842 & 0.0841 & 0.1168 & 0.1196 & 0.0960 & 0.0965 & 0.1196 & 0.1238 & \textbf{0.1559} \\ & WRMF & 0.9243 & 0.2941 & 0.3561 & 0.4386 & 0.3769 & 0.3069 & 0.4386 & \textbf{0.9268} & 0.9213 \\ \hline \multirow{7}{*}{FilmTrust} & CDAE & 0.4587 & 0.6270 & 0.6190 & 0.5342 & 0.4837 & 0.5954 & 0.5342 & 0.7505 & \textbf{0.7810} \\ & ItemAE & 0.6429 & 0.4721 & 0.5965 & 0.5501 & 0.5253 & 0.4807 & 0.5501 & 0.9534 & \textbf{0.9544} \\ & LightGCN & 0.8522 & 0.8362 & 0.8820 & 0.8594 & 0.8271 & 0.8169 & 0.8594 & \textbf{0.8949} & 0.8517 \\ & NCF & 0.9319 & 0.9108 & 0.9521 & 0.8846 & 0.8855 & 0.8929 & 0.8846 & 0.9266 & \textbf{0.9543} \\ & NGCF & 0.9015 & 0.9123 & \textbf{0.9207} & 0.9072 & 0.9079 & 0.9091 & 0.9072 & 0.9037 & 0.9101 \\ & VAE & 0.9713 & 0.9742 & \textbf{0.9749} & 0.9724 & 0.9721 & 0.9726 & 0.9724 & 0.9689 & 0.9730 \\ & WRMF & 0.5143 & 0.4171 & 0.4732 & 0.4976 & 0.4873 & 0.4667 & 0.4976 & \textbf{0.5706} & 0.4935 \\ \hline \multirow{7}{*}{T \& HI} & CDAE & 0.1126 & 0.3186 & 0.3929 & 0.3409 & 0.3173 & \textbf{0.4156} & 0.3409 & 0.3449 & 0.2279 \\ & ItemAE & 0.1074 & 0.2755 & 0.2677 & 0.1433 & 0.1857 & 0.2335 & 0.1433 & \textbf{0.3324} & 0.1487 \\ & LightGCN & 0.0028 & 0.0376 & 0.0383 & 0.0531 & 0.1603 & 0.0273 & 0.0531 & 0.0456 & \textbf{0.2064} \\ & NCF & 0.5421 & 0.6895 & 0.8508 & 0.8038 & \textbf{0.9017} & 0.8827 & 0.8038 & 0.7749 & 0.1988 \\ & NGCF & 0.0177 & 0.0739 & 0.1018 & 0.0903 & 0.1016 & 0.0927 & 0.0903 & \textbf{0.1064} &
0.0705 \\ & VAE & 0.3530 & 0.9975 & 0.9993 & 0.9995 & 0.9991 & \textbf{0.9996} & 0.9995 & 0.9916 & 0.9669 \\ & WRMF & 0.0406 & 0.0697 & 0.0495 & 0.0448 & 0.0530 & 0.0460 & 0.0448 & \textbf{0.0868} & 0.0743 \\ \hline \end{tabular} } \vspace{-9pt} \end{table*} \subsection{Experimental Settings} \vspace{3pt} \noindent\textbf{Datasets.} We use four public datasets\footnote{\url{https://github.com/XMUDM/ShillingAttack} \label{footnote:legup}} widely adopted in previous works on shilling attacks~\citep{LinC0XLY20,abs-2206-11433}, including FilmTrust, Yelp and two other Amazon datasets Automotive, and Tools \& Home Improvement (T \& HI). Target items for testing attacks are included in the datasets. Tab.~\ref{tab:statistics} illustrates the statistics of the data. Default training/test split is used for training and tuning surrogate RS models (if baselines require a surrogate RS) and victim RS models. By default, we train PC-Attack\xspace to learn graph topology from the complete Yelp dataset since Yelp is the largest dataset. Then, we test it on attacking victim RS on the other three datasets. Since some experiments require long-tail items, we define long-tail items as items with no more than three interactions. \vspace{5pt} \noindent\textbf{Shilling Attack Baselines.} We use three classic attack methods\textsuperscript{\ref{footnote:legup}} Random Attack, Bandwagon Attack and Segment Attack~\citep{abs-2206-11433}, and four state-of-the-art shilling attack methods RevAdv\footnote{\url{https://github.com/graytowne/revisit_adv_rec}}~\citep{TangWW20}, TrialAttack\footnote{\url{https://github.com/Daftstone/TrialAttack}}~\citep{WuLGZC21}, Leg-UP\textsuperscript{\ref{footnote:legup}}~\citep{abs-2206-11433} and AUSH\textsuperscript{\ref{footnote:legup}}~\citep{LinC0XLY20} as baselines. \vspace{5pt} \noindent\textbf{Victim RS.} We conduct shilling attacks against various prevalent RS models: NCF~\citep{HeLZNHC17}, WRMF~\citep{HuKV08}, LightGCN~\citep{0001DWLZ020}, NGCF~\citep{Wang0WFC19}, VAE~\citep{LiangKHJ18}, CDAE~\citep{WuDZE16} and ItemAE~\citep{SedhainMSX15}. \vspace{5pt} \noindent\textbf{Hyper-parameters.} The hyper-parameters of attack baselines and victim RS are set as the original papers suggest and tuned to show the best results. We set the number of fake profiles to 50 for all methods. This is roughly the population that can manifest the differences among attack models~\citep{BurkeMBW05}. For PC-Attack\xspace, we set training epochs to 32, batch size to 32, embedding size to 64 and learning rate to 0.005. $z$ and $y$ used in crafting profiles are set to 50 and 10, respectively. The length of random walk is set to 64 and the restart probability $1-\alpha$ is 0.8. The number of GIN layers $\hat{b}$ is 5. Other hyper-parameters of PC-Attack\xspace are selected through grid search and the chosen hyper-parameters are: $\tau=0.07$, $\lambda_g=0.5$, $\lambda_s=0.5$, $\eta_g=0.5$, $\eta_s=0.5$, $\mu_{user}=0.5$, and $\mu_{item}=0.5$. By default, we set $p=10\%$ when collecting target data. Adam optimizer is adopted for optimization. \vspace{5pt} \noindent\textbf{Evaluation Metrics.} Hit Ratio (HR@k) and Normalized Discounted Cumulative Gain (NDCG@k) are used for evaluation. HR@k measures the average proportion of normal users whose top-k recommendation lists contain the target item after the attack. NDCG@k measures the ranking of the target item after the attack. For both metrics, we set k to 50. 
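For concreteness, both metrics admit the following minimal formulation for a single target item (a Python sketch; the data layout, a list of per-user top-$k$ lists, is illustrative):
\begin{verbatim}
import math

def hr_at_k(topk_lists, target_item):
    # Proportion of normal users whose top-k list contains the target item.
    return sum(target_item in topk for topk in topk_lists) / len(topk_lists)

def ndcg_at_k(topk_lists, target_item):
    # Discounted gain of the single target item, averaged over users;
    # the ideal DCG of one relevant item ranked first is 1/log2(2) = 1.
    total = 0.0
    for topk in topk_lists:
        if target_item in topk:
            rank = topk.index(target_item)      # 0-based position
            total += 1.0 / math.log2(rank + 2)
    return total / len(topk_lists)
\end{verbatim}
Note that with a single relevant item the ideal DCG equals $1/\log_2(2)=1$, so no further normalization is needed.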
\subsection{Overall Attack Performance} Tab.~\ref{tab:main_exp} summarizes the overall attack performance of different attack methods. We have the following observations: \begin{enumerate} \item PC-Attack$^*$\xspace and PC-Attack\xspace together achieve the best results in most cases, showing the effectiveness of our designs. Some baselines may have better results in a few cases, but their attack performance is not robust. \item PC-Attack\xspace achieves the best results in more than 30\% of the cases. In the other cases, where PC-Attack\xspace does not rank first, its performance is not far from the best. Note that our goal (i.e., a practical attack) is to use as little information of the target data as possible. In this unfair comparison (i.e., baselines take the complete target data while PC-Attack\xspace accesses at most 10\%), it is acceptable that PC-Attack\xspace can have degraded performance in exchange for the feasibility of the attack. Nevertheless, PC-Attack\xspace shows promising results, demonstrating the power of cross-system attack. \end{enumerate} \subsection{Impacts of Accessible Target Data ($p$)} We evaluate the performance of PC-Attack\xspace when $p$ changes (the default value is $p=10\%$ and PC-Attack$^*$\xspace uses $p=100\%$). Fig.~\ref{fig:exp-data} illustrates the results on the FilmTrust dataset. We can observe that, as $p$ decreases, the attack performance of PC-Attack\xspace degrades gradually. However, thanks to the knowledge learned from the source data, the decline of attack performance is not significant and PC-Attack\xspace is robust when the percentage of accessible target data varies. \subsection{Impacts of Source Data} We conduct two experiments to check the impacts of source data on PC-Attack\xspace: \vspace{5pt} \noindent\textbf{Impacts of Using Different Source Data.} Tab.~\ref{tab:src-to-filmtrust} reports the results of PC-Attack\xspace when FilmTrust is used as the target data and each of the other three datasets is used as the source data. We can observe that using different source datasets does not affect the performance much, which confirms that different RS data have common topological information of which the knowledge is transferable. However, the larger the source data is, the better PC-Attack\xspace can capture the structural patterns. Hence, PC-Attack\xspace shows the best results when using Yelp (the largest dataset in our experiments) as the source data. \vspace{5pt} \noindent\textbf{Performance of Using Multiple Source Datasets.} PC-Attack\xspace can learn graph topology from multiple source datasets to benefit from the large volume of public RS data. To illustrate the results of using multiple source datasets, we train PC-Attack\xspace on different source datasets in the order of dataset size and then use it to attack WRMF, NGCF and LightGCN on the FilmTrust dataset. As shown in Fig.~\ref{fig:cross-domain}, as more source datasets are used, the performance of PC-Attack\xspace gradually improves, showing that we can feed more public RS datasets to PC-Attack\xspace and get even better attack performance.
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Impact_of_Training_Data_on_filmtrust.pdf} \vspace{-10pt} \caption{Impacts of Accessible Target Data ($p$).} \label{fig:exp-data} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{Migration_on_filmtrust.pdf} \vspace{-8pt} \caption{Learn graph topology from multiple source datasets and then attack victim RS on FilmTrust.} \label{fig:cross-domain} \vspace{-10pt} \end{figure} \subsection{Attack Invisibility} Next, we investigate the invisibility of PC-Attack\xspace. \vspace{5pt} \noindent\textbf{Attack Detection.} We apply the state-of-the-art unsupervised attack detector~\citep{ZhangT0LCM15} on the fake profiles generated by different attack methods. Tab.~\ref{tab:det} reports the precision and recall of the detector on different attack methods. Lower precision and recall indicate that the attack method is less perceptible. Based on the results, we find that the detection performance is highly data-dependent, and fake users are easier to detect on denser datasets. For example, it is difficult for the detector to find fake users on Yelp. But it has relatively high precision and recall for detecting most attack methods (except PC-Attack\xspace) on FilmTrust. PC-Attack\xspace generates almost undetectable fake users. In most cases, the detector performs the worst on PC-Attack\xspace. On T \& HI, the detector does not have the lowest precision and recall for PC-Attack\xspace, but the values are close to the lowest ones. \begin{table}[] \caption{Results of using different source datasets (the target dataset is FilmTrust). Best results are shown in bold.} \vspace{-10pt} \label{tab:src-to-filmtrust} \resizebox{\linewidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Victim\\ RS\end{tabular}} & \multicolumn{6}{c}{Source Data} \\ \cline{2-7} & \multicolumn{2}{c}{Automotive} & \multicolumn{2}{c}{T \& HI} & \multicolumn{2}{c}{Yelp} \\ & HR@50 & NDCG@50 & HR@50 & NDCG@50 & HR@50 & NDCG@50 \\ \hline CDAE & 0.773 & 0.198 & \textbf{0.793} & \textbf{0.199} & 0.781 & 0.200 \\ ItemAE & 0.953 & 0.257 & 0.949 & 0.255 & \textbf{0.954} & \textbf{0.257} \\ LightGCN & 0.817 & 0.219 & 0.823 & 0.221 & \textbf{0.852} & \textbf{0.236} \\ NCF & 0.907 & 0.247 & 0.927 & 0.254 & \textbf{0.954} & \textbf{0.256} \\ NGCF & 0.895 & 0.243 & 0.909 & 0.242 & \textbf{0.910} & \textbf{0.247} \\ VAE & 0.971 & 0.259 & 0.970 & 0.258 & \textbf{0.973} & \textbf{0.259} \\ WRMF & \textbf{0.508} & \textbf{0.139} & 0.475 & 0.131 & 0.494 & 0.136 \\ \hline \end{tabular} } \end{table} \begin{table}[t] \caption{Detection performance on different attack methods.
Best results are shown in bold.} \label{tab:det} \centering \vspace{-10pt} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Attack\\ Method\end{tabular}} & \multicolumn{6}{c}{Target Data} \\ \cline{2-7} & \multicolumn{2}{c}{FilmTrust} & \multicolumn{2}{c}{Automotive} & \multicolumn{2}{c}{T \& HI} \\ & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline RevAdv & 0.0713 & 0.0778 & 0.0080 & 0.0089 & \textbf{0.0180} & \textbf{0.0200} \\ TrialAttack & 0.1603 & 0.1927 & 0.1114 & 0.1038 & 0.0200 & 0.0256 \\ Leg-UP & 0.2497 & 0.2711 & 0.0000 & 0.0000 & 0.0400 & 0.0444 \\ AUSH & 0.2429 & 0.2556 & 0.0380 & 0.0422 & 0.0742 & 0.0822 \\ Bandwagon & 0.2371 & 0.2489 & 0.0220 & 0.0244 & 0.0762 & 0.0845 \\ Random & 0.2340 & 0.2467 & 0.0220 & 0.0244 & 0.0441 & 0.0489 \\ Segment & 0.2602 & 0.2733 & 0.0280 & 0.0311 & 0.0662 & 0.0733 \\ PC-Attack & \textbf{0.0280} & \textbf{0.0311} & \textbf{0.0000} & \textbf{0.0000} & 0.0240 & 0.0266 \\ \hline \end{tabular} } \vspace{-10pt} \end{table} \vspace{5pt} \noindent\textbf{Fake User Distribution.} Using t-SNE~\citep{van2008visualizing}, Fig.~\ref{fig:vis} visualizes users' representations generated by WRMF after it is attacked by PC-Attack\xspace on Automotive and FilmTrust. We can observe that fake user profiles are scattered among real user profiles, making it hard for detectors to distinguish fake users from real ones and showing that PC-Attack\xspace can launch virtually invisible attacks. \begin{figure}[!t] \centering \subfloat[Automotive]{ \includegraphics[width=0.5\linewidth]{Tsne_automotive_attacker_88.pdf} } \subfloat[FilmTrust]{ \includegraphics[width=0.5\linewidth]{Tsne_filmtrust_attacker_5.pdf} } \vspace{-8pt} \caption{Visualization of user profiles. Red nodes are fake profiles and blue nodes are real profiles.} \label{fig:vis} \end{figure} \subsection{Impacts of the Starting Node in Target Data} \begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{Impact_of_center_on_filmtrust_aga_wmf.pdf} \vspace{-5pt} \caption{Impact of the starting item.} \vspace{-10pt} \label{fig:exp-data2} \end{figure} By default, we start with the most popular item to collect the target data and select no more than 10\% of user-item interaction records within the limit of only 2-hop neighbors. To evaluate the robustness of PC-Attack\xspace, we compare the default setting with two other options: using an item sampled from the long-tail items as the starting point, and using a randomly sampled item as the starting point. Fig.~\ref{fig:exp-data2} compares the results of the default setting and the two extra settings for attacking WRMF on FilmTrust. We can see that the attack performance of starting with a randomly sampled item does not lag much behind starting with the most popular item, and is also robust w.r.t. different settings of the total number of nodes in the collected target data. In contrast, starting from a sampled long-tail item results in much worse performance. The reason is that a long-tail item does not have many 2-hop neighbors and the collected data cannot help fine-tune GS-Encoder well on the target data. The performance of starting from a long-tail item is very consistent when the limit of node numbers changes. The reason is that long-tail items have few neighbors and the number of 2-hop neighbors of a long-tail item does not exceed 10\% of the total target data; changing the limit does not actually change the number of collected nodes. In summary, starting with a popular item brings the best results.
Considering that popular items are always more readily available, PC-Attack\xspace uses the most popular item as the default starting point to collect target data. \subsection{Performance of Cross-domain Attack} \begin{table}[!t] \caption{Results of PC-Attack\xspace for the cross-domain attack.} \label{tab:cross-domain} \vspace{-10pt} \centering \scalebox{0.75}{ \begin{tabular}{ccccc} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Victim\\ RS\end{tabular}} & \multicolumn{2}{c}{T \& HI $\rightarrow$ Automotive} & \multicolumn{2}{c}{Automotive $\rightarrow$ T \& HI} \\ \cline{2-5} & HR@50 & NDCG@50 & HR@50 & NDCG@50 \\ \hline CDAE & 0.035 & 0.008 & 0.222 & 0.068 \\ ItemAE & 0.288 & 0.096 & 0.142 & 0.080 \\ LightGCN & 0.160 & 0.051 & 0.238 & 0.113 \\ NCF & 0.366 & 0.084 & 0.079 & 0.036 \\ NGCF & 0.039 & 0.009 & 0.061 & 0.018 \\ VAE & 0.047 & 0.011 & 0.975 & 0.565 \\ WRMF & 0.902 & 0.288 & 0.073 & 0.027 \\ \hline \end{tabular} } \end{table} PC-Attack\xspace is designed for both cross-system attack and cross-domain attack. Experimental results in the previous sections are for cross-system attack. Next, we further report the performance of cross-domain attack using PC-Attack\xspace. Tab.~\ref{tab:cross-domain} provides the results of PC-Attack\xspace when using T \& HI as the source and Automotive as the target, and when using Automotive as the source and T \& HI as the target. The two datasets contain data in different categories of Amazon. From the results, we can observe that PC-Attack\xspace achieves acceptable attack performance for cross-domain attack, but the performance is worse than the cross-system attack reported in Tab.~\ref{tab:main_exp}. The reason is that the source dataset Yelp used in our default experiments for cross-system attack is much larger than T \& HI and Automotive used in the experiments for cross-domain attack. PC-Attack\xspace can better capture topological information from a larger source dataset. Hence, it shows better results in cross-system attack than in cross-domain attack. \subsection{Impacts of Hyper-parameters and Ablation Study} \begin{table}[!t] \caption{Results using different hyper-parameters.} \label{tab:exp_para} \vspace{-10pt} \resizebox{\linewidth}{!}{ \begin{tabular}{ccc|ccc|ccc} \toprule \bm{$\eta_g:\eta_s$} & \textbf{HR@50} & \textbf{NDCG@50} & \bm{$\lambda_g:\lambda_s$} & \textbf{HR@50} & \textbf{NDCG@50} & \bm{$\mu_{item}:\mu_{user}$} & \textbf{HR@50} & \textbf{NDCG@50} \\ \cmidrule(l){1-9} 1:0 & 0.467 & 0.131 & 1:0 & 0.471 & 0.132 & 1:0 & 0.474 & 0.131 \\ 0:1 & 0.471 & 0.132 & 0:1 & 0.478 & 0.134 & 0:1 & 0.477 & 0.114 \\ 2:1 & 0.499 & 0.134 & 2:1 & 0.515 & 0.147 & 2:1 & 0.481 & 0.136 \\ 1:2 & 0.486 & 0.136 & 1:2 & 0.512 & 0.144 & 1:2 & 0.501 & 0.142 \\ 1:1 & 0.522 & 0.146 & 1:1 & 0.522 & 0.146 & 1:1 & 0.522 & 0.146 \\ \cmidrule(l){1-9} \end{tabular} } \vspace{-5pt} \end{table} The three sets of balance hyper-parameters $\eta$, $\lambda$ and $\mu$ are used to balance the effects of representations from graph and sequence views, graph-view and sequence-view loss functions, and user and item loss functions, respectively. Tab.~\ref{tab:exp_para} reports the performance of PC-Attack\xspace when attacking WRMF using different balance hyper-parameters. We can observe that changing the balance hyper-parameters affects the performance of PC-Attack\xspace. When all balance hyper-parameters are set to equal values, the attack performance is the best.
Besides, the first two rows and the last row in Tab.~\ref{tab:exp_para} can be viewed as three ablation experiments: (1) only use the graph-view representation, graph-view loss and item loss, (2) only use the sequence-view representation, sequence-view loss and user loss, and (3) the default PC-Attack\xspace that uses all parts. From Tab.~\ref{tab:exp_para}, we can see that PC-Attack\xspace performs best when all parts are present and removing any of them degrades the attack performance. Therefore, we can conclude that each component in PC-Attack\xspace indeed contributes to its overall attack performance. \section{Introduction} \label{sec:intro} Recommender System (RS) has become an essential tool in various online services. However, its prevalence also attracts attackers who try to manipulate RS to mislead users for gaining illegal profits. Among various attacks, \emph{Shilling Attack} is the most persistent and profitable one~\citep{abs-2206-11433}. RS allows users to interact with the system through various operations such as giving ratings or browsing the page of an item. In shilling attacks, an adversarial party injects a few fake user profiles into the system to fool the RS so that the target item can be promoted or demoted~\citep{GunesKBP14}. This way, the attacker can increase the possibility that the target item is viewed/bought by people, or impair competitors by demoting their products. In experiments, shilling attacks are able to spoof real-world RS, including Amazon, YouTube and Yelp~\citep{XingMDSFL13,YangGC17}. In practice, services of various large companies have been affected by shilling attacks~\citep{LamR04}. Studying how to spoof RS has become a hot direction~\cite{DeldjooNM21} as it gives insights into the defense against malicious attacks. Although much effort has been devoted to developing new shilling attack methods~\cite{GunesKBP14,DeldjooNM21}, we find \emph{existing shilling attack approaches are still far from practical}\footnote{The detailed analysis is provided in Sec.~\ref{sec:analysis}.}. The main reason is that most of them require complete knowledge of the RS data, which is not available in real shilling attacks. A few works study attacking using incomplete data~\citep{ZhangTLSYZG21} or transferring knowledge from other sources to attack the victim RS~\citep{FanDZ0LWT021}. Nevertheless, they still require a large portion of the target data or assume that other data sources share some items with the victim RS. In this paper, we study the problem of designing a \emph{practical} shilling attack approach. We believe a practical shilling attack method should have the following nice properties: \vspace{3pt} \noindent\textbf{Property 1:} Do not require any prior knowledge of the victim RS (e.g., model architecture or parameters in the model). \vspace{3pt} \noindent\textbf{Property 2:} When training the attacker, other data sources (e.g., public RS datasets) instead of the data of the victim RS can be used. Do not assume the training data contain any users or items that exist in the victim RS. \vspace{3pt} \noindent\textbf{Property 3:} When attacking, the attacker should use as little information of the data in the victim RS as possible. Required information should be easy to access in practice. \vspace{3pt} \begin{table*}[t] \caption{Comparisons of shilling attack approaches. Reference of each method can be found in Sec.~\ref{sec:related}. $m$ and $n$ indicate the numbers of users and items, respectively.
$p$ is the maximum percentage of the target data that PC-Attack\xspace requires. $e$, $k$ and $c$ represent the number of training epochs, the length of recommendation list and the number of queries, respectively.} \label{tab:pre} \centering \vspace{-10pt} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccccccc} \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Knowledge} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Do not train with\\ a surrogate RS\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Do not require\\ multiple queries\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cross-domain\\ attack\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cross-system\\ attack\end{tabular}} \\ \cline{3-5} & & \begin{tabular}[c]{@{}c@{}}Target\\ Data\end{tabular} & \begin{tabular}[c]{@{}c@{}}RS\\ Architecture\end{tabular} & \begin{tabular}[c]{@{}c@{}}RS\\ Parameters\end{tabular} & & & & \\ \hline \multirow{2}{*}{Optimization} & PGA and SGLD & $m\cdot n$ & \checkmark & \checkmark & \checkmark & \checkmark & × & × \\ & RevAdv and RAPU & $m\cdot n$ & × & × & × & \checkmark & × & × \\ \hline \multirow{3}{*}{GAN} & TrialAttack & $m\cdot n$ & \checkmark & × & × & \checkmark & × & × \\ & Leg-UP & $m\cdot n$ & × & × & × & \checkmark & × & × \\ & \begin{tabular}[c]{@{}c@{}}DCGAN, AUSH\\ and RecUP\end{tabular} & $m\cdot n$ & × & × & \checkmark & \checkmark & × & × \\ \hline \multirow{3}{*}{RL} & PoisonRec & $e\cdot n \cdot k$ & × & × & \checkmark & × & × & × \\ & LOKI & $m\cdot n$ & × & × & × & \checkmark & × & × \\ & CopyAttack & $e\cdot n \cdot k$ & × & × & \checkmark & × & \checkmark & × \\ \hline KD & Model Extraction Attack & $c \cdot k$ & \checkmark & × & \checkmark & × & × & × \\ \hline & PC-Attack\xspace & $p \cdot m\cdot n$ & × & × & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular} } \vspace{-5pt} \end{table*} Our idea is that limiting the access to the target RS data does not mean that the attacker cannot leverage a large volume of other public RS data to train the attack model. We propose a new concept of \emph{Cross-system Attack}: Thanks to the prosperous development of RS research, many real RS datasets are available and can be used for extracting knowledge and training the attacker to launch shilling attacks. Along this direction, we design a \underline{P}ractical \underline{C}ross-system Shilling \underline{Attack} (PC-Attack\xspace) framework that requires little information on the victim RS model and the target RS data. The contributions of this work are summarized as follows: \begin{enumerate} \item We analyze the inadequacy of existing shilling attack methods and propose the concept of cross-system attack for designing a practical shilling attack model. \item We design PC-Attack\xspace for shilling attacks. PC-Attack\xspace is trained to capture graph topology knowledge from public RS data in a self-supervised manner. Then, it is fine-tuned on a small portion of target data that is readily available to construct fake profiles. PC-Attack\xspace has all the three nice properties discussed above. \item We conduct extensive experiments to demonstrate that PC-Attack\xspace exceeds state-of-the-art methods w.r.t. attack power and attack invisibility. Even in an unfair comparison where other attack methods can access the complete target data, PC-Attack\xspace with limited access to the target data still exhibits superior performance. 
\end{enumerate} \section*{Acknowledgments} This work was partially supported by the National Natural Science Foundation of China (No. 62002303, 42171456) and the Natural Science Foundation of Fujian Province of China (No. 2020J05001). \section{Our Framework PC-Attack\xspace} \label{sec:method} \subsection{Motivation and Overview of PC-Attack\xspace} Existing works on cross-domain and cross-system recommendations~\citep{ZhaoPXZLY13,ZhuWCLOW18} have verified that the knowledge learned from a source domain/system can help improve recommendation results in a target domain/system, i.e., the RS knowledge is \emph{transferable}. This has inspired our design of PC-Attack\xspace. We believe it is possible to have an attack model that captures RS knowledge from the source data and can be transferred to attack the victim RS. Fig.~\ref{fig:framework} provides an overview of PC-Attack\xspace: \begin{enumerate}[leftmargin=12pt,topsep=2pt,itemsep=1pt] \item Firstly, we construct a bipartite user-item graph on the source data where each user-item edge indicates the existence of the corresponding user-item interaction. \item After that, PC-Attack\xspace trains a graph structure encoder (GS-Encoder) to capture the structural patterns of the source data in a self-supervised manner. \item Then, PC-Attack\xspace feeds a small portion of the public target data (e.g., a popular item and some people who bought it) into GS-Encoder and fine-tunes it to obtain simulated representations after a successful attack. \item Finally, based on the simulated representations, PC-Attack\xspace searches for possible co-visit items of the target item that affect the possibility of recommending the target item and fills them into fake user profiles. Fake user profiles are injected into the victim RS to start the attack. \end{enumerate} PC-Attack\xspace does not assume that entity-correspondences across different domains/systems (i.e., users or items that exist in both the source and target data) are known, even if they indeed exist. This way, PC-Attack\xspace does not require additional prior knowledge. Therefore, in step 2, PC-Attack\xspace is designed to only capture the structural patterns of the source data without knowing the real identity of each node. To endow PC-Attack\xspace with the attack power, when constructing fake user profiles in step 4, we adopt the idea that items that can affect whether the target item is recommended are likely to have been interacted with by some users together with the target item. This idea is called \emph{co-visitation attack} and has been verified in existing shilling attack methods~\citep{YangGC17}. \vspace{5pt} \noindent\textbf{Required Knowledge.} Compared to existing methods, PC-Attack\xspace requires \emph{much less} information. It only needs a subgraph of a popular item, the numbers of users/items in the target data, and the users who interacted with the target item before. \subsection{Learn from Graph Topology in the Source} Without the knowledge of explicit entity-correspondences between the source data and the target data, we cannot directly leverage historical records in the source data to encode users and items into representations that can be later used in attacking the target data. Nevertheless, previous studies have shown that topologies of user-item bipartite graphs from different RS data share some common properties~\citep{HuangZ11}.
We can construct a user-item bipartite graph from the source data and train a graph structure encoder (GS-Encoder) to capture graph topological properties that are shared among different RS domains/systems. Graph Neural Network (GNN) is the prevalent neural network used for modeling graph data. We use GNN as the backbone of GS-Encoder to capture the intrinsic and transferable properties from interaction data in the bipartite graph. However, most feedback provided by users is implicit (e.g., clicks and views) rather than explicit (e.g., ratings). Hence, the observed interactions often contain noise that may not indicate real user preferences. Neighborhood aggregation schemes in GNN may amplify the influence of interactions on representation learning, making learning more susceptible to interaction noise. To alleviate the negative effect of noise, we introduce \emph{contrastive learning}, a type of self-supervised learning (SSL)~\citep{LiuZGNWZT21}, into GS-Encoder. SSL constructs supervision signals from the correlation within the input data, avoiding the need for explicit labels. Through contrasting samples, GS-Encoder learns to keep similar sample pairs close to each other while pushing dissimilar ones far apart. We adopt the idea of \emph{multi-view learning}~\citep{abs-2103-00111} when designing the self-supervised contrastive learning task. We model the node neighborhood as both a subgraph and a sequence, which helps GS-Encoder better capture the topological properties. \vspace{5pt} \noindent\textbf{Multi-view Data Augmentation.} To model node neighborhoods, we first sample paths by random walks and expand a single node $j$ in the user-item bipartite graph into its local structure as the \emph{subgraph view} $g_j$. We use the random walk with restart process~\citep{TongFP06}: (1) a random walk starts from a node $j$ in the bipartite graph, and (2) in each step it randomly moves to a neighbor with probability $\alpha$ or returns to $j$ with probability $1-\alpha$. Note that we re-construct subgraph views in each training epoch. For each node, its 1-hop neighbors describe \emph{user-item interaction patterns}. The 2-hop neighbors exhibit \emph{co-visitation patterns} (i.e., users who have interacted with the same item, or items which have been interacted with by the same user), which are important in shilling attacks~\citep{YangGC17}. We propose to construct the \emph{sequence view} to better capture the above two types of patterns. With the node $j$ as the center, we sort its 1-hop nodes and then its 2-hop nodes by ID to construct the sequence view $s_j$ of $j$. The difference between the sequence view and the subgraph view is that the sequence view directly separates the two data patterns while the subgraph view mixes them up. Using the sequence view emphasizes learning the two patterns individually, while using the subgraph view learns them as a whole. \vspace{5pt} \noindent\textbf{Multi-view Contrastive Learning.} Contrastive learning aims to maximize the similarity between positive samples while minimizing the similarity between negative samples. A suitable contrastive task facilitates capturing topological properties from the source data. Unlike most contrastive learning methods that only focus on contrasting positive and negative samples in one view, we deploy a multi-view contrast mechanism when designing GS-Encoder so that GS-Encoder can benefit from more supervision signals. The subgraph view of each node is passed to a GNN encoder in GS-Encoder.
\vspace{5pt} \noindent\textbf{Multi-view Contrastive Learning.} Contrastive learning aims to maximize the similarity between positive samples while minimizing the similarity between negative samples. A suitable contrast task will facilitate capturing topological properties from the source data. Unlike most contrastive learning methods, which only contrast positive and negative samples in one view, we deploy a multi-view contrast mechanism when designing GS-Encoder so that it can benefit from more supervision signals. The subgraph view of each node is passed to a GNN encoder in GS-Encoder. We adopt GIN~\citep{XuHLJ19} as the GNN encoder, though other GNNs can be adopted. The GNN encoder updates node representations as follows:
\begin{equation}
\begin{aligned}
\mathbf{h}_v^{(b)} = \text{MLP}^{(b)}\big((1 + \epsilon^{(b)}) \cdot \mathbf{h}_v^{(b-1)} + \sum_{u \in \mathcal{N}(v)}\mathbf{h}_u^{(b-1)}\big),
\end{aligned}
\end{equation}
where $\mathbf{h}_v^{(b)}$ is the representation of node $v$ at the $b$-th GNN layer, $\mathcal{N}(v)$ is the set of 1-hop neighbors of $v$, and $\text{MLP}(\cdot)$ indicates a multi-layer perceptron. We use eigenvectors of the normalized graph Laplacian of the subgraph to initialize $\mathbf{h}^{(0)}$ of each node in the subgraph~\citep{QiuCDZYDWT20}. For a node $j$, its representation $\mathbf{h}_j^g$ from the subgraph view is the concatenation of the aggregations of its neighborhood's representations generated in all GNN layers:
\begin{equation}
\begin{aligned}
\mathbf{h}_j^g = \text{Concat}\big(\text{Readout}(\{\mathbf{h}_v^{(b)}|v \in \mathcal{V}_j \})\,|\,b=0,1,...,\hat{b}\big),
\end{aligned}
\end{equation}
where $\text{Concat}(\cdot)$ denotes the concatenation operation, the $\text{Readout}(\cdot)$ function aggregates representations of the nodes in the subgraph of $j$ from each iteration, and $\hat{b}$ is the number of GIN layers. The sequence view of each node $j$ is passed to an LSTM, and we use the last hidden state $\mathbf{h}^{s}_j$ as the representation from the sequence view. $\mathbf{h}^g$ and $\mathbf{h}^s$ are further fed to a fully connected feedforward neural network to map them to the same latent space:
\begin{equation} \label{eq:mapping}
\begin{aligned}
\hat{\mathbf{h}}_j^g = \textbf{W}_2 \cdot \sigma(\textbf{W}_1 \mathbf{h}_j^g + \textbf{b}_1) + \textbf{b}_2, \\
\hat{\mathbf{h}}_j^{s} = \textbf{W}_2 \cdot \sigma(\textbf{W}_1 \mathbf{h}_j^{s} + \textbf{b}_1) + \textbf{b}_2,
\end{aligned}
\end{equation}
where {$\textbf{W}_1, \textbf{W}_2, \textbf{b}_1, \textbf{b}_2$} are learnable weights, and $\sigma(\cdot)$ indicates the sigmoid function. In contrastive learning, we need to define positive and negative samples. For each node $j$, its positive sample $\mathit{pos}_j$ is the subgraph obtained by random walks starting from $j$ together with its corresponding sequence view, while its negative samples $\mathit{neg}_j$ are the subgraphs obtained by random walks starting from other nodes together with their corresponding sequence views. Note that, to improve efficiency and avoid processing too many negative subgraphs/sequences, we use the subgraph/sequence views of the other nodes in the same batch as negative samples. The contrastive loss under the subgraph view is:
\begin{equation}
\begin{aligned}
\mathcal{L}_j^{g} = -\log \frac{\exp\big(\text{sim}(\hat{\mathbf{h}}_j^g, \hat{\mathbf{h}}_{\mathit{pos}_j}^{s})/\tau\big)} {\sum_{l \in \mathit{neg}_j}\exp\big(\text{sim}(\hat{\mathbf{h}}_j^g, \hat{\mathbf{h}}_l^{s})/\tau\big)},
\end{aligned}
\end{equation}
where $\text{sim}(\cdot)$ denotes the cosine similarity and $\tau$ denotes a temperature parameter. The contrastive loss under the sequence view is defined similarly:
\begin{equation}
\begin{aligned}
\mathcal{L}_j^{s} = -\log \frac{\exp\big(\text{sim}(\hat{\mathbf{h}}_j^{s}, \hat{\mathbf{h}}_{\mathit{pos}_j}^{g})/\tau\big)} {\sum_{l \in \mathit{neg}_j}\exp\big(\text{sim}(\hat{\mathbf{h}}_j^{s}, \hat{\mathbf{h}}_l^{g})/\tau\big)}.
\end{aligned}
\end{equation}
The overall multi-view contrastive objective of GS-Encoder is as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}_{ssl} = \frac{1}{n} \sum_{j \in \mathcal{I}} (\lambda_g \cdot \mathcal{L}_j^{g} + \lambda_{s} \cdot \mathcal{L}_j^{s}),
\end{aligned}
\end{equation}
where $\lambda_g$ and $\lambda_{s}$ are hyper-parameters to balance the two views, $\mathcal{I}$ is the item set, and $n$ is the number of items.
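A batched version of this objective can be written compactly. The PyTorch sketch below uses the standard InfoNCE form, which also keeps the positive pair in the denominator (a common, numerically stable variant of the losses above); all names are illustrative:
\begin{verbatim}
import torch
import torch.nn.functional as F

def multiview_nce(h_g, h_s, tau=0.2, lambda_g=1.0, lambda_s=1.0):
    """Multi-view contrastive loss over a batch.
    h_g, h_s: (B, d) projected subgraph-view and sequence-view
    representations of the same B nodes; row i of one view is the
    positive of row i of the other view, and all other rows in the
    batch act as negatives."""
    h_g = F.normalize(h_g, dim=1)
    h_s = F.normalize(h_s, dim=1)
    logits = h_g @ h_s.t() / tau      # (B, B) cosine similarities
    targets = torch.arange(h_g.size(0), device=h_g.device)
    loss_g = F.cross_entropy(logits, targets)      # L^g: graph -> seq
    loss_s = F.cross_entropy(logits.t(), targets)  # L^s: seq -> graph
    return lambda_g * loss_g + lambda_s * loss_s
\end{verbatim}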
\subsection{Craft Fake User Profiles in the Victim RS}
After pre-training GS-Encoder, the next step is to construct a few fake user profiles and inject them into the victim RS to pollute the target data. Our construction method is based on three design principles from the literature:

\vspace{3pt} \noindent\textbf{Principle 1:} Item-based RS is designed to recommend items similar to past items in the target user's profile~\citep{SarwarKKR01}.

\vspace{3pt} \noindent\textbf{Principle 2:} User-based RS is designed to recommend items interacted with by users similar to the target user~\citep{HerlockerKR00}.

\vspace{3pt} \noindent\textbf{Principle 3:} According to the idea of the co-visitation attack~\citep{YangGC17}, the co-visit items of the target item (i.e., the 2-hop neighbors of the target item in the bipartite graph) can affect whether the target item is recommended.

\vspace{3pt} Based on the above principles, the goals of our construction method are:

\vspace{3pt} \noindent\textbf{Goal 1:} Based on Principle 1, our goal is to affect the victim RS so that the representation of the target item is as similar as possible to the representations of the rest of the items. This way, the probability of recommending the target item can increase. We hope that our attack can achieve the following objective: for any item $i$, $\text{sim}(\mathbf{h}^{item}_i, \mathbf{h}^{item}_t)>\text{sim}(\mathbf{h}^{item}_i, \mathbf{h}^{item}_j)$, where $t$ is the target item, $j$ denotes any other item, $\text{sim}(\cdot)$ denotes a measure of similarity between items (e.g., cosine similarity), and $\mathbf{h}^{item}_i$ is the representation of item $i$ in the victim RS.

\vspace{3pt} \noindent\textbf{Goal 2:} Based on Principle 2, our goal is to affect the victim RS so that the representations of users who have interacted with the target item are as similar as possible to the representations of other users. This way, the probability of recommending the target item can increase. We hope that, for any user $u$, $\text{sim}(\mathbf{h}^{user}_u, \mathbf{h}^{user}_r)>\text{sim}(\mathbf{h}^{user}_u, \mathbf{h}^{user}_e)$, where $r \in \mathcal{N}(t)$ denotes a user who has interacted with the target item $t$, $e \notin \mathcal{N}(t)$ is any other user, and $\mathbf{h}^{user}_u$ is the representation of user $u$ in the victim RS.

\vspace{3pt} \noindent\textbf{Goal 3:} Based on Principle 3, our goal is to find possible co-visit items of the target item after a successful attack and fill them into the fake user profiles.

\vspace{3pt} However, the above goals are challenging to pursue without knowing the details of the victim RS and the target data, i.e., we do not know $\mathbf{h}^{item}$ and $\mathbf{h}^{user}$ in the victim RS.
GS-Encoder, which captures the transferable knowledge from the more informative source data, can help us accomplish this task:

\vspace{3pt} \noindent\textbf{Step 1:} Use the pre-trained GS-Encoder to generate node representations based on the topological information of the incomplete target data;

\vspace{3pt} \noindent\textbf{Step 2:} Fine-tune to obtain \emph{simulated} representations \emph{after a successful attack} (Goal 1 and Goal 2);

\vspace{3pt} \noindent\textbf{Step 3:} Based on the simulated, after-attack representations, search for possible co-visit items of the target item and craft the fake user profiles (Goal 3).

\vspace{3pt} Considering that we cannot access the complete target data, we collect a very small portion of the target data that is publicly accessible. One example is a popular item in the victim RS, some normal users who have bought it, and the 2-hop neighbor items of the popular item. Such information is typically available. For instance, Amazon provides ``Popular Items This Season'' and Newegg provides ``Popular Products'' on their homepages, as shown in Fig.~\ref{fig:examples}, and information about buyers can be found by clicking on the popular item. A buyer's homepage may reveal some items that he/she bought before. Therefore, starting from one popular item, we collect users/items in its 2-hop subgraph via random walks without restart. We limit the total number of collected nodes to less than $p$ percent of the target data to keep the required knowledge low. In addition to the collected subgraph of a popular item, we collect a user set $\mathcal{M}(t)$ containing users who have interacted with the target item $t$.

\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Example_Combined2.pdf}
\vspace{-15pt}
\caption{Popular items of Amazon and Newegg.}
\label{fig:examples}
\vspace{-15pt}
\end{figure}

Based on the collected target data, we construct a small subgraph centered on the popular item and feed it into the pre-trained GS-Encoder to generate initial representations of the users/items in the subgraph:
\begin{equation}
\begin{aligned}
\mathbf{h}_j = \eta_g \cdot \hat{\mathbf{h}}_j^g + \eta_s \cdot \hat{\mathbf{h}}_j^{s},
\end{aligned}
\end{equation}
where $\eta_g$ and $\eta_s$ are hyper-parameters that balance the effects of the two views, $\mathbf{h}_j$ is the fused representation of node $j$, and $\hat{\mathbf{h}}_j^g$ and $\hat{\mathbf{h}}_j^s$ are the subgraph-view and sequence-view representations of node $j$ generated by the pre-trained GS-Encoder as shown in Eq.~\ref{eq:mapping}, respectively. For the users and items in the target data that are not collected, we assume that we know the numbers of users ($m$) and items ($n$) in the victim RS, and initialize their representations from a normal distribution $\mathcal{N}(0,0.1)$. This is a reasonable assumption, as many RS websites reveal the exact numbers or the order of magnitude of their users/items. Including users and items that are not in the collected small subgraph makes it possible to generate fake user profiles with items outside the collected subgraph. We then fine-tune over the collected data with the following objective for Goal 1 to simulate representations after a successful attack:
\begin{equation}
\begin{aligned}
\mathcal{L}^{item} = -\log \sum_{i=1}^{n} \frac{\text{sim}(\mathbf{h}_i^{item}, \mathbf{h}_t^{item})}{\sum_{j\neq t} \text{sim}(\mathbf{h}_i^{item}, \mathbf{h}_j^{item})}.
\end{aligned}
\end{equation}
Similarly, for Goal 2, we fine-tune with the following objective to simulate representations after a successful attack:
\begin{equation}
\begin{aligned}
\mathcal{L}^{user} = -\log \sum_{u=1}^{m} \frac{\sum_{g\in \mathcal{M}(t)} \text{sim}(\mathbf{h}^{user}_u, \mathbf{h}^{user}_g)} {\sum_{j \notin \mathcal{M}(t)} \text{sim}(\mathbf{h}^{user}_u, \mathbf{h}^{user}_j)}.
\end{aligned}
\end{equation}
The overall objective is as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text{fine-tune}} =\mu_{item}\cdot \mathcal{L}^{item} + \mu_{user}\cdot \mathcal{L}^{user},
\end{aligned}
\end{equation}
where $\mu_{item}$ and $\mu_{user}$ are hyper-parameters that balance the effects of the two loss functions.
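For illustration, the two fine-tuning losses can be evaluated as in the following PyTorch sketch. Since raw cosine similarities may be negative, the sketch shifts them to be positive before taking ratios; this shift, like all names below, is our own assumption rather than a detail specified above:
\begin{verbatim}
import torch
import torch.nn.functional as F

def finetune_loss(h_item, h_user, t, target_users,
                  mu_item=1.0, mu_user=1.0, eps=1e-8):
    """Sketch of L_fine-tune = mu_item * L_item + mu_user * L_user.
    h_item: (n, d) item representations; h_user: (m, d) user
    representations; t: target item index; target_users: LongTensor
    with the indices of the users in M(t)."""
    s_item = F.normalize(h_item, dim=1) @ F.normalize(h_item, dim=1).t()
    s_user = F.normalize(h_user, dim=1) @ F.normalize(h_user, dim=1).t()
    s_item = s_item + 1.0 + eps   # shift cosine values into (0, 2]
    s_user = s_user + 1.0 + eps

    # L_item = -log sum_i [ sim(i, t) / sum_{j != t} sim(i, j) ]
    denom_i = s_item.sum(dim=1) - s_item[:, t]
    l_item = -torch.log((s_item[:, t] / denom_i).sum())

    # L_user = -log sum_u [ sum_{g in M(t)} sim(u, g)
    #                       / sum_{j not in M(t)} sim(u, j) ]
    in_mt = torch.zeros(h_user.size(0), dtype=torch.bool)
    in_mt[target_users] = True
    l_user = -torch.log((s_user[:, in_mt].sum(dim=1)
                         / s_user[:, ~in_mt].sum(dim=1)).sum())

    return mu_item * l_item + mu_user * l_user
\end{verbatim}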
After fine-tuning, we have new representations of users and items that simulate the representations in the victim RS after a successful attack. We then search for possible co-visit items of the target item based on the simulated representations. Similar to other shilling attack methods~\citep{LinC0XLY20,abs-2206-11433}, each fake user profile in PC-Attack\xspace contains three parts: selected items, filler items, and the target item. We estimate the potential interest of all users in the target item $t$ after the attack by the inner product of their representations, and sample $z$ users according to the probability:
\begin{equation}
Pro(u|t) = \frac{\mathbf{h}^{item}_t \cdot \mathbf{h}^{user}_u}{\sum_{j=1}^{m} \mathbf{h}^{item}_t \cdot \mathbf{h}^{user}_j}.
\end{equation}
Items that commonly appear in these $z$ users' profiles are chosen as the selected items. Because popular items are more accessible than others and appear in many normal users' profiles, we randomly sample $y$ popular items from the collected subgraph according to their degrees as filler items, to enhance the invisibility of PC-Attack\xspace. For each fake profile, the above crafting process is conducted independently.
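A minimal sketch of this crafting step is given below. The \texttt{user\_profiles} dictionary (the profiles known to the attacker, e.g. from the collected subgraph), the shift that turns raw inner products into a valid probability distribution, and the threshold defining ``common'' items are illustrative assumptions:
\begin{verbatim}
import numpy as np

def craft_fake_profiles(h_item, h_user, t, user_profiles, popular_items,
                        degrees, n_fake=50, z=20, y=10, seed=0):
    """Sketch of fake-profile crafting. user_profiles maps a user index
    to the set of items known (to the attacker) to be in that user's
    profile, e.g. from the collected subgraph."""
    rng = np.random.default_rng(seed)
    scores = h_user @ h_item[t]              # potential interest in t
    scores = scores - scores.min() + 1e-8    # raw products may be negative
    prob = scores / scores.sum()             # Pro(u | t)
    deg_p = np.asarray(degrees, float)
    deg_p = deg_p / deg_p.sum()
    profiles = []
    for _ in range(n_fake):
        users = rng.choice(len(prob), size=z, replace=False, p=prob)
        counts = {}                          # item co-occurrence counts
        for u in users:
            for i in user_profiles.get(u, ()):
                counts[i] = counts.get(i, 0) + 1
        selected = {i for i, c in counts.items() if c > 1}
        filler = rng.choice(popular_items, size=y, replace=False, p=deg_p)
        profiles.append(selected | set(filler) | {t})
    return profiles
\end{verbatim}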
\section{Background}
\label{sec:pre}
Shilling attacks can achieve both push attacks (promote the target item) and nuke attacks (demote the target item). Since attackers can easily reverse the goal setting to conduct either attack~\citep{abs-2206-11433}, we consider push attacks in the sequel for simplicity. In this paper, the \emph{source data} and the \emph{target data} refer to the RS data used for training the attack model and the data in the victim RS, respectively.

\subsection{Related Work}
\label{sec:related}
Early works on shilling attacks rely on heuristics~\citep{GunesKBP14}. Recent works~\cite{DeldjooNM21} mostly adopt the idea of adversarial attacks~\citep{YuanHZL19}, and they can be categorized into four groups. \textbf{Optimization methods} model shilling attacks as an optimization task and then use optimization strategies to solve it. \citet{LiWSV16} assume the victim RS adopts matrix factorization (MF) and propose the methods PGA and SGLD, which directly add the attack goal into the objective of MF. RevAdv, proposed by~\citet{TangWW20}, and RAPU, proposed by~\citet{ZhangTLSYZG21}, model shilling attacks as a bi-level optimization problem. \textbf{GAN-based methods} adopt Generative Adversarial Networks (GANs)~\citep{GoodfellowPMXWOCB14} to construct fake user profiles. The generator models the data distribution of real users and generates real-like data, while the discriminator is responsible for identifying the generated fake users. Along this direction, a large number of methods have emerged: TrialAttack~\citep{WuLGZC21}, Leg-UP~\citep{abs-2206-11433}, DCGAN~\citep{Christakopoulou19}, AUSH~\citep{LinC0XLY20}, and RecUP~\citep{ZhangCZWL21}, to name a few. \textbf{RL-based methods} query the RS to get feedback on the attack. Then, Reinforcement Learning (RL)~\citep{KaelblingLM96} is used to adjust the attack. Representative works include PoisonRec~\citep{SongLHWLLG20}, LOKI~\citep{ZhangLD020} and CopyAttack~\citep{FanDZ0LWT021}. \textbf{KD-based methods} leverage Knowledge Distillation (KD)~\citep{GouYMT21} to narrow the gap between a surrogate RS and the victim RS. The surrogate RS is used to mimic the victim RS when prior knowledge is not available. The Model Extraction Attack proposed by \citet{YueHZM21} falls into this category.
\subsection{Analysis of Existing Works}
\label{sec:analysis}
We review existing shilling attack approaches and summarize their \emph{characteristics} in Tab.~\ref{tab:pre}:
\begin{itemize}[leftmargin=8pt]
\item \textbf{Data Knowledge}: Some methods assume the complete/partial target data is exposed to attackers. A practical attack method should use as little target data as possible.
\item \textbf{RS Parameter Knowledge}: Some methods require knowledge of the learned parameters of the victim RS. Such information is typically not available.
\item \textbf{RS Architecture Knowledge}: Some methods require knowledge of the architecture of the victim RS. Such information is typically not available.
\item \textbf{Train with a surrogate RS}: Use a surrogate RS to train the attacker, avoiding the need for prior knowledge of the victim RS.
\item \textbf{Require multiple queries}: Query the victim RS multiple times and adjust fake profiles according to the feedback.
\item \textbf{Cross-domain Attack}: Use the information in one RS domain to attack another RS domain, e.g., train on the book data in the Amazon RS and then attack video items in the Amazon RS. Source and target domains share users and/or items.
\item \textbf{Cross-system Attack}: Use the information in one RS to attack another RS, e.g., train on the Yelp RS and then attack the Amazon RS. The source RS and the target RS may not share users and/or items.
\end{itemize}

\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{framework.pdf}
\vspace{-15pt}
\caption{Overview of PC-Attack\xspace.}
\label{fig:framework}
\vspace{-10pt}
\end{figure*}

From Tab.~\ref{tab:pre}, we can see that none of the existing methods has all three properties illustrated in Sec.~\ref{sec:intro}. In other words, \emph{there is still no truly practical shilling attack method}. In particular, we do not find any method that has Property 2 and can achieve cross-system attacks. Property 2 partially manifests in CopyAttack, but CopyAttack assumes the source data and the target data share items, and it is only able to achieve cross-domain attacks. The Model Extraction Attack considers a data-free setting and uses limited queries ($c$ queries) to close the gap between the surrogate RS and the victim RS, but its idea only works on sequential RS and the number of required queries is hard to pre-define.
In summary, based on Tab.~\ref{tab:pre}, we can conclude that our method PC-Attack\xspace (illustrated in Sec.~\ref{sec:method}) has all three properties that a practical shilling attack method should have. Moreover, it is able to achieve cross-system attacks, a difficult but practical setting for real shilling attacks.
\section{Introduction}
Since the early 1990s, a growing number of galaxy surveys exploring different wavelengths and cosmological volumes have provided an immense body of data to reconstruct the formation and evolution of galaxies (e.g. \citealt{york+00,steidel+03,lefevre+05,lilly+07,walter+08,baldry+10,grogin+11,koekemoer+11,kochanek+12}). At the same time, the $\Lambda$CDM cosmological paradigm of structure formation has produced detailed predictions on the statistical matter distribution in the Universe (e.g. \citealt{percival+01,springel+05,reed+07,viel+09,percival+10,reid+10,klypin+11}), but the theory of galaxy formation within this framework is still developing, through analytical work (e.g. \citealt{white+78,blumenthal+84,mo+98}) and semi-analytic models (e.g. \citealt{kauffmann+93, cole+94,somerville+99,springel+05,bower+06,croton+06,guo+11,henriques+13}), as well as cosmological numerical simulations (e.g. \citealt{diemand+08,crain+09,schaye+10,agertz+11,guedes+11,dubois+14,hopkins+14,vogelsberger+14,schaye+15}; and references therein). Most of the theoretical successes have been achieved at low to moderate redshifts ($z\sim 0 - 1.5$), where most of the data, and the data of highest quality, are available. These data have been fundamental for gauging the characteristics and the parameters of the phenomenological, sub-resolution recipes introduced in numerical simulations (as well as in semi-analytic models) in order to model physical processes on scales that are currently inaccessible directly, such as gas radiative cooling, gas chemistry evolution, star formation, stellar feedback, and black hole accretion and feedback (e.g. \citealt{stinson+06,dallavecchia+08,gnedin+09,wiersma+09,vogelsberger+13,keller+14}). Large-box cosmological simulations, despite suffering from relatively low resolution, have been able to statistically reproduce the main properties of different galaxy populations, from small, star-forming irregular galaxies to massive quiescent ellipticals \citep{vogelsberger+14,schaye+15}. On the other hand, high-resolution zoom-in simulations focussing on a limited number of systems have also satisfactorily modelled the formation and growth of dwarf galaxies \citep[e.g.][]{governato+10}, Milky-Way-like hosts \citep[e.g.][]{agertz+11,guedes+11}, as well as massive ellipticals at the centre of galaxy groups \citep[e.g.][]{feldmann+10,feldmann+15,fiacconi+15}.

At higher redshifts, $z \sim 2$, recent observational campaigns have boosted the interest in the population of massive star-forming galaxies that show peculiar properties when compared to local counterparts of similar size. Such systems are often characterised by massive discs with baryonic mass $\sim 10^{11}$~M$_{\sun}$ and star formation rates as high as $\sim 100$~M$_{\sun}$~yr$^{-1}$, with a turbulent interstellar medium that accounts for $\gtrsim 30 \%$ of the baryonic mass (e.g. \citealt{genzel+06,foster-schreiber+09,daddi+10,tacconi+10,wisnioski+15}). The new observations have triggered lively discussions in the theoretical community, also requiring the development of new theoretical ideas to explain the observations and to make new predictions (e.g. violent disc instability; \citealt{mandelker+14,inoue+16}).

Moving to even earlier epochs, very high redshifts ($z \gtrsim 4$) still remain a partially unexplored territory, mainly because of the greater technical difficulties involved in new, cutting-edge observations. Nonetheless, galaxies at $z > 4$ have been detected in a few different ways.
Optical and near infra-red observations have targeted star-forming galaxies by identifying them through the flux dropout in adjacent bands around the Lyman break (e.g. \citealt{madau+96,steidel+99,bouwens+03,oesch+10}). These observations have made it possible to constrain the evolution of the cosmic star formation rate density, as well as of the ultraviolet (UV) luminosity function of star-forming galaxies, out to $z \gtrsim 8-10$, showing that the latter has a steep low-luminosity tail when compared to local samples (e.g. \citealt{bouwens+07,bouwens+11,oesch+12,oesch+14,bouwens+15}). Moreover, other wavelengths have been effective in providing information about the early galaxy population. Sub-millimetre galaxies are an example of massive (stellar mass $\sim 10^{11}$~M$_{\sun}$), highly star-forming (star formation rates $\gtrsim 500$~M$_{\sun}$~yr$^{-1}$), dusty galaxies that have been detected mostly at $z > 2.5-3$ (e.g. \citealt{chapman+05,younger+09,casey+14}). While those studies have been important for understanding the global properties of the first galaxies in the Universe, they have mostly revealed the luminous tail of the galaxy population and they are still not able to characterise their structure (but see \citealt{oesch+10b,debreuck+14}). Nonetheless, both available facilities, such as the Hubble Space Telescope (HST) or the Very Large Telescope (VLT), as well as new observatories that recently came online, such as the Atacama Large Millimeter Array (ALMA), are starting to discover smaller and possibly more typical galaxies at $z \geq 6$. For example, \citet{bradley+12} have used HST imaging to identify a few Lyman break galaxy candidates at $z \approx 7$, lensed by the foreground galaxy cluster A1703. The most luminous candidate likely has a stellar mass $\sim 10^{9}$~M$_{\sun}$ and a star formation rate $\sim 8$~M$_{\sun}$~yr$^{-1}$. Similarly, \citet{watson+15} have combined HST, VLT, and ALMA observations to constrain the stellar mass, star formation and gas content of another highly magnified Lyman break galaxy beyond the Bullet cluster. They also find a stellar mass $\sim 2 \times 10^{9}$~M$_{\sun}$, a star formation rate $\sim 10$~M$_{\sun}$~yr$^{-1}$, and a gas fraction $\sim 40-50\%$, all confined within a physical area of $\sim 1.5$~kpc$^2$. All these objects typically have specific star formation rates $\sim5$-10~Gyr$^{-1}$ (see also e.g. \citealt{tasca+15}).

These recent observations, combined with those from the next generation of both space-based (James Webb Space Telescope; JWST) and ground-based (e.g. European Extremely Large Telescope; E-ELT) telescopes, require new interpretations and predictions on the theoretical side, where less has been done compared to low/medium redshifts, except regarding the most massive population of galaxies at $z > 6-8$ (e.g. \citealt{choi+12,oshea+15,paardekooper+15,ocvirk+16,waters+16}). Motivated by this, we investigate the early phases of the formation and evolution of a galaxy that becomes a massive elliptical at $z=0$. We focus on the first burst of star formation, prior to the assembly of the central supermassive black hole and before active galactic nuclei (AGN) feedback becomes dominant. We use a new high-resolution hydrodynamic run that is part of the recent Ponos program of zoom-in cosmological simulations of massive galaxies \citep{fiacconi+16}.
Our new simulation reproduces the main features of recently observed star-forming galaxies at $z \sim 7$, and allows us to study in detail the properties of the early interstellar medium that drive the star formation and may determine the early feeding habits of the central black hole.

The paper is organised as follows. In Section \ref{sec_2}, we describe our numerical techniques and the features of the PonosHydro simulation. We describe our main results in Section \ref{sec_3}, focussing on the properties of the interstellar medium and the early star formation history of the simulated system at $z\sim 6$. We present our conclusions in Section \ref{sec_4}, cautioning the reader about the limitations of our results and discussing the possible implications of our findings. In the following, when not explicitly specified, all lengths and densities are given in physical units.

\section{Methods} \label{sec_2}

\subsection{Initial conditions}
We perform and analyse a new simulation that complements the suite of cosmological, zoom-in simulations presented by \citet{fiacconi+16} and named ``Ponos''. This is a high-resolution version of the run focusing on the halo originally dubbed ``PonosV''. This new version includes hydrodynamics and self-consistent baryonic physics, and we refer to it as ``PH'' or ``PonosHydro'' in the following. The target halo evolves in a box of 85.5 comoving Mpc and it reaches a mass $\approx 1.2 \times 10^{13}$~M$_{\sun}$ at $z = 0$. We adopt a $\Lambda$CDM cosmology consistent with the \emph{Wilkinson Microwave Anisotropy Probe} 7/9-year results, parametrised by $\Omega_{\rm m,0}=0.272$, $\Omega_{\Lambda,0}=0.728$, $\Omega_{\rm b,0}=0.0455$, $\sigma_{8} = 0.807$, $n_{\rm s} = 0.961$, and $H_0 = 70.2$~km~s$^{-1}$~Mpc$^{-1}$ \citep{komatsu+11,hinshaw+13}. The original initial conditions (ICs) are part of the AGORA\footnote{\url{https://sites.google.com/site/santacruzcomparisonproject/home}} code-comparison project \citep{kim+14}. We generate new ICs of the same halo using the {\sc Music}\footnote{\url{http://www.phys.ethz.ch/~hahn/MUSIC/}} code \citep{hahn+11}. Our ICs are optimised to follow the growth of the halo until $z=6$. They consist of a base cube of $128^3$ particles starting at $z=100$, with additional nested levels of refinement to increase the resolution within the Lagrangian region that maps the particles contained within $2.5 R_{\rm vir}$ at $z=6$ on to the ICs. We define the virial radius $R_{\rm vir}$ as the spherical radius that encompasses a mean matter density $\Delta(z) \rho_{\rm c}(z)$, where $\rho_{\rm c}(z)$ is the critical density required for the Universe to be flat, and $\Delta(z)$ is the $z$-dependent virial over-density defined by \citet{bryan+98}. As a consequence, the virial mass is defined as $M_{\rm vir} = 4 \pi \Delta(z) \rho_{\rm c}(z) R_{\rm vir}^3 / 3$. At $z=6$, the virial mass of the main galaxy is $\approx 1.5 \times 10^{11}$~M$_{\sun}$.
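For reference, these definitions translate into a few lines of Python; the sketch below adopts the \citet{bryan+98} fit for a flat universe and our cosmological parameters, with illustrative function names:
\begin{verbatim}
import numpy as np

OMEGA_M0, OMEGA_L0, H0 = 0.272, 0.728, 70.2   # H0 in km/s/Mpc
G = 4.30091e-9                                # Mpc (km/s)^2 / Msun

def virial_overdensity(z):
    """Bryan & Norman (1998) fit, Delta(z) = 18 pi^2 + 82 x - 39 x^2,
    with x = Omega_m(z) - 1, valid for a flat LCDM universe."""
    Ez2 = OMEGA_M0 * (1 + z)**3 + OMEGA_L0
    x = OMEGA_M0 * (1 + z)**3 / Ez2 - 1.0
    return 18 * np.pi**2 + 82 * x - 39 * x**2

def critical_density(z):
    """rho_c(z) = 3 H(z)^2 / (8 pi G) in Msun / Mpc^3."""
    Hz2 = H0**2 * (OMEGA_M0 * (1 + z)**3 + OMEGA_L0)
    return 3 * Hz2 / (8 * np.pi * G)

def virial_mass(r_vir, z):
    """M_vir = 4 pi Delta(z) rho_c(z) R_vir^3 / 3, with R_vir in Mpc."""
    return 4 * np.pi / 3 * virial_overdensity(z) \
           * critical_density(z) * r_vir**3
\end{verbatim}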
Specifically, we use the following procedure to determine the high-resolution region on the ICs \citep{fiacconi+16}: (i) we run a $128^3$, dark-matter-only, full-box simulation to identify the main halo at $z=6$; (ii) we trace the particles within 2.5$R_{\rm vir}$ back on to the ICs; (iii) we locally increase the resolution of the ICs by adding one level of refinement within a rectangular box that contains all the identified particles; (iv) we evolve the ICs again until $z=6$; (v) we repeat (ii) and add two additional levels of refinement within the convex hull that contains all the identified particles. Every level of refinement increases the spatial and mass resolution by factors of 2 and 8, respectively. We iterate steps (iv) and (v) until we have added 7 levels of refinement above the base cube, introducing gas particles in the last level. We verified with dark-matter-only test runs and in the main run that this procedure allows us to maintain the fraction of contaminating particles\footnote{Here the term ``contaminating particles'' refers to dark matter particles from coarser levels of refinement in the ICs.} in both mass and number well below 0.1\% within the virial volume at all $z\geq 6$. Finally, the highest-resolution dark matter and gas particles have masses $m_{\rm dm} = 4397.6$~M$_{\sun}$ and $m_{\rm g} = 883.4$~M$_{\sun}$, respectively. The force resolution is determined by the softening length, which is set to 1/60 of the mean particle separation at each level of refinement. This corresponds to a minimum dark matter softening $\epsilon_{\rm dm} = 81.8$ physical pc and to a gas softening $\epsilon_{\rm g} = 47.9$ physical pc. The softening is kept constant in physical units during the evolution at $z<9$, while it remains constant in comoving coordinates at higher redshifts. The total number of particles in the ICs is 118,694,002, while the total number of particles within the virial radius at $z=6.5$ is 56,213,155.
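The mass resolution implied by this refinement hierarchy can be checked with a short Python sketch (the helper below is illustrative, assumes the quoted cosmology and box size, and the small differences with the quoted particle masses reflect rounding in the constants):
\begin{verbatim}
import numpy as np

OMEGA_M0, OMEGA_B0, H0 = 0.272, 0.0455, 70.2   # H0 in km/s/Mpc
L_BOX = 85.5                                   # comoving Mpc
N_BASE = 128                                   # base-level particles/side

G = 4.30091e-9                                 # Mpc (km/s)^2 / Msun
RHO_C0 = 3 * H0**2 / (8 * np.pi * G)           # ~1.37e11 Msun / Mpc^3

def particle_masses(level):
    """Dark matter and gas particle masses at a given refinement level
    (level 0 is the base cube; each level is 2x finer in space and
    8x finer in mass)."""
    cell_volume = (L_BOX / (N_BASE * 2**level))**3
    m_tot = OMEGA_M0 * RHO_C0 * cell_volume    # total matter per cell
    m_gas = (OMEGA_B0 / OMEGA_M0) * m_tot
    return m_tot - m_gas, m_gas

# Level 7 (the finest level, where gas is introduced) gives
# m_dm ~ 4.4e3 Msun and m_g ~ 8.8e2 Msun, matching the quoted values.
print(particle_masses(7))
\end{verbatim}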
\subsection{Simulation code}\label{sec_sim_code}
We evolve the simulation using the {\sc gasoline} code \citep{wadsely+04}. The code computes gravitational interactions using a 4$^{\rm th}$-order (i.e. hexadecapole) multipole expansion of the gravitational force on a KD binary tree, following the original scheme adopted by the {\sc pkdgrav} code \citep{stadel+01,stadel+13}. {\sc gasoline} models the gas dynamics using the smoothed particle hydrodynamics (SPH) algorithm (e.g. \citealt{lucy+77,gingold+77,hernquist+89}). In addition to the standard SPH formulation, the energy equation includes a term for thermal energy and metal diffusion (with a coefficient $C = 0.05$), as introduced and discussed by \citet{wadsely+08} and \citet{shen+10}. This approach reduces the artificial surface tension that arises close to strong density gradients, where the Kelvin-Helmholtz instability may develop (e.g. \citealt{agertz+07,read+12,hopkins+13,keller+14}). {\sc gasoline} includes several sub-resolution models to treat the radiative cooling of the gas, the formation of stars, and the impact of their feedback in terms of supernova-injected energy and mass released by winds. The gas is allowed to cool radiatively in the optically-thin limit by solving the non-equilibrium reaction network of HI, HII, HeI, HeII and HeIII. We take into account the contribution from metal lines with a temperature floor $T_{\rm floor} \approx 100$~K following \citet{shen+13}. The simulation also includes a uniform, redshift-dependent ultraviolet radiation background due to stellar and quasar reionisation according to \citet{haardt+12}.

The implementation of star formation and stellar feedback mostly follows the prescriptions of \citet{stinson+06}. Stars form when: (i) the local density exceeds the threshold ${\rm n}_{\rm SF} = 10$~H~cm$^{-3}$; (ii) the local temperature goes below $T_{\rm SF} = 10^4$~K; (iii) the local gas over-density is $>2.64$; and (iv) the flow is convergent and locally Jeans unstable. When the above criteria are fulfilled, stars form according to a Schmidt-like law, $\dot{\rho}_{\star} = \epsilon_{\rm SF} \rho_{\rm g} / t_{\rm dyn} \propto \rho_{\rm g}^{3/2}$, where $\rho_{\star}$ is the mass density of formed stars, $\rho_{\rm g}$ is the local gas mass density, and $t_{\rm dyn} = 1/\sqrt{G \rho_{\rm g}}$ is the local dynamical time. The efficiency $\epsilon_{\rm SF} = 0.05$ is a phenomenological parameter meant to capture the average star-formation efficiency. Each stellar particle has an initial mass $m_{\star} = 0.4 m_{\rm g} = 353.4$~M$_{\sun}$ and represents a stellar population with a \citet{kroupa+01} initial mass function. We choose ${\rm n}_{\rm SF}$ to be approximately the density reached when the local Jeans mass is resolved with at least one kernel at the lowest temperature reachable in the simulation, i.e. the temperature floor of the cooling function, namely:
\begin{equation}
{\rm n}_{\rm SF} \lesssim 16.4~\left( \frac{T_{\rm floor}}{100~{\rm K}} \right)^{3} \left( \frac{N}{32} \right)^{-2} \left( \frac{m_{\rm g}}{883.4~{\rm M}_{\sun}} \right)^{-2}~{\rm H~cm^{-3}},
\end{equation}
where $N$ is the number of gas particles per kernel. This choice ensures that, even in the most extreme case (i.e. gas at the lowest temperature that has not formed stars yet), the local collapse is resolved. However, since $T_{\rm SF} \gg T_{\rm floor}$, the Jeans mass during the collapses that lead to star-formation episodes is likely resolved with a number of particles $\gg N$. Indeed, at ${\rm n}_{\rm SF}$ and $T_{\rm SF}$, the local Jeans mass is resolved with $\sim 1220$ kernel masses, which corresponds to $\sim 39,000$ particles in the regions on the verge of the collapse that might form stars.
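This criterion is trivial to evaluate numerically; a minimal, purely illustrative Python helper reads:
\begin{verbatim}
def sf_density_threshold(T_floor=100.0, N_kernel=32, m_gas=883.4):
    """Density (in H/cm^3) below which the local Jeans mass at
    temperature T_floor is resolved by at least one SPH kernel of
    N_kernel gas particles of mass m_gas (in Msun); scaling from
    the equation above."""
    return (16.4 * (T_floor / 100.0)**3
            * (N_kernel / 32.0)**-2 * (m_gas / 883.4)**-2)

# The adopted n_SF = 10 H/cm^3 lies safely below this bound (~16.4).
print(sf_density_threshold())
\end{verbatim}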
\begin{table}
\caption{List of the performed simulations and their main features. From left to right: label of the simulation, initial redshift of the simulation, final redshift of the simulation, $\alpha$ parameter of the pressure floor (see the text), initial mass of gas particles, mass of the lightest dark matter particles, gravitational softening of the gas, smallest gravitational softening of the dark matter.}
\label{tab:summmary}
\begin{tabular}{lccccccc}
\hline
Label & $z_{\rm ini}$ & $z_{\rm end}$ & $\alpha$ & $m_{\rm g}$ & $m_{\rm dm}$ & $\epsilon_{\rm g}$ & $\epsilon_{\rm dm}$ \\
 & & & & (M$_{\sun}$) & (M$_{\sun}$) & (pc) & (pc) \\
\hline
PH & 100 & 6.5 & 9 & 883.4 & 4397.6 & 47.9 & 81.8 \\
PH\_PF1 & 8 & 6.5 & 1 & 883.4 & 4397.6 & 47.9 & 81.8 \\
PH\_NF$^{a}$ & 8 & 7.1 & 9 & 883.4 & 4397.6 & 47.9 & 81.8 \\
PH\_LR & 100 & 6.5 & 9 & 7067.2 & 35180.8 & 95.8 & 163.6 \\
\hline
\end{tabular}

$^{a}$ This run includes neither star formation nor stellar feedback.
\end{table}

\begin{figure*}
\begin{center}
\includegraphics[width=2\columnwidth]{./MTREE.pdf}
\caption{ Upper row, from left to right: sequence of images at $z=6.5$, 7.1, 8.1 and 9.1, respectively, of the stellar component of the main galaxy in the rest-frame U, V and J bands. Middle row: merger tree of the main halo as a function of redshift. The size of each circle is proportional to the logarithm of the virial mass. Green circles denote the main branch of the merger tree, while thick, red circles connected by thick, red lines mark the occurrence of a major merger with mass ratio $q < 4$. Lower panel: evolution of the virial mass of the main halo as a function of redshift. Red dots and vertical dashed lines show major mergers together with their mass ratio $q$. The main halo reaches about $10^{11}$~M$_{\sun}$ by $z \gtrsim 6$.}
\label{fig_merger_tree}
\end{center}
\end{figure*}

During their lifetimes, stars with masses between 1 and 8~M$_{\sun}$ release $\sim 40 \%$ of the particle mass and metals to the surrounding gas through winds, following \citet{weidemann+87}. On the other hand, massive stars between 8 and 40~M$_{\sun}$ are responsible for Type II supernovae, which inject energy, mass and metals into the surrounding gas following the analytical blast wave solution of \citet{mckee+77}. In particular, each Type II supernova releases $10^{51}$~erg in thermal energy to the gas particles within the maximum radius that the blast wave can reach. The cooling of those gas particles is temporarily turned off for the time corresponding to the snowplough phase of the blast wave. Type Ia supernovae also contribute to the supernova feedback budget, injecting the same energy as Type II supernovae, but without the shut-off of the cooling. Each releases a mass of 1.4~M$_{\sun}$, of which 0.63~M$_{\sun}$ is iron and 0.13~M$_{\sun}$ is oxygen. Their frequency is given by the binary fraction estimated by \citet{raiteri+96}.

We also adopt the pressure floor described by e.g. \citet{roskar+15} in order to avoid spurious fragmentation. Specifically, the minimum pressure of each gas particle (at a given density and temperature) is $P_{\rm min} = \alpha \max(\epsilon_{\rm g}, h)^2 G \rho^2$, where $h$ is the local SPH smoothing length, $\rho$ is the gas mass density, and $\alpha = 9$ is a safety factor. This choice of $\alpha$ is such that the local Jeans length is always resolved with $N_{\lambda} \approx 6$ ($N_{\lambda} \approx 3$) resolution elements $\max(\epsilon_{\rm g}, h)$, assuming that the Jeans length is defined as $\lambda_{\rm J} = c_{\rm s} \sqrt{\pi / (G \rho)}$ ($\lambda_{\rm J} = \sqrt{15 k_{\rm B} T / (4 \pi G \rho \mu m_{\rm p})}$). The local sound speed of the gas is $c_{\rm s}= \sqrt{\gamma k_{\rm B} T / (\mu m_{\rm p})}$, where $\gamma = 5/3$ is the adiabatic index, $k_{\rm B}$ is the Boltzmann constant, $\mu \approx 0.6$ is the mean molecular weight (assuming that most of the gas is ionised), and $m_{\rm p}$ is the proton mass. The criterion on the Jeans length translates into the local Jeans mass being resolved with $\approx (\epsilon_{\rm g} / h)^{3} N_{\lambda}^{3} / 8 \approx 15.6~N_{\lambda}^{3}$ kernel masses, where we substitute $\epsilon_{\rm g} / h = 5$ in the last equality, after noting that the distribution of the ratio $h/\epsilon_{\rm g}$ within the main galaxy peaks at about 0.2. We note that we adopt a pressure floor to avoid numerical fragmentation, while we leave the temperature of the gas free to evolve according to the energy equation and the effect of radiative cooling and heating (e.g. \citealt{richings+15}). This means that the equation of state of the gas does not effectively follow an ideal gas law anymore when relating the pressure to the density and the temperature.
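In cgs units, the floor and the associated Jeans scale can be written as the following illustrative Python helpers (the names and unit choices are ours):
\begin{verbatim}
import numpy as np

G_CGS = 6.674e-8      # cm^3 g^-1 s^-2
K_B = 1.3807e-16      # erg / K
M_P = 1.6726e-24      # g
PC = 3.0857e18        # cm

def pressure_floor(rho, h, eps_g=47.9 * PC, alpha=9.0):
    """P_min = alpha * max(eps_g, h)^2 * G * rho^2 (cgs)."""
    return alpha * np.maximum(eps_g, h)**2 * G_CGS * rho**2

def jeans_length(rho, T, mu=0.6, gamma=5.0 / 3.0):
    """lambda_J = c_s * sqrt(pi / (G rho)), with the adiabatic
    sound speed c_s = sqrt(gamma k_B T / (mu m_p))."""
    cs = np.sqrt(gamma * K_B * T / (mu * M_P))
    return cs * np.sqrt(np.pi / (G_CGS * rho))
\end{verbatim}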
The use of the pressure floor also implies that the gravitational collapse is at least partially suppressed for structures initially collapsing on scales below $N_{\lambda} \epsilon_{\rm g} \sim 150-300$~pc \citep{bate+97}. However, we have checked that the gas phase-space region where the chosen pressure floor kicks in overlaps with the conditions for forming stars only at high densities and low temperatures. This implies that the initial phases of the gravitational collapses that lead to star formation are physical and well resolved, while the pressure floor ensures that spurious fragmentation on smaller scales is avoided. Nonetheless, the floor may have a dynamical role in the evolution of the interstellar medium of the simulated galaxy; therefore, we also perform an additional run from $z \approx 8$ (after the disc has re-formed; see Section \ref{sec_3}) to $z=6.5$, adopting a lower pressure floor with $\alpha = 1$. This additional run is dubbed ``PH\_PF1''. We also restart an additional version of run PH from $z \approx 8$ to $z = 7.1$, including radiative cooling but without star formation and stellar feedback, in order to test the impact of feedback on the properties of the interstellar medium (see Section \ref{sec_ism}). We refer to this run as ``PH\_NF''. In addition to run PH, we have also simulated a lower resolution version, i.e. ``PH\_LR'', adopting the same parameters. This run has a mass and force resolution 8 times and 2 times coarser than the main run, respectively. The initial total number of particles is 17,952,072, while the main halo contains 5,974,677 particles within the virial radius at $z=6.5$. We use this run for the resolution tests that we show in Appendix \ref{appendix_resolution_tests}. We summarise the labelling and the main features of all the simulations in Table \ref{tab:summmary}.

\subsection{Halo detection} \label{halo_detection}
We identify dark matter haloes (and then the contained galaxies) using the {\sc amiga halo finder} \citep{gill+04, knollmann+09}. Every halo is defined as a gravitationally bound group of at least 100 particles within a virial radius $R_{\rm vir}$. Then we construct the merger tree of the main halo by matching the dark matter particle IDs between every pair of consecutive snapshots from $z=6.5$ to $z \simeq 30$ (e.g. \citealt{fiacconi+15}). The main progenitor branch is determined as the progenitor halo that maximises the quantity $f_{\rm shared} = N_{\rm shared} / \sqrt{N_{\rm h} N_{\rm prog}}$ through each snapshot, where $N_{\rm shared}$ is the number of particles shared between the halo and its progenitor, while $N_{\rm h}$ and $N_{\rm prog}$ are the numbers of particles in the halo and in the progenitor, respectively \citep{fiacconi+15}.
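The progenitor selection is straightforward to express in code; the following Python sketch (with illustrative names and data structures) picks the progenitor that maximises $f_{\rm shared}$:
\begin{verbatim}
import numpy as np

def main_progenitor(halo_ids, progenitors):
    """Return the progenitor maximising
    f_shared = N_shared / sqrt(N_h * N_prog).
    halo_ids: particle IDs of the halo; progenitors: dict mapping a
    progenitor label to its particle IDs (previous snapshot)."""
    halo = set(halo_ids)
    best, best_f = None, -1.0
    for label, ids in progenitors.items():
        prog = set(ids)
        f = len(halo & prog) / np.sqrt(len(halo) * len(prog))
        if f > best_f:
            best, best_f = label, f
    return best, best_f
\end{verbatim}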
\section{Results} \label{sec_3}

\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./SFH.pdf}
\caption{ Upper panel: time evolution of the star formation rate ($\dot{M}_{\star}$, blue continuous line, left $y$-axis) and of the specific star formation rate ($\dot{M}_{\star} / M_{\star}$, red dotted line, right $y$-axis). The latest two major mergers of the main halo are highlighted with their mass ratio $q$. Lower panel: specific star formation rate as a function of the stellar mass for our simulation at $z=6.5$ (PH, blue square) and for the data from the VUDS survey in the redshift range $4.5 < z < 5.5$ \citep{tasca+15}. The VUDS data are represented as red filled and empty circles when the redshift determination is $\sim 100 \%$ and $\sim 70-75\%$ reliable, respectively. The thin dotted line shows the VUDS completeness limit in the determination of the star formation rates. The dashed and dot-dashed lines show the determination of the main sequence of star-forming galaxies at $z \approx 2$ from \citet{daddi+07} and the analytical fit of the main sequence by \citet{schreiber+15} extrapolated to $z=5$, respectively.}
\label{fig_SFH}
\end{center}
\end{figure}

\subsection{Evolution of the main galaxy}
The global evolution of the main halo of run PH is summarised in Figure \ref{fig_merger_tree}. Specifically, we show the merger tree of the main halo, highlighting the major mergers with mass ratio $q < 4$ (where we define $q \equiv M_{1} / M_{2} > 1$), and the growth of the virial mass $M_{\rm vir}$ with redshift. The early growth of the main halo is characterised by a sequence of several major mergers with mass ratios between $q=1$ and $q=3$ from $z\approx 18.5$ to $z\approx 11.6$. During this phase, $M_{\rm vir}$ quickly grows from $< 10^8$~M$_{\sun}$ to $\sim 4 \times 10^{9}$~M$_{\sun}$ through the rapid accretion of both dark matter and gas, also triggered by the major mergers. By $z\approx 11$, the main halo has formed, i.e. it has reached about 5\% of its final mass at $z=6.5$, which is $M_{\rm vir} \simeq 1.2 \times 10^{11}$~M$_{\sun}$. Slightly later, at $z\approx 10$, the virial radius of the main halo ($R_{\rm vir} \approx 7.7$~kpc in physical units) begins to overlap with the virial volume of a nearby halo with a mass 3.7 times smaller than that of the main one. This is the last major merger of the main halo during the time span that we have simulated. It completes by $z \sim 9$, though the two central galaxies merge slightly later, at $z\approx 8$, as shown by Figure \ref{fig_merger_tree}. Such a time lag between the merger of the two haloes and that of their central galaxies, $\sim 100$~Myr from $z=9$ to $z=8$, is consistent with the dynamical friction timescale for the central galaxies to sink to the centre of the remnant halo after a $q=4$ merger, namely $\sim 0.2~t_{\rm H}(z) \sim 170$~Myr, where $t_{\rm H}(z) = H^{-1}(z)$ is the Hubble time at $z$ and the last equality is obtained assuming $z=9$ \citep{krivitsky+97,boylan-kolchin+08,jiang+08,hopkins+10}.

\begin{figure*}
\begin{center}
\includegraphics[width=2\columnwidth]{./DISC_FACEON.pdf}
\caption{ Evolution of the galactic disc in run PH between $z=7.6$ and $z=6.5$ (from bottom to top). From left to right: gas surface density, local gas temperature in the disc mid-plane, local gas velocity dispersion (computed locally as an SPH-weighted average) in the disc mid-plane, stellar surface density, and star formation rate surface density. All quantities are measured within a 200~pc thick slice centred on the disc plane. The star formation rate surface density is determined from the surface density of stellar particles younger than 10~Myr. The circles in the lower-right corners of the first column have radii equal to the gravitational softening of the gas, 47.9 physical pc. }
\label{fig_disc_faceon}
\end{center}
\end{figure*}

After the last major merger, the galaxy quickly builds an extended stellar and gaseous disc at $z\approx 7.5$. This is visible e.g. in the mock UVJ image at $z=7.1$ in Figure \ref{fig_merger_tree}.
Those images are obtained similarly to \citet{fiacconi+15}: we consider each stellar particle as a stellar population with a \citet{kroupa+01} initial mass function of age $\tau$ and metallicity $Z$, and we determine its total luminosity by interpolating a table based on the synthetic stellar population models of the Padova group \citep{marigo+08,girardi+10,bressan+12}. These tables\footnote{Similar tables can be obtained at \url{http://stev.oapd.inaf.it/cgi-bin/cmd}} span the stellar age interval from 4 Myr to 12.6 Gyr and the metallicity interval from $5 \times 10^{-3}$ to $1.6~Z_{\sun}$. We neglect the effect of dust attenuation.

Then, the galaxy evolves nearly in isolation until $z\approx 6.5$, when it starts to interact with another galaxy in a mild minor merger with $q\approx5$. During this period, the total gas fraction $f_{\rm gas} = M_{\rm gas} / (M_{\star} + M_{\rm gas})$ within $0.1 R_{\rm vir}$ oscillates around 50\% (only mildly increasing with time), while the fraction of cold gas ($T < 10^4$~K) is lower, about 37\%. The stellar mass, measured as the mass within $0.1 R_{\rm vir}$, is $M_{\star} \approx 2.5 \times 10^{9}$~M$_{\sun}$. By changing the filtering scale (e.g. assuming a fixed volume of 3 physical kpc or 20 comoving kpc), the difference in the determination of $M_{\star}$ is less than 20\%. During this phase, the global properties of the main galaxy, namely stellar mass, gas fraction, and star formation rate (see below), are quantitatively similar\footnote{\citet{watson+15} have observed a galaxy at $z = 7.5 \pm 0.2$ with stellar mass $1.7^{+0.7}_{-0.5} \times 10^{9}$~M$_{\sun}$, star formation rate $9^{+4}_{-2}$~M$_{\sun}$~yr$^{-1}$ (from infrared light), and gas fraction $55 \pm 25$ \%.} to those observed in a few, strongly lensed galaxies at $z \sim 7$ by \citet{bradley+12} and \citet{watson+15}. Moreover, the stellar mass is a factor of 2 higher than the halo mass--stellar mass relation determined by \citet{behroozi+13}, yet consistent within the $2\sigma$ uncertainty.

Figure \ref{fig_SFH} shows the evolution of the star formation rate of the main galaxy as a function of time. It increases continuously with time, with isolated peaks corresponding to the major mergers highlighted in Figure \ref{fig_merger_tree}. Around $z \approx 6.5-7$, the main galaxy is forming $\sim 20$~M$_{\sun}$~yr$^{-1}$ in new stars. On the other hand, the specific star formation rate decreases from $\gtrsim 10$~Gyr$^{-1}$ to $\sim 5-6$~Gyr$^{-1}$ at $z = 6.5$. These numbers are consistent with recent determinations of the typical star formation rate of galaxies in samples at $z > 4$. \citet{smit+12} have computed the star formation rate function (normalised as the number of galaxies per unit volume and unit star formation rate) starting from the dust-corrected UV luminosity functions of \citet{bouwens+07,bouwens+11}. They have found that between redshift 6 and 8 the star formation rate corresponding to the knee $L_{\star}$ of a Schechter luminosity function is about 10~M$_{\sun}$~yr$^{-1}$, suggesting that our galaxy is probably slightly brighter than $L_{\star}$. By correcting the value of $L_{\star}$ at $z=6.8$ determined by \citet{bouwens+11} as described by \citet{smit+12}, we get a corresponding absolute magnitude after dust correction $M_{\rm UV}^{\star} = -20.6$, which is indeed slightly larger (i.e. less bright) than the rest-frame magnitude $M_{\rm U} = -20.8$ of the stellar component within $0.1 R_{\rm vir}$ measured from run PH.
We caution, however, that the observational measurements have been obtained at 1600~\AA, while our estimate from the simulation comes from the rest-frame U band. We also compare the specific star formation rate as a function of mass with a population of slightly lower redshift galaxies ($4.5< z<5.5$, with spectroscopic redshifts) observed in the \emph{VIMOS Ultra-Deep Survey} (VUDS) by \citet{tasca+15}. At a mass of a few $10^9$~M$_{\sun}$, our galaxy sits slightly above the bulk of the data at $z \sim 5$ (though there is a large scatter), which in turn are distributed around the relation inferred by \citet{schreiber+15} from \emph{Herschel} data. This is also consistent with the slowly growing trend of the average specific star formation rate with increasing redshift derived by \citet{tasca+15}, $\propto (1+z)^{1.2}$, which would predict $\sim 6$~Gyr$^{-1}$ at $z=6.5$, although most of the galaxies used to obtain this determination have $M_{\star} > 10^{10}$~M$_{\sun}$. Overall, the global properties of the main galaxy in run PH seem to be consistent with those of typical star-forming galaxies recently observed at similar redshifts \citep{iye+06,bradley+12,watson+15}.

\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./ROTATION_CURVE_SB.pdf}
\caption{Upper panel: rotation curve $V_{\rm rot}$ at $z = 7.1$ from the cold gas ($T < 50,000$~K). The blue solid line is the average over 50 random lines of sight through the galaxy mid-plane, while the shaded region encloses 68\% of the measurements. The red dashed line is the circular velocity $V_{\rm circ} = \sqrt{G M(<r) / r}$, shown for comparison. Lower panel: rest-frame surface brightness profiles of the stellar component in the U (blue solid), V (green dashed), and K (red dotted) bands at $z=7.1$. The grey shaded region at the centre marks 2 gravitational softenings. The surface brightness profiles include cosmological dimming. The grey solid line shows the best-fit of the profile decomposition of $\mu_{\star}$ into two exponential profiles for the K band as an example. }
\label{fig_rot_curve}
\end{center}
\end{figure}

\begin{figure*}
\begin{center}
\includegraphics[width=2\columnwidth]{./DISC_EDGEON.pdf}
\caption{From left to right: edge-on view of the main galactic disc in run PH between $z=7.6$ and $z=6.5$. First row: gas temperature map averaged within a 200~pc thick slice perpendicular to the disc plane and centred on the disc centre. The arrows show the velocity field of the gas. Second row: the same as the top row, but for the gas metallicity. Third row: gas density map averaged within a 200~pc thick slice, as for the temperature and the metallicity. Fourth row: stellar surface density projected within 4 kpc centred on the disc centre. The green lines mark equal surface density contours corresponding to $10^{7}$, $10^{8}$ and $10^{9}$~M$_{\sun}$~kpc$^{-2}$, from outside inward. }
\label{fig_disc_edgeon}
\end{center}
\end{figure*}

\subsection{Structure of the galactic disc}\label{sec_disc}
Figure \ref{fig_disc_faceon} shows the evolution of the galactic disc in run PH at $6.5 < z < 7.6$, after the last major merger is completed. The galaxy is oriented face-on by determining the specific angular momentum of the gas within a sphere of radius $0.1 R_{\rm vir}$, after having centred the galaxy on the minimum of the potential and having removed the systemic velocity, evaluated as the mass-weighted velocity of the particles within 500~physical~pc from the centre.
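In practice, this orientation step amounts to a single rotation; a minimal Python sketch (our own illustrative implementation, using Rodrigues' formula) is:
\begin{verbatim}
import numpy as np

def face_on_rotation(pos, vel, mass):
    """Rotation matrix aligning the total gas angular momentum with
    the z-axis. pos, vel: (N, 3) arrays already centred on the galaxy
    (positions and velocities); mass: (N,) array."""
    L = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    L /= np.linalg.norm(L)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(L, z)
    s, c = np.linalg.norm(v), np.dot(L, z)
    if s < 1e-12:                     # already (anti-)aligned
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# pos @ R.T and vel @ R.T then give the face-on coordinates.
\end{verbatim}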
During this interval of time, the galaxy appears as a disc with tenuous, nearly axisymmetric structures in the stellar component. At $z=7.6$ the two cores originally at the centre of the merging galaxies are finally coalescing at the centre of the remnant, forming a tiny bulge (also visible as a redder component in Figure \ref{fig_merger_tree}). The surface density map of the gas reveals that the gaseous disc is highly inhomogeneous, with many over-dense regions. This is particularly evident at $z=6.5$, when the disc is less axisymmetric and more perturbed than at slightly larger redshifts, with more extended regions of dense gas at and above $\sim 10^8$~M$_{\sun}$~kpc$^{-2}$, possibly because of the interaction with the companion galaxy visible in Figure \ref{fig_merger_tree}. Indeed, the gas is highly multi-phase, as also confirmed by the temperature maps. Local fluctuations can vary from temperatures $< 1000$~K to $\sim 10^6$~K on scales of $\gtrsim 100$~pc. Those high temperatures are likely triggered by local injection of energy from stellar feedback. Such a structure of the interstellar medium is potentially a signature of turbulence, as expected from previous work on high-$z$ galaxies (e.g. \citealt{green+10,bournaud+11,hopkins+12}), though the main driver of turbulence is still debated. Indeed, we observe large fluctuations in the local velocity dispersion of the gas, calculated as the SPH kernel-weighted average of the local standard deviation of the velocity, ranging from a few to almost 100~km~s$^{-1}$, with a typical average value $\sigma_{\rm g} \sim 40$~km~s$^{-1}$ across the disc at different redshifts. We discuss the turbulence within the disc in more detail in Section \ref{sec_turbulence}. High-density regions are naturally associated with larger star formation rates, following a large-scale Kennicutt-Schmidt relation \citep{kennicutt+98,bigiel+08} with slope $n \approx 1-1.5$ at (total) surface densities $> 1$~M$_{\sun}$~pc$^{-2}$, consistent with the local volumetric Schmidt law \citep{gnedin+14}. The star formation within the disc is mostly distributed along the major features (i.e. spiral arms) in the stellar disc, with typical star formation rate surface densities $\gtrsim 1$~M$_{\sun}$~yr$^{-1}$~kpc$^{-2}$, while over-densities are associated with isolated regions of intense star formation as high as $\gtrsim 25$~M$_{\sun}$~yr$^{-1}$~kpc$^{-2}$. Figure \ref{fig_rot_curve} shows the rotation curve $V_{\rm rot}$ and the surface brightness profiles of the main galaxy in run PH at $z = 7.1$. The rotation curve is obtained as the mean over 50 random lines of sight through the disc mid-plane. We take into account only cold/warm gas with temperature $< 50,000$~K to mimic the rotation curve that could be obtained through HI/H$\alpha$ observations. The galaxy has a flat rotation curve up to a few kpc from the centre, with a nearly constant value of about 100~km~s$^{-1}$. The circular velocity $V_{\rm circ} = \sqrt{G M(<r) / r}$ is also flat, but with a larger asymptotic value of about 140~km~s$^{-1}$. This mismatch reflects the degree of non-radial motion in the gas, also visible in the fluctuations of $V_{\rm rot}$ at $R \geq 2$~kpc. The surface brightness profiles of the stellar component in the rest-frame U, V, and K bands show that the stellar disc is nearly exponential, with a disc scale radius (half-light radius) of about $460$~physical~pc ($760$~physical~pc) in the K band, broadly consistent with the sizes of typical galaxies at $z\sim 7$ \citep{oesch+10b}.
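As a concrete example of the profile measurement, a single exponential disc can be fitted to an azimuthally averaged surface brightness profile as follows (a minimal sketch with synthetic data standing in for the measured profile; names and values are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def mu_exp(R, mu0, R_d):
    # Exponential disc in magnitude units:
    # mu(R) = mu0 + 2.5 / ln(10) * R / R_d ~ mu0 + 1.0857 * R / R_d
    return mu0 + 2.5 / np.log(10.0) * R / R_d

# Synthetic stand-in for the measured profile (radii in kpc,
# surface brightness in mag arcsec^-2).
rng = np.random.default_rng(0)
R = np.linspace(0.1, 2.0, 20)
mu = mu_exp(R, 20.0, 0.46) + rng.normal(0.0, 0.05, R.size)

(mu0_fit, rd_fit), _ = curve_fit(mu_exp, R, mu, p0=(21.0, 0.5))
print("disc scale radius: %.2f kpc" % rd_fit)
\end{verbatim}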
The stellar profile drops slightly outside 2~kpc and steepens mildly at radii below $\sim 200-300$~pc owing to the presence of a tiny bulge. We perform a profile decomposition of the surface brightness into two exponential profiles, one for the disc and one for the bulge, finding a similar value of $B/T \approx 0.03\%$ (i.e. fraction of the total light in the bulge component) across the three photometric bands. Figure \ref{fig_disc_edgeon} shows the edge-on evolution of the disc that forms in run PH. The disc mid-plane corresponds to $y=0$, though a smooth structure may be difficult to identify because the disc is highly perturbed in the vertical direction, showing a clumpy and discontinuous structure. We calculate the vertical density profile of the gas, azimuthally averaging the gas density in 5 linearly-spaced bins between 20~pc and 2~kpc from the centre of the disc. We find that the gas density drops by almost an order of magnitude beyond a typical distance from the disc mid-plane between 100~pc and 300~pc, with a slight tendency to increase at larger cylindrical radii (i.e. the disc flares at large radii), as mildly hinted at by Figure \ref{fig_disc_edgeon}. Indeed, we fit the vertical profile of the gas density with an exponential law, finding typical vertical scales $\approx 300$~pc. A similar result (even a slightly thicker disc, probably because of the different coupling between feedback energy and dense gas) is obtained when analysing the PH\_PF1 run with the reduced pressure floor, suggesting that the latter does not play a major role in setting the vertical structure of the disc. We repeat the same analysis for the stellar disc and we find that the latter is somewhat thinner, with exponential vertical scales typically $\approx 170-200$~pc. This is consistent with stars forming preferentially in the denser and thinner cold gas disc. The vertical temperature structure shows interesting features. The dense, clumpy gas that can be identified in the central disc by comparing with the density map is typically cold, with temperatures from $\sim 100$~K up to $\sim 8000$~K owing to metal cooling, and it is embedded in an atmosphere of hot gas at $\gtrsim 10^5$~K. The hot gas is typically outflowing vertically from the central disc, funnelled through clumps of colder gas, suggesting that it likely originates from episodes of supernova feedback. This gas is indeed polluted to metallicity $\lesssim Z_{\sun}$ and is injected into the circumgalactic medium. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./OUTFLOWS.pdf} \caption{Upper panel: mass-weighted probability distribution of the outflow velocity $V_{\rm out}$ at $z=6.5$ (blue solid line), $z=7.1$ (red dashed line), and $z=7.6$ (green dotted line). The arrows show the escape velocities at each $z$. The black dotted line shows ${\rm d}\mathcal{P} / {\rm d}V_{\rm out} \propto \exp(-V_{\rm out}/\tilde{V})$, with $\tilde{V} = 80$~km~s$^{-1}$. Lower panel: mass loading factor $\eta$ as a function of the asymptotic maximum rotation velocity. The red circles are the observational data from \citet{schroetter+15} and references therein, the green square is the average determination of \citet{gallerani+16} from the catalogue of $z \sim 5$ galaxies by \citet{capak+15}, and the blue line shows the results for run PH between $z=7.6$ and $z=6.5$. The blue shaded area spans the results associated with the different shells used to measure $\dot{M}_{\rm out}$ (see the text for details).
} \label{fig_outflows} \end{center} \end{figure} Figure \ref{fig_outflows} shows the probability distribution of the outflow velocities $V_{\rm out}$ (i.e. the radial velocity $v_{r} > 0$ of outflowing gas) of the gas particles within a spherical shell between $0.2$ and $0.3~R_{\rm vir}$ from the central galaxy (e.g. \citealt{muratov+15}). The particles are selected to have $\bmath{v} \cdot \hat{\bmath{r}} > 0$, where $\bmath{v}$ is the particle velocity (after removing the systemic velocity of the halo) and $\hat{\bmath{r}}$ is the unit vector along the direction from the centre of the halo to the particle position. The distribution is roughly exponential, i.e. $\propto \exp(- V_{\rm out} / \tilde{V})$, where $\tilde{V} \approx 80$~km~s$^{-1}$ at $z= 7.1-6.5$ ($\approx 60$~km~s$^{-1}$ at $z= 7.6$) is a typical scale value for the positive radial velocity. In particular, $\tilde{V} \approx \sigma_{r} \equiv \sqrt{\langle V_{\rm out}^2 \rangle - \langle V_{\rm out} \rangle^2}$ (where $\langle \cdot \rangle$ denotes the mass-weighted average), suggesting that the genuinely outflowing material populates the extended tails up to velocities as high as $200-300$~km~s$^{-1}$. However, the fraction of the outflowing mass at high velocity, more specifically with $V_{\rm out}$ larger than the escape velocity $V_{\rm esc}$, is typically low, oscillating between 1 and 5\%, i.e. most of the gas expelled from the disc will be recycled through the halo and will eventually join the central galaxy. Here, we define the escape velocity $V_{\rm esc}$ at the position $\bmath{x}$ of a particle as $V_{\rm esc} = \sqrt{2 |\phi(\bmath{x}) - \phi_{0}|}$, where $\phi$ is the local gravitational potential, and $\phi_0$ is the reference potential at $R_{\rm vir}$, calculated as the average gravitational potential of all particles in a thin spherical shell between $0.95~R_{\rm vir}$ and $R_{\rm vir}$. We have repeated this analysis within two additional spherical shells, (i) with an outer radius $0.2~R_{\rm vir}$ and a thickness of 1 physical kpc, and (ii) between 2 and 2.5 physical kpc, and we have found very similar results, within a factor of 2. We have also computed the mass loading factor $\eta \equiv \dot{M}_{\rm out} / \dot{M}_{\star}$, where $\dot{M}_{\rm out} = \Delta r^{-1} \sum_i m_{i} \bmath{v}_{i} \cdot \hat{\bmath{r}}_{i}$ is the mass outflow rate, the sum runs only over gas particles with mass $m_{i}$ within the considered shell and with $\bmath{v}_{i} \cdot \hat{\bmath{r}}_{i} > 0$, and $\Delta r$ is the radial thickness of the shell. The results between $z=7.6$ and $z=6.5$ are shown in the lower panel of Figure \ref{fig_outflows}, where we compare with the observational determinations of $\eta$ by \citet{schroetter+15} and references therein. They are plotted against $V_{\rm max}$, i.e. the asymptotic maximum of the rotation curve \citep{schroetter+15}. We repeat the same analysis as for Figure \ref{fig_rot_curve} to calculate $V_{\rm max}$ from the simulation data, whose ``uncertainty band'' in Figure \ref{fig_outflows} is associated with the measurements within different shells. We also show the recent determination of $\eta \approx 2$ (to which we add a generous uncertainty of 50\%) by \citet{gallerani+16}, which represents the average $\eta$ estimated for the sample of $z \sim 5$ and $M_{\star} \sim 10^{10}$~M$_{\sun}$ galaxies presented in \citet{capak+15}.
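The shell estimator for $\dot{M}_{\rm out}$ and $\eta$ is simple to implement; a minimal {\sc python} sketch (assuming particle arrays centred on the halo, with the systemic velocity removed, and consistent units; all names are illustrative) is:
\begin{verbatim}
import numpy as np

def mass_outflow_rate(pos, vel, mass, r_in, r_out):
    """Mdot_out = (1 / dr) * sum_i m_i (v_i . r_hat_i) over outflowing
    gas particles (v . r_hat > 0) in the shell r_in < r < r_out."""
    r = np.linalg.norm(pos, axis=1)
    v_r = np.einsum('ij,ij->i', vel, pos) / r    # v . r_hat
    sel = (r > r_in) & (r < r_out) & (v_r > 0.0)
    return np.sum(mass[sel] * v_r[sel]) / (r_out - r_in)

# Mass loading factor relative to the star formation rate, e.g.:
# eta = mass_outflow_rate(pos, vel, m, 0.2 * r_vir, 0.3 * r_vir) / sfr
\end{verbatim}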
For the \citet{gallerani+16} data point, we estimate $V_{\rm max}$ by using the stellar mass-halo mass relation of \citet{behroozi+13} to determine the virial velocity of the typical halo in which such galaxies are expected to reside at $z = 5$, with an uncertainty of 50~km~s$^{-1}$. We find $\eta \approx 0.5-1$ for PH, in fair agreement with the observational estimates. We have also repeated the analysis selecting particles with $\bmath{v}_{i} \cdot \hat{\bmath{r}}_{i} > \sigma_{r}$, finding values of $\eta$ lower by at most 40\%. We caution, though, that the observational data from \citet{schroetter+15} are a collection of measurements at much lower redshift than our simulation, namely between 0.1 and 0.8. However, it is noteworthy that the observational estimate at $z \sim 5$ by \citet{gallerani+16} is nonetheless consistent with the lower redshift data and in particular with local starbursts \citep{heckman+15}. Moreover, the agreement at face value is promising, given the scatter of more than an order of magnitude in $\eta$ among different successful theoretical/empirical models (see e.g. Fig. 10 of \citealt{schroetter+15}). \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./DENSITY_PDF.pdf} \caption{Mass-weighted probability density function of the gas density within the galactic disc at $z=7.1$. The black solid line shows the total distribution, while the blue, green, and red solid lines show the distribution of the ``cold'' ($T < 10^3$~K), ``warm'' ($5 \times 10^{4}~{\rm K} < T < 5 \times 10^{5}~{\rm K}$), and ``hot'' ($T > 5 \times 10^{5}$~K) gas, respectively. The dashed lines with the same colours show the log-normal best fit of each component. The thin grey line shows the composition of the log-normal best fits, renormalised by the fractional mass occupied by the three selected components \citep{robertson+08}; the composition describes the total density distribution fairly accurately. } \label{fig_den_pdf} \end{center} \end{figure} On the other hand, clumps and streams of cold gas are typically raining down radially on the galactic disc. Some of that gas is mildly polluted ($Z \sim 0.2~Z_{\sun}$) and typically lives in over-dense regions, compressed by the surrounding hot outflows, where the cooling time is short, likely because of the metals already seeded by previous stellar outflows \citep{costa+15}. However, some of the inflowing material has low metallicity, possibly coming from direct feeding through cold flows. We select gas particles at $z=6.5$ within a cubic box 6~physical kpc in size centred on the main galaxy (removing a slab 400~pc thick centred on the disc mid-plane) with temperature $T < 5 \times 10^{4}$~K, i.e. the gas that is (or will soon be) able to form stars. We trace them back through the snapshots to $z=8$, storing the time evolution of the density, temperature, and metallicity. We select the gas that is directly inflowing from cold flows as the gas whose temperature never exceeds $10^{5.5}$~K, finding that it represents $\sim 50-60\%$ of the mass of cold gas originally selected in the trial volume. A fraction $\sim 5-10\%$ of that gas is also accreted at nearly primordial composition, as its metallicity always remains below $0.1~Z_{\sun}$. We plan to devote a more detailed analysis to the gas inflow from larger scales, as well as to the recycling of the gas within the halo, in a forthcoming publication. \subsection{The properties of the disc turbulence}\label{sec_turbulence} The gaseous disc of the main galaxy is turbulent and multi-phase.
We show the mass-weighted probability density function of the gas within the disc at the representative redshift $z=7.1$ in Figure \ref{fig_den_pdf}. We select the gas within a cylinder of 2~physical~kpc radius and 500~pc thickness, centred on the disc mid-plane. The density distribution is not well described by a single log-normal, as generally found in simulations of isothermal turbulence (e.g. \citealt{padoan+97,price+10}). Instead, it peaks at a few M$_{\sun}$~pc$^{-3}$ ($\sim 100$~H~cm$^{-3}$), with an extended tail at low densities well below $10^{-2}$~M$_{\sun}$~pc$^{-3}$ ($\sim 0.2$~H~cm$^{-3}$) and a fast decline around $\sim 10$~M$_{\sun}$~pc$^{-3}$ ($\sim 250$~H~cm$^{-3}$). This more complicated shape is a natural consequence of mixing gas phases at different temperatures \citep{robertson+08}. We can identify three main, nearly-isothermal components\footnote{In the following, we label the different components with quotes in order to avoid confusion with the common phases of the interstellar medium \citep{ferriere+01}. In fact, their names are just labels for the temperature ranges used for the selection and, though similar to some of the phases of the interstellar medium, they do not refer directly to them.} characterised by an approximately log-normal distribution: a ``cold'' component at $T < 10^{3}$~K, a ``warm'' component with $5 \times 10^{4} < T/{\rm K} < 5 \times 10^{5}$, and a ``hot'' component at temperatures larger than $5 \times 10^{5}$~K. They are associated with gas at average densities of 100, 10, and 1~H~cm$^{-3}$, respectively. As a whole, they represent $65\%$ of the total gas in the disc, subdivided between ``cold'', ``warm'', and ``hot'' as $\approx60 \%$, $\approx 29 \%$, and $\approx 11\%$, respectively. The log-normal fits of the ``warm'' and ``cold'' components do not agree very well with the data at large densities. This is possibly due to the effect of the pressure floor, which limits the development of very high over-densities. Indeed, repeating the same analysis on the run PH\_PF1 we find a lower peak of the ``cold'' component, which instead extends up to densities a factor $\sim 2-3$ larger, up to about 20~M$_{\sun}$~pc$^{-3}$. In this case, the high density tail is nicely described by the log-normal distribution and it does not show the power-law behaviour expected for highly self-gravitating turbulent flows (e.g. \citealt{scalo+98,federrath+08,kritsuk+11}), at least at the densities of a few hundred H~cm$^{-3}$ that we are able to probe at the resolution of our simulation. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{./TURBULENCE_PLOT.pdf} \caption{Upper panel: 2-dimensional surface density power spectra of the gas in the galactic disc in a box $L_{\rm box} = 4$~kpc. Blue solid, red dashed, and green dotted lines refer to $z=6.5$, 7.1, and 7.6, respectively. Thick and thin lines are associated with runs PH and PH\_PF1, respectively. Lower panel: 3-dimensional velocity power spectra, obtained as the average over 3 boxes with $L_{\rm box} = 512$~pc (see the text for details). The grey bands show the minimum-maximum range among the boxes at each redshift for run PH. The line style is the same as above. The dashed lines show $P_{v} \propto k^{-5/3}$ and $\propto k^{-2}$ for visual guidance. In both panels, the vertical dotted lines mark the $k$ associated with the gravitational softening, while the grey shaded regions refer to $k < k_{\rm box}$ and $k > k_{\rm Nyq}$.
} \label{fig_turbulence} \end{center} \end{figure} We analyse the properties of the turbulence by computing the power spectrum of velocity and density fluctuations at different scales. Given any quantity $w$, its two-point correlation function within a volume $V$ is defined as: \begin{equation} \xi_{w}(\bmath{l}) = \frac{1}{V} \int_{V} w(\bmath{x} + \bmath{l})~w(\bmath{x})~{\rm d}^{3}\bmath{x}, \end{equation} which can be generalised to a vector quantity by means of the inner product $\bmath{w}(\bmath{x}) \cdot \bmath{w}(\bmath{x}+\bmath{l})$. The power spectrum of $w$ is the Fourier transform of $\xi_{w}$, which can be rearranged as: \begin{equation} p_{w} (\bmath{k}) = \frac{1}{(2 \pi)^{3/2} V} \left| \int_{V} w(\bmath{x})~e^{-i \bmath{k} \cdot \bmath{x}}~{\rm d}^{3}\bmath{x} \right|^{2}. \end{equation} We further assume isotropy, so that the power depends only on the modulus of the wavenumber $k = |\bmath{k}|$. Then, the power spectrum becomes: \begin{equation} P_{w}(k) = k^{2} \int_{4 \pi} p_{w}(k, \theta, \phi) \sin\theta~{\rm d}\theta~{\rm d}\phi, \end{equation} where $(\theta, \phi)$ are the spherical coordinates in $\bmath{k}$-space. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{./QTOOMRE.pdf} \caption{Radial profiles of the Toomre parameter $Q$ at $z=6.5$, 7.1, and 7.6. Solid, dashed, dot-dashed, and dotted lines refer to the total $Q_{\rm tot}$ (according to \citealt{romeo+11}), the gas $Q_{\rm g}$ without the contribution of the gas velocity dispersion, the gas $Q_{\rm g,turb}$ including the gas velocity dispersion, and the stellar $Q_{\star}$ for stars in the disc with $j_{z} / j_{\rm circ} > 0.5$ (see the text for additional details). The vertical grey region marks the gravitational softening, while the horizontal ones show the regions of instability and marginal stability. } \label{fig_qtoomre} \end{center} \end{figure*} We perform this calculation for the gas surface density and for the velocity. In the first case, we build a two-dimensional surface density map $\Sigma_{i, j}$ by projecting the SPH gas density field within a cube of $L_{\rm box} = 4$~physical~kpc per side centred on the main galaxy (oriented face-on as described in Section \ref{sec_disc}) on a grid of $512 \times 512$ square cells of side $\Delta x \approx 7.8$~pc, similar to the typical smoothing length within the disc, i.e. $\sim 10$~pc. Then, we use the Fast Fourier Transform to numerically calculate the two-dimensional power spectrum $P_{\Sigma}$, after zero-padding the boundaries of the grid in order to reduce the aliasing in Fourier space produced by its non-periodic and finitely sampled content. In the second case, we analyse the velocity fluctuations of the interstellar medium in the disc. First, we select three boxes within the disc with side $L_{\rm box} = 512$~physical~pc, at 1.5~kpc from the centre and at 120$\degr$ angular separation from each other, in order to avoid any correlation among them and to minimise the effect of differential rotation, since the rotation curve is roughly flat outside the central kpc (see Figure \ref{fig_rot_curve}). Each box typically contains from a few tens of thousands to a few hundreds of thousands of gas particles.
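In both cases the core numerical operation is the same: a Fourier transform of a zero-padded grid followed by isotropic binning in $k$. A minimal {\sc python} sketch for the two-dimensional surface-density case (assuming the projected map has already been produced; normalisation is omitted for brevity) is:
\begin{verbatim}
import numpy as np

def isotropic_power_spectrum_2d(field, dx, pad=2, n_bins=64):
    """|FFT|^2 of a zero-padded 2D map, averaged in annuli of
    constant |k| between k_box and the Nyquist wavenumber."""
    n = field.shape[0]
    grid = np.zeros((pad * n, pad * n))
    grid[:n, :n] = field                      # zero-padding
    power = np.abs(np.fft.fftn(grid)) ** 2
    k1d = 2.0 * np.pi * np.fft.fftfreq(pad * n, d=dx)
    kx, ky = np.meshgrid(k1d, k1d, indexing='ij')
    k_mod = np.hypot(kx, ky)
    bins = np.linspace(2.0 * np.pi / (n * dx), np.pi / dx, n_bins)
    p_k, _ = np.histogram(k_mod, bins=bins, weights=power)
    counts, _ = np.histogram(k_mod, bins=bins)
    return 0.5 * (bins[1:] + bins[:-1]), p_k / np.maximum(counts, 1)
\end{verbatim}
The velocity case proceeds analogously in three dimensions once the particle velocities have been interpolated on a grid, as described next.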
We subtract from each box the systemic velocity, evaluated as the mass-weighted gas velocity, and we then interpolate the SPH-averaged gas velocity on a cubic grid of $128\times 128\times 128$ cells using the {\sc tipgrid} code\footnote{{\sc tipgrid} was written by Joachim Stadel and it is available at \url{http://astrosim.net/code/doku.php?id=home:code:analysistools:misctools}.}. We then proceed as described above to calculate the velocity power spectrum $P_{v}$. The velocity power spectrum $P_{v}$ is normalised such that its integral over $k$ gives the squared total velocity dispersion $\sigma_{\rm g}^2$ of the gas in each box, while $P_{\Sigma}$ is normalised such that its integral over $k$ gives 1. In both cases, we compute the power spectrum between $k_{\rm box} = 2 \pi / L_{\rm box}$ and the Nyquist wavenumber $k_{\rm Nyq} = \pi / \Delta x$. The results of these calculations are shown in Figure \ref{fig_turbulence}. The power spectrum of the surface density broadly shows a double power-law behaviour, with a break at $k \sim 30$~kpc$^{-1}$, which corresponds to a scale length $\sim 200$~pc. This is roughly consistent with our previous estimate of the disc scale height, as generally found in simulations \citep{bournaud+10} as well as observations \citep{elmegreen+01}. This suggests a transition between predominantly two-dimensional and three-dimensional turbulence at low and high $k$ (i.e. large and small length scales), respectively. The power-law exponents oscillate significantly with redshift, ranging roughly between $-1.0$ and $-1.6$ at low $k$ and between $-2.3$ and $-2.7$ at high $k$, in both cases slightly shallower than the values found by \citet{bournaud+10}. Comparing the results with the run PH\_PF1, we find a similar scatter, but marginally different exponents (note, however, that the power spectra at $z=7.1$ are fairly similar between the two runs). Typically, the large-scale modes have slightly shallower slopes between $-0.6$ and $-1.2$, while the power at small scales decays faster with $k$, typically having steeper slopes around $-3$. Moreover, both the transition between the low- and high-$k$ branches and the latter branch itself are at somewhat higher normalisation compared with the reference run. These different behaviours imply that there is slightly more power in large-scale modes and at the transition between two- and three-dimensional modes at smaller scales than in the reference simulation, likely because the lower pressure support allows slightly larger over-densities to develop under the influence of self-gravity, though the gross structure of the disc (i.e. the disc thickness) remains roughly the same. At globally smaller scales, the velocity power spectra (lower panel of Figure \ref{fig_turbulence}) show much smaller differences in shape between the runs with different pressure floors, i.e. runs PH and PH\_PF1, though there are fluctuations of factors $\sim 2-3$ in the normalisations at different redshifts. This suggests that the development of turbulence is only mildly influenced by the presence of the pressure floor at any scale larger than the gravitational softening. The typical power-law slope measured between $k_{\rm box}$ and $k = 100$~kpc$^{-1}$ (i.e. at scales larger than the gravitational softening) is about $-1.6$, with little variation among runs and redshifts. This looks consistent with a Kolmogorov-like turbulence spectrum, i.e. that expected
for incompressible, subsonic turbulence; however, when we remove from the fits the first 2 modes, which are poorly sampled due to the box size, we obtain a slope close to $-2$, again with little scatter among runs and redshifts. Such a slope matches the Burgers law expected for compressible, supersonic turbulence (e.g. \citealt{federrath+13}). However, it is difficult to unequivocally discriminate between the two descriptions; in fact, the gas is a mixture of phases characterised by different temperatures and Mach numbers, since the density distribution of each box is similar to that shown in Figure \ref{fig_den_pdf}. We typically find transonic or mildly supersonic turbulence with Mach number $\sigma_{\rm g} / \langle c_{\rm s} \rangle \gtrsim 1$, where $\langle c_{\rm s} \rangle$ is the mean sound speed determined from the mass-weighted average temperature of the gas (i.e. averaged over the different phases), which is about $10^5$~K. Nonetheless, we argue that the gas within the disc might be slightly better described by a Burgers-like law because of its compressible nature and the larger mass fraction in cold and supersonic gas. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{./DISC_NOFB_COMPARISON.pdf} \caption{Gas surface density maps of the gaseous disc of the main galaxy at $z=7.1$ in runs PH (left column) and PH\_NF (right column). The projections are face-on (upper row) and edge-on (lower row); sizes are in physical units.} \label{fig_disc_nofb} \end{center} \end{figure*} \subsection{What shapes the interstellar medium at high redshift?} \label{sec_ism} Several physical processes may contribute to fostering turbulence in the interstellar medium. Among them, gravity and stellar feedback are often considered to be the dominant ones \citep[e.g.][]{gomez+02,joung+06,brunt+09,pan+15}. Gravity can trigger turbulent motions in a disc -- gravito-turbulence -- when the local cooling time is a factor of a few to several times the local orbital time. Although the exact transition is still a matter of debate \citep{gammie+01,meru+11a,meru+11b,paardekooper+11,lodato+11}, a gravito-turbulent state is typically characterised by a Toomre parameter $Q \approx 1.5-2$. Figure \ref{fig_qtoomre} shows the Toomre parameter of the disc in run PH at different redshifts. We measure the Toomre parameter within a disc 2.5 physical kpc in radius and 500 physical pc thick. The general definition of the Toomre parameter is: \begin{equation}\label{eq_toomre_q} Q = \frac{\kappa V}{A G \Sigma}, \end{equation} where $\kappa = \sqrt{2 (V_{\phi}/R)^2 (1 + {\rm d}\log V_{\phi} / {\rm d}\log R)}$ is the epicyclic frequency defined through the azimuthal velocity $V_{\phi}$ and the polar radius $R$, and $\Sigma$ is the surface density (either of the gas or the stars). The factor $A$ is $A_{\rm g} = \pi$ and $A_{\star} = 3.36$ for gas and stars, respectively. The velocity $V$ is the radial velocity dispersion $V = \sigma_{R}$ for stars, and the sound speed $V = c_{\rm s}$ for gas. If the gas is turbulent with a radial velocity dispersion $\sigma_{{\rm g,} R}$, the ``turbulent'' gas Toomre parameter $Q_{\rm g, turb}$ adopts the corrected velocity $V = \sqrt{c_{\rm s}^2 + \sigma_{{\rm g,} R}^2}$.
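Before applying the corrections described below, equation (\ref{eq_toomre_q}) can be evaluated in radial bins with a few lines of {\sc python} (a minimal sketch for the gas component, assuming precomputed radial profiles in consistent units; all names are illustrative):
\begin{verbatim}
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def toomre_q_gas(R, v_phi, Sigma, c_s, sigma_R=None):
    """Q_g = kappa * V / (pi * G * Sigma), with
    kappa = sqrt(2) * (v_phi / R) * sqrt(1 + dln(v_phi)/dln(R))
    and V = c_s (or sqrt(c_s^2 + sigma_R^2) for Q_g,turb).
    R in kpc, velocities in km/s, Sigma in M_sun / kpc^2."""
    dln = np.gradient(np.log(v_phi), np.log(R))
    kappa = np.sqrt(2.0) * (v_phi / R) * np.sqrt(1.0 + dln)
    V = c_s if sigma_R is None else np.hypot(c_s, sigma_R)
    return kappa * V / (np.pi * G * Sigma)
\end{verbatim}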
We correct the Toomre parameter given by equation (\ref{eq_toomre_q}) for finite disc thickness effects following \citet[][see also \citealt{romeo+94,romeo+13,inoue+16}]{romeo+11}; we multiply $Q$ by: \begin{equation} T = \left\{ \begin{array}{lc} 1 + 0.6 (\sigma_{z} / \sigma_{R})^2 & \sigma_{z}/\sigma_{R} < 1/2 \\ 0.8 + 0.7 (\sigma_{z} / \sigma_{R}) & \sigma_{z}/\sigma_{R} \geq 1/2 \\ \end{array}, \right. \end{equation} where $\sigma_{z}$ and $\sigma_{R}$ are the vertical and (polar) radial velocity dispersions, respectively, of either gas or stars. We generally measure the velocity dispersions as $\sigma = \sqrt{\langle v^2 \rangle - \langle v \rangle^2}$, where $\langle \cdot \rangle$ denotes the SPH average over the smoothing kernel of the appropriate velocity $v$. The Toomre parameter describes the stability of rotating discs. However, the stellar component has a tiny bulge at the centre, which is dispersion-dominated. Therefore, we calculate the ratio $\epsilon = j_z / j_{\rm circ}$, where $j_{z}$ is the $z$ component of the specific angular momentum of a particle computed in a reference frame centred on the galaxy, after having aligned the $xy$ plane with the galactic mid-plane. The maximum angular momentum of a particle at distance $r$ from the centre is the angular momentum on a circular orbit, namely $j_{\rm circ} = r V_{\rm circ}(r)$, where we estimate the circular velocity as $V_{\rm circ} \approx \sqrt{G M(<r) / r}$. In order to identify the stars that belong to the rotating disc, we select those with $\epsilon > 0.5$, i.e. the stars whose angular momentum perpendicular to the disc plane is close to maximal. Finally, we calculate the total Toomre parameter as: \begin{equation} Q_{\rm tot}^{-1} = \left\{ \begin{array}{lc} W Q_{\star}^{-1} + Q_{\rm g}^{-1} & Q_{\star} \geq Q_{\rm g} \\ Q_{\star}^{-1} + W Q_{\rm g}^{-1} & Q_{\star} < Q_{\rm g} \\ \end{array}, \right. \end{equation} where $W = 2 V_{\star} V_{\rm g} / (V_{\star}^2 + V_{\rm g}^2)$, with $V_{\rm g}$ and $V_{\star}$ the velocities adopted in equation (\ref{eq_toomre_q}) for the gas and the stars, respectively \citep{romeo+11}. Figure \ref{fig_qtoomre} shows that the galactic disc is overall stable to gravitational perturbations, with the Toomre parameter typically ranging from $\sim 4$ to $\gtrsim 10$ across the redshift interval $z= 6.5-7.6$, i.e. after the disc has re-formed. This is true for both the stellar and the gas components alone, and also when they are combined in $Q_{\rm tot}$, though the latter is slightly lower than the individual components. Such a high value of $Q_{\rm g, turb}$ likely indicates that the gaseous disc is not in a gravito-turbulent state. The only exception is at $z=6.5$, when the Toomre parameter of the gas component approaches 1.5-2 close to the centre, which in fact corresponds to a clumpy region in the gas, where over-densities might have been tidally enhanced by the close passage of the satellite galaxy that will eventually merge with the main one. We have checked that $Q_{\star}$ goes to $\lesssim 1$ within the inner 100-200~physical~pc if we do not select particles with $\epsilon > 0.5$, which is kinematically consistent with the presence of a tiny central bulge. Figure \ref{fig_qtoomre} also compares the Toomre parameter of the gas with and without the contribution of turbulence (i.e. $Q_{\rm g, turb}$ and $Q_{\rm g}$, respectively). When the effect of turbulent motions is included, the Toomre parameter is naturally larger.
Nonetheless, such an effect accounts for an increase in $Q$ of up to a factor of $\sim 2$. This implies that pressure support alone would be enough to prevent widespread fragmentation in the disc, because the average temperature $\sim 10^{5}$~K within the disc is such that the average sound speed of the gas is $\langle c_{\rm s} \rangle \sim \sigma_{\rm g} \sim 50$~km~s$^{-1}$, i.e. pressure support and non-thermal turbulence contribute similarly to the stability of the gaseous disc. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./VDISP_NOFB_COMPARISON.pdf} \caption{Comparison between the disc properties in runs PH (blue solid line), PH\_PF1 (red dashed line), and PH\_NF (green dotted line). From top to bottom: radial velocity dispersion of the gas, $V_{\phi} / \sigma_{\rm g}$ ratio of the gas, and turbulent Toomre parameter $Q_{\rm g, turb}$ of the gas. The vertical grey region common to all panels marks the gravitational softening of the gas. } \label{fig_vdisp_nofb} \end{center} \end{figure} The high value of the gas Toomre parameter and its mild sensitivity to turbulence suggest that (i) feedback may have a major role in regulating the stability of the disc, and (ii) the disc might not be in a gravito-turbulent state, i.e. feedback is also the main energy source for the gas turbulence. However, the latter point is hard to demonstrate because gravity can still contribute by accelerating free-falling cold clouds from the surroundings that eventually dissipate their kinetic energy by mixing with the denser gas of the disc. In order to test the impact of feedback on the structure of the interstellar medium, we have restarted the run PH without star formation or feedback from $z\approx 8$, i.e. run PH\_NF. Figure \ref{fig_disc_nofb} shows the comparison between the gaseous disc at $z=7.1$ in runs PH and PH\_NF. The two discs are visibly different. Under the effect of feedback, the disc in run PH is overall slightly more extended in radius and it has a lower surface density on average. The gas in run PH has larger density contrasts on small scales and a more flocculent structure than in run PH\_NF, which on the other hand shows a clear two-armed spiral, initially triggered by the quasi-resonant tidal interaction with a fly-by satellite \citep[e.g.][]{donghia+10}. The disc in run PH\_NF is also thinner than in run PH and it presents a central bulge-like structure that is not dispersed by star formation and feedback and that is also mildly elongated in a bar-like fashion. Figure \ref{fig_vdisp_nofb} quantifies the differences between the results of runs PH and PH\_NF (also comparing run PH\_PF1) at $z=7.1$. We compare the radial velocity dispersion of the gas $\sigma_{R}$, the ratio $V_{\phi}/\sigma_{\rm g}$ between the gas azimuthal velocity $V_{\phi}$ and the three-dimensional velocity dispersion $\sigma_{\rm g}$ ($V_{\phi}/\sigma_{\rm g}$ quantifies the rotational support of the gaseous disc), and $Q_{\rm g, turb}$. We note that the gas in run PH\_NF tends to have a slightly lower $\sigma_{R} \sim 25$~km~s$^{-1}$ than in both runs PH and PH\_PF1, except for a ``bump'' at $\sim 300-500$~pc where the rotating disc joins the central spheroid. Below this radius, $\sigma_{R}$ drops to a few km~s$^{-1}$ in run PH\_NF, while it remains almost constant in the other cases, likely because the central gaseous bulge in run PH\_NF is mostly pressure supported.
This suggests an overall similar amount of turbulence with and without stellar feedback\footnote{The similarity of gas velocity dispersion achieved with and without feedback in the presence of gravito-turbulence was pointed out in \citet{agertz+09}.}; however, other dynamical properties of the disc are very different among the runs. Without stellar feedback, the disc is thinner and denser, and consistently more rotationally supported than in the runs including feedback, as shown by $V_{\phi} / \sigma_{\rm g} \sim 3$, almost a factor of 2 larger than in the other cases. Moreover, while $Q_{\rm g, turb} \sim 10$ is almost constant across the disc in run PH, it drops to $\lesssim 2$ within $R \sim 1.5$~kpc when the feedback is absent. This is due to the combined effect of the larger surface density of the disc and of the lower local support provided by both turbulence and thermal pressure, since the gas temperature across the disc typically corresponds to $c_{\rm s} \sim 10$~km~s$^{-1}$ or lower when feedback is not included. Within the central 500~pc (i.e. the gaseous bulge), $Q_{\rm g, turb}$ drops well below unity in run PH\_NF, but it loses its physical significance there because the central region is not rotationally supported. The gas velocity dispersions are similar in the cases with and without feedback, but the stability properties of the discs are not. This suggests that a turbulent state with similar amplitude of turbulent motions can be achieved in different ways. When feedback is active, the disc settles into a ``hot and turbulent'' configuration with a high Toomre parameter that results from efficient gas heating inside and beyond the disc. Alternatively, when radiative cooling effectively counterbalances heating, the disc remains in a marginally unstable ``cold and turbulent'' state. While all these pieces of evidence do not unambiguously disentangle the relevance of stellar feedback and gravity as sources of turbulence, they do show that stellar feedback has a major impact in shaping the interstellar medium of such high-redshift galaxies. We argue that stellar feedback is possibly the main source of turbulent energy in this case. Indeed, only when we turn off stellar feedback does the galaxy quickly readjust to a new state that looks very similar to gravito-turbulence in terms of both the morphology and the structure of the disc, whereas when feedback is active, the gaseous disc remains globally Toomre stable over a longer timescale. This implies that supernova feedback affects the interstellar medium enough to prevent gravito-turbulence from taking place, which in turn indirectly suggests that feedback mainly powers the turbulent cascade in the gaseous disc. However, we caution that the interpretation of the differences between the runs with and without feedback is complicated by other aspects: (i) the PH\_NF run is restarted when the gas already has a notable amount of turbulent motion, hence its new dynamical state does not necessarily reflect only the onset of gravito-turbulence; (ii) the galaxy is still accreting both fresh material from larger scales and material recycled through the gaseous halo, which likely contributes additional turbulent motions when it joins the galactic disc \citep{klessen+10}. Regarding (i), though, it is noteworthy that the gas cools on timescales shorter than the orbital times when feedback is not active, which suggests that gravito-turbulence is needed to sustain the velocity dispersion in this new ``cold'' state of the disc.
In addition to that, the lack of stellar feedback makes PH\_NF grow more than PH owing to the lower ``resistance'' to gas inflows. As a consequence, the baryonic mass (in particular the gas component) within $0.1 R_{\rm vir}$ of PH\_NF is $\sim 2$ times larger than in PH, which leads to a steepening of the circular velocity curve within $\sim 1.5$~kpc; the latter peaks at $\sim 190$~km~s$^{-1}$ and then asymptotically declines to $\sim 160$~km~s$^{-1}$. This possibly stabilises the disc, as it increases $\kappa$ in the numerator of equation (\ref{eq_toomre_q}). At the same time, however, the lack of feedback increases the gas disc surface density through the enhancement of inflows and lowers the value of $V$, potentially decreasing $Q$. Since we observe a lower value of $Q$ in PH\_NF, this suggests that feedback mostly influences the structure of the disc by means of both internal and external effects, namely the contribution to local turbulence and the indirect redistribution of mass through the regulation of inflows, respectively. All these pieces of evidence are consistent with the possibility that feedback has a prominent role in shaping the disc turbulence in our simulations; nonetheless, this complex interplay between feedback and gravity prevents a clear identification of the ultimate cause of the turbulent cascade in the interstellar medium. Finally, we mention that, as described in Section \ref{sec_turbulence}, we have also computed the power spectrum of the density-weighted velocity $w = \rho^{1/3} v$ in run PH in order to distinguish between a solenoidal or compressive mode of energy injection in turbulence. We find slopes typically $\sim -1.8$ that would favour a dominant solenoidal mode, according to the results of \citet{federrath+13}. This looks consistent with recent work showing that supernova-driven turbulence is only mildly compressive, at least on the molecular cloud scale \citep{pan+15}, and it would support the interpretation that stellar feedback may be the main driver of turbulence, though it is not clear whether and how this estimate is degenerate with and sensitive to the multi-phase structure of the gas. \subsection{Mass flow through the disc} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./MDOT_PROFILE.pdf} \caption{Upper panel: radial mass flow through the gaseous disc. Blue solid, red dashed, and green dotted lines refer to $z=6.5$, 7.1, and 7.6, respectively. The shaded regions with the same colours mark the symmetric inflow-outflow estimated by equation (\ref{eq_mdot}). Negative values of $\dot{M}$ are associated with inflows. Lower panel: radial profile of $V_{\phi} / \sigma_{\rm g}$ at different redshifts. } \label{fig_mdot_profile} \end{center} \end{figure} The turbulence within the disc can drive radial motions and mass transfer through the galactic disc. The upper panel of Figure \ref{fig_mdot_profile} shows the radial mass flow through the galactic disc. This is measured within a disc of 2 kpc radius and 400 pc thickness centred on the disc mid-plane. For each radial bin of width $\Delta R$ in polar coordinates, we measure the mass flow as $\dot{M} = \Delta R^{-1} \sum_{j} m_{j} v_{j,R}$, where $m_j$ and $v_{j,R}$ are the mass and the radial mid-plane velocity of the $j$-th gas particle, respectively, and the sum extends over the particles within each radial bin. Negative values of $\dot{M}$ are associated with inflows of gas through the galactic disc.
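This binned estimator reduces to a weighted histogram; a minimal {\sc python} sketch (assuming cylindrical radii and mid-plane radial velocities of gas particles already selected in $|z|$; names are illustrative) is:
\begin{verbatim}
import numpy as np

def radial_mass_flow(R, v_R, mass, r_max=2.0, n_bins=40):
    """Mdot(R) = (1 / dR) * sum_j m_j v_R,j per radial bin;
    negative values correspond to inflows."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    dR = edges[1] - edges[0]
    flux, _ = np.histogram(R, bins=edges, weights=mass * v_R)
    return 0.5 * (edges[1:] + edges[:-1]), flux / dR
\end{verbatim}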
\begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{./ANGULAR_MASS_FLOW.pdf} \caption{Mollweide projections of the radial mass flow rate per unit steradian (negative and positive values represent inflow and outflow, respectively) at different scales and times. From top to bottom: redshift 6.5, 7.1, and 7.6, respectively. From left to right: mass flow rate in a spherical shell between 100 and 120 physical pc, between 500 and 550 physical pc, and between 1 and 1.2 physical kpc. The equator of the map corresponds to the disc mid-plane. } \label{fig_mdot_map} \end{center} \end{figure*} The figure shows that the gas is not steadily inflowing toward the central region over the time span of $\sim 160$~Myr between $z=7.6$ and $z=6.5$. Instead, $\dot{M}$ fluctuates from negative to positive values (i.e. from inflow to outflow) at different locations in the disc, with typical absolute values within $\sim 10$~M$_{\sun}$~yr$^{-1}$. However, the gas flows in more strongly at $z = 6.5$, up to about 30~M$_{\sun}$~yr$^{-1}$ between $R \approx 300$~pc and $\approx 2$~kpc, possibly because of the disturbance of the satellite galaxy visible in Figure \ref{fig_merger_tree}. We can roughly estimate the absolute value of $\dot{M}$ over the disc as: \begin{equation} \label{eq_mdot} \dot{M} \sim M(<R) / t_{\rm turb} \sim M(<R) \sigma_{R} / R, \end{equation} where $M(<R)$ is the enclosed mass and $t_{\rm turb} \sim R / \sigma_R$ is the turbulence crossing time, which also approximates the dissipation time of the turbulent kinetic energy if not continuously replenished \citep{maclow+99,elmegreen+00}. However, this estimate does not constrain the sign of $\dot{M}$, i.e. whether gas would preferentially inflow or outflow. The overall behaviour seems qualitatively consistent with mass transport due to turbulence induced by feedback and not gravito-turbulence, as discussed above. Indeed, \citet{goldbaum+15} have used controlled simulations of gravitationally unstable disc galaxies to show that gravito-turbulence would be able to sustain a net mass inflow over time through the disc plane because of the coherent torquing of the gas by persistent spiral arms. Specifically, they use models of Milky Way-like galaxies and they find inflows $\sim 1-2$~M$_{\sun}$~yr$^{-1}$. These inflow rates are lower in absolute value than ours, owing to the different conditions of the gas (i.e. different surface density, star formation rate surface density, velocity dispersion, etc.), but the main difference remains the steadiness of the inflows found by \citet{goldbaum+15} in gravito-turbulent disc models. When we average $\dot{M}(R)$ over time between $z=8.1$ and $z=6.5$, we find large positive and negative fluctuations around zero, hinting against a net and continuous mass inflow due to gravito-turbulence, consistently with the previous arguments. However, we note that we do not have enough time resolution in the dumped snapshots to firmly assess the convergence of the time-averaged $\dot{M}(R)$ around zero across the entire disc. Close to the centre, the gaseous disc becomes proportionally thicker, with an increasing aspect ratio $H / R \sim \sigma_{\rm g} / V_{\phi} \gtrsim 1$ at radii $R \lesssim 300-500$~pc, as shown in the lower panel of Figure \ref{fig_mdot_profile}. Therefore, three-dimensional turbulence close to the centre may also affect the angular distribution of moving matter and the direction of streaming gas that might eventually reach the galactic nucleus.
We show this in Figure \ref{fig_mdot_map} through Mollweide projections of the mass flow rate per unit steradian, ${\rm d}\dot{M} / {\rm d}\Omega$, at different redshifts and at different radial distances from the centre. Specifically, we select gas particles within spherical test shells of radius $r$ and thickness $\Delta r$. Then, we tessellate the sphere by means of the {\sc healpix} algorithm\footnote{For further information, see \url{http://healpix.sourceforge.net/}. We use the {\sc python} implementation {\sc healpy}, freely available at \url{https://github.com/healpy/healpy}.} and we calculate the mass flow through each angular pixel as ${\rm d}\dot{M} / {\rm d}\Omega = \Delta\Omega^{-1} \Delta r^{-1} \sum_j m_j v_{j, r}$, where $\Delta \Omega = 4 \pi / N$ is the solid angle of each of the $N$ equal pixels, and the sum extends over the particles within each pixel. The (instantaneous) mass flow is largely anisotropic on several scales. Most of the mass flow occurs through the disc plane, including both inflows and outflows at $\gtrsim 5$~M$_{\sun}$~yr$^{-1}$. On large scales $\sim 1$~kpc (of the same order as the disc size), significant inflows and outflows proceed through ``pockets'' of gas localised in solid angle from medium ($\sim 30\degr$) to high ($\gtrsim 60\degr$) latitudes, both above and below the disc plane. This is consistent with the large-scale behaviour of the gas triggered by the interplay of stellar feedback and gravity discussed in Section \ref{sec_disc}; indeed, most of the inflowing gas at latitudes far from the disc plane typically has a low mass-weighted mean temperature $\lesssim 10^4$~K, while the opposite is true for the outflowing gas, likely pushed away by supernova blast waves. At intermediate scales $\sim 500$~pc, the inflow-outflow episodes are mostly confined around the disc plane up to latitudes as high as $\sim 30-40\degr$, consistently with the typical thickness of the gaseous disc $\sim 100-300$~pc as seen on the scale of the test shell, while fewer streams of gas cross the test shell at higher latitudes. On even smaller scales $\sim 100$~pc, the test shell is almost entirely embedded in the thick gaseous disc. At this scale, the dynamics of the gas within the galactic disc is dominated by turbulent motion and not by rotation, as hinted by the lower panel of Figure \ref{fig_mdot_profile}. As expected, therefore, the mass flow on small scales is highly anisotropic, with well-defined inflow-outflow regions representing cross sections through the test shell of moving over-dense gas clouds at every latitude. Consistently with Figure \ref{fig_mdot_profile}, the integrated ${\rm d} \dot{M} / {\rm d} \Omega$'s shown in Figure \ref{fig_mdot_map} effectively decrease from larger to smaller scales; however, the value of the net mass flow differs between the two figures, because the measurement in Figure \ref{fig_mdot_profile} is integrated over a cylindrical shell selecting the flow along the disc plane, while that in Figure \ref{fig_mdot_map} represents the radial flow through a spherical shell. \section{Discussion and conclusions} \label{sec_4} In this paper we present the results from PonosHydro, a high-resolution, zoom-in cosmological simulation meant to model the early evolution of a present-day massive galaxy down to $z \sim 6$, whose global properties appear to be consistent with the available data for galaxies of similar stellar masses at those redshifts \citep{iye+06,bradley+12,watson+15}.
Specifically, we study the assembly of the galaxy during its first starburst phase, before the supermassive black hole can exert significant feedback and quench star formation. We focus on the properties of the interstellar medium and the transport of mass across the galactic disc, and study the conditions that determine the early evolution of the central regions of present-day massive galaxies. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./PHASE_DIAGRAM.pdf} \caption{Comparison of the phase diagram of run PH at $z=7.1$ (upper panel) and ErisMC at $z=3$ (lower panel). The colour bar shows the logarithm of the mass fraction per bin in the density-temperature plane. The red dotted line in the upper panel marks the density and temperature thresholds for star formation adopted in run PH. } \label{fig_phase_diagram} \end{center} \end{figure} Before we discuss the potential implications of our findings, we briefly comment on the possible shortcomings of our calculations. Our results may depend on the feedback model that we have used, and this could in principle quantitatively affect our conclusions, at least to some extent. Different feedback schemes change the local temperature of the gas after supernova explosions in different ways. The delayed-cooling blast wave feedback produces gas at $\sim 10^5$~K and at densities $\sim 10-100$~H~cm$^{-3}$ by injecting energy in the surroundings of the star-forming regions and preventing it from cooling. Before this gas expands adiabatically and gets blown away, it significantly contributes to the global stability of the galaxy against the onset of gravito-turbulence and fragmentation. While the contribution of this phase might be artificially enhanced by the feedback scheme, we argue that the physical conditions of run PH (in particular, the high specific star formation rate and the relatively small mass) make it more prone to form a warm and dense gas phase than local disc galaxies. Figure \ref{fig_phase_diagram} shows the density-temperature diagram of the gas within 3 and 10 physical kpc around the main galaxy in run PH at $z=7.1$ and ErisMC\footnote{ErisMC is a simulation of a Milky Way-like galaxy that adopts a sub-grid model very similar to ours and therefore allows us to compare the thermodynamics of the gas in different physical conditions, though with the same feedback scheme. For further details about ErisMC, see \citet{shen+12}. Unfortunately, a snapshot at redshift lower than 3, which might have strengthened our considerations by providing a more quiescent galaxy, is not available.} at $z=3$, respectively. Although both simulations show some gas around $10-100$~H~cm$^{-3}$ and $\sim 10^5$~K, it accounts for an order of magnitude lower mass fraction in the more quiescent ErisMC, which has an almost ten times smaller specific star formation rate \citep{shen+12}. Similar results have been obtained with different feedback models (see e.g. Fig. 11 of \citealt{hopkins+12b}, where the warm phase is more sub-dominant compared to the cold phase at $\sim 100$~H~cm$^{-3}$ in a Milky Way-like galaxy than in a high-$z$-like galaxy). Moreover, the latest generation of stronger feedback models, designed to capture well the relation between stellar mass and halo mass across cosmic scales and epochs, tend to produce lower density, thicker discs by driving more powerful and hot ($T \gtrsim 10^7$~K) gas outflows \citep{hopkins+14,keller+14}. Such galactic discs would likely be less gravitationally unstable and yet turbulent in the gas component (e.g.
\citealt{hopkins+12b,hopkins+12,mayer+16}). Therefore, we conclude that a different and stronger feedback model would likely lead to qualitatively similar conclusions, namely that the gaseous discs of typical star-forming $z \sim 6$ galaxies would be maintained turbulent and stable against gravitational fragmentation by feedback, although future tests with different feedback schemes could better assess potential differences. Two main processes have often been advocated in the literature to shape the global dynamics of the interstellar medium: gravitational instability and supernova feedback. In Section \ref{sec_3} we discussed the features of the gaseous disc of run PH in terms of star formation, outflows, turbulence, mass transport, and gravitational stability, arguing that feedback likely plays a dominant role in the early evolution of a typical $z \sim 6-7$ galaxy. This seems to be somewhat different from what is usually expected both at low ($z \sim 0$) and intermediate ($z \sim 2$) redshift. Recently, \citet{goldbaum+15,goldbaum+16} have thoroughly explored the relative role of gravity and stellar feedback with controlled simulations of present-day Milky Way-like galaxies. Consistently with previous results (e.g. \citealt{agertz+09,bournaud+10,agertz+15}), they find that stellar feedback is important to locally regulate star formation and to create a multi-phase interstellar medium, but the galaxy nonetheless settles to a gravito-turbulent state with a Toomre parameter $Q \sim 1$ that mostly controls the velocity dispersion and the mass transport through the disc. Those results match the observations of nearby spiral galaxies with low star formation rates (e.g. \citealt{tamburro+09,bagetakos+11}). They are conceptually similar to what has often been argued for massive $\sim 10^{11}$~M$_{\sun}$, gas-rich galaxies at $z \approx 2$, i.e. star-forming galaxies that have not yet been quenched and are likely the progenitors of the most massive quiescent galaxies at $z = 0$, undergoing so-called violent disc instability, i.e. gravitational instability that leads to the formation of massive star-forming clumps (e.g. \citealt{dekel+09,ceverino+10,mandelker+14,inoue+16}; but see also \citealt{hopkins+12,tamburello+15}). In this respect, massive discs at $z \approx 2$ would be the most extreme manifestation of the ``cold and turbulent'' regime that eventually leads to gravitational instability and fragmentation in the gas, since they are already as massive as the most massive discs in the local Universe but proportionally more gas rich, perhaps because they appear near the peak of the cosmic star formation history \citep{madau+14}. Motivated by this analogy, \citet{goldbaum+16} have proposed that gravitational instability is the dominant process setting mass transport and fuelling star formation over cosmic time. However, our results suggest that this might not be the case for typical $z \sim 6-7$ galaxies with stellar and gas mass $\gtrsim 10^{9}$~M$_{\sun}$, where the disc dynamics and mass transport seem to be significantly influenced by stellar feedback. Indeed, even in the favourable conditions of massive gaseous discs at $z \approx 2$, a phase of violent disc instability with its associated gravito-turbulence may or may not occur depending on how effective the feedback model is at heating the gas and generating mass-loaded outflows.
Recent work has shown that modern strong feedback models tend to suppress gravitational instability and fragmentation, and at the same time that blast wave feedback cannot suppress disc instability when conditions are favourable for its emergence \citep{mayer+16}, corroborating at least the qualitative distinction between the two cases. Interestingly, \citet{ceverino+16} recently reached analogous conclusions on the role of stellar feedback by looking at a galaxy with stellar mass $M_{\star} \sim 10^9$~M$_{\sun}$ comparable to ours, but at $z \sim 1$. They find a qualitatively similar evolution over time of the Toomre parameter and the $V_{\phi}/\sigma_{\rm g}$ ratio, although they use a different approach to model stellar feedback, i.e. non-thermal radiation pressure in addition to thermal energy injection without cooling shut-off. We thus argue that the dominant role of stellar feedback in the early evolutionary phase of massive galaxy progenitors is likely controlled by the combination of the high specific star formation rate ($\gtrsim 5$~Gyr$^{-1}$) and of the relatively low mass at $z > 5$ ($\sim 10^{9}$~M$_{\sun}$). The former favours the impact of stellar feedback on the interstellar medium, while the latter proportionally weakens the dynamical role of gravity in driving instabilities and eventually fragmentation. Such specific star formation rates are expected for galaxies on the main star-forming sequence at $z > 5$ (Figure \ref{fig_SFH}; \citealt{schreiber+15,tasca+15}); as our galaxy is consistent with the main sequence, this suggests that the ``hot and turbulent'' regime that we characterise here could be typical of star-forming galaxies at $z > 5$ with baryonic/stellar masses comparable to ours. These should be fairly typical galaxies, as recent surveys begin to find \citep{bradley+12,capak+15,watson+15}. In particular, recent ALMA observations by \citet{maiolino+15} tend to qualitatively support the idea that stellar feedback has a dominant role in the early assembly of normal star-forming galaxies at $z\sim 6- 7$. This has immediate observational implications, as we would predict a significant amount of warm/hot gas with temperature $5\times 10^{4} \lesssim T/{\rm K} \lesssim 5 \times 10^{5}$ inside and around the disc, possibly a fraction $\sim 0.1$ of the gas mass. Note that these temperatures are more akin to the circumgalactic medium distributed in the virial volume around galaxies at low redshift \citep{werk+13}, but in our case such gas would be inside or surrounding the galactic disc. Possible analogues in the local Universe may serve as a preliminary test bed for our predictions. Those might be low-mass starburst galaxies, such as the prototypical M82, which has a stellar mass and star formation rate rather similar to those of the main galaxy of run PH (with a factor of 3-4 lower specific star formation rate due to the higher stellar mass; e.g. \citealt{forster+03,greco+12}). While gas densities are expected to be lower at $z=0$, M82 might also host a significant fraction of warm/hot, turbulent gas in its disc, at least in the central kpc where the starburst is ongoing. This seems to be confirmed by observations (e.g. \citealt{griffiths+00}) and is at least in qualitative agreement with our results, since the gas temperatures and phases found in our simulations are somewhat dependent on the specific feedback model. Detailed characterisation of the warm/hot interstellar medium in low-mass starburst galaxies could thus provide useful constraints to test our scenario.
In this ``hot and turbulent'' regime, the mass transport is influenced by intense and clustered stellar feedback episodes. As a result, the gas flow through the disc is fluctuating and anisotropic, with no sustained coherent gas inflow within the disc. A coherent circumnuclear disc, which could provide a way to funnel gas down to the scales of the central accretion disc, is not clearly seen to form at $\sim 100$~pc scales (though this would be barely resolved at our resolution). This might have some implications for the feeding of a central massive black hole \citep{gabor+13,dubois+15}. On one hand, mean inflow rates could be small; on the other hand, episodic accretion events at high rates could occur through the infall of massive gas clouds, as we observe inflow rates that can occasionally peak at $\gtrsim 5$~M$_{\sun}$~yr$^{-1}$. Nonetheless, recent models show that, if super-Eddington accretion is assumed, even episodic accretion is enough for the rapid growth of central black holes (e.g. \citealt{lupi+16,pezzulli+16}). Accretion may also occur in an anisotropic way, with large fluctuations in the angular momentum of the accreting matter, which would have implications for the nature of the accretion disc itself, if any, and for the evolution of the spin of the central black hole. However, we defer additional speculations on the evolution of a massive black hole in such environments to a forthcoming investigation. As steady central gas inflows are not sustained, bulge/spheroid formation from dynamical and/or secular disc instabilities is unlikely to take place (e.g. \citealt{guedes+13}). Indeed, the disc of PonosHydro remains nearly bulgeless for the whole simulation (see Figure \ref{fig_rot_curve}). However, we know from lower resolution runs going to $z=0$ that the galaxy will develop a dominant spheroid at lower redshifts as it grows to become a massive early-type galaxy (Fiacconi et al., in preparation). Since the galaxy experiences several mergers at later times ($2 < z < 4$; \citealt{fiacconi+16}), it is likely that such mergers will be the dominant driver of spheroid growth \citep{fiacconi+15}. However, if the ``hot and turbulent'' regime characterises main sequence galaxies at $z > 5$, this would imply that bulge formation may occur after the first billion years of evolution, possibly post-dating the growth of the massive black hole at the centre (see also \citealt{dubois+15} and \citealt{habouzit+16}). Therefore, we predict that gas-rich star-forming discs at $z > 5$ should not host a significant bulge. The exploration described in this paper leads to interesting predictions about the early assembly of massive galaxies. However, our interpretations remain rather speculative, both from the theoretical and the observational point of view. On one hand, future simulations including different subgrid models are necessary to quantitatively assess the different nature of galaxies at low and high redshifts, also comparing the results from codes with intrinsically different treatments of hydrodynamics (e.g. \citealt{kim+16}). On the other hand, more detailed characterisations of high-redshift galaxies are starting to become available from current observational facilities (e.g. ALMA). However, only forthcoming observatories (e.g. JWST, E-ELT) will provide data deep enough to definitively test our predictions and, at the same time, to better guide the theoretical study of galaxies at the cosmic dawn.
\section*{Acknowledgements} We thank the anonymous Referee for constructive comments that helped us to improve the quality of the paper. We acknowledge useful discussions with Arif Babul, Rychard Bouwens, Nick Gnedin, Raffaella Schneider, Sijing Shen, and Debora Sijacki. We thank Oliver Hahn and the AGORA collaboration for help with the initial conditions of the simulations. The simulations have been run on the ZBOX4 cluster at the University of Zurich and on the Piz Dora cluster at CSCS, Lugano. We acknowledge the use of the {\sc python} package {\sc pynbody} (\citealt{pontzen+13}; publicly available at \url{https://github.com/pynbody/pynbody}) in our analysis for this paper. D.F. is supported by the Swiss National Science Foundation under grant no. 200021\_140645. D.F. also acknowledges support by ERC Starting Grant 638707 ``Black holes and their host galaxies: coevolution across cosmic time''. Support for this work was provided to P.M. by the NSF through grant AST-1229745, and by NASA through grant NNX12AF87G. P.M. also acknowledges a NASA contract supporting the WFIRST-EXPO Science Investigation Team (15-WFIRST15-0004), administered by GSFC, and thanks the Pr\'{e}fecture of the Ile-de-France Region for the award of a Blaise Pascal International Research Chair, managed by the Fondation de l'Ecole Normale Sup\'{e}rieure. \bibliographystyle{mnras}
\section{Motivation} The PIENU experiment at TRIUMF \cite{pienu} aims at a measurement of the branching ratio $R=\Gamma (\pi\rightarrow e\nu + \pi\rightarrow e\nu\gamma)/ \Gamma (\pi\rightarrow \mu\nu + \pi\rightarrow \mu\nu\gamma)$ with precision $<$0.1\%. The principal instrument used to measure positron energies from $\pi^{+} \rightarrow e^{+}\nu$ decays ($E_{e^{+}}=70$~MeV) and $\pi^{+} \rightarrow \mu^{+} \nu$ followed by $\mu^{+} \rightarrow e^{+} \nu \overline\nu$ decays ($E_{e^{+}}=0-53$~MeV) is a large single crystal NaI(Tl) detector \cite{bina}. Detailed knowledge of the crystal response is essential to reaching high precision, especially for determining the low energy tail response below 60~MeV \cite{triumf}. In the following, results of measurements of the response of the NaI(Tl) crystal to mono-energetic positron beams are presented along with Monte Carlo (MC) simulations including photonuclear reactions. \section{Experiment Setup} The 48~cm diameter, 48~cm long NaI(Tl) crystal \cite{bina} under study was surrounded by two adjacent rings of 97 pure CsI crystals \cite{BNLE787}. Each ring consisted of two layers of 8.5~cm thick, 25~cm long crystals. Positrons from the M13 beamline at TRIUMF \cite{m13} were injected into the NaI(Tl) crystal to study its response. The positrons were produced by 500~MeV protons from the TRIUMF cyclotron striking a 1~cm thick beryllium target. Downstream of the momentum-defining slits at the first focus, the M13 beamline is equipped with two more dipole magnets and two foci with slits before the final focus at the detector. The vacuum window was a 0.13~mm thick, 15~cm diameter Mylar foil. With this geometry, slit scattering and the vacuum window were expected to have a negligible effect on the low energy tail. The incoming beam was measured with a telescope (see fig.~\ref{setup}) consisting of 6 planes of wire chambers arranged in the orientation of X-U-V-X-U-V, where U(V) was at $60^{\circ}$($-60^{\circ}$) to the vertical direction, a plastic scintillator (5$\times$5~cm$^2$ area, 3.2~mm thickness), and the NaI(Tl) calorimeter. The beam momentum width and horizontal (vertical) size and divergence were 1.5\% in FWHM, 2~cm (1~cm), and $\pm$50~mrad ($\pm$90~mrad), respectively. The beam composition was 63\% $\pi^{+}$, 11\% $\mu^{+}$ and 26\% $e^{+}$. \begin{figure}[t] \resizebox{\columnwidth}{!}{\includegraphics{figure1.eps}} \caption{Schematic description of the experimental setup (not to scale). The beam comes from the right and impinges on the NaI(Tl) crystal surrounded by two rings of 97 CsI crystals. In front of the NaI(Tl), there are 6 planes of wire chambers and a plastic scintillator.} \label{setup} \end{figure} \section{Measurement and Results} A 70~MeV/c positron beam was injected into the center of the NaI(Tl) crystal. The beam timing with respect to the 23~MHz cyclotron radio frequency provided particle identification based on time-of-flight (TOF), together with the energy loss in the beam scintillator, allowing selection of positrons for studying the crystal response function. Events due to positrons from decays of muons previously stopped in the NaI(Tl) crystal were suppressed by requiring wire chamber hits, and using TOF and pileup cuts. Pion and muon contamination was reduced in the data to the 0.08\% level. The CsI crystals were used in veto mode to select events without shower leakage from the NaI(Tl) as well as for tagging events with delayed particle emission.
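As a rough illustration of this selection, the following {\sc python} sketch (back-of-the-envelope kinematics only; the 20~m flight path is an assumed value for illustration, not the actual M13 path length) computes the velocity and transit time of each beam species at 70~MeV/$c$:

\begin{verbatim}
import math

p = 70.0                     # beam momentum [MeV/c]
masses = {"e+": 0.511, "mu+": 105.66, "pi+": 139.57}  # [MeV/c^2]
L = 20.0                     # assumed flight path [m]; illustrative only
c = 2.998e8                  # speed of light [m/s]

for name, m in masses.items():
    E = math.hypot(p, m)     # total energy, sqrt(p^2 + m^2)
    beta = p / E
    tof = L / (beta * c) * 1e9
    print("%4s: beta = %.3f, TOF over %.0f m = %.1f ns" % (name, beta, L, tof))

# The resulting spreads (tens of ns between species), compared against
# the ~43 ns period of the 23 MHz cyclotron RF, are what allow particle
# identification from the beam timing.
\end{verbatim}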
Leakage from the NaI(Tl)'s downstream face was not detected but was minimized by the 19 radiation length thickness of the crystal. The resulting positron energy spectrum is shown in fig.~\ref{momscan} (dark shaded histogram). The main peak at 70~MeV has an asymmetrical shape, due primarily to shower leakage, and a width of 2.7\% (FWHM). Subtracting the calculated beam momentum width in quadrature gave a NaI(Tl) crystal resolution of approximately 2.2\% (FWHM). Besides the main peak at 70~MeV, there are two additional structures at 62 and 54~MeV. Studies were made to determine whether the additional peaks had an instrumental or physical origin. Using different settings of the momentum-defining and collimating slits, which enhanced or suppressed slit scattering, no effect on the positron energy spectrum was found, including on the relative intensity of the peaks. Also, different tunes of the beamline ({\em e.g.} different focusing) did not change the measured energy spectrum. The beam momentum was varied in order to observe the corresponding positions of the peaks. Fig.~\ref{momscan} also shows the spectra for 60 and 80~MeV/c beam momenta shifted and plotted on top of the reference histogram at the nominal momentum of 70~MeV/c. Signals from the CsI crystals were used to suppress the low energy tail due to shower leakage to enhance the second and third peaks. For all three beam momenta, the relative positions of the low energy peaks remained unchanged. The beam position dependence of the NaI(Tl) spectrum was also tested using wire chamber information, without finding any effect. Based on these tests, it is unlikely that the beam settings influence the appearance of the additional structures in the energy spectrum. In fig.~\ref{timing} (top), the deposited energy in the NaI(Tl) crystal is shown as a function of the CsI hit time. The horizontal band at the beam energy corresponds to accidental events, while the coincident ones from shower leakage are concentrated around 0~ns. There are delayed events in the low energy region that correspond to the second and third peaks. If delayed events between the vertical lines are selected, the shaded spectrum in fig.~\ref{timing} (bottom) is obtained. The first peak (at approximately 70~MeV) is consistent with accidental coincidences. The second and the third peaks were enhanced after the delayed coincidence requirement. These results are consistent with the hypothesis of neutrons escaping the NaI(Tl) and giving a delayed signal in the CsI. Moreover, the energy deficits of the second and third peaks are consistent with the separation energies for one ($E_{1n}=9.14$~MeV) and two neutrons ($E_{2n}=16.3$~MeV) emitted from $^{127}$I. Since only the first hit is plotted in fig.~\ref{timing}, the observed secondary peaks are not due to the slow component of the CsI pulse. The yield of the second peak is consistent with the 30\% solid angle and estimated 10\% detection efficiency of the CsI calorimeter for neutron capture. A delay of 100~ns is also consistent with the TOF of $<1$~MeV neutrons. To estimate the number of neutrons involved in the second and third peaks, two Gaussian functions on a background with an exponential shape were fitted to both histograms in fig.~\ref{timing} (bottom). The ratio $N_{2}$ ($N_{3}$) of the number of events in the second (third) peak before and after the delayed coincidence requirement is proportional to the product of the neutron detection efficiency and the number of neutrons involved, $n_{2}$ ($n_{3}$).
The quantity $N=N_{3}/N_{2} = n_{3}/n_{2}$ indicates the ratio of the neutron multiplicities, which was found to be $N=2.1 \pm 0.2$. Assuming that one neutron is involved in the second peak, this result suggests that the third peak arises when two neutrons escape from the crystal. The previous branching ratio experiment \cite{triumf} was not able to detect these peaks because of the poorer energy resolution of the NaI(Tl) crystal ($3-4$\% FWHM) employed. \begin{figure}[t] \resizebox{\columnwidth}{!}{\includegraphics{figure2.eps}} \caption{Normalized NaI(Tl) energy spectra for incident positron beam momenta 60, 70, and 80~MeV/c. The spectra were shifted and aligned to the peak at 70~MeV/c. Histograms are scaled differently for easier comparison. } \label{momscan} \end{figure} \begin{figure}[t] \resizebox{\columnwidth}{!}{\includegraphics{figure3.eps}} \caption{(Top) Deposited energy versus CsI hit timing. (Bottom) The shaded histogram represents events selected by the timing cut (between the lines) shown on the top figure.} \label{timing} \end{figure} \begin{figure}[t] \resizebox{\columnwidth}{!}{\includegraphics{figure4.eps}} \caption{Comparison between data (filled circles with error bars) and simulation. The simulation was performed with (light shaded) and without (dark shaded) hadronic reaction contributions. The histograms are normalized to the same area.} \label{mc} \end{figure} \begin{figure}[t] \resizebox{\columnwidth}{!}{\includegraphics{figure5.eps}} \caption{Simulation of the kinetic energy of the neutrons produced in (white histogram) and those that escaped from (shaded histogram) the NaI(Tl) crystal.} \label{nescaped} \end{figure} \section{Simulation} A MC simulation was developed, including all physics effects available in the GEANT4 package \cite{geant4,geant4-2}. In particular, photonuclear reactions with neutron(s) emission, scattering and absorption were taken into account using the QGSP\_BERT physics processes list. In fig.~\ref{mc}, the spectra obtained with the same detector settings and a monochromatic beam are shown. When only electromagnetic interactions are considered in the simulation (dark shaded histogram), the low energy tail shows no structure. When hadronic interactions are included, additional structures appear (light shaded histogram), which are similar to those observed in the data (filled circles). A closer look at the simulated data shows that photonuclear reactions followed by neutron escape from the crystal are indeed responsible for the additional peak structures. Positrons entering the crystal produce an electromagnetic shower. One or more photons of the shower can be absorbed by $^{127}$I nuclei. In the MC simulation, nuclear photoabsorption is generally followed by emission of neutrons (94\%), protons (4\%) or $\alpha$-particles (2\%). The kinetic energy and the separation energy of the neutron are not observed by the NaI(Tl) crystal if the neutron escapes. The second peak in the deposited energy spectrum starts at $E_{1n}$ below the beam energy, where this reaction channel opens. According to the MC, the third peak in the spectrum is due to the emission and escape of two neutrons. The neutrons can come from a single nucleus or from two separate ones (due to multiple photo-absorptions in the same shower). Both cases contribute to the third peak, which starts at an energy consistent with either the energy threshold of two neutron emission or twice the single separation energy $E_{1n}$.
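The onset energies just discussed, and the $\sim$100~ns delay mentioned earlier, follow from elementary kinematics; the short {\sc python} sketch below reproduces the arithmetic (the 0.3~m flight distance to the CsI ring is an assumed scale for illustration, not the measured geometry):

\begin{verbatim}
import math

E_beam = 70.0              # incident positron energy [MeV]
E_1n, E_2n = 9.14, 16.3    # 1n and 2n separation energies of 127-I [MeV]

# Peak onsets when the separation energy (plus any escaping kinetic
# energy, here neglected) is carried away with the neutron(s):
print("2nd peak onset: %.1f MeV (peak observed near 62 MeV)" % (E_beam - E_1n))
print("3rd peak onset: %.1f MeV (peak observed near 54 MeV)" % (E_beam - E_2n))

# Time of flight of a slow neutron out to the CsI ring:
m_n = 939.57               # neutron mass [MeV/c^2]
c = 3.0e8                  # speed of light [m/s]
L = 0.3                    # assumed flight distance [m]; illustrative only
for T in (1.0, 0.1):                  # neutron kinetic energy [MeV]
    v = c * math.sqrt(2.0 * T / m_n)  # non-relativistic speed
    print("T = %.1f MeV: TOF = %.0f ns" % (T, L / v * 1e9))
# Sub-MeV neutrons arrive tens to ~100 ns late, consistent with the
# observed ~100 ns delayed coincidences.
\end{verbatim}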
The distribution of the kinetic energy for escaping neutrons is shown in fig.~\ref{nescaped}. The white histogram represents the kinetic energy of the neutrons after nuclear emission, while the shaded histogram shows the kinetic energy after escape from the NaI(Tl) crystal. The difference between the two spectra is due to elastic and inelastic scattering reactions in the NaI(Tl) crystal. In fig.~\ref{elastic}, the correlation between the number of elastic scatterings and the neutron kinetic energy at production is shown. Figs.~\ref{nescaped} and \ref{elastic} suggest that, although the primary source of the second and third peaks is low energy neutron emission from photonuclear reactions, many neutron elastic scatterings significantly lower the escaping neutron kinetic energy, returning ``lost'' energy to the NaI(Tl) crystal. The agreement between simulation and experiment is not perfect. Given the high number of interactions that a neutron can experience in a large crystal, a small error in the models for elastic and inelastic scattering can be amplified. Moreover, in GEANT4 photonuclear reactions are parameterized on a limited data set of nuclides. \begin{figure}[!t] \resizebox{\columnwidth}{!}{\includegraphics{figure6.eps}} \caption{Simulation of the number of elastic scatterings as a function of the kinetic energy of the neutrons after escaping the nucleus.} \label{elastic} \end{figure} \section{Conclusions} The response of a large NaI(Tl) crystal to a positron beam of 70~MeV/c was investigated in preparation for the PIENU experiment at TRIUMF. Low energy structures were observed in the energy spectrum, and the mechanism for their origin was found to be consistent with neutron emission due to photo-absorption followed by neutron escape from the crystal. \section*{Acknowledgments} We wish to thank M.~Kovash (University of Kentucky) for providing us with his $\gamma$-ray spectrum measured with a similar NaI(Tl) crystal, A.~Sandorfi (Brookhaven National Laboratory) for useful comments and for arranging the loan of the NaI(Tl) crystal, and S.~Chan, C.~Lim and N.~Khan for the engineering and installation work of the detector. We are also grateful to Brookhaven National Laboratory for providing the NaI(Tl) and CsI crystals. This work was supported by the Natural Sciences and Engineering Research Council (NSERC) and the National Research Council of Canada through its contribution to TRIUMF. One of the authors (MB) has been supported by US National Science Foundation grant Phys-0553611.
\section{Introduction} \label{Sec:Int} The idea of detecting gravitational waves (GWs) by monitoring the arrival times of radio pulses from neutron stars (i.e., by \emph{pulsar timing}) was first proposed by Sazhin \cite{1978SvA....22...36S} and Detweiler \cite{1979ApJ...234.1100D}; its modern formulation by Hellings and Downs~\cite{1983ApJ...265L..39H} emphasizes the importance of searching for \emph{correlations} in the pulse arrival-time deviations among an \emph{array} of intrinsically stable millisecond pulsars. The last few years have seen a strong renewed interest in these searches, with the formation of three major pulsar timing programs: the European Pulsar Timing Array (EPTA, \cite{2011MNRAS.414.3117V}), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav, \cite{2013ApJ...762...94D}), and the Australian Parkes Pulsar Timing Array (\hbox{PPTA}, \cite{2013PASA...30...17M}), which have now joined into a global collaboration, the International Pulsar Timing Array (IPTA, \cite{2010CQGra..27h4013H}). The most promising \emph{known} sources of GWs for PTAs are in-spiraling supermassive black hole binaries (SMBHBs). Some estimates suggest that these will be detected by PTAs as soon as $\sim$2016--2020~\cite{2013MNRAS.433L...1S}. The first detection could plausibly identify the inspiral waves from an orbiting SMBHB (see, e.g., \cite{2009MNRAS.394.2255S,2010MNRAS.407..669Y,2013arXiv1307.4086S}), the burst waves that follow its coalescence \cite{2010MNRAS.401.2372V,2012ApJ...752...54C}, or a stochastic background from many SMBHBs (see, e.g., \cite{2003ApJ...583..616J,2013MNRAS.433L...1S}). Pulsar timing already provides the most stringent upper limit on $\Omega_\mathrm{GW} \equiv \rho_\mathrm{GW}/\rho_0$ (the ratio of the energy density in GWs to the closure density of the Universe), and is beginning to impact standard theories of hierarchical structure formation via constraints on the SMBH merger rate. In this article we explore the \emph{discovery} potential of PTAs. Our main motivation is to minimize the risk that current observing strategies and planned data-analysis pipelines artificially preclude the discovery of various types of sources. For instance, most pulsars in PTAs are currently observed with irregular cadences of $\sim 2\mbox{--}4$ weeks. The observational strategies for most pulsar timing arrays are currently optimized for sensitivity to the gravitational wave background (following the strategies determined by \cite{2005ApJ...625L.123J}). This is appropriate for GWs at the lowest observable frequencies (of order the inverse of the total observation time, $\sim 10^{-8}$ Hz), where PTAs are particularly sensitive. However, a search for GW bursts lasting (say) $10^5$\,s would clearly benefit from coordinated timing observations (using a few radio telescopes) that are repeated several times a day. Thus we address the following questions: \begin{itemize} \item Is there a strong motivation for increasing the observing cadence to improve our sensitivity to GWs with frequencies $\sim 10^{-6}\mbox{--}10^{-5}$ Hz? \item What constraints can we impose on the PTA discovery space based on simple energetic, statistical, and causality arguments? \end{itemize} In addressing the first question, an important issue that arises is whether, even if strong sources exist in this band, our sensitivity might be degraded by degeneracies between GW effects and small errors in the timing-model parameters of the monitored pulsars.
In addressing the second question, we are necessarily retracing some of the trails blazed by Zimmermann and Thorne~\cite{Zimmermann:1982wi} (hereinafter ZT82) in their classic paper, ``The gravitational waves that bathe the Earth: upper limits based on theorists' cherished beliefs.'' However there are important differences between our paper and theirs: \begin{itemize} \item ZT82 restricted attention to sources at $z \alt 3$, while we consider the case of very high-$z$ sources as well. \item Unlike ZT82, we include the ``memory effect'' among potential observables; its detection turns out to be especially promising in the high-$z$ case. \item ZT82 restricted attention to GWs in the frequency range $10^{-4} < f < 10^{4}\,$Hz (the band of interest for ground-based and space-based interferometers), while we focus on GWs with $f \alt 10^{-5} \,$Hz. (However, there are several instances for which the ZT82 estimates extend trivially to lower frequency; we will note these instances in our paper as they arise.) \end{itemize} This paper is organized as follows. In Sec.\ \ref{sec:determ} we describe a simple general framework for thinking about pulsar timing observations, and we characterize how the detection signal-to-noise ratio scales with quantities such as the number of pulsars surveyed, the timing accuracy provided by each pulsar observation, the observing cadence, and the total observation time. We also briefly review pulsar timing noise, with some emphasis on its red noise component. In Sec.\ \ref{sec:source-review} we summarize salient results regarding PTA searches for SMBHBs and cosmic strings, largely to provide points of comparison with possible unknown GW sources. In Sec.\ \ref{sec:degen} we demonstrate that the timing residual signatures of GWs in the $10^{-7.95}\mbox{--}10^{-4.5}\,$ Hz band are {\it not} degenerate with small errors in the pulsar parameters, except for very narrow frequency bands; had this been otherwise, there would have been little point in considering more fundamental constraints on possible sources in this band. In Secs.\ \ref{sec:z1} and \ref{sec:highredshift} we investigate what constraints on source strengths arise from fundamental considerations of energetics, statistics and causality. In Sec.~\ref{sec:galactic} we discuss how our estimates get modified for highly beamed sources, and for sources in our Galaxy. In Sec.\ \ref{sec:summ} we summarize our conclusions, listing some caveats. Regarding notation, we adopt units in which $G=c=1$. Also, the signal frequency $f$, observation time $T_\mathrm{obs}$ and signal duration $T_\mathrm{sig}$ all refer to time as measured in the observer's frame, at the Solar system barycenter.
\section{The PTA signal-to-noise ratio for GW signals of known shape} \label{sec:determ} \subsection{Signal-to-noise ratio in the presence of white noise} In the rest of this paper, we assume an idealized, general scaling law for the detection signal-to-noise ratio (SNR) of an individual GW source, as observed by a pulsar timing array: to wit, \begin{equation} \label{eq:snr} \mathrm{SNR}^2 = M N \left\langle \frac{\delta t_\mathrm{GW}^2}{ \delta t_\mathrm{noise}^2} \right\rangle, \end{equation} where \begin{itemize} \item $\delta t_\mathrm{GW}$ is the \emph{timing residual} due to GWs; \item $\delta t_\mathrm{noise}$ is the noise in the residuals, which includes contributions from the observatory, from pulse propagation, and from intrinsic pulsar processes; \item $\langle \cdots \rangle$ denotes the average over all pulsars in the PTA and over all observed pulses; \item $M$ is the number of pulsars in the PTA; and \item $N$ is the total number of observations for each pulsar. \end{itemize} In what follows, purely for simplicity, we will assume that the noise level is roughly the same across PTA pulsars and observations, so we make the replacement \begin{equation}\label{replace} \left\langle \frac{\delta t_\mathrm{GW}^2}{ \delta t_\mathrm{noise}^2} \right\rangle = \frac{\langle \delta t_\mathrm{GW}^2 \rangle}{\delta t_\mathrm{rms}^2} \, , \end{equation} with $\delta t_\mathrm{rms}$ a representative rms value for the noise. The term ``timing residual'' requires definition: it is the difference between the time of arrival (TOA) of a train of pulses \emph{observed} at the radio telescope and the TOA \emph{predicted} by the best-fitting \emph{timing model} for the pulsar. This deterministic model includes parameters (such as the sky position of and distance to the pulsar) that affect the propagation of signals to the observatory, as well as parameters (such as the pulsar period and its derivatives and, if needed, orbital elements for pulsars in binaries) that describe the intrinsic time evolution of the pulsar's emission. The pulses from millisecond pulsars are usually too weak to be observed individually, so the TOAs refer to \emph{integrated} pulse profiles obtained by ``folding'' the output of radiometers with the putative pulsar period over observations with durations of tens of minutes to an hour. Typically, such pulsar timing observations are repeated at intervals of two to four weeks, yielding sparse data sets; however, the individual observations are often run quasi-simultaneously at multiple receiving frequencies (typically one hour to two days apart, since the feeds need to be switched), yielding a set of TOAs at the same \emph{epoch}. See \cite{2008LRR....11....8L,2003LRR.....6....5S} and references therein for more detail. In analogy with other applications in GW data analysis \cite{maggiore2008}, our scaling for the SNR can be motivated by considering a ratio of \emph{likelihoods}: namely, the likelihood of the residual data $r_i$ (with $i$ indexing both epochs and pulsars) under the hypothesis that $r_i = g_i + n_i$, with $g_i$ describing a GW signal of known shape, and $n_i$ denoting noise; and the likelihood of the residuals under the noise-only hypothesis $r_i = n_i$.
For Gaussian noise, when the GW signal is really present, the likelihood ratio is \begin{equation} \label{eq:likes} \exp \, \{g_i (C^{-1})^{ij} g_j/2 + n_i (C^{-1})^{ij} g_j\} \end{equation} (summations implied), where $C_{ij} = \langle n_i n_j \rangle$ is the variance-covariance matrix for the noise. The first term in the exponent, which depends only on the GWs, is identified as $\mathrm{SNR}^2/2$, while the second term is a random variable with mean zero and variance (over noise realizations) equal to $\mathrm{SNR}^2$. This can be proved, e.g., by considering that Gaussian noise with covariance $C$ can be written as $\sqrt{C} \, \bar{n}$, with $\sqrt{C} \sqrt{C}^T = C$ the Cholesky decomposition of $C$, and with $\bar{n}$ a vector of uncorrelated, zero-mean/unit-variance Gaussian variables. Then \begin{eqnarray*} && \langle (n_i (C^{-1})^{ij} g_j) (n_l (C^{-1})^{lm} g_m) \rangle \\ &&\quad = (C^{-1})^{ij} g_j \sqrt{C}_i^k \langle \bar{n}_k \bar{n}_p \rangle \sqrt{C}_l^p (C^{-1})^{lm} g_m \\ &&\quad = (C^{-1})^{ij} C^{il} (C^{-1})^{lm} g_j g_m \\ &&\quad = (C^{-1})^{jm} g_j g_m~. \end{eqnarray*} Equation \eqref{eq:snr} follows immediately under the (strong) assumption that noise is uncorrelated and homogeneous among pulsars and epochs, so that it can be represented by $(C^{-1})^{ij} = \delta^{ij} / \delta t^2_\mathrm{rms}$. We are assuming also that the sampling of pulsars and epochs in the dataset is sufficiently broad and non-pathological that $\sum_i g^2_i \simeq M N \langle \delta t^2_\mathrm{GW} \rangle$; that is, that the sampling can effectively perform an average over time and pulsar sky position. If the noise is uncorrelated (i.e., \emph{white}), but not homogeneous, Eq.\ \eqref{eq:snr} still stands, provided that $\delta t^2_\mathrm{rms}$ can be taken to represent a suitable \emph{averaged} noise. Under these assumptions, Eq.\ \eqref{eq:snr} is remarkable in that the actual \emph{form} of the signal to be detected appears only through its variance $\langle \delta t^2_\mathrm{GW} \rangle$, and that the structure of observations appears only through their overall number $M \times N$ and rms noise $\delta t^2_\mathrm{rms}$. By contrast, one may have imagined that detecting (say) quasi-sinusoidal signals of high frequency $f_\mathrm{GW}$ would require rapid-cadence observations spaced by $\Delta t \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1/f_\mathrm{GW}$, according to the Nyquist theorem. However, that theorem is a statement about the \emph{reconstruction} of the whole of a function on the basis of a set of regularly spaced samples, but it does not apply to our case---computing the likelihood that a signal of known shape is present in the data \cite{2001AIPC..567....1B}. In effect, we are checking that the measured data are consistent with our postulated signal: for uncorrelated noise, it does not matter \emph{when} we check, but only \emph{how many times} we do it. \subsection{Relaxing the assumption of white noise}\label{sec:relax} There are two important considerations that challenge our assumption of white, uncorrelated noise. The first is that the residuals include a stochastic contribution due to the over-fitting of noise (and possible GWs) at the time of deriving the timing model. We discuss this further in Sec.\ \ref{sec:degen}, where we show empirically that the detection of quasi-sinusoidal signals at most frequencies would not be affected.
From a formal standpoint, van Haasteren and colleagues \cite{2009MNRAS.395.1005V} show that it is possible to \emph{marginalize} the likelihood over timing-model parameter errors $\delta \xi$ by replacing the inverse covariance in Eq.\ \eqref{eq:likes} with $C^{-1} - C^{-1} M (M^T C^{-1} M)^{-1} M^T C^{-1}$, where $M$ is the \emph{design matrix} for the timing model fit, so that the extra contribution to the residuals has the form $M \delta \xi$. (A similar strategy of ``projecting out'' parameter errors was employed earlier by Cutler and Harms~\cite{2006PhRvD..73d2001C}, in the context of removing residual noise from slightly incorrect GW foreground subtraction.) For uncorrelated noise, Eq.\ \eqref{eq:snr} is modified only by restricting the computation of $\langle \delta t^2_\mathrm{GW} \rangle$ to the GW components that are not absorbed away by the timing model (and this is indeed what we investigate in Sec.\ \ref{sec:degen}). The second important consideration (and for which the GW frequency \emph{does} matter) is the impact of correlated noise. The physically interesting case here is that of long-term correlations, which generate \emph{red} noise that is stronger at low frequencies. To understand the impact of red noise, we study a toy model in which the $N$ observations are organized in $P$ ``clumps'' of $Q$ TOAs taken at nearby times (with $N = P \times Q$), and where noise consists of two components: uncorrelated noise with variance $\sigma^2$ and noise with variance $\kappa^2$ that is completely correlated within clumps, and completely uncorrelated between clumps. (We use $\kappa$ since $\kappa \acute{o} \kappa \kappa \iota\nu o \varsigma$ is Greek for ``red.'') We consider a single pulsar, although the generalization to more is trivial. The resulting $C$ has the structure \begin{equation} C = \sigma^2 I + \kappa^2 \sum_{i=1}^{P} O_i, \end{equation} where each $O_i$ is a matrix that has ones for every component corresponding to a combination of samples in the same clump, and zeros everywhere else. Each $O_i$ can also be written as $u_i u_i^T$, where $u_i$ is a vector that has ones for the components in clump $i$, and zeros everywhere else. From the block structure of $C$ and the Woodbury lemma \cite{hager1989}, it follows that \begin{equation} C^{-1} = \sigma^{-2} I - \frac{\sigma^{-2}}{Q + \kappa^{-2}/\sigma^{-2}} \sum_{i=1}^{P} O_i. \end{equation} If the characteristic frequency of the GW signal is ``slower'' than the timescale of a clump (i.e., the time over which the $Q$ samples in a clump are collected), then the sum $\sum_i g^T O_i g \simeq P Q^2 \langle \delta t_\mathrm{GW}^2 \rangle$, because the same value of $g$ is being summed over and over in each clump. It follows that \begin{equation} \label{eq:snrmod} \mathrm{SNR}^2 = \frac{\langle \delta t_\mathrm{GW}^2 \rangle PQ}{\sigma^2 + Q \kappa^2} = \frac{\langle \delta t_\mathrm{GW}^2 \rangle P}{\sigma^2/Q + \kappa^2}; \end{equation} that is, the repeated observations in each clump average out the uncorrelated component of noise (as $\propto 1/\sqrt{Q}$), but not its correlated part. Increasing the number of observations in a clump provides diminishing returns as $\sigma^2/Q \rightarrow \kappa^2$.
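As a sanity check on Eq.\ \eqref{eq:snrmod}, the following {\sc python} sketch (a minimal numerical verification with arbitrarily chosen toy parameters) builds the clumped covariance matrix explicitly and compares $g^T C^{-1} g$ for a slowly varying signal against the closed-form expression:

\begin{verbatim}
import numpy as np

P, Q = 8, 5                    # number of clumps, samples per clump
sigma, kappa = 1.0, 0.5        # white and clump-correlated noise rms

# Covariance: white noise plus block-constant noise within each clump.
N = P * Q
C = sigma**2 * np.eye(N)
for i in range(P):
    u = np.zeros(N); u[i*Q:(i+1)*Q] = 1.0
    C += kappa**2 * np.outer(u, u)

# A signal that is constant within each clump (the "slow" GW limit):
g = np.repeat(np.random.randn(P), Q)

snr2_exact = g @ np.linalg.solve(C, g)
snr2_formula = np.mean(g**2) * P * Q / (sigma**2 + Q * kappa**2)
print(snr2_exact, snr2_formula)   # agree to machine precision
\end{verbatim}

The agreement is exact for a signal that is strictly constant within each clump, which is the limit assumed in the derivation above.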
Let us follow the other branch of our derivation: if the characteristic frequency of the GW signal is ``faster'' than the timescale of the clumps, then, barring special coincidences, $\sum_i g^T O_i g \simeq P Q \langle \delta t_\mathrm{GW}^2 \rangle$, and $\mathrm{SNR}^2$ reduces (modulo an $O[1/Q]$ correction) to the general expression \eqref{eq:snr}, with $N = PQ$. \subsection{Noise characteristics inferred from observational data}\label{sec:TN} In this section, we consider the characteristics of noise for real pulsars. Namely, to what extent is our analysis applicable to timing residuals from actual PTAs? For the \emph{radiometer} noise due to thermal effects in the receiving system, the assumption of no correlations (i.e., ``white'' noise) is well justified: for observations over a radio frequency bandwidth $\Delta\nu$, the correlation timescale is $(\Delta\nu)^{-1}$, so this noise contribution is effectively uncorrelated in time. Further, from thermodynamic considerations, the assumption of Gaussianity is also well justified. Pulsars can show correlated, red-spectrum fluctuations in their TOAs, and Cordes and Shannon \cite{2010arXiv1010.3785C} present a summary of various effects, ranging from intrinsic spin fluctuations to magnetospheric and propagation effects; see also \cite{2010ApJ...725.1607S}. These effects have spectral densities $\propto f^{-x}$, with $x$ typically $> 1$ and in some cases $> 4$. On timescales $\sim$ 5 years ($f \sim 10^{-8.2}\,\mathrm{Hz}$), the residuals appear to be dominated by white components (\citealt{2009MNRAS.400..951V,2013ApJ...762...94D}; see also Figs.\ 10 and 11 of \citealt{2013PASA...30...17M} for a visual representation of noise effects in PPTA pulsars). Even if $\sigma^2 \approx \kappa^2$ at frequencies $\sim 10^{-8.2}\,\mathrm{Hz}$, at higher frequencies ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 10^{-7}\,\mathrm{Hz}$), the variance from white processes will exceed that of any red processes with relatively shallow spectra ($x \approx 1$) by a factor of approximately 15; for red processes with steeper spectra ($x \approx 4$), the ratio will be even larger. A recent global experiment in which several telescopes observed PSR J1713+0747 over the course of 24 hours presents an opportunity to test the extent to which white-noise processes dominate potential red-noise processes. In our toy model, the red-noise component of the variance is amplified by the clump multiplicity $Q$ [Eq.\ \eqref{eq:snrmod}]. For more general observation schemes and red-noise processes, we may think of the number of clumps $P$ as $T_\mathrm{obs} / T_\mathrm{red}$, where $T_\mathrm{obs}$ is the total duration of observation, and $T_\mathrm{red}$ is the correlation timescale of the most significant red-noise process; then $Q \simeq N (T_\mathrm{red} / T_\mathrm{obs})$. For GW signals with frequency $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1/T_\mathrm{red}$, our toy model would then suggest that \begin{equation} \label{eq:snrmodtwo} \mathrm{SNR}^2 = \frac{\langle \delta t_\mathrm{GW}^2 \rangle}{\sigma^2/N + \kappa^2 (T_\mathrm{red}/T_\mathrm{obs})}; \end{equation} that is, the $1/\sqrt{N}$ averaging of noise becomes limited by red noise once $N \sim (\sigma^2/\kappa^2) \times (T_\mathrm{obs}/T_\mathrm{red})$---an interesting scaling in its own right. For GW signals with frequencies $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 1/T_\mathrm{red}$, the simpler scaling \eqref{eq:snr} applies.
In the remainder of this paper, we neglect the effects of red noise in the scaling of \hbox{SNR} and assume the expression of Eq.\ \eqref{eq:snr}. Our assumption is correct because one or more of the following circumstances will be true (or true \emph{enough}) in practice: \begin{itemize} \item The characteristic GW frequency of interest will be greater than $1/T_\mathrm{red}$ for the most significant red-noise component. \item For a majority of the pulsars in the \hbox{PTA}, the white-noise variance will exceed that of the most dominant red-noise process for the time scales of interest. \item The number of observations will not saturate the averaging of white noise with respect to sub-dominant red noise (i.e., in the ``clump'' picture, $\sigma^2/Q > \kappa^2$). \end{itemize} \section{Brief review of prospects for PTA searches for supermassive--black-hole binaries and cosmic strings} \label{sec:source-review} Here we collect a few salient points concerning PTA searches for SMBHBs and cosmic strings, mostly to provide points of comparison with the hypothetical sources we consider in the next sections. We refer the reader to the literature cited below for more details. \subsection{The detectability of GWs from supermassive black hole binaries} When two galaxies merge, the SMBHs at their centers are brought together by tidal friction from the surrounding stars and gas. It seems likely that their separation eventually shrinks to the point at which gravitational radiation emission dominates the inspiral, and the two SMBHs coalesce \cite{1980Natur.287..307B}. The GWs from all in-spiraling SMBHBs in the observable Universe contribute to a stochastic background of GWs with characteristic amplitude $h_c \sim h_\mathrm{rms} \sqrt{f}$ given by \begin{equation} \label{hc_smbh} h_c \approx A (f/f_0)^{-\beta} \end{equation} in the PTA band, where $\beta \approx {2/3}$ and $A$ is predicted to be in the range $5 \times 10^{-16}\mbox{--}5 \times 10^{-15}$ for $f_0 = 10^{-8}$ Hz \cite{2003ApJ...583..616J,2003ApJ...590..691W,2012arXiv1211.4590M,2013MNRAS.433L...1S}. Depending on the actual $A$, the first PTA detection of GWs is expected between 2016 and 2020 \cite{2013arXiv1305.3196S}. The background is expected to be dominated by binaries with chirp masses $M_c \equiv (m_1 m_2)^{3/5} (m_1 + m_2)^{-1/5} \sim 10^8 M_\odot$ at $z \alt 2$. At frequencies above $f \approx 10^{-8}$ Hz, sources are sparse enough that the central limit theorem does not apply, so the distribution is significantly non-Gaussian and a few of the brightest sources would appear above the background. Thus, the first PTA discovery could either be an individual strong (and possibly nearby) source, or the full background. \subsection{The detectability of GWs from cosmic strings} There are several mechanisms by which an observable network of cosmic (super)strings could have formed in the early Universe~\cite{2007arXiv0707.0888P}. Simulations have shown that string networks rapidly approach an attractor: the distribution of straight strings and loops in a Hubble volume becomes independent of initial conditions. The network properties {\it do} depend on two fundamental parameters: the string tension $\mu$ and the string reconnection probability $p$. The size of string loops at their birth should in principle be derivable from $\mu$ and $p$, but the studies are difficult and different simulations have produced very different answers.
Therefore most astrophysical analyses today assume that the size of loops at their birth can be parametrized as $\alpha \, H^{-1}(z)$, where $H^{-1}(z)$ is the Hubble scale when the loop is ``born,'' and where $\alpha$ is treated as a third unknown parameter. We refer the reader to~\cite{1997stgr.proc....3A,2007arXiv0707.0888P} for nice reviews. To make matters more complicated, Polchinski has argued that the distribution of loop size at birth is actually bimodal, with both relatively large and small loops being produced at the same epoch~\cite{2008PhRvD..77l3528D}. Regarding the string tension $\mu$, physically motivated values range over at least six orders of magnitude: $10^{-12} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \mu \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10^{-6}$. Once formed, string loops oscillate and therefore lose energy and shrink due to GW emission. These waves form a stochastic GW background. In addition to this approximately Gaussian background, the cusps and kinks that form on the string loops emit highly beamed GW bursts~\cite{2001PhRvD..64f4008D,2005PhRvD..71f3510D}. Depending on the string parameters, PTAs could discover the stochastic background, the individual bursts, both or neither. The current limit on $\Omega_\mathrm{GW}(f)$ from pulsar timing is $\Omega_\mathrm{GW}(f\sim1\,{\rm yr}^{-1}) \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1\times 10^{-8}$~\cite{2011MNRAS.414.3117V}, corresponding to a limit on the string tension of $G\mu\leq4.0\times10^{-9}$. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{absorption-params.pdf} \caption{GW power absorbed by fitting for various pulsar parameters as a function of GW frequency, for pulsar J0613$-$0200. ``PSD ratio'' refers to the pre-fit power spectral density value for the given frequency, divided by its post-fit value. All simulated GWs were sinusoids at the given GW frequency. For each panel, only the indicated parameters were used for fitting, while the other parameters were held fixed at the values given in \cite{2013PASA...30...17M}. At high frequencies, only narrow features are evident (mostly due to fitting of the pulsar's binary motions), but low-frequency GW signals are significantly absorbed by standard fitting parameters. \label{fig:frange}} \end{figure*} Note that the above limits assume that cosmic strings are uniformly distributed within the Universe. However, DePies and Hogan pointed out that for $\mu \alt 10^{-12}$, the recoil of string loops due to GW emission is sufficiently small that their center-of-mass velocity drops to the point at which they cluster around galactic halos. If so, at the Earth the GW signal from strings is dominated by loops within our own Galactic halo. As shown more generally in Sec.\ \ref{sec:galactic}, the result is to increase the estimate of the SNR from strings by a factor $\sim 10$, compared to the estimate based on a uniform distribution (of course, for the same $\mu$, $\alpha$, and $\Omega_{\mathrm{GW}}$). \subsection{Current constraints on $\Omega_\mathrm{GW}(f)$} As mentioned, the current limit on $\Omega_\mathrm{GW}(f)$ from pulsar timing is $\Omega_\mathrm{GW}(f\sim1\,{\rm yr}^{-1}) \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1\times 10^{-8}$~\cite{2011MNRAS.414.3117V}.
By comparison, the limit from first-generation ground-based interferometers is $\Omega_\mathrm{GW}(f\sim 100\,\mathrm{Hz}) < 6.9\times 10^{-6}$ \cite{2009Natur.460..990A}. From Big Bang nucleosynthesis, we know also that any GW stochastic background that existed already when the Universe was three minutes old satisfies $\Omega_\mathrm{GW} < 1.5 \times 10^{-5}$ today~\cite{2008PhRvD..78d3531B}. Combined measurements of CMB angular power spectra (which are sensitive to lensing by a stochastic GW background) with matter power spectra also yield $\Omega_\mathrm{GW} \alt 10^{-5}$ today, but this method is sensitive to any GWs produced before recombination at $z \approx 1100$~\cite{2006PhRvL..97b1301S}. For GWs generated in the low-$z$ universe, combining results from Planck, WMAP, SDSS, and $H_0$ measurements gives the limit $\Omega_\mathrm{GW} \alt 6 \times 10^{-3}$~\cite{2013arXiv1307.0615A}. \section{Spectral absorption effects from pulsar timing-model fitting} \label{sec:degen} The best knowledge of pulsar parameters comes from the iterative observation and refinement of a timing model, which predicts the times of arrival of all the pulses as a function of all relevant parameters, such as the period and period derivatives of the pulsar's intrinsic spin; the position, proper motion, and parallax of the pulsar; and possibly parameters that describe the motion of the pulsar in a binary system. Depending on the cadence and total time of observation, and on the shape and duration of the GWs, the effects of the GWs on pulse arrival times may correlate with the effects of changing the pulsar parameters, so the GW power may be partly or entirely absorbed by the parameter-fitting process (see, e.g., the study of the effect of a GW background on pulsar timing parameter estimation \cite{2011MNRAS.417.2318E}). As a specific study of this effect, here we investigate the absorption of sinusoidal GWs to demonstrate frequency-dependent signal loss to pulsar parameter fitting. To do this, we use the {\sc Tempo2} software suite \citep{2006MNRAS.369..655H} to simulate a set of timing residuals for pulsar J0613$-$0200 \cite{2013PASA...30...17M}, as observed with the Parkes observatory. We generate one TOA every other day for $T_\mathrm{obs} =$ 1,000 days, at a random time compatible with the pulsar being visible from the observatory, and we add a white-noise component with rms amplitude of 100 ns. Into these simulated residuals we inject sinusoidal GWs from a circular SMBHB located at $\mathrm{RA} = \mathrm{Dec} = 0$, varying the GW frequency $f$ between $10^{-7.95}$ and $10^{-4.5}$ Hz (corresponding to GW periods of $\sim$ 1,000 days to eight hours), and setting $h_{+} = h_{\times} = 10^{-3} \, (f/\mathrm{Hz})$, so that the SNR is fixed. For each GW frequency, we measure the power spectral density of the relevant frequency component before the timing-model fit and after seven different types of fit: a fit against the full set of parameters, and individual fits for pulsar frequency, frequency derivative, position, proper motion, parallax, and binary period. Figure~\ref{fig:frange} shows the ratio of the power spectral densities in each case, as a function of the source GW frequency. In effect, we are showing the \emph{absorption spectrum} of sinusoidal GWs, as filtered by the timing-model fit.
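The essence of this procedure can be illustrated without {\sc Tempo2}. The following {\sc python} sketch is a schematic stand-in for the full simulation (it uses an idealized design matrix with only spin-down and annual terms, rather than the complete J0613$-$0200 timing model, and arbitrary amplitudes): it projects an injected sinusoid onto a least-squares timing-model fit and reports the fraction of signal power absorbed:

\begin{verbatim}
import numpy as np

T = 1000 * 86400.0                           # 1,000-day span [s]
t = np.sort(np.random.uniform(0.0, T, 500))  # irregular TOA epochs
yr = 3.156e7                                 # one year [s]
x = t / T                                    # normalized time (conditioning)

# Idealized design matrix: phase offset, spin frequency, frequency
# derivative (quadratic in time), plus annual sine/cosine terms that
# mimic position/proper-motion fitting.
M = np.column_stack([np.ones_like(x), x, x**2,
                     np.sin(2*np.pi*t/yr), np.cos(2*np.pi*t/yr)])

def absorbed(f_gw):
    g = np.sin(2*np.pi*f_gw*t + 0.3)         # injected sinusoidal residual
    coef, *_ = np.linalg.lstsq(M, g, rcond=None)
    r = g - M @ coef                         # post-fit residual
    return 1.0 - np.sum(r**2) / np.sum(g**2)

for f in (1.0/T, 1.0/yr, 1e-7, 1e-6):
    print("f = %.2e Hz: %2.0f%% of power absorbed" % (f, 100*absorbed(f)))
# Long-period signals and f ~ 1/yr are strongly absorbed; higher-frequency
# sinusoids pass through the fit nearly untouched.
\end{verbatim}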
Above $10^{-7}$\,Hz, $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 95\%$ of the signal is preserved even in a full parameter fit, with only narrow absorption features. It is clear that most of these features are specific to this pulsar's binary orbit (and its harmonics), and would not appear at the same frequencies for other pulsars in a PTA. However, absorption features originating from non-binary parameters will occur in all pulsars. Specifically, absorption at $f = 1 / \mathrm{year}$ (corresponding to pulsar position/proper motion) and $1 / (6\,\mathrm{months})$ (corresponding to parallax) can result in up to 100\% loss of the GW signal. Similarly, as the GW period approaches the total duration of pulsar observations, fitting the pulsar spin frequency and frequency derivative results in significant signal absorption. The sensitivity to GWs at these lower frequencies would be better in a longer data set (see, e.g., the low-frequency sensitivity curves in \cite{2010MNRAS.407..669Y}). At high frequencies, only two narrow absorption features may be common across a PTA: these correspond to the observing cadence (here at $1/(2\,{\rm days})=10^{-5.238}$ Hz), and to the sidereal day (at $1/(23.934 \, \mathrm{hr})\simeq10^{-4.935}$ Hz). The former can be avoided with higher-cadence or irregular observations, but the latter reflects the limitations of using a single observatory, which can only observe a source while it is above the horizon. In our simulation we have chosen random observation times within the window of coverage, but more structured observing cycles can engender even deeper features. However, this feature can be avoided for a circumpolar target that never sets. To summarize, our example study suggests that for a majority of PTA pulsars a high-frequency GW signal will be well preserved through the standard timing-model fitting process, save for narrow features at roughly the observing cadence and the sidereal day. GWs at frequencies close to either (1\,year)$^{-1}$ or (6\,months)$^{-1}$ will be significantly impacted, as will GWs with periods approaching the longest-duration pulsar observations. \section{Discovery space for sources in the low-redshift Universe, $z \alt \emph{O}(1)$} \label{sec:z1} In this section we begin to characterize the PTA discovery space for the case of sources in the low-redshift Universe, by which we mean $z \alt \emph{O}(1)$. We imagine that there is \emph{some} heretofore undiscovered GW source, and we ask what it would take for it to be detectable via pulsar timing. We consider separately the case of \emph{modeled signals} (for sources already conceived by theorists, so that a parameterized waveform model can be used in a matched-filtering search), the case of \emph{unmodeled bursts}, and the case of the \emph{gravitational memory} effect from modeled sources. We will assume that the GW sources are distributed isotropically and that we do not occupy a preferred location in space \emph{and} time with respect to them---that is, we assume that the Earth is not improbably close (spatially) to one of the sources, and that the sources have been emitting GWs for a significant fraction of the last $10^{10}$ years. We parametrize our projections in terms of the energy density $\Omega_\mathrm{GW}$. Because we consider sources in the low-redshift Universe, in what follows we ignore redshift effects. Nevertheless, our results at $z \sim 1$ match smoothly onto our results for high-$z$ sources in Sec.\ \ref{sec:highredshift}.
\subsection{Discovery space for modeled GW signals in the low-redshift Universe} \label{sec:gwsig} As we established in Sec.\ \ref{sec:determ}, the SNR of modeled GW signals as observed by a PTA is \begin{equation} \label{eq:snr2} \begin{aligned} \mathrm{SNR}^2 &= \frac{\langle \delta t_\mathrm{GW}^2 \rangle}{\delta t_\mathrm{rms}^2} M N \\ &= \frac{\langle \delta t_\mathrm{GW}^2 \rangle}{\delta t_\mathrm{rms}^2} M p \min \{T_\mathrm{sig},T_\mathrm{obs}\}, \end{aligned} \end{equation} where $\langle \delta t_\mathrm{GW}^2 \rangle$ and $\delta t_\mathrm{rms}^2$ are the mean-square-averaged timing residuals due to GWs and measurement/pulsar noise; $M$ is the number of pulsars in the array; and $N$ is the number of times each pulsar is observed, which we rewrite in terms of the cadence of observation $p$ (e.g., 1/day), the total duration of observation $T_\mathrm{obs}$ (e.g., 3 years), and the typical duration of the GW signal $T_\mathrm{sig}$. For a sinusoidal GW signal of frequency $f$ and rms amplitude at Earth $h = \sqrt{h_+^2 + h_\times^2}$, the root-mean-square timing residual averages\footnote{To derive Eq.\ \eqref{eq:dt} we compute the Estabrook--Wahlquist \cite{1975GReGr...6..439E} fractional Doppler response (for the pulsar ``Earth term'' alone) to a sinusoidal GW given by $h_+(t) + i h_\times(t) = (h/\sqrt{2}) \exp 2 \pi i f t$, take the antiderivative to obtain the corresponding pulse-time delay, square and average over time, sky position, and polarization angle. This specification of $h_+(t)$ and $h_\times(t)$ is more restrictive than necessary; the calculation is valid whenever $h_+$ and $h_\times$ are uncorrelated and have the same rms amplitude $h/\sqrt{2}$.} to \begin{equation} \label{eq:dt} \bar{\delta} t_\mathrm{GW} \equiv \sqrt{\langle \delta t_\mathrm{GW}^2 \rangle} = \frac{1}{4\sqrt{3}\pi} \frac{h}{f} \simeq \frac{1}{20} \frac{h}{f}. \end{equation} Furthermore, the average rate at which the source radiates energy in GWs is $\dot{E} = (\pi^2/2) h^2 f^2 d^2 \simeq 5 h^2 f^2 d^2$ \cite[Eq.\ (1.160)]{maggiore2008}, where $d$ is the distance to the source (recall that we have set $G = c = 1$ throughout). The GW energy density from sources of this kind is \begin{equation} \Omega_\mathrm{GW} \rho_0 \simeq (\dot{E} T_\mathrm{sig})(R_4 \tau_0), \end{equation} where $R_4$ is the spacetime rate--density of sources, and $\tau_0 \sim 10^{10}$ yr is the current age of the Universe. Approximating the closure density $\rho_0 = 3 H^2/{8 \pi}$ as $\tau_0^{-2}/10$ (since $\tau_0 \simeq H^{-1}$) and rewriting $R_4 \equiv (V_R T_R)^{-1}$ in terms of a fiducial volume $V_R$ and the mean time $T_R$ between events in that volume, we can re-express the expected GW-induced timing residual as \begin{equation} \label{eq:dtOM} \bar{\delta} t_\mathrm{GW} \simeq \frac{1}{150} f^{-2}\, d^{-1}\, \bigg(\frac{\Omega_\mathrm{GW} V_R T_R}{\tau_0^3\, T_\mathrm{sig}}\bigg)^{1/2} \, . \end{equation} We estimate the distance to the closest source that would be observed over time $T_\mathrm{obs}$ by setting \begin{equation} \frac{4}{3} \pi \, d^3 \max \{T_\mathrm{obs},T_\mathrm{sig}\} R_4 = 1 \end{equation} (where the maximum accounts for the persistence of multiple emitting sources if $T_\mathrm{sig} > T_\mathrm{obs}$), whence \begin{equation} \label{eq:dnear} d_\mathrm{near} \simeq \left[ \frac{3}{4\pi} \frac{V_R T_R}{T_\mathrm{sig}} \min\{1,T_\mathrm{sig}/T_\mathrm{obs}\} \right]^{1/3}.
\end{equation} Folding together all the results of this section, we obtain the corresponding largest SNR that would be observed as \begin{equation} \label{eq:snrnear} \begin{aligned} \mathrm{SNR}^2_\mathrm{near} & \simeq 10^{-4} \frac{\Omega_\mathrm{GW}}{f^4 \tau_0^3}\, \frac{M p\, T_\mathrm{obs}}{\delta t^{2}_\mathrm{rms}} \bigg[ \frac{V_R\, T_R}{T_\mathrm{sig}} {\rm min}\{1,\frac{T_\mathrm{sig}}{T_\mathrm{obs}}\}\bigg]^{1/3} \\ & \simeq 2 \times 10^{-4} \frac{\Omega_\mathrm{GW}}{f^4 \tau_0^3}\, \frac{M p\, T_\mathrm{obs}}{\delta t^{2}_\mathrm{rms}} d_\mathrm{near}. \end{aligned} \end{equation} We would now like to determine how large an $\mathrm{SNR}_\mathrm{near}$ we could expect for a given $\Omega_\mathrm{GW}$, and for given observational parameters $M$, $p$, $T_\mathrm{obs}$, and $\delta t^2_\mathrm{rms}$. This amounts to maximizing $\mathrm{SNR}_\mathrm{near}$ with respect to the GW-source parameters $V_R$, $T_R$, and $T_\mathrm{sig}$; since these appear together in $d_\mathrm{near}$, we obtain the largest possible $\mathrm{SNR}_\mathrm{near}$ by setting $d_\mathrm{near} = \tau_0$, the Hubble distance. We dare not place the GW source farther, since we are considering the ``local'' Universe and neglecting redshift effects. Note that the scaling $\mathrm{SNR}^2_\mathrm{near} \propto d_\mathrm{near}$ of Eq.\ \eqref{eq:snrnear} seems counterintuitive, since we would naively think of the strongest sources as the closest. However, while the squared GW strain $h^2$ at the Earth scales as $1/d^2$, it also scales with the total energy $\Delta E$ that is emitted by each source, and that is ``available'' to each source given a fixed $\Omega_\mathrm{GW}$; this $\Delta E$ increases with decreasing source density, and is proportional to $d_\mathrm{near}^3$. This surprising intermediate result was already shown in ZT82 \cite{Zimmermann:1982wi}. We can now plug in fiducial values for the observational parameters (as well as $\tau_0 = 3 \times 10^{17}$ s), arriving at \begin{equation} \label{eq:fid1} \begin{aligned} \max\{\mathrm{SNR} \}& \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10 \bigg(\frac{f}{10^{-7} \, {\rm Hz}}\bigg)^{\!\!-2} \bigg[\frac{\Omega_\mathrm{GW}}{10^{-5} }\bigg]^{1/2} \times \mathrm{obs.} \\ & \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.03 \bigg(\frac{f}{10^{-5} \, {\rm Hz}}\bigg)^{\!\!-2} \bigg[\frac{\Omega_\mathrm{GW}}{10^{-2}}\bigg]^{1/2} \times \mathrm{obs.}, \end{aligned} \end{equation} where \begin{equation} \mathrm{obs.} = \bigg[\frac{\delta t_\mathrm{rms}}{10^{-7} \, \mathrm{s}}\bigg]^{-1} \bigg[\frac{M \, p\, T_\mathrm{obs} }{10^{4}}\bigg]^{1/2}. \end{equation} While we derived these constraints for the case of small $z$, we shall see below that they become even stronger for high-$z$ sources. The fiducial values for $f$ and $\Omega_\mathrm{GW}$ in the second row of Eq.\ \eqref{eq:fid1} are motivated by our original question, whether PTA searches should be extended to frequencies as high as $\sim 10^{-5}$ Hz. The current upper limit (from structure formation) on the energy density of hot dark matter is $\Omega_\mathrm{HDM} \alt 1.5 \times 10^{-2}$ (at $95\%$ confidence) \textcolor{red}{[need ref]}; this limit applies also to $\Omega_\mathrm{GW}$. Our conclusion is that a PTA detection of GWs at frequencies above $\sim 3 \times 10^{-5}$\,Hz should be considered very unlikely on fundamental grounds.
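The fiducial numbers in Eq.\ \eqref{eq:fid1} follow from simple arithmetic, which the short {\sc python} evaluation below reproduces (using the same fiducial observational parameters as in the text):

\begin{verbatim}
import math

tau0 = 3e17            # Hubble time [s]
dt_rms = 1e-7          # rms timing noise [s]
MpT = 1e4              # M * p * T_obs (number of timing measurements)

def snr_max(f, omega_gw):
    # max SNR^2 ~ 2e-4 * Omega_GW / (f^4 tau0^2) * M p T_obs / dt_rms^2,
    # i.e. the SNR_near expression evaluated at d_near = tau0.
    return math.sqrt(2e-4 * omega_gw / (f**4 * tau0**2) * MpT / dt_rms**2)

print("f = 1e-7 Hz, Omega = 1e-5: SNR ~ %.0f" % snr_max(1e-7, 1e-5))
print("f = 1e-5 Hz, Omega = 1e-2: SNR ~ %.2f" % snr_max(1e-5, 1e-2))
# ~15 and ~0.05: consistent, to order-of-magnitude rounding, with the
# quoted bounds of 10 and 0.03.
\end{verbatim}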
\subsection{Discovery space for unmodeled GW bursts in the low-redshift Universe} Quite simply, a burst is a signal with $T_\mathrm{sig} \sim 1/f$. Since it contains only $\sim 1$ cycle, its instantaneous SNR (i.e., GW amplitude over rms noise) is the same as its matched-filtering SNR, up to a factor of order one (after the data have been filtered to remove the noise that is outside the band of interest). Now, whatever the $T_\mathrm{sig}$, we can still adjust $R_4$ so that $d_\mathrm{near}$, as defined in Eq.\ \eqref{eq:dnear}, is equal to $\tau_0$. For instance, if $T_\mathrm{sig} \alt T_\mathrm{obs}$ and $T_\mathrm{obs} = 10^8$ s, this requires one burst every $10^8$ s within a Hubble volume. So for this rate, the instantaneous SNR is the same as given in Eq.\ \eqref{eq:fid1} for modeled signals. This seems promising: since bursts require no model for their detection, they could potentially reveal phenomena that nobody has yet thought of. At the same time, their detection would require the utmost care in excluding instrumental and astrophysical artifacts. \subsection{Discovery space for GW memory in the low-redshift Universe} GWs with memory (for a recent review see \cite{2010CQGra..27h4036F}) cause a permanent deformation -- a ``memory'' of the passage of the waves -- in the configuration of an idealized GW detector. They are emitted by systems with unbound components (linear memory), and by \emph{generic} GW sources because of the contribution of the energy--momentum of their ``standard'' GWs to the changing radiative moments of the source (nonlinear memory). Several authors have discussed the detectability of the GW memory effect by PTAs for known source types, especially merging massive--black-hole binaries~\cite{2009MNRAS.400L..38S,2010MNRAS.401.2372V,2010MNRAS.402..417P,2012ApJ...752...54C}. Here we consider the effect from the point of view of the PTA discovery space, and again we ask in which region of parameter space PTAs could discover previously unimagined sources by way of their GW memory. For a source at distance $d$ from Earth, which emits a total energy $\Delta E$ in GWs, the amplitude of the memory effect is \cite{2010MNRAS.401.2372V} \begin{equation} \label{eq:mem1} h_\mathrm{mem} \sim \frac{\alpha}{\sqrt{6}} \frac{\Delta E}{d}, \end{equation} where $\alpha < 1$ is a factor determined by the asphericity of the energy outflow (more precisely, by its quadrupolar part). In addition to the general assumptions we made in Sec.\ \ref{sec:z1}, we will postulate that most of the GW energy from any one source is emitted on a timescale $T_\mathrm{sig} \ll T_\mathrm{obs}$. Then we can approximate the ``turn on'' of the memory effect as a step function, and the effect on any pulsar is to create a timing residual that grows linearly in time: \begin{equation} \label{eq:mem2} \delta t_\mathrm{GW} \sim \theta(t-t_0)\, h_\mathrm{mem}\, (t - t_0), \end{equation} where the memory passes over the Earth at time $t_0$. In any single pulsar, a linear-in-time residual can be interpreted simply as a glitch causing an instantaneous change in the pulsar frequency.
However, all pulsars in the PTA would show such apparent glitches at the same time, with relative amplitudes following a simple pattern on the sky~\cite{2010MNRAS.401.2372V} determined by four parameters (the sky-location angles and two amplitudes that specify the transverse--trace-free part of the metric), so in principle the detection problem is well posed. The corresponding PTA SNR is \cite{2010MNRAS.401.2372V} \begin{equation} \label{eq:mem-snr} \mathrm{SNR}_\mathrm{mem} \sim \frac{1}{20} \frac{h_\mathrm{mem} \, T_\mathrm{obs}}{\delta t_\mathrm{rms}} (M p\, T_\mathrm{obs})^{1/2} \, , \end{equation} where the factor $1/20$ accounts for the facts that $\delta t_\mathrm{GW}$ will typically be zero for a significant fraction of $T_\mathrm{obs}$, and that a large part of the effect will be absorbed in the pulsars' timing models (and especially by the fitting of their periods and period derivatives) \cite{2010MNRAS.401.2372V}. Note that the GW memory effect is essentially a low-frequency effect: the SNR can build up precisely because the memory remains constant, but nonzero, for a sizable fraction of $T_\mathrm{obs}$. Thus there is no particular advantage to high-cadence timing measurements. We can now derive how large an SNR we may expect for detecting GW memory for a given $\Omega_\mathrm{GW}$ and for given observational parameters. As above, we relate the energy density in GWs to the energy emitted in GW bursts, \begin{equation} \label{eq:mem4} \Omega_{\mathrm{GW}} \sim 10 \, \Delta E \, R_4\, \tau^3_0; \end{equation} we then combine Eqs.\ \eqref{eq:mem1} and \eqref{eq:mem-snr}, and set $d = d_\mathrm{near} = (4 \pi R_4\, T_{\mathrm{obs}} / 3)^{-1/3}$, to obtain \begin{equation} \label{eq:mem5} \mathrm{SNR}_\mathrm{mem,near} \sim \frac{\alpha}{300} \, \frac{\Omega_{\mathrm{GW}}}{\tau_0^3} R_4^{-2/3} T_{\mathrm{obs}}^{4/3} \frac{(M p \, T_{\mathrm{obs}})^{1/2}}{\delta t_{\mathrm{rms}}}. \end{equation} Again, for fixed $\Omega_\mathrm{GW}$ we maximize $\mathrm{SNR}_\mathrm{mem,near}$ by taking $R_4$ to be as small as possible, subject to the constraint that $d_\mathrm{near} < \tau_0$, leading to \begin{equation} \label{eq:snrmaxmem} \begin{aligned} \max\{\mathrm{SNR}_\mathrm{mem} \}& \simeq \frac{\alpha}{500} \, \frac{\Omega_{\mathrm{GW}}}{\tau_0} T^2_{\mathrm{obs}} \frac{(M p \, T_{\mathrm{obs}})^{1/2}}{\delta t_\mathrm{rms}} \\ & \simeq 700 \, \alpha \, \bigg[\frac{\Omega_{\mathrm{GW}}}{10^{-2}}\bigg] \bigg[\frac{T_{\mathrm{obs}}}{10^8 \, \mathrm{s}}\bigg]^2 \times \mathrm{obs.} \end{aligned} \end{equation} Comparing Eqs.~(\ref{eq:fid1}) and (\ref{eq:snrmaxmem}), we see that, depending on the values of $\Omega_{\mathrm{GW}}$ and $f$, the memory effect from a burst could be much more detectable than its direct waves.
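As a quick numerical check of Eq.\ \eqref{eq:snrmaxmem}, the following Python fragment (a minimal sketch under the same fiducial assumptions; the function name and defaults are ours) evaluates the maximal memory SNR:
\begin{verbatim}
import math

TAU0 = 3e17  # fiducial present age of the Universe [s]

def max_snr_memory(omega_gw, t_obs, alpha=1.0, dt_rms=1e-7, MpT=1e4):
    """Largest expected memory SNR in the local Universe,
    Eq. (snrmaxmem): (alpha/500)(Omega_GW/tau_0) T_obs^2
                     * sqrt(M p T_obs) / dt_rms."""
    return (alpha / 500.0) * (omega_gw / TAU0) * t_obs**2 \
        * math.sqrt(MpT) / dt_rms

# Fiducial case quoted in Eq. (snrmaxmem): Omega_GW = 1e-2, T_obs = 1e8 s
print(max_snr_memory(1e-2, 1e8))  # ~700
\end{verbatim}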
More generally, comparing $\mathrm{SNR}_\mathrm{mem}$ with the \emph{direct} SNR for the same source, as given by Eqs.\ \eqref{eq:snr2} and \eqref{eq:dt}, we find \begin{equation} \label{eq:ratio_lowz} \begin{aligned} \frac{\mathrm{SNR}_\mathrm{mem}}{\mathrm{SNR}_\mathrm{dir}} &= \frac{1/20}{1/20} \frac{h_\mathrm{mem} T_\mathrm{obs}}{h / f} \left(\frac{M p \, T_\mathrm{obs}}{M p \, T_\mathrm{sig}}\right)^{1/2} \\ &= \frac{1/20}{1/20} \frac{\pi^2 \alpha}{2 \sqrt{6}} h \, f^3 \, d \, T_\mathrm{sig} \, T_\mathrm{obs} \left(\frac{M p \, T_\mathrm{obs}}{M p \,T_\mathrm{sig}}\right)^{1/2} \\ &= \frac{1}{1/20} \frac{\pi^2 \alpha}{2 \sqrt{6}} \, \mathrm{SNR}_\mathrm{dir} \, \frac{\delta t_\mathrm{rms} \, T_\mathrm{sig}^{-4} \, T_\mathrm{obs}^2 \, d}{(M p \,T_\mathrm{obs})^{1/2}} \\ &\simeq 10^6 \, \alpha \, \mathrm{SNR}_\mathrm{dir} \bigg[\frac{T_\mathrm{sig}}{10^5 \, \mathrm{s}} \bigg]^{-4} \bigg[\frac{T_\mathrm{obs}}{10^8 \, \mathrm{s}} \bigg]^{2} \bigg[\frac{d}{\tau_0} \bigg] \big[\mathrm{obs.} \big]^{-1}, \end{aligned} \end{equation} where in the second row we have used the fact that $h_\mathrm{mem} \simeq (\alpha / \sqrt{6}) (\Delta E/d)$ and $\Delta E = (\pi^2/2) h^2 f^2 d^2 \times T_\mathrm{sig}$; in the third row we have substituted $\mathrm{SNR}_\mathrm{dir} = (1/20) (h/f) \delta t_\mathrm{rms}^{-1} (M p T_\mathrm{sig})^{1/2}$ and replaced $f$ with $1/T_\mathrm{sig}$, as appropriate for a burst signal. Since $\mathrm{SNR}_\mathrm{dir}$ scales as $h_{\mathrm{dir}}$ while $\mathrm{SNR}_\mathrm{mem}$ scales as $h^2_{\mathrm{dir}}$, the memory effect dominates for a sufficiently strong signal. \section{Discovery space at high redshift} \label{sec:highredshift} In the previous section we considered sources at small $z$, neglecting cosmological effects. We now turn to sources in the early Universe, at $z \gg 1$. Again, we will assume that the sources are isotropically distributed and that the Earth does not have a preferred location in spacetime with respect to them. The especially interesting cases are GW memory, which we discuss first, and unmodeled bursts. We begin by collecting a few useful formulas. Let $t \equiv \int a^{-1}(\tau) \,\mathrm{d}\tau$ be the conformal time coordinate, in terms of which the (spatially flat) Robertson--Walker metric becomes \begin{equation} ds^2 = a^2(t)\big[-dt^2 + dx^2 + dy^2 + dz^2\big] \,. \end{equation} We divide the high-$z$ epoch into the radiation-dominated era for $z \gg z_\mathrm{eq}$ and the matter-dominated era for $z \ll z_\mathrm{eq}$, where $z_\mathrm{eq} \approx 3{,}200$ (the redshift at which the energy densities of matter and radiation were equal). Then we can approximate $a(\tau) \propto \tau^{1/2}$ for $\tau < \tau_\mathrm{eq}$ and $a(\tau) \propto \tau^{2/3}$ for $\tau > \tau_\mathrm{eq}$ (of course, we now know that the Universe is dark-energy dominated, rather than matter dominated, for $z \alt 1.7$, but we neglect this correction in keeping with the back-of-the-envelope spirit of this paper). We use the subscript ``0'' to refer to the present Universe (e.g., $\tau_0 \sim 10^{10}$ years is the present age of the Universe), and we choose our spatial coordinates so that $a_0 \equiv a(\tau_0) = 1$.
Then \begin{equation}\label{tz} t(z) = \begin{cases} (1 + z_{\mathrm{eq}})\big(3 \tau_{\mathrm{eq}}^{2/3}\,\tau^{1/3}(z) - \tau_{\mathrm{eq}}\big) & z < z_{\mathrm{eq}}, \\ (1 + z_{\mathrm{eq}})\big(2 \tau_{\mathrm{eq}}^{1/2}\,\tau^{1/2}(z) \big) & z > z_{\mathrm{eq}}, \end{cases} \end{equation} and in particular, \begin{equation}\label{t0} t_0 \simeq (1 + z_{\mathrm{eq}})\big(3 \tau_{\mathrm{eq}}^{2/3}\,\tau_0^{1/3} \big) \,, \end{equation} and therefore \begin{equation}\label{t0tz} \frac{t_0}{t(z)} \simeq \begin{cases} (1 + z)^{1/2} & z < z_{\mathrm{eq}}, \\ \frac{3}{2} (1+z) (1 + z_{\mathrm{eq}})^{-1/2} & z > z_{\mathrm{eq}}\,. \end{cases} \end{equation} Now consider GW bursts produced at $z \gg 1$. The size of the particle horizon at redshift $z$ is $\sim t(z)$ in co-moving coordinates, and so the number of such particle-horizon volumes within our horizon volume today is $\sim [t_0/t(z)]^3$. Let $B$ be the average number of GW bursts coming from each horizon volume $[t(z)]^3$. Let the energy (as measured at $z$) of a typical burst be $\Delta E(z)$; by today that energy has been redshifted to $\Delta E_0 = \Delta E(z)/(1+z)$. The total energy today, within a Hubble volume, from all such bursts at redshift $z$ is $\Delta E_0 \,B\, [t_0/t(z)]^3$, and it satisfies \begin{equation}\label{DelE_Omeg} \Delta E_0 \,B\, [t_0/t(z)]^3 \alt \frac{1}{10} \Omega_{\mathrm{GW}} \tau_0 \, . \end{equation} We write ``$\alt$'' instead of ``$\simeq$'' because there could be other significant sources for $\Omega_{\mathrm{GW}}$, besides this early-Universe contribution. \subsection{Discovery space for GW memory from sources at high $z$} \label{sec:zlarge} The generalization of Eq.\ \eqref{eq:mem1} to sources at arbitrary $z$ is \begin{equation} h_{\mathrm{mem}} \sim \frac{\alpha}{\sqrt{6}} \frac{\Delta E(z) (1+z)}{D_L}, \end{equation} where $\Delta E(z)$ is the locally measured energy loss and $D_L$ is the luminosity distance to the source (this follows from the propagation of GW-like perturbations in the Robertson--Walker spacetime \cite{maggiore2008} and from the definition of $D_L$). The energy carried by those emitted waves today is $\Delta E_0 = \Delta E(z)/(1+z)$, while for high $z$ we have $D_L \approx 3\tau_0 (1+z)$. Thus we have \begin{equation}\label{hmemz} h_{\mathrm{mem}} \simeq \frac{\alpha}{8} \frac{\Delta E_0\,(1+z)}{\tau_0} \,. \end{equation} It is instructive to determine the high-$z$ version of Eq.\ \eqref{eq:ratio_lowz} for the ratio $\mathrm{SNR}_{\mathrm{mem}}/\mathrm{SNR}_{\mathrm{dir}}$.
The only change in the derivation is the replacement $d \rightarrow 3 \tau_0 (1+z)$, leading to \begin{equation} \begin{aligned} \frac{\mathrm{SNR}_{\mathrm{mem}}}{\mathrm{SNR}_{\mathrm{dir}}} \simeq & \, 3 \times 10^{13} \, \alpha \, \mathrm{SNR}_\mathrm{dir} \\ & \times \bigg[\frac{1+z}{10^7}\bigg] \bigg[\frac{T_\mathrm{sig}}{10^5 \, \mathrm{s}} \bigg]^{-4} \bigg[\frac{T_\mathrm{obs}}{10^8 \, \mathrm{s}} \bigg]^{2} \big[\mathrm{obs.} \big]^{-1}. \end{aligned} \end{equation} By combining Eqs.~(\ref{t0tz}), \eqref{DelE_Omeg}, and \eqref{hmemz}, we can constrain $\mathrm{SNR}_{\mathrm{mem}}$ given $B$ and $\Omega_\mathrm{GW}$: \begin{equation} h_{\mathrm{mem}} \alt \frac{\alpha}{80} \frac{\Omega_{\mathrm{GW}}}{B} \times \begin{cases} \frac{1}{(1+z)^{1/2}} & 1 \ll z \ll z_{\mathrm{eq}}, \\ \frac{(1+ z_{\mathrm{eq}})^{3/2}}{3 (1+z)^{2}} & z \gg z_{\mathrm{eq}} \, ; \end{cases} \end{equation} the corresponding SNR follows from Eq.\ \eqref{eq:mem-snr}. We want to have a high probability of seeing one such signal within the observation time $T_\mathrm{obs}$. The local rate can be shown~\footnote{Briefly, this can be shown by using Eq.~(10) of \cite{2006PhRvD..73d2001C}, approximating the term $4\pi (a_0 r_1)^2 \equiv 4\pi \big(a_0 (t_0 - t(z) ) \big)^2$ by $4\pi (a_0 t_0)^2 \equiv 4\pi (\tau_0)^2$ and using $\dot n(z) (d\tau_1/dz)\Delta z = \dot n(z) \Delta \tau_1 = (B/\tau^3_0)(t_0/t(z))^3$.} to be $R \sim 4 \pi (B/\tau_0) [t_0/t(z)]^3$; imposing $R \, T_\mathrm{obs} \agt 1$ then leads to \begin{equation} \begin{aligned} \max\{\mathrm{SNR}_\mathrm{mem}\} &\simeq \frac{\alpha}{125} (1+z) \frac{\Omega_\mathrm{GW}}{\tau_0} T^2_{\mathrm{obs}} \frac{(M p \, T_{\mathrm{obs}})^{1/2}}{\delta t_{\mathrm{rms}}} \\ & \simeq 270 \, \alpha \, \bigg[\frac{1+z}{10^7}\bigg] \bigg[\frac{\Omega_{\mathrm{GW}}}{10^{-10}}\bigg] \bigg[\frac{T_{\mathrm{obs}}}{10^8 \, \mathrm{s}}\bigg]^2 \times \mathrm{obs.}, \end{aligned} \end{equation} a factor of order $(1+z)$ larger than the limit we derived in Eq.\ \eqref{eq:snrmaxmem} for sources at $z \alt 1$. We regard this as a promising result, since current constraints on $\Omega_{\mathrm{GW}}$ still leave a great deal of room for possible discovery. \subsection{Discovery space for unmodeled GW bursts at high $z$} We now examine the prospects for detecting a GW burst from high $z$. The total energy emitted by such a source is \begin{equation} \label{eq:dirz} \Delta E(z) = \Delta E_0 \,(1+z) \simeq \frac{\pi^2}{2} h^2 f^2 T_{\mathrm{sig}} D^2_L; \end{equation} using Eq.\ \eqref{DelE_Omeg} and $D_L \sim 3 \tau_0 (1+z)$, we then have \begin{equation} \begin{aligned} h^2 \alt & \,\, 2\times 10^{-3}\ \frac{\Omega_{\mathrm{GW}}}{B} (f \, \tau_0)^{-1} (f \, T_{\mathrm{sig}})^{-1} \\ & \times \begin{cases} (1+z)^{-5/2} & 1 \ll z \ll z_{\mathrm{eq}}, \\ (1/3) (1+ z_{\mathrm{eq}})^{3/2}\, (1+z)^{-4} & z \gg z_{\mathrm{eq}} \, .
\end{cases} \end{aligned} \end{equation} Again, a high probability of observing a signal constrains the rate $R$ according to $R \max\{T_\mathrm{sig},T_\mathrm{obs}\} \agt 1$, leading to \begin{equation} \label{eq:dirhighz} \begin{aligned} \max\{\mathrm{SNR}_\mathrm{dir} \} & \alt \frac{1}{120} \bigg[\frac{\Omega_{\mathrm{GW}}}{1+z}\bigg]^{1/2} \frac{\big(M p \, T_{\mathrm{obs}}\big)^{1/2}}{(f\,\delta t_{\mathrm{rms}})(f\, \tau_0)} \\ & \approx 10 \bigg[\frac{f}{10^{-7} \, \mathrm{Hz}}\bigg]^{-2} \bigg[\frac{\Omega_\mathrm{GW}}{10^{-5} (1+z)}\bigg]^{1/2} \times \mathrm{obs.} \end{aligned} \end{equation} This is basically the same limit we found for the largest-SNR burst at $z < 1$, but multiplied by the factor $(1+z)^{-1/2}$. \section{Corrections for beaming and for Galactic sources} \label{sec:galactic} So far our estimates of signal strengths have implicitly assumed that the radiation is not strongly beamed. We have also implicitly assumed that detectable PTA sources will be extra-Galactic. In this section we briefly show how our estimates are modified if one drops these assumptions. Both these issues were addressed by ZT82 \cite{Zimmermann:1982wi}, but here we extend their considerations to large $z$. \subsection{Modifications for highly beamed radiation} Assume that the GW energy is beamed into a solid angle $4\pi F$. To see how $\max\{\mathrm{SNR}\}$ for ``direct'' radiation scales with $F$, we will take $\Omega_{\mathrm{GW}}$ and the total radiated energy to be fixed, which together imply a fixed rate density. For the case $z \alt 1$, we can approximate space as Euclidean, so the distance $d$ to the closest source beaming in our direction scales as $d \propto F^{-1/3}$; the observed $h$ scales as $h \propto F^{-1/2}/d$; and altogether $h \propto F^{-1/6}$. We see that the effect of beaming on $\max\{\mathrm{SNR}\}$ is extremely weak; for instance, a beaming factor $F = 10^{-3}$ yields only a factor $\sim 3$ increase in the potential SNR. This very weak dependence was already noted by ZT82 in the $z \alt 3$ case. For $z \gg 1$, to account for beaming, we would replace $\Delta E_0$ with $\Delta E_0/F$ on the right-hand side of Eq.~\eqref{eq:dirz}. However, the condition $R \, T_{\mathrm{obs}} \agt 1 $ gets replaced by $R \, F \, T_{\mathrm{obs}} \agt 1 $, which leads to $\Delta E_0 \propto \Omega_{\mathrm{GW}} B^{-1} F$. Thus the $F$ factors cancel, and beaming has basically no effect on $\max\{\mathrm{SNR}\}$ for high-$z$ sources. Note that our low-$z$ and high-$z$ upper limits, Eqs.\ \eqref{eq:fid1} and \eqref{eq:dirhighz} respectively, have slightly different characters: for the former we maximize the SNR from the nearest detected source; for the latter we fix $z$, and therefore the luminosity distance, under the constraint of detecting at least one source during the experiment. What about memory? The effect of beaming is negligible, since the memory component of the GW strain is not beamed, even when the direct waves are. The dominant effect is that the parameter $\alpha$ changes by a factor of order one compared to the case of quadrupole emission. \textcolor{red}{[MV: need ref?]} \subsection{Modifications for Galactic sources} \label{galactic} Throughout Secs.\ \ref{sec:z1} and \ref{sec:highredshift} we have assumed that the Earth does not occupy a preferred location in the Universe. However, the Earth lies in the Galaxy; how might that modify our results?
For sources in the low-redshift Universe, we showed in Sec.\ \ref{sec:gwsig} that, for fixed $\Omega_{\mathrm{GW}}$, the detection SNR is maximized for sources whose event rate is once per $T_{\mathrm{obs}}$ in a Hubble volume. For a Galactic source to be observable, this rate must increase to once per $T_{\mathrm{obs}}$ per Milky-Way-like galaxy, or $\sim 10^9$ times greater. To maintain the same $\Omega_{\mathrm{GW}}$, the energy $\Delta E$ radiated per event must decrease by a factor of $10^9$. (We must also assume that the Galaxy can sustain such a rate of events.) On the other hand, the distance to the extra-Galactic source is $\sim 3$ Gpc, compared to $\sim 10$ kpc for a randomly located Galactic source. For the direct radiation, $h \propto \Delta E^{1/2}/d$, so the ratio is \begin{equation} \frac{\max\{\mathrm{SNR}_{\mathrm{dir}}^{\mathrm{Gal}}\}}{\max\{\mathrm{SNR}_{\mathrm{dir}}^{z \sim 1}\}} \sim 10^{-9/2} \frac{3 \, \mathrm{Gpc}}{10 \, \mathrm{kpc}} \sim 10, \end{equation} as was first shown by ZT82 \cite{Zimmermann:1982wi}. Thus, besides being intrinsically less plausible, putative Galactic sources increase $\max\{\mathrm{SNR}_{\mathrm{dir}}\}$ by only an order of magnitude compared to the $z \sim 1$ case. While we have undertaken the above calculation in the spirit of completeness, we point out that, to account for an overall $\Omega_{\mathrm{GW}} \sim 10^{-2}$ (say), these putative Galactic explosions would have to release $\sim 50 M_{\odot}$ in GW energy roughly every $\sim 3\,$yr, and it would appear difficult to construct a plausible physical mechanism for such explosions that would not already have been detected by other means. For the memory effect, $h \propto \Delta E/d$, so we may estimate a ratio \begin{equation} \frac{\max\{\mathrm{SNR}_{\mathrm{mem}}^{\mathrm{Gal}}\}}{\max\{\mathrm{SNR}_{\mathrm{mem}}^{z \sim 1}\}} \sim 10^{-9} \frac{3 \, \mathrm{Gpc}}{10 \, \mathrm{kpc}} \sim 10^{-3.5}. \end{equation} Finally, we note that if we had focused on sources in the Local Group instead of just the Milky Way, the event rate for sources outside the Milky Way would be dominated by Andromeda. Since Andromeda has roughly the same mass as the Milky Way but is $\sim 100$ times farther away than our Galactic Center, the strongest such events would be $\sim 100$ times weaker than Galactic events. \section{Conclusions and Caveats} \label{sec:summ} In this paper we have constrained and characterized the GW discovery space of PTAs on the basis of energetic and statistical considerations alone. In Secs.\ \ref{sec:z1} and \ref{sec:highredshift} we showed that a PTA detection of GWs at frequencies above $\sim 3 \times 10^{-5}$ Hz would either be an extraordinary coincidence, or have extraordinary implications; this conclusion follows from fundamental constraints on possible sources across the PTA sensitivity range, rather than from any deficiency of PTA detection itself. We showed also that GW memory can be more detectable than direct GWs, and that memory increasingly dominates the total SNR of an event for sources at higher and higher redshifts; indeed, GW memory from high-$z$ sources represents a large discovery space for PTAs. Although we assumed modest beaming in our estimates, in Sec.\ \ref{sec:galactic} we argued that even extreme beaming would have a minor impact on detection SNRs. Similarly, although we assumed that the strongest GW sources during a PTA observation would be extragalactic, our constraint on $\max\{\mathrm{SNR}\}$ rises only by a factor $\sim 10$ for Galactic sources.
Throughout the paper we adopted an SNR scaling law valid for white pulsar noise; in Sec.\ \ref{sec:determ} we explained, on the basis of a toy model and of the observational characterization of pulsar noise, why this is appropriate. In Sec.\ \ref{sec:degen} we demonstrated how to properly incorporate the effects of red noise in PTA searches, and we showed that the effects of periodic GWs in the $\sim 10^{-7.5}$--$10^{-4.5}\,$Hz band would \emph{not} be degenerate with small errors in the standard pulsar parameters, except in a few very narrow bands. Theoretical upper limits are akin to no-go theorems, and the authors are well aware that the history of the latter in physics is replete with examples of results that, while strictly correct, turned out to be misleading because their assumptions were overly restrictive. For this reason, our chief motivation in doing this research was not to rule out possibilities, but to uncover promising but neglected areas of search space. With this in mind, we now recall some of the assumptions that we have made, and point out some of the ways in which Nature could be side-stepping them. \begin{itemize} \item In this paper we assumed that the Earth is not in a preferred location in the Universe. In Sec.~\ref{sec:galactic} we considered the case in which relevant GW sources are clustered in galaxies, but still assumed that the Earth is not in some preferred location within the Milky Way. \item Even if the Earth does not occupy a preferred location with respect to relevant GW sources, some millisecond pulsars might do so. For instance, if two or more pulsars are located in a globular cluster that \emph{also} contains a black-hole binary with masses $\agt 1000 M_{\odot}$, the correlated timing residuals due to the binary's GWs impinging on the pulsars could well be detectable (see, e.g., \cite{2005ApJ...627L.125J}). \item In this paper we assumed that at any redshift $z$ there are no structures (such as phase-transition bubbles) that are significantly larger than the contemporaneous horizon size $t(z)$. This is a reasonable way to incorporate causality constraints for processes that are not correlated on super-horizon scales to begin with, but it certainly does not hold in all cases: for instance, inflation would imprint correlations on much larger scales. So \emph{a priori} there could arise strong GW sources that violate this assumption. \end{itemize} It might be worthwhile to try to come up with reasonable physical scenarios that violate one or more of our assumptions. \begin{acknowledgments} CC gratefully acknowledges support from NSF Grant PHY-1068881. MV is grateful for support from the JPL RTD program. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration. Copyright 2013 California Institute of Technology. \end{acknowledgments}
\section{Introduction} Meeting the ever-growing need for energy\cite{EIA2013} with fossil fuels is expected to have dramatic, global consequences for climate and ecosystems. These environmental effects, in combination with the depletion of fossil fuel reserves, have led to a pressing need to develop technologies for harnessing renewable energy.\cite{Lewis2006,New2011} In this scenario, bio-electrochemical systems - such as microbial fuel cells\cite{Rabaey2005,Logan2006,Yang2011,Jiang2013} (MFCs) and biological photovoltaic cells\cite{Tsujimura2001,Rosenbaum2005,Pisciotta2010,Bombelli2011,Samsonoff2014} (BPVs) - may help to alleviate the present concerns by utilising living organisms as inexpensive, readily available catalysts to generate electricity. A particularly advantageous feature of BPVs is that they consist of living photosynthetic material, which allows for continuous repair of photo-damage to key proteins. Whereas MFCs use heterotrophic bacteria to convert the chemical energy stored in organic matter, BPVs use photosynthetic organisms capable of harnessing solar energy. In MFCs operating with \textit{Geobacter sulfurreducens}, the oxidation of acetate can proceed with a Coulombic efficiency of $\sim100\%$.\cite{Nevin2008} Nevertheless, the availability of acetate and other organic substrates is not endless, which imposes a limiting factor on this approach. By contrast, in BPV-type systems the conversion efficiencies of light into charges remain low ($\sim0.1\%$),\cite{McCormick2011} but the primary fuel (i.e., solar light) is virtually unlimited. Consequently, a significant research effort is required towards understanding which processes limit the performance of biophotovoltaic cells, in terms of both biophysics and engineering. In this context, miniaturisation of BPVs provides highly attractive possibilities for high-throughput studies of small cell cultures, down to individual cells, in order to learn about differences between genetically identical organisms as well as to direct the evolution of efficient cell lines in bulk\cite{Carter2006,Bershtein2008,Keasling2008} and in microfluidics.\cite{Agresti2010} Furthermore, the distances over which the charge carriers have to migrate within the devices can be shortened dramatically, reducing resistive losses in the electrolyte.\cite{Rabaey2005} The readily achievable laminar-flow conditions and the sessile state of the anodophilic photosynthetic cells also permit operation without the use of a proton-exchange membrane.\cite{Choban2004,Kjeang2009,Wang2013,Ye2013} To date, efforts have focussed on miniaturised microbial fuel cells.\cite{Chiao2006,Crittenden2006,Siu2008,Hou2009,Qian2009,Qian2011,Wang2011c,Hou2012,Ye2013,Jiang2013} In order to exploit the high power densities available through the decrease of the length scales of the charge transport and the decrease of the electrolyte volume, we have developed a simple fabrication method for microfluidic biophotovoltaic ($\mu$BPV) devices\cite{Chiao2006} that do not require an electron mediator or a proton-exchange membrane. Besides increasing the efficiency and simplicity of the device, relinquishing mediator and membrane also reduces the cost of potential large-scale applications.\cite{Bond2003,Reguera2006,Malik2009,McCormick2011} \begin{figure*} \includegraphics{Figure1.pdf} \caption{(a) Schematic of the device before insertion of the electrodes, seen at an angle through the glass slide.
The lithographically defined PDMS pillars retain molten metal due to its surface tension, and the hole provides an opening for insertion of the \ce{Pt} electrode. (b) Model of the full device including platinum cathode and \ce{InBiSn} anode. (c) Schematic representation of the microfluidic biophotovoltaic device in action. \textit{Synechocystis} cells settled by gravity on the \ce{InBiSn} electrode deliver electrons to the latter by oxidising water. At the platinum cathode, oxygen and hydrogen ions are supplied with electrons and combine to water, which closes the circuit. (d) Top view of the device design. (e) True-colour image of a device filled with a solution containing Coomassie blue to visualise the $25~\mu\textrm{m}$ high channels. (f) True-colour image of a device immediately after injection of \textit{Synechocystis} cells at a chlorophyll concentration of around $100~\mu$M. (g) True-colour image of a device filled with \textit{Synechocystis} cells that were allowed to settle on the anode during 24~h.} \label{sch:device} \end{figure*} We use soft lithography\cite{McDonald2002} to form microscopic channels, which we equip using microsolidics\cite{Siegel2007} with a self-aligned electrode made from a low-melting-point alloy\cite{So2011,Li2013b,Herling2013} (\ce{InBiSn}) and a platinum electrode sealed inside microfluidic tubing. A scheme of such a device is shown in Fig.~\ref{sch:device}(a-c), and the specific design including the external measurement circuit is presented in Fig.~\ref{sch:device}(d). True-colour microscopy photographs of a device filled with Coomassie blue, with freshly injected \textit{Synechocystis} cells, and with cells that have settled on the anode during 24 hours are shown in Fig.~\ref{sch:device}(e), (f), and (g), respectively. The possibility of omitting the mediator arises from the physical proximity of the settled cells and the anode, which forms the bottom of the device, as well as from the choice of electrode materials. The latter ensures that \ce{H}$^+$ is preferentially reduced at the cathode, since platinum catalyses this reaction. The inherently small size (below 400~nL) of our microfluidic approach permits studies of minute amounts of biological material. Moreover, our $\mu$BPV works without any additional energy supply, such as inert-gas purging to keep the anodic chamber anoxic and/or oxygen-gas purging in the cathodic chamber to facilitate the reformation of water,\cite{Yagishita1997,Torimura2001,Tsujimura2001} or a bias potential applied to polarise the electrodes and improve the electron flux between anode and cathode.\cite{Malik2009} The use of soft lithography allows for fast in-house prototyping and for the utilisation of the range of techniques developed for integrated circuits. Despite the small volumes contained in microfluidic devices, such approaches can be scaled up by parallelisation,\cite{Hou2012,Romanowsky2012} and the surface-to-volume ratio can be designed to outperform macroscopic approaches significantly.\cite{Wang2011c} \section*{Results} The microfluidic BPV device described here operates as a microbial fuel cell with submicroliter volume, generating electrical power by harnessing the photosynthetic and metabolic activity of biological material. Its anodic half-cell consists of sessile \textit{Synechocystis} cells - performing water photolysis (\ce{2H_2O}$\rightarrow$\ce{4H^+ +4e^- +O_2}) and subsequent ``dark'' metabolism - as well as an anode made from an \ce{InBiSn} alloy and a light source.
\subsection*{Current and power analyses} A $\mu$BPV was loaded with wild-type \textit{Synechocystis sp.}~PCC 6803 cells (subsequently referred to as \textit{Synechocystis}) suspended in BG11 medium - supplemented with NaCl - at a final chlorophyll concentration of 100~nmol~Chl\,mL$^{-1}$. The exoelectrogenic activity of three biological replicates of sessile cells was characterised under controlled temperature conditions sequentially in the same device. \begin{figure*} \includegraphics{Figure2.pdf} \caption{(a) Comparison of the voltage output from the same microfluidic device loaded with salt medium only (BG11) or with \textit{Synechocystis} cells in medium, in the dark and under illumination. The $x$-axis has been converted to a current density through division of the measured current by the surface area of the \ce{InBiSn} anode, and the error bars show the standard deviations for three consecutive, independent repeats on the same device. Inset: Response of the biophotovoltaic device as well as of an abiotic control under sequential illumination. (b) Power density generated by the microfluidic devices filled with salt or cells in a dark/illuminated environment.} \label{gr:BPV} \end{figure*} The $\mu$BPV was rested for 24~hours, permitting the formation of cellular films on the anodic surface and stabilising the open circuit potential. Polarisation and power curves were then recorded by connecting different resistance loads to the external circuit in the dark or under illumination with white LED light (see Methods), and are shown in Fig.~\ref{gr:BPV}. In the dark, significant power output was observed relative to the control sample containing no cells. This observation is consistent with the breakdown of stored carbon intermediates accumulated during the light period.\cite{Bombelli2011} The peak power output of $275\pm20~\textrm{mW}\,\textrm{m}^{-2}$ was established at a current density of $2840\pm110~\textrm{mA}\,\textrm{m}^{-2}$. Under illumination, the microfluidic BPV loaded with \textit{Synechocystis} showed an increase in both current and power output. The peak power density was $P/A=294\pm17~\textrm{mW}\,\textrm{m}^{-2}$, established at a current density of $2940\pm85~\textrm{mA}\,\textrm{m}^{-2}$. Crucially, both the dark and the light electrical outputs were significantly higher than the abiotic peak power output in this device of $189\pm32~\textrm{mW}\,\textrm{m}^{-2}$, established at a current density of $1430\pm120~\textrm{mA}\,\textrm{m}^{-2}$, demonstrating that the power output from our devices originates from the biological activity of the cyanobacteria. From the linear slope on the high-current side of the polarisation curve, as well as from the external resistance for which maximal power transfer occurs, we can estimate the internal resistance of the device to be around $2.2~\textrm{M}\Omega$ for the biotically loaded device and $1.4~\textrm{M}\Omega$ for the abiotic control (for further details see the Supplementary Material). The electrical output recorded from the abiotic control - possibly due to medium salinity\cite{Logan2006,Logan2009} and anodic oxidation - is taken into account when the power densities of biotic experiments are quoted. Specifically, subtracting the abiotic background yields a biotic output power density of $105~\textrm{mW}\,\textrm{m}^{-2}$. This number is halved when comparing to the full cross-sectional area of the device (including the inaccessible parts of the anode), and the power available per footprint area is ca.~$50~\mu\textrm{W}\,\textrm{m}^{-2}$.
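To illustrate how the internal resistance quoted above can be extracted, the following Python sketch fits the linear (ohmic) branch of a polarisation curve. The resistor ladder matches the loads listed in the Methods, but the voltages are invented placeholder values for illustration, not our measured data:
\begin{verbatim}
import numpy as np

# Effective external loads [ohm] (values as listed in the Methods)
# and illustrative, made-up terminal voltages [V].
r_ext = np.array([24.8e6, 13e6, 9.1e6, 5.3e6, 2.9e6, 1.1e6])
v     = np.array([0.33, 0.27, 0.23, 0.18, 0.12, 0.06])

i = v / r_ext                      # delivered current, from Ohm's law
slope, v_oc = np.polyfit(i, v, 1)  # linear fit: V = V_oc - R_int * I
r_int = -slope

print(f"open-circuit voltage ~ {v_oc:.2f} V")
print(f"internal resistance  ~ {r_int/1e6:.1f} Mohm")
# Consistency check: maximum power transfer occurs when the external
# load matches r_int, i.e. at the peak of P = V^2 / R_ext.
\end{verbatim}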
\subsection*{Light response} To demonstrate the photo-activity of the \textit{Synechocystis} cells, the variation of the anode-cathode voltage in response to repeated light stimulation was recorded over time (see inset of Fig.~\ref{gr:BPV}(a)). The external resistor was fixed at 100~M$\Omega$, and the voltage was sampled once per minute. Illumination by white LED light at $200~\mu\textrm{mol}\,\textrm{m}^{-2}\textrm{s}^{-1}$ resulted in a reproducible voltage increase at a rate of $21.7\pm4.7~\textrm{mV}\,\textrm{h}^{-1}$ with $\Delta\textrm{V}_\textrm{light-dark}=5.2\pm0.6~\textrm{mV}$. The time until the electrical outputs stabilised was around one hour. We find that the baseline voltage levels change after illumination - most likely due to a buildup and breakdown of intracellular metabolites. From the measured spectrum of the light source (see Supplementary Information) we can determine the average wavenumber, which corresponds to a wavelength of 570~nm. Thus the photon flux can be converted to an incident light intensity of $42~\textrm{W}\,\textrm{m}^{-2}$. Using these values we can extract a rough estimate for the efficiency of our BPV (energy output versus energy input) of around 0.25\%, which compares favourably to previously achieved values.\cite{Chiao2006,McCormick2011,Lan2013} Note that light scattering at the glass surface and losses from the non-perpendicular illumination angle would increase this number, and hence it should be understood as a lower bound. With such an illumination cycle, the light-driven electrical response of a device can be directly compared to dark conditions, proving the functionality of our $\mu$BPV. In addition, the abiotic control shows no variations in anode-cathode potential under similar illumination. The difference between the power outputs under dark and illuminated conditions is consistent with previous studies of \textit{Synechocystis sp.}~PCC 6803.\cite{McCormick2011} Nevertheless, a direct comparison of the power output reported by McCormick \textit{et al.}~of around $0.12~\textrm{mW}\,\textrm{m}^{-2}$ with the peak value in excess of $100~\textrm{mW}\,\textrm{m}^{-2}$ demonstrated here emphasises the great potential of microfluidic approaches compared to macroscopic devices. \subsection*{Variability of the abiotic characterisation} In order to characterise the variability of the electrical behaviour of our $\mu$BPV, two further, lithographically identical devices were studied with abiotic loading (i.e., without photosynthetic cells). These devices were injected with BG11 medium (with 0.25 M NaCl), and the current and power outputs were characterised under controlled temperature conditions. \begin{figure} \centering \includegraphics{Figure3.pdf} \caption{(a) and (b) Output voltage (filled circles, solid line, blue axis) and available power density (hollow circles, dashed line, green axis) as a function of current for two further abiotically loaded devices (BG11 cell medium supplemented with 0.25 M NaCl).} \label{gr:abiotic} \end{figure} Following 24~hours of stabilisation of the $\mu$BPV at open circuit potential, polarisation and power curves (see Fig.~\ref{gr:abiotic}) were generated by applying different resistance loads to the external circuit in the dark. Between devices, the abiotic peak power density outputs vary from around 0.2 to $1~\textrm{W/m}^2$, established at current densities of 1.5 and $3.5~\textrm{A/m}^2$, respectively.
The large variation in output between different devices stems from the variable position and shape of the cathode, which is not lithographically defined in our current designs. Device improvements at this level may well provide a straightforward route to further increases in the output power. Crucially, no major changes in current and power outputs were observed upon exposure to white light (see inset of Fig.~\ref{gr:BPV}(a)). \subsection*{Comparison with recent literature} The exceptionally high power density in excess of $100~\textrm{mW}\,\textrm{m}^{-2}$ after subtraction of the abiotic background has been facilitated by the physical proximity of the cells to the anode, which allows for operation without a proton-exchange membrane and in turn leads to a low internal resistance of the device, as well as by the microscopic size of the anodic chamber, which allows for a large ratio of active surface to volume. In macroscopic bio-electrochemical systems, by contrast, parameters such as mass transport, reaction kinetics and ohmic resistance are expected to have a detrimental effect on the electrical output.\cite{Rabaey2005,Wang2011c} \begin{table*}[htb]\footnotesize \begin{center} \begin{tabular}{ l c c c c c c c} \multirow{2}{*}{Study} & \multicolumn{1}{c}{\textbf{$P_\textrm{out}$}} & \multicolumn{1}{c}{AAA} & \multicolumn{1}{c}{ACV} & Anode/ & \multirow{2}{*}{Mediator} & Photosynthetic\\ & \textbf{mW/m$^2$} & \multicolumn{1}{c}{mm$^2$} & \multicolumn{1}{c}{$\mu$L} & Cathode & & organism\\ \hline\noalign{\smallskip} \multirow{2}{*}{Chiao 2006\cite{Chiao2006}} & \textbf{\multirow{2}{*}{0.0004}} & \multirow{2}{*}{50} & \multirow{2}{*}{4.3} & Au/ & Methylene & \multirow{2}{*}{\textit{Anabaena sp.}}\\ & & & & N-Au - csc & blue & \\\noalign{\smallskip} \multirow{2}{*}{Bombelli 2011\cite{Bombelli2011}} & \textbf{\multirow{2}{*}{1.2}} & \multirow{2}{*}{80} & \multirow{2}{*}{150} & \multirow{2}{*}{ITO/N-CPt} & \multirow{2}{*}{\ce{K_3[Fe(CN)_6]}} & \textit{Synechocystis sp.} \\ & & & & & & PCC 6803\\\noalign{\smallskip} \multirow{2}{*}{McCormick 2011\cite{McCormick2011}} & \textbf{\multirow{2}{*}{10}} & \multirow{2}{*}{1'300} & \multirow{2}{*}{12'600} & ITO/ & \multirow{2}{*}{free} & \textit{Synechococcus sp.} \\ & & & & Pt-coated glass & & WH 5701\\ \multirow{2}{*}{Thorne 2011\cite{Thorne2011}} & \textbf{\multirow{2}{*}{24}} & \multirow{2}{*}{230} & \multirow{2}{*}{2'300} & \multirow{2}{*}{FTO/Carbon cloth} & \multirow{2}{*}{\ce{K_3[Fe(CN)_6]}} & \multirow{2}{*}{\textit{Chlorella vulgaris}}\\ & & & & & &\\\noalign{\smallskip} Bombelli 2012\cite{Bombelli2012} & \textbf{0.02} & 2'000 & 20'000 & ITO/Pt-C & free & \textit{Oscillatoria limnetica} \\ \noalign{\smallskip} \multirow{2}{*}{Madiraju 2012\cite{Madiraju2012}} & \textbf{\multirow{2}{*}{0.3}} & \multirow{2}{*}{1'500} & \multirow{2}{*}{60'000} & \multirow{2}{*}{Carbon fibre} & \multirow{2}{*}{free} & \textit{Synechocystis sp.} \\ & & & & & & PCC 6803\\\noalign{\smallskip} Bradley 2013\cite{Bradley2013} & \textbf{0.2} & 1'300 & 31'500 & ITO/N-CPt & \ce{K_3[Fe(CN)_6]} & \textit{Synechocystis TM}\\\noalign{\smallskip} \multirow{2}{*}{Lan 2013\cite{Lan2013}} &\textbf{\multirow{2}{*}{13}}& \multirow{2}{*}{4'600} & \multirow{2}{*}{$5\times10^5$} & \multirow{2}{*}{Pre-treated graphite/csc} & \multirow{2}{*}{\ce{K_3[Fe(CN)_6]}} & \textit{Chlamydomonas} \\ & & & & & & \textit{reinhardtii} \\\noalign{\smallskip} Lin 2013\cite{Lin2013} & \textbf{10} & 2'100 & $10^6$ & Au mesh/Graphite cloth & free & \textit{Spirulina platensis}\\\noalign{\smallskip}
\multirow{2}{*}{Luimstra 2013\cite{Luimstra2013}} & \textbf{\multirow{2}{*}{6}} & \multirow{2}{*}{1'400} & \multirow{2}{*}{70'000} & PPCP/ & \multirow{2}{*}{free} & \textit{Paulschulzia} \\ & & & & Carbon cloth with Pt & & \textit{pseudovolvox} \\\noalign{\smallskip} \multirow{2}{*}{Sekar 2014\cite{Sekar2014}} & \textbf{\multirow{2}{*}{35}} & \multirow{2}{*}{2.5} & \multirow{2}{*}{n/a} & CNTCP/ & \multirow{2}{*}{free} & \multirow{2}{*}\textit{Nostoc sp.} \\ & & & & Laccase on CNTCP & & \\\noalign{\smallskip} \multirow{2}{*}{Sekar 2014\cite{Sekar2014}} & \textbf{\multirow{2}{*}{100}} & \multirow{2}{*}{2.5} & \multirow{2}{*}{n/a} & CNTCP/ & \multirow{2}{*}{BQ} & \multirow{2}{*}\textit{Nostoc sp.} \\ & & & & Laccase on CNTCP & & \\\noalign{\smallskip} \multirow{2}{*}{This study} & \textbf{\multirow{2}{*}{105}} & \multirow{2}{*}{0.03} & \multirow{2}{*}{0.4} & \multirow{2}{*}{InBiSn alloy/Pt} & \multirow{2}{*}{free} & \textit{Synechocystis sp.} \\ & & & & & & PCC 6803\\ \end{tabular} \parbox{\textwidth}{\caption{\small List of biophotovoltaic devices from the recent literature - including previous microfluidic approaches - that do not require additional energy input. The abbreviations used are anodic active area (AAA), anodic chamber volume (ACV), Nafion film over the cathodic chamber and Au cathode (N-Au), chemical sacrificial cathode (csc), carbon-platinum cathode impregnated on one side with Nafion (N-CPt), carbon paper coated with a thin layer of platinum (Pt-C), indium tin oxide (ITO), fluorine doped tin oxide (FTO), carbon paint with polypyrrole (PPCP), carbon nanotubes on carbon paper (CNTCP), and benzoquinone (BQ). \textit{Synechocystis TM} refers to mutant strains of the cyanobacterium \textit{Synechocystis sp.}~PCC 6803 in which the three respiratory terminal oxidase complexes had been inactivated.}\label{Tab:Values}} \end{center} \end{table*} For a specific comparison, Tab.~\ref{Tab:Values} gives an overview of the power densities as well as technical specifications of intrinsic BPVs (i.e., requiring no external energy) characterised in the recent literature, including an instance with an additional enzymatic cathode.\cite{Sekar2014} While there are many aspects influencing the performance of a BPV, such as surface-to-volume ratio, photosynthetic organism, and electrode material, one can observe a trend that, generally, mediator-free approaches surpass their counterparts that rely on electron mediators diffusing over large distances. It should be mentioned that many of the studies listed in Tab.~\ref{Tab:Values} were not primarily aimed at maximising the output power. We also note that higher power densities have been observed\cite{Tsujimura2001} when extrinsic energy was supplied. \section*{Discussion} In summary, we have described a microfluidic design for a mediator-less, membrane-free biophotovoltaic device. Electrical characterisation of devices loaded with \textit{Synechocystis sp.}~PCC 6803 revealed peak power densities in excess of $100~\textrm{mW/m}^2$. In spite of the low power available per footprint area (currently of the order of $50~\mu\textrm{W/m}^2$), the promising performance and the simple fabrication process demonstrate the potential of our approach for generating biological solar cells with microfluidics. Our approach is applicable to any photosynthetic organism forming biofilms.
Furthermore, using the strategy presented in this work, improvement of the power output should be readily achievable through reduction of the distance between anode and cathode and an increase of the channel height. This flexibility in device geometry and the possibility of \textit{in-situ} electroplating of the anode underline the versatility of soft lithography as a means for generating biophotovoltaic cells. Options for enhanced miniaturisation open pathways for the study of small cell cultures containing as few as tens of cells for rapid screening of electrochemically active microbes in the context of directed evolution. \begin{small} \section*{Methods} \subsection*{Device fabrication} Devices were fabricated to a height of $25~\mu\textrm{m}$ using standard soft lithography\cite{McDonald2002} for polydimethylsiloxane (PDMS) on glass. The designs include an array of $25~\mu\textrm{m}$ wide PDMS pillars spaced by $25~\mu\textrm{m}$ in order to allow for insertion of molten solder\cite{So2011,Li2013b} (Indalloy 19, Indium Corporation, Clinton NY, USA) on a hotplate set to $79~^\circ\textrm{C}$. Solidification of this \ce{InBiSn} alloy upon removal from the heat yields self-aligned wall electrodes using a single lithography step.\cite{Herling2013} This process is illustrated in Fig.~\ref{sch:device}(a) and (b). The cathode is constructed by inserting a length of platinum wire of $100~\mu\textrm{m}$ diameter through polyethylene tubing (Smiths Medical; 800/100/120; the same as used for contacting microfluidic devices in general) and sealing off both ends of the tubing with epoxy glue. Inserting this tube through a previously punched hole in the device generates a sealed electrical connection; it is indicated by the orange wire (Pt) inside a white cylinder (tubing) in the scheme in Fig.~\ref{sch:device}(b). Note that this method of electrode fabrication also allows for straightforward exchange of the cathode material, which would be beneficial for \textit{in-situ} electroplating of the \ce{InBiSn} alloy. During settling and operation, the BPVs are oriented such that the bottom of the device is formed by the anode, with the glass slide and the PDMS forming the side and top walls. The total volume above the anode is below 400~nL, significantly reducing the consumption of biological material and chemicals in each experiment compared to macroscopic approaches. \subsection*{Electrode area} The accessible surfaces of these electrodes are ca.~$A\sim 2.5~\textrm{mm}/2\times 25~\mu\textrm{m}\approx 0.03~\textrm{mm}^2$ for the anode (only approximately one half of the total metal area is accessible due to the PDMS pillars) and of the order of $0.6~\textrm{mm}^2$ for the cathode, assuming the available length of the Pt wire to be 2~mm. Note that the majority of the cathode lies inside the cavity of the insertion template. If one were to consider the entire horizontal cross-section of the device, the corresponding area would double to $0.06~\textrm{mm}^2$, and the footprint of the device is at present around $60~\textrm{mm}^2$ including the access ports for fluid injection. This latter number can be reduced straightforwardly by more than one order of magnitude by redesigning the inlet ports. \subsection*{Cell culture and growth} A wild-type strain of \textit{Synechocystis sp.}~PCC 6803 was cultivated from a laboratory stock.\cite{Bombelli2011} Cultures were grown and then analysed in BG11 medium\cite{Rippka1979} supplemented with 0.25~M NaCl.
All cultures were supplemented with 5~mM NaHCO$_3$ and maintained at $22\pm2~^\circ\textrm{C}$ under continuous low light (ca.~$50~\mu\textrm{mol}\,\textrm{m}^{-2}\textrm{s}^{-1}$) under sterile conditions. Strains were periodically streaked onto plates containing agar ($0.5-1.0\%$) and BG11 including NaCl, which were then used to inoculate fresh liquid cultures. Culture growth and density were monitored by spectrophotometric determination of the chlorophyll content. Chlorophyll was extracted in $99.8\%$ (v/v) methanol (Sigma-Aldrich, Gillingham, UK) as described previously.\cite{Porra1989} \subsection*{Cell injection and settling} First, the devices were filled with culture medium (BG11 with 0.25~M NaCl) and any air bubbles were removed by means of syringes attached via elastic polyethylene tubing (Smiths Medical; 800/100/120). \textit{Synechocystis} cells suspended in BG11 (supplemented with NaCl) were then injected at a concentration of $100~\mu\textrm{M}$ chlorophyll. Maintaining the devices for 24~h in an orientation in which the metal-alloy anode forms the bottom allows the cells to sediment onto the electrode by gravity. This process creates a closely spaced interface that allows the electrons to be transmitted to the anode (see Fig.~\ref{sch:device}(c) and (g)), thus favouring mediator-free operation. Throughout all experiments, the syringes are kept attached in order to prevent the BPV from drying out. The complete device design used for the photolithography mask is presented in Fig.~\ref{sch:device}(d), and a microscopy photograph of a device coloured with Coomassie blue is shown in Fig.~\ref{sch:device}(e). Furthermore, a picture of an array of devices is provided in the supplementary material. \subsection*{Microfluidic BPV measurement and illumination} In principle, the optimal way of extracting the voltage output of our biophotovoltaic device would be to determine the half-cell potentials individually by integrating reference electrodes into the devices. Since this is challenging in microfluidic devices,\cite{Shinwari2010} we have instead measured the terminal voltage of our BPV, which does not offer insight into the potentials of the complex half-cell reactions but provides an accurate measurement of the power delivered to an external load. Polarisation curves were acquired by recording the terminal voltage $V$ under pseudo-steady-state conditions\cite{Logan2006} with variable external loads ($R_\textrm{ext}$) and plotting the cell voltage as a function of current density (current per unit anodic area). Typically, a time span of around 20~min was sufficient for a stable output (see Supplementary Fig.~2). The effective resistance values ranged from 24.8~M$\Omega$ to 324~k$\Omega$ (24.8, 13, 9.1, 5.3, 2.9, 1.1, 0.547, and 0.324~M$\Omega$), where the $100~\textrm{M}\Omega$ internal resistance of the digital voltmeter, which acts in parallel, has been taken into account. Voltages were recorded using a UT-70 data logger (Uni-Trend Limited, Hong Kong, China). The current delivered to the load was calculated from Ohm's law \begin{equation}\label{Eq:Ohm} V=R_\textrm{ext}I, \end{equation} and the power $P$ is given by \begin{equation}\label{Eq:Pow} P=V^2/R_\textrm{ext}. \end{equation} Based on the polarisation curves, power curves were obtained for each system by plotting the power per unit area, or power density $P/A$, as a function of current density. These power density curves were further used to determine the average maximum power output for the microfluidic BPV system and the negative control.
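This data reduction is summarised by the following Python sketch (our illustration; variable names and the example resistor are not taken from the measurement scripts), which folds the $100~\textrm{M}\Omega$ voltmeter input resistance into the effective load before applying Eqs.\ \eqref{Eq:Ohm} and \eqref{Eq:Pow}:
\begin{verbatim}
import numpy as np

R_DVM   = 100e6    # input resistance of the data logger [ohm]
A_ANODE = 0.03e-6  # accessible anode area [m^2] (ca. 0.03 mm^2)

def polarisation_curve(r_nominal, v_logged):
    """Convert logged terminal voltages to current density [A/m^2]
    and power density [W/m^2] via V = R I and P = V^2 / R."""
    r_nominal = np.asarray(r_nominal, dtype=float)
    v = np.asarray(v_logged, dtype=float)
    # The voltmeter sits in parallel with the external resistor.
    r_eff = r_nominal * R_DVM / (r_nominal + R_DVM)
    j = v / (r_eff * A_ANODE)     # current density
    p = v**2 / (r_eff * A_ANODE)  # power density
    return j, p

# e.g. a nominal 33 Mohm resistor gives the quoted effective 24.8 Mohm:
print(33e6 * R_DVM / (33e6 + R_DVM) / 1e6)  # ~24.8
\end{verbatim}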
For all measurements, alligator clamps and copper wire served as connections to the anode and cathode, and the temperature was kept at $22\pm2~^\circ\textrm{C}$. To characterise the light response, artificial light was provided by a warm-white LED bulb (Golden Gadgets, LA2124-L-A3W-MR16), maintained at a constant output of $200~\mu\textrm{mol}\,\textrm{m}^{-2}\textrm{s}^{-1}$ at the location of the BPVs. A measured spectrum of the light source is shown in the supplementary material. Light levels were measured in $\mu\textrm{mol}\,\textrm{m}^{-2}\textrm{s}^{-1}$ with an SKP 200 Light Meter (Skye Instruments Ltd, Llandrindod Wells, UK). The photo-active cells were illuminated through the glass slide forming the bottom of the device, resulting in an almost parallel angle of incidence on the cell layer. This geometry does lead to a decreased light intensity on the cells, which may be compensated for by using a more powerful light source in studies of photosynthetic materials, or by altering the geometric arrangement of the devices when harnessing actual sunlight. \end{small} \section*{References}
\section{Background} The quest for new materials functionalities is especially vigorous in transition metal oxides (TMOs), with quasi-two-dimensional (q2D) classes generating great activity. The cuprate superconductors, with their high superconducting critical temperatures T$_c$, provide the most prominent example, but doping-induced superconductivity arises in numerous other unexpected systems: ${\cal M}$NCl, ${\cal M}$ = Ti, Zr, Hf (T$_c$=15-25K); MgB$_2$, a self-hole-doped superconductor at 40K; and the triangular lattice oxides Li$_x$NbO$_2$, Na$_x$CoO$_2$, and chalcogenides Cu$_x$TiSe$_2$ and A$_x$${\cal T}$S$_2$ (A=alkali, ${\cal T}$=transition metal), all with\cite{other} T$_c \sim$ 5K. The cuprates, followed by MgB$_2$ and then by the Fe pnictide superconductors (FeSCs) with T$_c$ up to 56 K, have illustrated that excellent superconductors appear in surprising regions of the materials palette. Even the FeSCs can be pictured as doped (or self-doped) semimetallic superconductors. The CuO$_2$ square-lattice cuprates have inspired the study -- and the computational design -- of related square-lattice transition metal oxides, such as the ``charge conjugate'' vanadate\cite{sr2vo4A,sr2vo4B,sr2vo4b2} Sr$_2$VO$_4$, the Ag$^{2+}$ material\cite{deepa} Cs$_2$AgF$_4$ that is isostructural and isovalent with La$_2$CuO$_4$, and cuprate-spoofing artificially layered nickelates,\cite{lanio3A} so far without finding new superconductors.\cite{sr2vo4C,lanio3B} These highly interesting materials, though unfruitful for their original intent, suggest that a more detailed understanding of doping effects is necessary to unravel the mechanism of pairing in q2D systems. Nevertheless, materials property design can and does proceed when there is some broad understanding of the mechanism underlying the property.\cite{spaldin,lehur,pardo,negU,blundell} The superconducting pairing mechanism is well understood only for electron-phonon coupling (EPC), for which MgB$_2$ with T$_c$=40 K is the most successful example so far. The detailed understanding of EPC through strong-coupling Eliashberg theory\cite{SSW,gunnarsson} encourages rational, specific optimization of the EPC strength $\lambda$ and of T$_c$, and guidelines for one route to increasing T$_c$ have been laid out.\cite{RTS} Recently a new and different class of cuprate, the delafossite-structure CuAlO$_2$ $\equiv$ AlCuO$_2$, has been predicted by Nakanishi and Katayama-Yoshida\cite{NK-Y} (NK) to be a T$_c \approx$ 50K superconductor when sufficiently hole-doped. The calculated EPC strength and character are reminiscent of those of MgB$_2$, whose high T$_c$ derives from a specific mode (the O-Cu-O stretch for CuAlO$_2$) and from focusing in $q$-space\cite{JAn,OKA,IIM,Bohnen} due to the circular shape of the quasi-two-dimensional (2D) Fermi surface (FS). CuAlO$_2$ is another layered cuprate, with Cu twofold coordinated by O ions in a strongly layered crystal structure. The differences from the square-lattice cuprates are however considerable: the Cu sublattice is not square but triangular; there are {\it only} apical oxygen neighbors; the undoped compound is a $d^{10}$ band insulator rather than a $d^9$ antiferromagnetic Mott insulator; it is nonmagnetic even when lightly doped; and it is most heavily studied as a $p$-type transparent conductor.\cite{transparent} It shares with the hexagonal ${\cal M}$NCl system the feature that doped-in carriers enter at a $d$ band edge.
NK provided computational evidence for impressively large $\lambda$ and high temperature superconductivity T$_c$ up to 50 K when this compound is hole-doped, {\it viz.} CuAl$_{1-x}$Mg$_x$O$_2$. It is known that the delafossite structure is retained at least to $x$=0.05 upon doping with Mg.\cite{MgDoping} If this prediction could be substantiated, a new and distinctive structural class would be opened up for a more concerted search for high temperature superconductors (HTS). When our initial linear response calculations indicated weak (rather than strong) EPC, we performed a more comprehensive study. In their work, NK did not carry out linear response calculations of electron-phonon coupling for doped CuAlO$_2$. Instead they made the reasonable-looking simplifications of (a) calculating phonons and EP matrix elements for the undoped insulator, (b) moving the Fermi level in a rigid-band fashion, and (c) using those quantities to evaluate $q$-dependent coupling ($\lambda_q$; $q$ includes the branch index) and finally $\lambda$, predicting T$_c$ up to 50K. In this paper we provide the resolution to this discrepancy, which involves the crucial effect of large non-rigid-band redistribution of spectral density upon doping. The interlayer charge transfer underlying the shift in spectral density has the same origin as the charge transfer obtained from alkali atom adlayers on oxygenated\cite{CsODi} and native\cite{nativeDi} diamond surfaces to produce negative electron affinity structures. This ``mechanism'' of electronic structure modification will be useful in designing materials for functionalities other than superconductivity. The spectral shifts are distinct from those discussed in the doping of a Mott insulator, as we discuss below. First principles electronic structure calculations were performed within density functional theory (DFT) using the FPLO code\cite{FPLO} to obtain the electronic structure for both undoped and doped materials, the latter being carried out in the virtual crystal approximation (VCA), where the (say) Al$_{1-x}$Mg$_x$ sublattice (Ca substitution is also an option) that gives up its valence electrons is replaced by an atom with an averaged nuclear charge. VCA allows charge transfer to be obtained self-consistently, neglecting only disorder in the Al-Mg layer. The result is the transfer of $x$ electrons per f.u. from Cu, with half going to each of the neighboring Al-Mg layers, corresponding to metallic Cu $d^{10-x}$. Phonon spectra and electron-phonon coupling calculations for the doped system were performed using {\sc Abinit}\cite{abinit} version 6.6.3 with norm-conserving Troullier-Martins pseudopotentials. In both codes the Perdew-Wang 92 GGA (generalized gradient approximation) functional\cite{PerdewWang92} was used. The phonon and EPC calculations were done on the rhombohedral unit cell using a 24$^3$ k-point mesh and an 8$^3$ q-point mesh, interpolated to more q-points. The measured structural parameters\cite{koehler} for CuAlO$_2$ used were for the rhombohedral R$\bar{3}$m (\#166) structure with $a = 5.927$ \AA, $\alpha = 27.932^\circ$. This structure is equivalent to $a = 2.861$ \AA, $c = 17.077$ \AA~with hexagonal axes. Cu resides on the $1a$ site at the origin, Al is at the $1b$ site at ($\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$), and the O atom is in the $2c$ position ($u$, $u$, $u$), $u = 0.1101$.
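As a quick consistency check of the quoted cell parameters, the rhombohedral pair $(a,\alpha)$ maps onto hexagonal axes through $a_{\rm hex}=2a\sin(\alpha/2)$ and $c_{\rm hex}=a\sqrt{3(1+2\cos\alpha)}$. A minimal Python sketch (ours, not part of the calculations reported here; the helper name is hypothetical) reproduces the hexagonal values quoted above:
\begin{verbatim}
import math

def rhombohedral_to_hexagonal(a_rh, alpha_deg):
    """Convert a rhombohedral cell (a, alpha) to hexagonal axes."""
    alpha = math.radians(alpha_deg)
    a_hex = 2.0 * a_rh * math.sin(alpha / 2.0)
    c_hex = a_rh * math.sqrt(3.0 * (1.0 + 2.0 * math.cos(alpha)))
    return a_hex, c_hex

# CuAlO2, R-3m (#166): a = 5.927 A, alpha = 27.932 deg
a_hex, c_hex = rhombohedral_to_hexagonal(5.927, 27.932)
print(f"a_hex = {a_hex:.3f} A, c_hex = {c_hex:.3f} A")  # 2.861 A, 17.077 A
\end{verbatim}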
\begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig1.eps} \caption{(color online) Fatbands plot for CuAlO$_2$, with zero of energy at the top of the gap. The size of the symbol represents the amount of $3d$ character, and the color the character as given in the legend.} \label{fig:band_weights} \end{figure} \begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig2.eps} \caption{(color online) Comparison of band structures for the metallic and insulating states of CuAl$_{1-x}$Mg${_x}$O$_2$ with $x = 0.3$. This moderate level of doping results in very strong changes in the relative band positions.} \label{fig:bands_compare} \end{figure} \begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig3.eps} \caption{(color online) Comparison of the density of states for the (a) insulating CuAlO$_2$ and (b) metallic CuAl$_{1-x}$Mg${_x}$O$_2$ with $x = 0.3$. For the insulator, the Cu $d$ bands are rather separate from the O $p$ bands, but upon doping strong O $p$ character permeates the Cu $d$ bands, to near the Fermi level.} \label{fig:dos_compare} \end{figure} \begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig4.eps} \caption{(color online) Fatband plots for the (a) insulator and (b) $x$=0.3 metal states, emphasizing O $2p$ character. In addition to the strong shift upward, the O $2p$ character has increased many-fold for the bands near E$_F$ in the metal. } \label{fig:O_hybrid_compare} \end{figure} The band structure of insulating CuAlO$_2$ shown in Fig. \ref{fig:band_weights}, which agrees with previous work,\cite{NK-Y,AJF,Yanagi} illustrates that Cu $3d$ bands form a narrow, 2.5 eV wide complex at the top of the valence bands. Oxygen $2p$ bands occupy the region -8 eV to -3 eV below the gap. This compound is a closed shell Cu$^+$Al$^{3+}$(O$^{2-}$)$_2$ ionic insulator with minor metal-O covalence, although enough to stabilize this relatively unusual, strongly layered structure. The upper valence bands providing the hole states consist of $d_{z^2}$ character with some in-plane $d_{xy}$, $d_{x^2-y^2}$ mixing. The top of this band occurs at the edge of the Brillouin zone (BZ) as in, for example, graphene, but it is anomalously flat along the edge of the zone, viz. Ux-W (M-K, in hexagonal notation), which comprises the entire edge of the BZ. Since it is also almost dispersionless in the $\hat z$ direction, the resulting density of states just below the gap reflects a {\it one-dimensional phase space}, as shown in Fig. \ref{fig:dos_compare}a. The $d_{xy}, d_{x^2-y^2}$ bands are nearly flat in the -2 to -1 eV region, and the $d_{xz}, d_{yz}$ bands are even flatter, at -1 to -0.5 eV. These four flat bands reflect very minor $d$-$d$ hopping in the plane. When hole-doped, a dramatic shift of spectral weight occurs in the occupied bands, as is evident in both Figs. \ref{fig:bands_compare} for the bands and \ref{fig:dos_compare} for the spectral density. With the top (Cu $d_{z^2}$) conduction band as reference, the $3d$-$2p$ band complex at all lower energies readjusts rapidly with doping to lower binding energies. The $d_{xz}$, $d_{yz}$ bands (Fig. \ref{fig:band_weights}) acquire considerable $2p$ character and move up to nearly touch E$_F$ at the point T=$(0,0,\pi/c)$; further doping will introduce holes into this band. The O $2p$ bands, which lay below the $3d$ bands in the insulator, have shifted upward dramatically by 2 eV (a remarkably large 70 meV/\% doping), contributing extra screening at and near E$_F$ in the metallic phase.
The gap increases by $\sim$0.5 eV. These spectral shifts can be accounted for by a charge-dipole layer potential shift due to the Cu$\rightarrow$Al-Mg layer charge transfer. The increased $3d-2p$ hybridization is made more apparent in Fig. \ref{fig:O_hybrid_compare}, which reveals that the $d_{xz}$, $d_{yz}$ bands at T (and elsewhere) have increased contribution from the O $p$ states. Also apparent in this plot are seemingly extra bands appearing at about -1 eV near $\Gamma$; these are bands from below which have been shifted strongly upward by $\sim$2 eV by the dipole potential shift resulting from charge transfer. \begin{table}[bh] \begin{centering} \begin{tabular}{ccccc|ccc} \hline \hline & & \multicolumn{3}{c|}{Insulator } & \multicolumn{3}{c}{Metal} \\ & & $z^2$& $xy$ & $x^2-y^2$ & $z^2$ & $xy$ & $x^2-y^2$ \\ \hline \hline \multirow{4}{*}{$z^2$} & $t_1$ & \bf 393 & 198 & 228 & \bf 342 & 191 & 220\\ & $t_2$ & 60 & 8 & 13 & 60 & 14 & 17\\ & $t_3$ & \bf 35 & 22 & 25 & \bf 59 & 17 & 20\\ & $t_\perp$ & \bf 24 & 15 & -16 & \bf 63 & 17 & 17\\ \hline \multirow{4}{*}{$xy$} & $t_1$ & & 123 & 117 & & 107 & 107 \\ & $t_2$ & & 35 & 14 & & 36 & 16\\ & $t_3$ & & 11 & 8 & & 11 & 10\\ & $t_\perp$ & & 23 & 14 & & 31 & 20\\ \hline \multirow{4}{*}{$x^2-y^2$} & $t_1$ & & & 147 & & & 140\\ & $t_2$ & & & 28 & & & 27\\ & $t_3$ & & & 15 & & & 17\\ & $t_\perp$ & & & 18 & & & 23\\ \hline \end{tabular} \caption[Tight binding parameters] {Tight binding hopping parameters (in meV) for the insulating and metallic phases, from the three constructed Wannier functions. The labels $t_1$, $t_2$, $t_3$ refer to the first, second, and third neighbor hoppings in the triangular Cu planes. $t_\perp$ refers to hopping between layers. The most significant changes upon doping are highlighted in bold print.} \label{tbl:TightBinding} \end{centering} \end{table} \begin{figure}[th] \begin{center} \subfigure{ \includegraphics[width=0.65\columnwidth,clip]{Fig5.eps} } \end{center} \caption{(Color online.) Isosurface of the Wannier function for the Cu d$_{z^2}$ orbital in the $x$=0.3 doped metal. Antibonding contributions are seen from the nearest O atoms (small red spheres). The metallic state contains contributions from O ions in the second layer above and below that are not present in the insulator.} \label{fig:WF3d0} \end{figure} More light is shed on the electronic structure of CuAl$_{1-x}$Mg$_{x}$O$_2$ by using Wannier functions (WFs) to construct a tight binding model of the states near the Fermi level. We use the WF generator in the FPLO code.\cite{FPLO} These WFs are symmetry-respecting atom-based functions,\cite{weiku} constructed by projecting Kohn-Sham states onto, in this case, the Cu 3d$_{z^2}$, 3d$_{xy}$, 3d$_{x^2-y^2}$ atomic orbitals, with resulting hopping amplitudes shown in Table \ref{tbl:TightBinding}. Hoppings involving the $xy$ and $x^2-y^2$ orbitals are not significantly different between the insulator and metal. However, hopping amplitudes for the $d_{z^2}$ WF change significantly, the most important being the factor of 2.5 increase in the {\it hopping between layers}, $t_\perp$. Consistent with the picture from the DOS, the hoppings for the metallic state are more long-range: nearest neighbor hopping drops by 13\%, while third neighbor hopping nearly doubles. All of these changes are neglected in a rigid band treatment. This band dispersion is anomalous for a quasi-2D structure such as this, where normally the $3d$ orbitals with lobes extending in the $x-y$ plane would be expected to be the most dispersive.
Instead, it is the $d_{z^2}$ band that disperses, with a bandwidth of 2.5 eV and the band bottom at $\Gamma$. Shown in Fig. \ref{fig:WF3d0} is the $d_{z^2}$-projected WF for the $x$=0.3 hole-doped metal. Consistent with their minor dispersion, the WFs for the other $3d$ orbitals (not shown) have little contribution beyond the atomic orbital, showing only minor anti-bonding contributions from nearby O atoms. The $d_{z^2}$ WF shape is, in addition, quite extraordinary. Although displaying $d_{z^2}$ symmetry as it must, its shape differs strikingly from atomic form. It is so much fatter in the $x$-$y$ plane than the bare $d_{z^2}$ orbital that it is difficult to see the signature $m_{\ell} = 0$ ``$z^2$'' lobes pictured in textbooks. This shape is due, we think, to ``pressure'' from the neighboring antibonding O $p_z$ orbitals above and below. There is an (expected) admixture of O $2p_z$ orbitals, as well as a small symmetry-allowed $p_z + (p_x,p_y)$ contribution from the neighboring oxygen ions that finally provides (with their overlap) the in-plane dispersion of the $d_{z^2}$ band. The important qualitative difference compared to the insulator WF is the contribution from O atoms in the {\it next nearest} planes (across the Al layer) whose states have been shifted upward by the doping-induced charge transfer. This mixing opens a channel for hopping between layers in the Cu $d_{z^2}$ WFs by creating overlap in the two planes of O atoms between Cu layers; it is the source of the increase in $t_\perp$ hopping seen in Table \ref{tbl:TightBinding} that leads to the $k_z$ dispersion of the $2p$ band along L-U in Fig. \ref{fig:O_hybrid_compare} (and more so along $\Gamma$-T, not shown), and it will promote good hole-conduction in hole-doped delafossites. \begin{figure}[th] \centering \subfigure{ \includegraphics[width=0.45\columnwidth,clip]{Fig6a.eps} \label{fig:kpoints}} \subfigure{ \includegraphics[width=0.45\columnwidth,clip]{Fig6b.eps} \label{fig:Fermi}} \caption{(color online) Left: rhombohedral zone with special k-points labeled. Right: the sole large multiply-connected Fermi surface for moderately hole doped CuAl$_{1-x}$Mg$_x$O$_2$, $x = 0.3$.} \end{figure} Fermi surfaces (FS) are critical to a material once it is doped into a metallic phase. For small hole doping, the FS lies close to the zone boundary everywhere. The FS of CuAlO$_2$ for $x=0.3$ hole doping in VCA, displayed in Fig. \ref{fig:Fermi}, is not so different from that shown by NK for rigid band doping, but the self-consistent treatment will differ substantially at larger doping levels, with new sheets appearing due to the spectral weight transfer. The FS resembles a somewhat bloated cylinder truncated by the faces of the rhombohedral BZ. The relevant nesting, not necessarily strong, is of two types. A large $2k_F$ spanning wavevector almost equal to the BZ dimension in the $k_x-k_y$ plane will, when reduced to the first BZ, lead to small $q$ scattering on the FS, broadened somewhat by the $k_z$ dispersion. Second, there are ``skipping'' $\vec q$ values along ($\epsilon,\epsilon$,$q_z$) for small $\epsilon$. It is for these values of $\vec q$ that NK reported extremely strong coupling. We have focused our study of EPC on the doping regime $x \sim$ 0.3 where NK predicted the very large electron-phonon coupling and high T$_c$.
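Before turning to the phonons, the anomalous in-plane $d_{z^2}$ dispersion quoted above can be illustrated with a simple triangular-lattice tight-binding band built from the Table~\ref{tbl:TightBinding} hoppings. The Python sketch below is our illustration only: the table values are read as meV, and the overall signs (not resolved by the table) are chosen so the band bottom falls at $\Gamma$, so only the $\sim$2.5--3 eV scale of the resulting bandwidth should be compared with the text.
\begin{verbatim}
import numpy as np

# Metallic-phase d_z2 hoppings from the tight-binding table (meV -> eV).
# Overall signs are our choice, picked so the band bottom lies at Gamma.
t1, t2, t3 = 0.342, 0.060, 0.059

def shell_sums(kx, ky):
    """Structure factors of the 1st/2nd/3rd neighbor shells of a
    triangular lattice with unit spacing."""
    g1 = 2*np.cos(kx) + 4*np.cos(kx/2)*np.cos(np.sqrt(3)*ky/2)
    g2 = 2*np.cos(np.sqrt(3)*ky) + 4*np.cos(3*kx/2)*np.cos(np.sqrt(3)*ky/2)
    g3 = 2*np.cos(2*kx) + 4*np.cos(kx)*np.cos(np.sqrt(3)*ky)
    return g1, g2, g3

# Dense grid covering at least one full reciprocal-lattice period
k = np.linspace(-2*np.pi, 2*np.pi, 401)
KX, KY = np.meshgrid(k, k)
g1, g2, g3 = shell_sums(KX, KY)
E = -(t1*g1 + t2*g2 + t3*g3)

print(f"in-plane d_z2 bandwidth ~ {E.max()-E.min():.2f} eV")
print(f"t_perp enhancement on doping: {63/24:.1f}x")  # 24 -> 63 meV
\end{verbatim}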
\begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig7.eps} \caption{Phonon dispersion curves for $x$=0.3 hole-doped CuAlO$_2$, calculated with the {\sc Abinit} code on an $8^3$ $q$-point grid with $24^3$ k-points. Circles indicate the magnitude of $\lambda_q \omega_q$ for that mode. Some aliasing effects (unphysical wiggles) along L-U and $\Gamma$-T are due to the discrete nature and orientation of the $q$-point mesh. } \label{fig:lambda-omega} \end{figure} \begin{figure}[th] \includegraphics[width=0.95\columnwidth,clip]{Fig8.eps} \caption{(a) The phonon density of states and $\alpha^2F(\omega)$ at $x = 0.3$. (b) The quotient $\alpha^2(\omega)=\alpha^2F(\omega)/F(\omega)$ reflecting the spectral distribution of the coupling strength. The peaks below 5 meV are numerically uncertain and are useless for EPC due to the vanishingly small density of states.} \label{fig:a2f} \end{figure} To assess the effects of the spectral shifts, we have computed the phonons and electron-phonon coupling using linear response theory. The phonon dispersion curves calculated from DFT linear response theory at $x$ = 0.3 are presented in Fig. \ref{fig:lambda-omega}, with fatbands weighting by $\omega_q \lambda_q$ (which is more representative of contribution to T$_c$ than weighting by the ``mode-$\lambda$'' $\lambda_q$ alone\cite{PBA}). Branches are spread fairly uniformly over the 0-90 meV region. As found by NK, coupling strength is confined to the Cu-O stretch mode at 87 meV very near $\Gamma$, and to very low frequency acoustic modes also near $\Gamma$ where the density of states is very small. Unlike in MgB$_2$, this coupling does {\it not} extend far along $k_z$; the lack of strong electronic two-dimensionality greatly degrades the EPC strength, and no modes show significant renormalization. We obtain $\lambda$ $\approx$ 0.2, $\omega_{log}$=275K = 24 meV. Using the weak coupling expression with $\mu^* \sim 0.1$ we obtain \begin{eqnarray} T_c \approx \frac{\omega_{log}}{1.2} e^{-\frac{1}{\lambda-\mu^*}} \sim 230 ~ e^{-10}K, \end{eqnarray} so no observable superconductivity is expected. Similarly to NK, we find that the largest electron-phonon coupling arises from the O-Cu-O bond stretch mode. As anticipated from the FS shape, the most prominent contributions arise from small $q$ phonons. The EPC spectral function $\alpha^2F(\omega)$ is compared in Fig. \ref{fig:a2f}(a) with the phonon DOS $F(\omega)$. As is apparent from their ratio shown in Fig. \ref{fig:a2f}b, the peak around 15 meV is purely from the large density of states there, due to the flat phonon bands over much of the zone at that energy. The coupling with the most impact on T$_c$ ({\it i.e.} area under $\alpha^2 F$) occurs in the 45-75 meV range, and is spread around the zone; however, unlike MgB$_2$ no frequency range is dominant and the coupling is weak. The top O-Cu-O stretch modes, with the largest $\lambda$ values, in the 80-90 meV range, are so strongly confined to narrow $q$ ranges that they contribute little to the coupling. While we conclude, morosely, that high T$_c$ EPC superconductivity will not occur in doped CuAlO$_2$, the behavior that has been uncovered provides important insight into materials properties design beginning from 2D insulators. In MgB$_2$, the 40K electron-phonon superconductor, an interlayer charge transfer of much smaller magnitude and natural origin self-dopes the boron honeycomb sublattice, making it the premier electron-phonon superconductor of the day.
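For concreteness, the weak-coupling estimate above can be evaluated in one line; the following snippet (ours, using only the $\lambda$, $\omega_{log}$, and $\mu^*$ values quoted in the text) confirms that the resulting T$_c$ is of order 10 mK, i.e. unobservable:
\begin{verbatim}
import math

lam, mu_star = 0.2, 0.1
omega_log = 275.0                       # K (= 24 meV)

Tc = (omega_log / 1.2) * math.exp(-1.0 / (lam - mu_star))
print(f"Tc ~ {Tc*1e3:.0f} mK")          # ~10 mK
\end{verbatim}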
Hole doping of this delafossite does not provide better superconductivity, but it does provide insight into designing materials behavior as well as providing a new platform for complex electronic behavior. For low concentrations, small polaron transport has been observed.\cite{AJF} The hole-doping spectral shifts are distinct from doping-induced spectral shifts in Mott insulators, which typically occur without charge transfer. As for the envisioned behavior: at moderate doping this materials class provides a single band (Cu $d_{z^2}$) triangular lattice system, with Cu$^{2+}$ S=1/2 holes, which, if the coupling is antiferromagnetic, leads to frustrated magnetism. The unusual dispersion at low doping, with little dispersion along $k_z$ and also around the zone boundary, leads to an effectively {\it one dimensional phase space} at the band edge, although this property degrades rapidly with doping. Another triangular single band transition metal compound\cite{stacy,linbo3} is LiNbO$_2$, which superconducts around 5K when heavily hole doped\cite{stacy} and whose mechanism of pairing remains undecided. This work was supported by DOE SciDAC grant DE-FC02-06ER25794 and a collaborative effort with the Energy Frontier Research Center {\it Center for Emergent Superconductivity} through SciDAC-e grant DE-FC02-06ER25777. W.E.P. acknowledges the hospitality of the Graphene Research Center at the National University of Singapore where this manuscript was completed.
\section{Introduction} In coupling optical and mechanical degrees of freedom via radiation pressure forces, optomechanical systems offer a promising platform by which one can prepare and control macroscopic nonclassical states of a mechanical resonator \cite{aspelmeyer:2013,poot:2012,aspelmeyer:2008}. Experiments \cite{chan:2011,teufel:2011,oconnell:2010} have been successful in reaching the quantum ground state for a mechanical resonator using sideband cooling techniques that exploit the positive backaction of radiation pressure damping when using a pump mode that is red-detuned below the cavity resonance frequency \cite{wilson-rae:2007,marquardt:2007}. Although superposition states have already been demonstrated \cite{oconnell:2010}, the fidelity of these states is rapidly reduced due to unwanted heating of the mechanical resonator from the ever-present thermal environment. A complementary process occurs when driving the system above the cavity resonance frequency, the blue-detuned region, where a sufficiently strong drive gives rise to an overall negative damping rate for the mechanical oscillator, leading to an instability and self-induced oscillations of the resonator that have been explored both theoretically \cite{qian:2012, nunnenkamp:2011, rodrigues:2010,ludwig:2008,vahala:2008,marquardt:2006} and experimentally \cite{anetsberger:2009,metzger:2008,aricizet:2006,carmon:2005,kippenberg:2005}. These oscillations can be described as a mechanical, or phonon, lasing process where a pump excitation coherently transfers energy into the mechanical oscillator, leading to a threshold of oscillation, randomized phase, saturation, and a reduced linewidth \cite{khurgin:2012,vahala:2008}. Recently, it was found that this self-oscillation regime can lead to nonclassical steady states of the mechanical resonator provided that the single-photon cavity-oscillator coupling strength is on the order of, or larger than, the cavity energy decay rate and the frequency of the mechanical oscillator \cite{qian:2012,rodrigues:2010}. As steady states of the system$+$environment dynamics, these states can be measured using repeated quantum non-demolition quadrature measurements \cite{clerk:2008} and state tomography \cite{lvovsky:2009} without loss of fidelity, and could provide another route toward generating quantum states in a mechanical resonator. Here we show that nonclassical self-oscillating mechanical states can be generated in an optomechanical system in the regime where the single-photon coupling strength is on the order of the optical cavity decay rate. The application of sufficiently weak pump powers results in a cavity field where both the expectation value and variance of the photon number are much less than unity. Here, photons sent into the cavity mode according to a Poisson process, using for example a laser, give rise to a single-excitation radiation-pressure interaction that is the analogue of a micromaser \cite{walther:2006,filipowicz:1986}. In the micromaser, a stream of excited atoms is passed through an optical cavity one at a time according to a Poisson process, generating an interaction term, $\hbar\epsilon(\hat{a}+\hat{a}^{+})\hat{\sigma}_{x}$, while the atom is inside the cavity \cite{filipowicz:1986}. Here $\hat{a}^{+}$ and $\hat{a}$ are the bosonic creation and annihilation operators for the optical cavity mode, respectively, and $\epsilon$ is the atom-cavity coupling strength.
The emission of an excitation into the cavity mode is determined by the accumulated Rabi phase of the atom upon exiting the cavity, a quantity controlled by the product of the atom-cavity interaction time $\tau_{\rm{int}}$ and the coupling strength $\epsilon$. In addition, the interaction time must be shorter than the inverse of the cavity decay rate; the single-excitation interaction must be quantum coherent. Our optomechanical analogue, consisting of two oscillator modes, relies on the fact that the use of an atom, or atom-like system, is not fundamental to the operation of a micromaser. Instead, it is the coherent single-excitation interaction that underlies the dynamics. With the cavity mode occupied by at most a single photon, the optomechanical interaction, $\hbar g_{0}(\hat{b}+\hat{b}^{+})\hat{a}^{+}\hat{a}$, with $\hat{b}$ and $\hat{b}^{+}$ corresponding to the mechanical mode, drives the mechanical oscillator when the excitation is present, and turns off when the cavity is unoccupied, in analogy with excited atoms transiting an optical cavity. As in the micromaser, the excitation of the mechanical oscillator is proportional to the product of the single-photon cavity-oscillator coupling strength $g_{0}$ and the effective interaction time set by the inverse of the cavity decay rate $\kappa$. Demanding that this process be quantum coherent requires a mechanical oscillator with a large quality factor $Q_{m}$. Unique to the micromaser is the generation of highly nonclassical sub-Poissonian steady states of the optical cavity above threshold when only a single excited atom is present in the cavity at any one time \cite{davidovich:1996}. This single-atom interaction causes the steady-state photon number in the cavity to undergo a rapid increase at the onset of maser oscillations followed by a series of discontinuous jumps between stationary states of the optical cavity with different amplitudes \cite{walther:2006}. These jumps correspond to first-order phase transitions in the limit that the cavity damping rate goes to zero \cite{filipowicz:1986}, and are the signature of the micromaser. We will show that all of the above features are present in our optomechanical analogue. Like the recent demonstration of phonon lasing in a three-mode mechanical system \cite{mahboob:2013}, this setup stands apart from the micromaser and similar systems \cite{marthaler:2011,rodrigues:2007,bennett:2006} in that there is no two- or three-level atom-like subsystem generating the maser dynamics. Thus this work exhibits fundamental single-atom maser effects in a system composed solely of oscillator components. Although a similar parameter regime has been considered previously \cite{qian:2012, rodrigues:2010, ludwig:2008}, the connection to the micromaser was dismissed \cite{qian:2012}. This relationship allows us to understand the onset and subsequent reduction in the nonclassical properties of the mechanical states, the role of nonlinearities, the interplay between multiple stable resonator limit-cycles, and the effect of a nonzero thermal environment on the oscillator states. These features have not been addressed previously, and yet are important in answering the questions of how nonclassical states arise in this system, and how best to maximize these characteristics for subsequent experimental realization. The paper is organized as follows.
In Sec.~\ref{sec:semiclassical} we give the Langevin equations for the system, derive the nonlinear response of the cavity mode, and give expressions for the semiclassical stable limit-cycle amplitudes for the mechanical oscillator. In Sec.~\ref{sec:quantum} we express the open quantum dynamics of the system in terms of a master equation, define an appropriate benchmark for measuring the non-classicality of the oscillator density matrix, and solve for the steady state response of the resonator as a function of detuning and coupling strength. The results of numerical simulations are presented and analyzed with emphasis on the micromaser analogy. Finally, Sec.~\ref{sec:conclusion} gives a brief discussion of the results. Details of the numerical methods used in this work are given in Appendix~\ref{sec:app}. \section{Semiclassical Dynamics}\label{sec:semiclassical} Our starting point is the standard single-mode optomechanical Hamiltonian \begin{equation}\label{eq:hamiltonian} \hat{H}=-\Delta\hat{a}^{+}\hat{a}+\hat{b}^{+}\hat{b}+g_{0}(\hat{b}+\hat{b}^{+})\hat{a}^{+}\hat{a}+E\left(\hat{a}+\hat{a}^{+}\right), \end{equation} where $E$ is the pump amplitude, and we have gone into a frame rotating at the pump frequency $\omega_{p}$ and have written Eq.~(\ref{eq:hamiltonian}) in dimensionless form using the phonon energy $\hbar\omega_{m}$ where $\omega_{m}$ is the frequency of the mechanical oscillator. The typical optomechanical setup is depicted in Fig.~\ref{fig:fig1}. Here, all system parameters are expressed in units of the oscillator frequency. In particular, the detuning between pump and cavity frequencies is $\Delta=\left(\omega_{p}-\omega_{c}\right)/\omega_{m}$, where $\omega_{c}$ is the resonance frequency of the cavity mode ($\omega_{c}\gg \omega_{m}$) in the linear regime. Here, the cavity mode is assumed to be strongly coupled to a zero-temperature environment with coupling constant $\kappa$. In addition, we will consider a mechanical oscillator coupled to a thermal environment with coupling strength $\Gamma_{m}=1/Q_{m}$. Our focus will be on the regime $g^{2}_{0}/\kappa\omega_{m}\gtrsim 1$, where the radiation pressure of a single photon displaces the mechanical oscillator on the order of its wave packet extension \cite{aspelmeyer:2013}. Of particular interest will be the parameter space in which the so-called granularity parameter \cite{murch:2008} satisfies $g_{0}/\kappa\gtrsim1$ and the discreteness of the cavity photons becomes important. \begin{figure}[t] \includegraphics[width=7.0cm]{fig1} \caption{(Color online) Conventional driven optomechanical setup where the position of a mechanical resonator ($\hat{b}$) with frequency $\omega_{m}$ and energy damping rate $\Gamma_{m}$ is coupled parametrically via radiation pressure to an optical cavity ($\hat{a}$) at frequency $\omega_{c}$. Here, the cavity is assumed to be driven by a laser with amplitude $E$ and frequency $\omega_{p}$.
The damping due to coupling with the laser mode is given by $\kappa$.} \label{fig:fig1} \end{figure} Semiclassical dynamics of both the cavity and oscillator can be found using the input-output formalism \cite{gardiner:1985} to derive the Langevin equations for the operators appearing in Eq.~(\ref{eq:hamiltonian}) \begin{eqnarray} \frac{d\hat{a}}{d\tau}&=&i\Delta\hat{a}-ig_{0}(\hat{b}+\hat{b}^{+})\hat{a}-\frac{\kappa}{2}\hat{a}- iE \label{eq:lang-cavity}\\ \frac{d\hat{b}}{d\tau}&=&-i\hat{b}-ig_{0}\hat{a}^{+}\hat{a}-\frac{\Gamma_{m}}{2}\hat{b}-\sqrt{\Gamma_{m}}\hat{b}_{\rm in}, \label{eq:lang-resonator} \end{eqnarray} where $\tau=\omega_{m}t$, and $\hat{b}_{\rm in}$ is the input operator for the oscillator mode. The pump amplitude $E$ is related to the cavity input operator and input pump power via $iE=\sqrt{\kappa}\hat{a}_{\rm in}$ and $P=\hbar\omega_{p}E^{2}/\kappa$, respectively. Setting the operators equal to their expectation values, $\bar{a}=\langle \hat{a}\rangle, \bar{b}=\langle\hat{b}\rangle$, while the time derivatives and the resonator input operator $\hat{b}_{\rm in}$ are set to zero, yields deterministic equations that are used to find the steady-state average photon number for the cavity mode $\bar{N}_{a}$ \begin{equation}\label{eq:nonlinear} E^{2}=\left(\Delta^{2}+\kappa^{2}/4\right)\bar{N}_{a}-2\Delta\mathcal{K}\bar{N}_{a}^{2}+\mathcal{K}^{2}\bar{N}_{a}^{3}, \end{equation} where we have simplified the expression using the intrinsic Kerr nonlinearity of the radiation pressure coupling \cite{nation:2008} \begin{equation}\label{eq:kerr} \mathcal{K}=-\frac{2g_{0}^{2}}{1+\frac{\Gamma^{2}_{m}}{4}}. \end{equation} Here the minus sign indicates that this nonlinearity has a ``spring-softening" effect on the cavity; the cavity resonance frequency is pulled below its corresponding linear value. Equation (\ref{eq:nonlinear}) determines the amplitude of the cavity field as a function of the detuning and gives rise to the well-known radiation pressure bistability \cite{dorsel:1983}. One may define the renormalized cavity resonance frequency as the detuning value at which the occupation $\bar{N}_{a}$ obtained from Eq.~(\ref{eq:nonlinear}) is maximized. For the dynamics of the system, we follow Ref.~\cite{marquardt:2006} and make the ansatz that the resonator undergoes sinusoidal oscillations obeying $x(\tau)=\bar{x}+A\cos(\tau)$, where $\bar{x}$ is the static displacement of the resonator and $A$ is the amplitude of oscillation, both measured in units of the resonator zero-point motion $x_{\rm zp}=\sqrt{\hbar/2m\omega_{m}}$, where $m$ is the effective mass of the oscillator. An exact solution is found using the Fourier series $\bar{a}(\tau)=e^{i\varphi(\tau)}\sum_{n=-\infty}^{\infty}\alpha_{n}e^{in\tau}$ in Eq.~(\ref{eq:lang-cavity}) with coefficients \begin{equation}\label{eq:harmonic-cav} \alpha_{n}=-iE\frac{J_{n}(g_{0}A)}{i\left(n-\Delta+g_{0}\bar{x}\right)+\kappa/2}, \end{equation} and time-dependent phase $\varphi(\tau)=-g_{0}A\sin(\tau)$. The time-averaged cavity occupation number $\overline{\langle|\bar{a}|^{2}\rangle}=\sum_{n}|\alpha_{n}|^{2}$ is therefore peaked at discrete values given by $\Delta=n+g_{0}\bar{x}$ where the integer $n$ labels the mechanical sideband, i.e. $n\omega_{m}$.
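Since Eq.~(\ref{eq:nonlinear}) is cubic in $\bar{N}_{a}$, for fixed drive it admits either one or three positive real roots, the latter signaling the bistable (multivalued) cavity response. A minimal numerical sketch (ours; the parameter values are purely illustrative):
\begin{verbatim}
import numpy as np

# Steady-state cavity occupation from the cubic Eq. (eq:nonlinear):
# E^2 = (Delta^2 + kappa^2/4) N - 2 Delta K N^2 + K^2 N^3
# (all quantities in units of omega_m).
E, kappa, g0, Gm = 0.1, 0.3, 0.4, 1e-4
K = -2.0*g0**2/(1.0 + Gm**2/4.0)       # Kerr coefficient, Eq. (eq:kerr)

for Delta in np.linspace(-1.0, 0.5, 7):
    coeffs = [K**2, -2.0*Delta*K, Delta**2 + kappa**2/4.0, -E**2]
    roots = np.roots(coeffs)
    Na = sorted(r.real for r in roots
                if abs(r.imag) < 1e-9 and r.real >= 0)
    # three positive real roots would signal the bistable response
    print(f"Delta = {Delta:+.2f}:  N_a = {[f'{n:.4f}' for n in Na]}")
\end{verbatim}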
The static displacement and self-oscillation amplitudes are found by self-consistently solving the time-averaged force balance \begin{equation}\label{eq:force} \bar{x}=-2g_{0}\sum_{n}|\alpha_{n}|^{2} \end{equation} and power balance \begin{equation}\label{eq:power} \Gamma_{m}A=-4g_{0}\mathrm{Im}\sum_{n}\alpha^{*}_{n+1}\alpha_{n} \end{equation} equations, respectively. In general, there can be multiple stable limit-cycle amplitudes for a given set of system parameters. For a high-Q mechanical mode, Eqs.~(\ref{eq:nonlinear}) and (\ref{eq:force}) combine to give $g_{0}\bar{x}\propto \mathcal{K}$, showing that the cavity response Eq.~(\ref{eq:harmonic-cav}) also accounts for frequency-pulling effects, although the lineshape remains Lorentzian. \section{Quantum Dynamics}\label{sec:quantum} To understand the nonclassical features in the mechanical steady states of Eq.~(\ref{eq:hamiltonian}), we simulate the full quantum dynamics using the master equation \begin{equation}\label{eq:master} \frac{d\hat{\rho}}{d\tau}=\mathcal{L}\hat{\rho}=-i[\hat{H},\hat{\rho}]+\mathcal{L}_{\rm cav}[\hat{\rho}]+\mathcal{L}_{\rm mech}[\hat{\rho}], \end{equation} where the dissipative terms $\mathcal{L}_{\rm cav}$ and $\mathcal{L}_{\rm mech}$ are assumed to be in Lindblad form \begin{eqnarray} \mathcal{L}_{\rm cav}&=&\frac{\kappa}{2}\left(2\hat{a}\hat{\rho}\hat{a}^{+}-\hat{a}^{+}\hat{a}\hat{\rho}-\hat{\rho}\hat{a}^{+}\hat{a}\right) \\ \mathcal{L}_{\rm mech}&=&\frac{\Gamma_{m}}{2}(\bar{n}_{\rm th}+1)(2\hat{b}\hat{\rho}\hat{b}^{+}-\hat{b}^{+}\hat{b}\hat{\rho}-\hat{\rho}\hat{b}^{+}\hat{b}) \nonumber \\ &+&\frac{\Gamma_{m}}{2}\bar{n}_{\rm th}(2\hat{b}^{+}\hat{\rho}\hat{b}-\hat{b}\hat{b}^{+}\hat{\rho}-\hat{\rho}\hat{b}\hat{b}^{+}), \end{eqnarray} where the cavity input port is at zero temperature, and the mechanical resonator environment is parameterized by the average number of thermal excitations $\bar{n}_{\rm th}=\left[\exp(\hbar\omega_{m}/k_{\rm B}T)-1\right]^{-1}$. As a measure for the nonclassical features in the oscillator steady state, we consider the negativity of the Wigner function \cite{haroche:2006} as measured by the ratio between the sum of negative and positive discretized Wigner densities, here called the nonclassical ratio \begin{equation}\label{eq:ratio} \eta=\frac{\sum_{n}|w^{(-)}_{n}|}{\sum_{m}w^{(+)}_{m}}=\frac{\sum_{n}|w^{(-)}_{n}|dxdp}{1+\sum_{n}|w^{(-)}_{n}|dxdp}, \end{equation} where $w^{(-)}_{n}$ and $w^{(+)}_{m}$ are the amplitudes of negative and positive density components, respectively, and $dxdp$ is the area element. The second equality in (\ref{eq:ratio}) follows from the fact that the total Wigner function must sum to one. For the parameter regime considered here, Eq.~(\ref{eq:ratio}) is nearly linear in the summed negative density, making it a suitable benchmark for comparing quantum signatures in the oscillator states. To put the values of Eq.~(\ref{eq:ratio}) in context, we point out that the first Fock state $|1\rangle$ has a nonclassical ratio of $\sim 18\%$, with all higher Fock states above this value. \begin{figure}[h] \includegraphics[width=8.6cm]{fig2} \caption{(Color online) (a) Steady state cavity energy as a function of detuning and normalized coupling strength $g_{0}/\kappa$. The dashed line shows the renormalized cavity frequency. (b) Steady state energy of the corresponding mechanical oscillator mode, including the first three oscillator sidebands. The detuning value at which the sidebands occur follows the frequency-pulling of the cavity (dashed). (c) Nonclassical ratio for the mechanical oscillator.
Contours (solid) show the regions where the nonclassical ratio is larger than $0.1\%$. This region includes portions of the red-detuned side of the cavity response, below the renormalized cavity frequency, and the bistable region enclosed in the dashed lines. (d) Log-scale plot of the mechanical resonator's Fano factor.} \label{fig:fig2} \end{figure} Figure~\ref{fig:fig2} shows the results for a numerical simulation finding the steady states of Eq.~(\ref{eq:master}), performed using QuTiP \cite{qutip1,*qutip2}, for a system with parameters $E=0.1, \kappa=0.3, Q_{m}=10^{4}$ and $\bar{n}_{\rm th}=0$, over a parameter space characterized by $-1\le \Delta \le 3$ and varying $g_{0}$ over the range $0.75\le g_{0}/\kappa \le 3$. Note that the coupling strength is tunable in some superconducting circuit optomechanical realizations \cite{blencowe:2007}. Here, our truncated Hilbert space includes four Fock states for the cavity mode and $200$ states for the mechanical resonator. Simulation details are in the Appendix, and the source code can be obtained on the arXiv \cite{arxiv}. \begin{figure}[t] \begin{center} \includegraphics[width=8.0cm]{fig3} \caption{(Color online) (a) Wigner distribution for the state at $\Delta=0, g_{0}/\kappa=1.35$ with the largest nonclassical ratio $\eta \simeq 6\%$. (b) The number state probability distribution for the state in (a) (solid-red) together with a coherent state of the same amplitude (blue-bars). Dashed lines show the corresponding semiclassical limit-cycle amplitudes found using Eqs.~(\ref{eq:force}) and (\ref{eq:power}). Inset shows the small peak in the distribution for the larger limit-cycle that gives $F=5.2$. (c) Wigner function at $\Delta=-0.43, g_{0}/\kappa=2.4$, located in the bistable regime with $\eta\simeq 2\%$. (d) Phonon distribution function (solid-red) for state in (c) showing four stable limit-cycles with $F=10.4$, and an equal amplitude coherent state (blue-bars).} \label{fig:fig3} \end{center} \end{figure} First, as required for our analogue micromaser, Fig.~\ref{fig:fig2}a shows that the cavity photon number is well below unity over the entire parameter range. Second, we see in Fig.~\ref{fig:fig2}c that the strongest nonclassical states are generated when the cavity mode is driven just above its renormalized resonance frequency as determined by Eq.~(\ref{eq:nonlinear}), and at the mechanical sidebands. These motional sidebands are not present in a conventional micromaser. Surprisingly, this includes portions of the parameter space on the red-detuned side of the cavity, below the renormalized cavity resonance, and inside the bistable region of the cavity. As discussed in detail below, the mechanical Fano factors given in Fig.~\ref{fig:fig2}d, $F=\langle(\Delta \hat{N}_{b})^{2}\rangle/\langle \hat{N}_{b}\rangle$, for the nonclassical states are typically larger than unity due to the presence of multiple stable oscillation amplitudes. Furthermore, strong quantum features in the Wigner functions ($\eta\ge 1\%$) are not found at the higher resonator sidebands, as strong driving of the oscillator leads to multiple limit-cycles with overlapping number-state distributions that degrade the quantum signatures in these states. The Wigner function and phonon probability distribution for two states in the nonclassical regions, the state with the largest nonclassical ratio, at $\Delta=0, g_{0}/\kappa=1.35$, and a state in the bistable region, at $\Delta=-0.43, g_{0}/\kappa=2.4$, are presented in Fig.~\ref{fig:fig3}.
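The steady-state computation behind Figs.~\ref{fig:fig2} and \ref{fig:fig3} can be sketched compactly. The following Python fragment is a stripped-down illustration, not the released source code: it assumes only QuTiP's standard public functions (\texttt{destroy}, \texttt{tensor}, \texttt{steadystate}, \texttt{wigner}), and it truncates the mechanical Hilbert space more aggressively than the $200$ states used in the paper, so the truncation should be enlarged for converged results.
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, wigner

# Truncated Hilbert space (paper: 4 cavity / 200 mechanical states;
# Nm is reduced here so the sketch runs quickly).
Nc, Nm = 4, 60
a = tensor(destroy(Nc), qeye(Nm))
b = tensor(qeye(Nc), destroy(Nm))

# Dimensionless parameters (units of omega_m), cf. Figs. 2 and 3a:
E, kappa, Qm, nth = 0.1, 0.3, 1.0e4, 0.0
Delta, g0 = 0.0, 1.35 * 0.3          # the g0/kappa = 1.35 point
Gm = 1.0 / Qm

H = (-Delta*a.dag()*a + b.dag()*b
     + g0*(b + b.dag())*a.dag()*a + E*(a + a.dag()))
c_ops = [np.sqrt(kappa)*a, np.sqrt(Gm*(nth + 1))*b]
if nth > 0:
    c_ops.append(np.sqrt(Gm*nth)*b.dag())

rho_m = steadystate(H, c_ops).ptrace(1)   # reduced mechanical state

# Nonclassical ratio eta of Eq. (eq:ratio) from the discretized Wigner fn.
xvec = np.linspace(-15, 15, 201)
W = wigner(rho_m, xvec, xvec)
dxdp = (xvec[1] - xvec[0])**2
neg = np.abs(W[W < 0]).sum() * dxdp
print(f"eta = {neg/(1.0 + neg):.2%}")     # the text reports ~6% here
\end{verbatim}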
The Wigner functions consist of an ensemble of rings, each corresponding to a stable limit-cycle, with a sub-Poissonian distribution. Sub-Poissonian effects are well-known in both the micromaser \cite{filipowicz:1986} and Kerr-type nonlinear interactions \cite{buzek:1991}. Here, the rotational symmetry arises because of phase diffusion \cite{rodrigues:2010}, and corresponds to a density matrix with only diagonal elements \cite{nunnenkamp:2011}. Negative regions can be generated by single (Fig.~\ref{fig:fig3}a), or multiple (Fig.~\ref{fig:fig3}c) limit-cycles. Although each limit-cycle is sub-Poissonian, the separation between oscillation amplitudes with nonzero occupation probabilities is responsible for the large Fano factors seen in Fig.~\ref{fig:fig2}d. In general, the number of accessible limit-cycles increases with coupling strength and resonator occupation number. Here, semiclassical limit-cycle energies can be directly equated to those for the quantum oscillator as static displacements of the oscillator are negligible in this regime. The onset, and subsequent decline, in the nonclassical ratio can be understood by fixing the detuning at $\Delta=0$, and sweeping the coupling from zero to $g_{0}/\kappa=3$. This is comparable to varying the micromaser pump parameter, which is proportional to the coupling strength and atom-cavity interaction time \cite{walther:2006,filipowicz:1986}. Here, the interaction time is the inverse of the cavity decay rate, $\tau_{\rm{int}}=\kappa^{-1}$. The resonator Q-factor also plays a role in the pump parameter, and should be sufficiently large to discern quantum effects. To observe the switching between oscillator limit-cycles we pick the number state corresponding to the maximum probability amplitude in the density matrix as the order parameter \cite{rodrigues:2007}. \begin{figure}[t] \begin{center} \includegraphics[width=8.0cm]{fig4} \caption{(Color online) Oscillator order parameter (squares-black), mean phonon number (solid-green), and Fano factor (dashed-red) as a function of $g_{0}/\kappa$ at $\Delta=0$. The shaded region indicates where $\eta\ge 1\%$.} \label{fig:fig4} \end{center} \end{figure} Figure~\ref{fig:fig4} shows order parameter transitions and Fano factors that are analogous to those seen in the micromaser \cite{filipowicz:1986}. The initial transition between the $A=0$ fixed point and the onset of limit-cycle oscillation indicates the value of $g_{0}$ at which the single-excitation interaction overcomes the intrinsic loss rate of the resonator, and corresponds to the micromaser threshold. Larger couplings give rise to sub-Poissonian statistics until another limit-cycle becomes accessible. The subsequent discontinuous jump in the order parameter signals that this new limit cycle is now the preferred state of the resonator. For the high-Q oscillator described here, this second transition is sharp even though we are outside of the thermodynamic limit \cite{filipowicz:1986}, i.e. $\Gamma_{m}\rightarrow 0$. Note that these features are fundamentally different from those observed in a laser or maser, where above threshold the cavity occupation number saturates, and the state of the cavity is a phase-randomized coherent state \cite{walls:2008}. The transitions presented in Fig.~\ref{fig:fig4} are the characteristic signature of a micromaser, confirming that our optomechanical model is in fact an analogue of this system.
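A sweep of this kind, in the spirit of Fig.~\ref{fig:fig4}, only requires extracting the most probable phonon number and the Fano factor from each steady state. Schematically (again our illustration, building on the sketch above, with $\bar{n}_{\rm th}=0$):
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

Nc, Nm = 4, 80
a = tensor(destroy(Nc), qeye(Nm))
b = tensor(qeye(Nc), destroy(Nm))
E, kappa, Gm, Delta = 0.1, 0.3, 1e-4, 0.0
Nb_op = b.dag()*b

for ratio in np.linspace(0.75, 3.0, 10):
    g0 = ratio*kappa
    H = (-Delta*a.dag()*a + b.dag()*b
         + g0*(b + b.dag())*a.dag()*a + E*(a + a.dag()))
    rho = steadystate(H, [np.sqrt(kappa)*a, np.sqrt(Gm)*b])
    pn = rho.ptrace(1).diag().real       # phonon-number distribution
    Nb, Nb2 = expect(Nb_op, rho), expect(Nb_op*Nb_op, rho)
    print(f"g0/kappa={ratio:.2f}  n*={int(np.argmax(pn))}  "
          f"<N_b>={Nb:.1f}  F={(Nb2 - Nb**2)/Nb:.1f}")
\end{verbatim}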
\begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{fig5} \caption{(Color online) Distributions for the lower (red-bars) and upper (green-bars) limit-cycles along with corresponding coherent distributions, (blue-solid) and (black-solid), respectively, of the same amplitude. Coherent states are normalized to the probabilities of each limit-cycle. Inset shows the Fano factors corresponding to the lower (black) and upper (grey) limit-cycles individually, as well as the Fano factor (dashed-black) and nonclassical ratio $\eta$ (dashed-grey) for the overall state. Above $g_{0}/\kappa \simeq 1.3$, the two limit-cycles begin to merge, and the individual limit-cycle Fano factors are approximate values.} \label{fig:fig5} \end{center} \end{figure} Figure~\ref{fig:fig5} shows several resonator distributions along with effective Fano factors corresponding to the individual limit-cycles responsible for the oscillations in Fig.~\ref{fig:fig4}, and shown in Fig.~\ref{fig:fig3}b. Both limit-cycles are clearly sub-Poissonian when $\eta>0$, although the state itself can have a large Fano factor. At $g_{0}/\kappa\sim 1.3$, the limit-cycle distributions begin to overlap and the description in terms of individual oscillation amplitudes is no longer valid; the overall distribution of the resonator, which is super-Poissonian, determines the nonclassical features. This merger causes a marked reduction in the nonclassical features of the oscillator Wigner functions. This effect is pronounced at the oscillator sidebands where large phonon numbers inherently give rise to multiple overlapping limit-cycles. \begin{figure}[t] \begin{center} \includegraphics[width=8.0cm]{fig6} \caption{Nonclassical ratio for the states in Fig.~\ref{fig:fig3}a (black) and Fig.~\ref{fig:fig3}c (dashed-black), as well as two states on the first mechanical sideband, $\Delta=1, g_{0}/\kappa=1.4$ (grey) and $\Delta=0.5, g_{0}/\kappa=2.6$ (dashed-grey), as a function of the bath temperature $\bar{n}_{\rm th}$. Inset shows the Fano factors for the $\Delta=0$ limit-cycle above threshold seen in part (a) for bath temperatures $\bar{n}_{\rm th}=0$ (black), $1$ (dashed-black), $3$ (grey), and $5$ (dashed-grey).} \label{fig:fig6} \end{center} \end{figure} Finally, as in the micromaser \cite{davidovich:1996}, the addition of a nonzero thermal environment for the oscillator quickly masks the quantum features in our optomechanical analogue. In Fig.~\ref{fig:fig6} we highlight this decay, via Eq.~(\ref{eq:master}), as a function of the bath occupation number $\bar{n}_{\rm th}$ for both the nonclassical ratio taken at several points in parameter space, and Fano factors for the initial limit cycle at $\Delta=0$. It is seen that the introduction of even a low-temperature thermal environment significantly diminishes the quantum features of the mechanical states. However, at larger coupling strengths, some residual quantum features do persist at higher temperatures. \section{Conclusion}\label{sec:conclusion} We have shown that nonclassical states of a mechanical oscillator can be generated in an optomechanical analogue of the micromaser when the cavity is damped so as to be occupied by at most a single photon. This system exhibits strong sub-Poissonian limit-cycles, nonclassical Wigner functions, and phonon oscillations in the resonator that are the signature features of a micromaser. These features are reduced at nonzero bath temperatures as the increased fluctuations interfere with the coherent cavity-resonator interaction.
Note that trapped states cannot be produced in this setup as these rely on the transient two-level atoms undergoing an integer number of Rabi oscillations \cite{weidinger:1999}. In addition, the micromaser analogy suggests that the presence of multiple photons simultaneously in the cavity mode should degrade the nonclassical signatures of the resonator \cite{walther:2006}; the quantum properties of the oscillator should vanish in the strong-driving, or high-Q cavity limits. This also follows from standard optomechanical theory where it is well-known that Eq.~(\ref{eq:hamiltonian}) is effectively linearized in the strong driving limit \cite{aspelmeyer:2013}. However, understanding where the crossover occurs requires a more complete analytic theory, or more robust numerical methods capable of analyzing multiple oscillator modes with large occupation numbers. The work presented here is, to the best of our knowledge, the first time that these micromaser characteristics have been predicted in a system with no atom-like component, and this link helps to further our understanding of the generation of quantum states in macroscopic mechanical resonators. Note that, during the submission process, we became aware of Ref.~\cite{lorch:2013}, which derives, via laser theory, analytic expressions for the case of a single limit-cycle of a high-Q oscillator in the regime where $g_{0}/\kappa \lesssim 1$. Importantly, this work indicates that nonclassical states can be generated outside of the single-excitation regime. However, these states show markedly less non-classicality than the corresponding states in the single-photon regime, a result that is in line with the micromaser theory presented here, which indicates that multiple cavity photons will diminish the nonclassical features of the resonator.
\renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand\defeq{\mathrel{\overset{\makebox[0pt] {\mbox{\normalfont\tiny\sffamily def}}}{=}}} \title{Recurrence relations for the ${\cal W}_3$ conformal blocks and ${\cal N}=2$ SYM partition functions} \author{Rubik Poghossian} \affiliation{Yerevan Physics Institute,\\ Alikhanian Br. 2, AM-0036 Yerevan, Armenia} \emailAdd{[email protected]} \abstract{Recursion relations for the sphere $4$-point and torus $1$-point ${\cal W}_3$ conformal blocks, generalizing Alexei Zamolodchikov's famous relation for the Virasoro conformal blocks, are proposed. One of these relations is valid for any 4-point conformal block with two arbitrary and two special primaries with charge parameters proportional to the highest weight of the fundamental irrep of $SU(3)$. The other relation is designed for the torus conformal block with a special (in the above mentioned sense) primary field insertion. The AGT relation maps the sphere conformal block and the torus block to the instanton partition functions of the ${\cal N}=2$ $SU(3)$ SYM theory with 6 fundamental hypermultiplets or an adjoint hypermultiplet, respectively. AGT duality played a central role in establishing these recurrence relations, whose gauge theory counterparts are novel relations for the $SU(3)$ partition functions with $N_f=6$ fundamental hypermultiplets or an adjoint hypermultiplet. By decoupling some (or all) hypermultiplets, recurrence relations for the asymptotically free theories with $0\le N_f<6$ are found. } \keywords{W-algebra, Conformal block, N=2 SYM, Instanton partition function} \preprint{YerPhI/2017/04} \dedicated{To the memory of Alexei Zamolodchikov} \begin{document} \maketitle \flushbottom \section{Introduction} Conformal blocks play a central role in any 2d CFT since they are the holomorphic building constituents of the correlation functions of primary fields \cite{Belavin:1984vu}. In the case when the theory possesses no extra holomorphic current besides the spin $2$ energy-momentum tensor, the conformal block is fixed solely by the Virasoro symmetry. However, a direct computation is practical only up to the first few levels of the intermediate state; upon increasing the level such a computation soon becomes intractable. Some three decades ago Alexei Zamolodchikov found a brilliant solution to this problem. Based on an analysis of the poles and respective residues of the $4$-point conformal block, considered as a function of the intermediate conformal weight, and a thorough investigation of the semiclassical limit, a very efficient recursion formula was discovered \cite{Zamolodchikov:1985ie,Zamolodchikov:1987tmf}. Successful applications of this recurrence relation include Liouville theory \cite{Zamolodchikov:1995aa}, 4d ${\cal N}=2$ SYM \cite{Poghossian:2009mk}, topological strings \cite{KashaniPoor:2012wb,Kashani-Poor:2013oza}, and the partition function and Donaldson polynomials on $\mathbb{CP}_2$ \cite{Bershtein:2016mxz}, among others. Analogous recurrence relations have been found much later also for the torus $1$-point Virasoro block \cite{Poghossian:2009mk} (see also \cite{Hadasz:2009db}) and for ${\cal N}=1$ super-conformal blocks \cite{Belavin:2006zr,Hadasz:2007nt}. The case when the theory admits higher spin ${\cal W}$-algebra symmetry \cite{Zamolodchikov:1985wn,Fateev:1987zh,Fateev:2007ab} is much more complicated.
Holomorphic blocks of correlation functions of generic ${\cal W}$-primary fields cannot be found on the basis of the ${\cal W}$-algebra Ward identities alone. Still, it is known that if an $n$-point block ($n\ge 4$) contains $n-2$ partially degenerate primaries\footnote{ In this paper the term partially degenerate refers to the primary fields which admit a single null-vector on level $1$.}, the ${\cal W}$-algebra is restrictive enough to determine (in principle) such blocks. It appears that exactly in this situation an alternative way to obtain ${\cal W}$-conformal blocks, based on the AGT relation \cite{Alday:2009aq,Wyllard:2009hg,Fateev:2011hq}, is available. Note that though AGT relations provide combinatorial formulae for computing such conformal blocks, recursion formulae like the one originally proposed by Zamolodchikov have an obvious advantage. Besides being very efficient for numerical calculations \cite{Zamolodchikov:1995aa}, such recursive formulae are very well suited for the investigation of analyticity properties and asymptotic behavior of the conformal blocks (or their AGT dual instanton partition functions \cite{Poghossian:2009mk}). Instead, the individual terms of the instanton sum have many spurious poles that cancel out only after summing over the rapidly growing number of terms of a given order, which leaves the final analytic structure more obscure. In this paper recursion formulae are proposed for the ${\cal N}=2$ $SU(3)$ gauge theory instanton partition function in $\Omega$-background (Nekrasov's partition function) with $0\le N_f\le 6$ fundamental hypermultiplets as well as for the case with an adjoint hypermultiplet (${\cal N}=2^*$ theory). As a byproduct, an exact all-instanton formula is conjectured for the partition function in a one-parameter family of vacua, which is a natural generalization of the special vacuum introduced in \cite{Argyres:1999ty} and recently investigated in \cite{Ashok:2015cba}. The IR-UV relation discovered in \cite{Billo:2012st,Ashok:2015cba} was very helpful in finding these results. Using the AGT relation, analogs of Zamolodchikov's recurrence relations are proposed for the (special) ${\cal W}_3$ $4$-point blocks on the sphere and for the torus $1$-point block. Though the CFT point of view makes many of the features of the recurrence relations natural, rigorous derivations are unfortunately still lacking. The organization of the paper is as follows. In chapter \ref{chapter1}, after a short review of instanton counting in the theory with $6$ fundamentals, it is shown how investigation of the poles and residues of the partition function, combined with the known UV-IR relation and insight coming from 2d CFT experience, leads to the recurrence relation. Then, by subsequently decoupling hypermultiplets (sending their masses to infinity), the corresponding recurrence relations for smaller numbers of flavours are found. The simplest case of the pure theory ($N_f=0$) is presented in more detail. Then a similar analysis is carried out and, as a result, the corresponding recurrence relation is found for the $SU(3)$ ${\cal N}=2^*$ theory. In chapter \ref{chapter2}, using the AGT relation, recurrence relations are constructed for the $4$-point ${\cal W}_3$ sphere blocks with two arbitrary and two partially degenerate insertions and for the torus block with a partially degenerate insertion. In both cases exact formulae for the limit of large ${\cal W}_3$ current zero mode are presented.
It is argued that the location of the poles, as well as the structure of the residues, which were instrumental in finding the recurrence relations of chapter \ref{chapter1}, are related to the degeneracy condition and the structure of the OPE of ${\cal W}_3$ CFT. \section{Instanton partition function in $\Omega$ background} \label{chapter1} \subsection{$SU(3)$ theory with $N_f=6$ fundamental hypermultiplets} Graphically this theory can be depicted as a quiver diagram on the left side of Fig.~\ref{figAGT}. \begin{figure}[t] \begin{pgfpicture}{0cm}{0cm}{15cm}{4cm} \pgfcircle[stroke]{\pgfpoint{3cm}{2.8cm}}{0.5cm} \pgfputat{\pgfxy(3,2.75)}{\pgfbox[center,center]{\scriptsize{$SU(3)$}}} \pgfputat{\pgfxy(3,1.7)}{\pgfbox[center,center]{\small{$a_i$}}} {\color{black}\pgfrect[stroke]{\pgfpoint{0.1cm}{2.3 cm}}{\pgfpoint{1cm}{1cm}}} \pgfputat{\pgfxy(0.5,1.8)}{\pgfbox[center,center]{\small{$a_{0,i}$}}} {\color{black}\pgfrect[stroke]{\pgfpoint{4.9cm}{2.3cm}}{\pgfpoint{1cm}{1cm}}} \pgfputat{\pgfxy(5.5,1.7)}{\pgfbox[center,center]{\small{$a_{2,i}$}}} \pgfline{\pgfxy(1.1,2.8)}{\pgfxy(2.05,2.8)} \pgfline{\pgfxy(2.05,2.8)}{\pgfxy(2.5,2.8)} \pgfline{\pgfxy(3.5,2.8)}{\pgfxy(4.9,2.8)} \pgfputat{\pgfxy(7,2.8)}{\pgfbox[center,center]{$\Longleftrightarrow$}} \pgfline{\pgfxy(8.5,1.8)}{\pgfxy(10.1,1.8)} \pgfline{\pgfxy(10.1,1.8)}{\pgfxy(10.1,3.8)} \pgfline{\pgfxy(10.1,1.8)}{\pgfxy(13.1,1.8)} \pgfline{\pgfxy(13.1,1.8)}{\pgfxy(13.1,3.8)} \pgfline{\pgfxy(13.1,1.8)}{\pgfxy(14.7,1.8)} \pgfputat{\pgfxy(11.8,1.2)}{\pgfbox[center,center]{\small{$\boldsymbol{\alpha} $}}} \pgfputat{\pgfxy(9.3,3)}{\pgfbox[center,center]{\small{$\lambda^{(3)}\omega_1$}}} \pgfputat{\pgfxy(9.3,1.2)}{\pgfbox[center,center]{\small{$\boldsymbol{\alpha^{(4)}} $}}} \pgfputat{\pgfxy(13.9,3)}{\pgfbox[center,center]{\small{$\lambda^{(2)}\omega_1$}}} \pgfputat{\pgfxy(13.9,1.2)}{\pgfbox[center,center]{\small{$\boldsymbol{\alpha^{(1)}} $}}} \pgfputat{\pgfxy(8.2,1.8)}{\pgfbox[center,center]{\small{$\infty$}}} \pgfputat{\pgfxy(10.1,4.1)}{\pgfbox[center,center]{\small{$1$}}} \pgfputat{\pgfxy(13.1,4.1)}{\pgfbox[center,center]{\small{$x$}}} \pgfputat{\pgfxy(14.95,1.8)}{\pgfbox[center,center]{\small{$0$}}} \pgfclearendarrow \end{pgfpicture} \caption{On the left: the quiver diagram for the conformal $SU(3)$ gauge theory with $6$ fundamental hypermultiplets. On the right: the dual ${\cal W}_3$ conformal block.} \label{figAGT} \end{figure} The parameters $a_{0,i}$, $a_{2,i}$ are related to the hypermultiplet masses, while $a_i$ ($i$ runs over $1,2,3$) are the expectation values of the vector multiplet. The instanton part of the partition function is given as a sum over triples of Young diagrams ${\vec{Y}}=(Y_1,Y_2,Y_3)$ (see \cite{Nekrasov:2002qd, Flume:2002az,Bruzzo:2002xf}) \begin{eqnarray} Z=\sum_{\vec{Y}}Z_{\vec{Y}}x^{|\vec{Y}|}, \label{Zinst} \end{eqnarray} where $x$ is the exponentiated coupling (the instanton counting parameter), $|\vec{Y}|$ is the total number of boxes of the Young diagrams.
The coefficients $Z_{\vec{Y}}$ can be represented as \begin{eqnarray} Z_{\vec{Y}}=\prod_{i,j=1}^3\frac{Z_{bf}(\emptyset ,a_{0,i}|Y_j,a_j) Z_{bf}(Y_i,a_i|\emptyset ,a_{2,j})}{Z_{bf}(Y_i ,a_i|Y_j,a_j)} \label{Z_Y} \end{eqnarray} where \begin{eqnarray} Z_{bf}(\lambda,a|\mu,b)&=&\prod_{s\in \lambda} \left(a-b-L_\mu(s)\epsilon_1 +(1+A_\lambda (s))\epsilon_2\right)\nonumber\\ &\times&\prod_{s\in \mu} \left(a-b+(1+L_\lambda(s))\epsilon_1-A_\mu (s)\epsilon_2\right)\,. \label{Z_bf} \end{eqnarray} Here $A_\lambda(s)$ ($L_\lambda(s)$) is the distance in the vertical (horizontal) direction from the upper (right) border of the box $s$ to the outer boundary of the diagram $\lambda$, as demonstrated in Fig. \ref{YD}. As usual, $\epsilon_1$ and $\epsilon_2$ denote the parameters of the $\Omega$ background. \newcount\tableauRow \newcount\tableauCol \newenvironment{Tableau}[1]{% \tikzpicture[scale=0.7,draw/.append style={loosely dotted,gray}, baseline=(current bounding box.center)] \tableauRow=-1.5 \foreach \Row in {#1} { \tableauCol=0.5 \foreach\k in \Row { \draw[thin](\the\tableauCol,\the\tableauRow)rectangle++(1,1); \draw[black,ultra thick](\the\tableauCol,\the\tableauRow)+(0.5,0.5)node{$\k$}; \global\advance\tableauCol by 1 } \global\advance\tableauRow by -1 } }{\endtikzpicture} \newcommand\tableau[1]{\begin{Tableau}{#1}\end{Tableau}} \begin{figure} \center \begin{tabular}{l@{\qquad}l@{\qquad}l} \begin{Tableau}{{,s_1,,,,,,,,,},{,,,,,,,,,,},{,,,,,,s_3,,,,}, {,,,,,,,,,,},{,,s_2,,,,,,,,}} \draw[thick,solid ,color=black](11,-5)--(6,-5) --(6,-4)--(3,-4)--(3,-2)--(1,-2)--(1,-1)--(0,-1)--(0,0); \draw[thin,solid ,color=black](0,-1)--(0,-5); \draw[thin,solid ,color=black](1,-2)--(1,-5); \draw[thin,solid ,color=black](2,-2)--(2,-5); \draw[thin,solid ,color=black](3,-4)--(3,-5); \draw[thin,solid ,color=black](4,-4)--(4,-5); \draw[thin,solid ,color=black](5,-4)--(5,-5); \draw[thin,solid ,color=black](0,-5)--(6,-5); \draw[thin,solid ,color=black](0,-4)--(3,-4); \draw[thin,solid ,color=black](0,-3)--(3,-3); \draw[thin,solid ,color=black](0,-2)--(1,-2); \end{Tableau} \end{tabular} \caption{Arm and leg length with respect to the Young diagram with column lengths $\{4,3,3,1,1,1\}$. The thick solid line outlines its outer border. $A(s_1)=-2$, $L(s_1)=-2$, $A(s_2)=2$, $L(s_2)=3$, $A(s_3)=-3$, $L(s_3)=-4$.} \label{YD} \end{figure} Without loss of generality one may assume that $a_1+a_2+a_3=0$. Then these parameters can be reexpressed in terms of the independent differences $a_{12}\equiv a_1-a_2$ and $a_{23}\equiv a_2-a_3$: \begin{eqnarray} (a_1,a_2,a_3)=\left(\frac{2a_{12}+a_{23}}{3}\mathbin{\raisebox{0.5ex}{,}} -\frac{a_{12}- a_{23}}{3}\mathbin{\raisebox{0.5ex}{,}} -\frac{a_{12}+2a_{23}}{3}\right)\,. \end{eqnarray} The masses of the $6$ fundamental hypermultiplets can be identified as \begin{eqnarray} &&m_i=-a_{0,i} \qquad \qquad \quad \,\text{for} \qquad i=1,2,3\,,\nonumber\\ &&m_i=\epsilon_1+\epsilon_2-a_{0,i-3} \quad \text{for}\qquad i=4,5,6\,. \end{eqnarray} The advantage of this definition is that the partition function is symmetric with respect to permutations of the $N_f=6$ masses $m_1,\ldots ,m_6$. For later convenience let us also introduce the elementary symmetric functions of the masses \begin{eqnarray} T_n=\sum_{1\le i_1<i_2<\cdots <i_n\le N_f}m_{i_1}\cdots m_{i_n}\,. \end{eqnarray}
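As a side remark, the combinatorial ingredients introduced above are straightforward to transcribe into code. The following Python sketch is only an illustration (the encoding of diagrams by their column lengths, and the resulting conventions for arms, legs and $\epsilon_{1,2}$, are a choice, fixed here so as to reproduce the values quoted in Fig.~\ref{YD}); it evaluates the relative arm and leg lengths and the bifundamental factor (\ref{Z_bf}).
\begin{verbatim}
def arm_leg(cols, i, j):
    """Arm A(s) and leg L(s) of the box s = (column i, row j), 1-based,
    measured with respect to the Young diagram with column lengths
    `cols`; both may be negative when s lies outside the diagram,
    e.g. arm_leg((4, 3, 3, 1, 1, 1), 2, 5) == (-2, -2), cf. Fig. YD."""
    arm = (cols[i - 1] if i <= len(cols) else 0) - j
    leg = sum(1 for c in cols if c >= j) - i     # row length minus i
    return arm, leg

def Z_bf(lam, a, mu, b, e1, e2):
    """Bifundamental factor Z_bf(lam, a | mu, b) as defined in the text."""
    z = 1.0
    for i, ci in enumerate(lam, 1):              # boxes s in lam
        for j in range(1, ci + 1):
            A = arm_leg(lam, i, j)[0]
            L = arm_leg(mu, i, j)[1]
            z *= a - b - L * e1 + (1 + A) * e2
    for i, ci in enumerate(mu, 1):               # boxes s in mu
        for j in range(1, ci + 1):
            L = arm_leg(lam, i, j)[1]
            A = arm_leg(mu, i, j)[0]
            z *= a - b + (1 + L) * e1 - A * e2
    return z
\end{verbatim}
Assembling $Z_{\vec{Y}}$ from (\ref{Z_Y}) and summing over all triples with $|\vec{Y}|=k$ then gives the $k$-instanton coefficient, which is a convenient way to check the pole structure discussed next.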
Let us fix an instanton number $k$ and perform a partial summation in (\ref{Zinst}) over all diagrams with total number of boxes equal to $k$. Many spurious poles present in the individual terms cancel and one gets a rational expression whose denominator is \begin{eqnarray} (\epsilon_1\epsilon_2)^k\prod \left(a_{12}^2-\epsilon_{r,s}^2\right) \left(a_{23}^2-\epsilon_{r,s}^2\right) \left((a_{12}+a_{23})^2-\epsilon_{r,s}^2\right), \label{denom} \end{eqnarray} where the product is over the positive integers $r\ge1$, $s\ge 1$ such that $rs\le k$ and \begin{eqnarray} \epsilon_{r,s}=r \epsilon_1+s\epsilon_2\,. \label{epsrs} \end{eqnarray} It is not difficult to check this statement explicitly for small $k$. Under the AGT map this is equivalent to the well known fact that the 2d CFT blocks, as functions of the parameters of the intermediate state, acquire poles exactly at the degeneration points. Anticipating this relation, let us introduce the parameters \begin{eqnarray} &&u=a_{12}^2+a_{12}a_{23}+a_{23}^2,\nonumber\\ &&v=(a_{12}-a_{23})(2a_{12}+a_{23})(a_{12}+2a_{23}). \label{uvrule} \end{eqnarray} We will see in section \ref{Toda_prel} that $u$ is closely related to the dimension and $v$ to the ${\cal W}$ zero mode eigenvalue of the intermediate state. For what follows it will be crucial to note that the factors of (\ref{denom}), in terms of the newly introduced parameters, can be rewritten as \begin{eqnarray} -27\left(a_{12}^2-\epsilon_{r,s}^2\right) \left(a_{23}^2-\epsilon_{r,s}^2\right) \left((a_{12}+a_{23})^2 -\epsilon_{r,s}^2\right) =v^2-v_{r,s}(u)^2, \label{vu_a} \end{eqnarray} where \begin{eqnarray} v_{r,s}(u)=\left(3 \epsilon_{r,s}^2-u\right)\sqrt{4u-3 \epsilon_{r,s}^2}\,\,. \label{vrs} \end{eqnarray} Using (\ref{uvrule}) also in the numerator, we can expel the parameters $a_{12}$, $a_{23}$ in favor of $v$ and $u$. Moreover, for fixed $u$ one gets a polynomial dependence on $v$. Thus, to recover the partition function one needs \begin{itemize} \item{the residues at $v=v_{r,s}(u)$;} \item{the asymptotic behaviour of the partition function for a fixed value of $u$ and large $v$.} \end{itemize} \subsubsection{The residues} It follows from the remarkable identity (\ref{vu_a}) that the residues at $v=\pm v_{r,s}$ are related in a simple way to the residue with respect to the variable $a_{12}$ at $a_{12}=\epsilon_{r,s}$\footnote{This is a choice of branch of the inverse map $(v,u)\rightarrow (a_{12},a_{23})$. We could consider the poles at $a_{23}=\epsilon_{r,s}$ or $a_{12}+a_{23}=\epsilon_{r,s}$ instead.}: \begin{eqnarray} Res|_{v=\pm v_{r,s}}=\frac{27 \epsilon_{r,s}}{\epsilon_{r,s}+2a_{23}} \,Res|_{a_{12}=\epsilon_{r,s}}. \label{res_va} \end{eqnarray} To restore the $u$-dependence on the right hand side of (\ref{res_va}), due to (\ref{vu_a}) one should substitute \begin{eqnarray} a_{23}=\frac{-\epsilon_{r,s}\pm \sqrt{4u-3\epsilon_{r,s}^2}}{2}\,. \label{a23_u} \end{eqnarray} A careful examination shows that the residue of the $k=r s$ instanton term at $a_{12}=\epsilon_{r,s}$ receives a nonzero contribution only from the triple $(Y_1,\emptyset ,\emptyset )$, where $Y_1$ is the rectangular diagram of size $r\times s$. Using eqs. (\ref{Zinst}), (\ref{Z_Y}), (\ref{Z_bf}) it is straightforward to evaluate this contribution.
The result has a nice factorized form \begin{eqnarray} Res|_{a_{12}=\epsilon_{r,s}}\,Z_{r\cdot s}&=& -\prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s\epsilon_{i,j}^{-1}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s} \frac{\prod_{f=1}^{6} \left(m_f+\frac{1}{3}\,a_{23}+\frac{2}{3}\,\epsilon_{r,s} -\,\epsilon_{i,j}\right)} {(a_{23}+\epsilon_{r-i,s-j})(a_{23}+\epsilon_{i,j})}\,, \label{res_a} \end{eqnarray} where the prime over the product means that the term with $i=j=0$ should be omitted\footnote{ In the generic $SU(n)$ case with no hypermultiplets a nice formula has been found earlier \cite{Morales:unpablished} for the {\it multiple} residues at the values of the parameters $a_{1,n},a_{2,n}\ldots a_{n-1,n}$ specialized as $a_{i,j}=\epsilon_{r_i,s_j}$. Unfortunately, these residues alone are not sufficient to derive a recurrence relation for the partition function. }. \subsubsection{Large $v$ limit} Now let us consider the limit $v\rightarrow \infty $ at fixed $u$. This is equivalent to choosing \begin{eqnarray} a_{23}=\frac{\sqrt{4u-3a_{12}^2}-a_{12}}{2} \label{special_a} \end{eqnarray} and taking the large $a_{12}$ limit. Here are the first few terms of this expansion: \begin{eqnarray} a_{23}=e^{-\frac{i\pi}{3}} a_{12} -\frac{i u}{\sqrt{3} a_{12}} -\frac{i u^2}{3 \sqrt{3} a_{12}^3}-\frac{2 i u^3}{9 \sqrt{3} a_{12}^5} -\frac{5 i u^4}{27 \sqrt{3} a_{12}^7} -\frac{14 i u^5}{81 \sqrt{3} a_{12}^9}+\cdots\quad \label{special_a_exp} \end{eqnarray} I have performed the instanton calculation in this limit up to the order $x^5$. The result up to the order $x^4$ reads: \begin{eqnarray} &&\epsilon_1\epsilon_2\log Z\sim\nonumber\\ &&x \left(\frac{T_1 \epsilon}{3}-\frac{T_2}{3}-\frac{2 \epsilon^2}{9}-\frac{4 u}{27}\right) +x^2 \left(\frac{5 T_1 \epsilon}{27}-\frac{T_1^2}{54} -\frac{7 T_2}{54}-\frac{10 \epsilon^2}{81}-\frac{14 u}{243}\right)\qquad\nonumber\\ &&+x^3 \left(\frac{283 T_1 \epsilon}{2187}-\frac{40 T_1^2}{2187}-\frac{163 T_2}{2187}-\frac{566 \epsilon^2}{6561}-\frac{1948 u}{59049}\right)\nonumber\\ &&+x^4 \left(\frac{655 T_1 \epsilon}{6561}-\frac{433 T_1^2}{26244}-\frac{1321 T_2}{26244}-\frac{1310 \epsilon^2}{19683} -\frac{3931 u}{177147}\right)+\cdots\, , \label{Z_asymp} \end{eqnarray} where (and further on) for shortness I use the notation $\epsilon=\epsilon_1+\epsilon_2$. Notice that at $u=0$ the choice of VEV (\ref{special_a}), (\ref{special_a_exp}) coincides with the special vacuum investigated in \cite{Argyres:1999ty,Ashok:2015cba}. In \cite{Billo:2012st,Ashok:2015cba} an exact relation between the UV coupling and the effective IR coupling has been established. It was shown that a central role is played by the congruence subgroup $\Gamma_1(3)$ of the duality group $SL(2,\mathbb{Z})$ \cite{koblitz2012introduction,apostol2012modular} and that the relation \begin{eqnarray} x=-27 \left(\frac{\eta(q^3)}{\eta(q)}\right)^{12} \label{x_q} \end{eqnarray} between $x=\exp 2\pi i\tau_{uv}$ and $q=\exp 2\pi i\tau_{ir}$, where $\eta(q)$ is Dedekind's eta function \begin{eqnarray} \eta(q)=q^{1\over 24}\prod_{n=1}^\infty (1-q^n), \end{eqnarray} is valid. It should also come as no surprise that the unique degree $1$ modular form of $\Gamma_1(3)$, \begin{eqnarray} f_1(q)=\left(\left(\frac{\eta^3(q)}{\eta(q^3)}\right)^3 +27 \left(\frac{\eta^3(q^3)}{\eta(q)}\right)^3\right)^{1/3}, \end{eqnarray} and its "ingredients" have a role to play.
Indeed, the expression \begin{eqnarray} \epsilon_1\epsilon_2\log \left(\left(-\frac{x}{27q}\right)^ {\frac{u}{3\epsilon_1\epsilon_2}} \left(\frac{\eta(q^3)}{\eta^3(q)}\right)^{\frac{3 T_2-T_1^2}{\epsilon_1\epsilon_2}} f_1(q)^{\frac{T_1^2-3T_1\epsilon+2\epsilon^2}{2\epsilon_1\epsilon_2}} \right) \label{Z_asymp_exact} \end{eqnarray} nicely matches the expansion (\ref{Z_asymp}) up to quite high orders in $q$, and there is little doubt that the argument of the logarithm in (\ref{Z_asymp_exact}) indeed gives the large $v$ limit of the partition function exactly. \subsubsection{The recurrence relation} Using the AGT relation it is not difficult to establish that the residue of the partition function at $v=\pm v_{r,s}(u)$ is proportional to the partition function with expectation values specified as \begin{eqnarray} v\rightarrow \pm v_{r,-s}(u-3\epsilon_1\epsilon_2\,rs) \,;\qquad u\rightarrow u-3\epsilon_1\epsilon_2\,rs \,. \end{eqnarray} On the CFT side these are exactly the values corresponding to the null vector built on the degenerate intermediate state related to the choice $v=\pm v_{r,s}(u)$. Let us represent the partition function as \begin{eqnarray} Z(v,u,q)=\left(-\frac{x}{27q}\right)^ {\frac{u}{3\epsilon_1\epsilon_2}} \left(\frac{\eta(q^3)}{\eta^3(q)}\right)^{\frac{3 T_2-T_1^2}{\epsilon_1\epsilon_2}} f_1(q)^{\frac{T_1^2-3T_1\epsilon+2\epsilon^2}{2\epsilon_1\epsilon_2}} H(v,u|q). \label{Z_H} \end{eqnarray} Note that \begin{eqnarray} H(v,u|q)=1+O(v^{-1}). \end{eqnarray} Incorporating the information about the residues established above, we finally arrive at the recurrence relation \begin{eqnarray} H(v,u|q)=1+\sum_{r,s=1}^\infty\sum_{\sigma=\pm}\frac{(-27q)^{rs} R^{(\sigma)}_{r,s}(u)}{v-\sigma v_{r,s}(u)} \,H\left(\sigma v_{r,-s}(u-3\epsilon_1\epsilon_2\,rs), u-3\epsilon_1\epsilon_2\,rs\right |q),\nonumber\\ \label{recursion} \end{eqnarray} where, due to eqs. (\ref{res_va}), (\ref{res_a}), \begin{eqnarray} R_{r,s}^{(\pm)}&=&\frac{ 27\epsilon_{r,s}\left(u-\epsilon_{r,s}^2\right)} {\mp \sqrt{4u-3\epsilon_{r,s}^2}} \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s\epsilon_{i,j}^{-1}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s}\frac{\prod_{l=1}^{N_f} \left(m_l-\frac{1}{2}\,\epsilon_{2i-r,2j-s}\pm\frac{1}{6}\, \sqrt{4u-3\epsilon_{r,s}^2}\right)} {u-\epsilon_{r,s}^2+\epsilon_{i,j}\epsilon_{r-i,s-j}}\,. \label{res_v} \end{eqnarray} Using the recurrence relation I have computed the partition function up to the order $x^8$ and compared it with the result of the direct instanton calculation. The agreement was perfect. \subsection{$N_f<6$ cases} \label{nf<6} It is straightforward to decouple some of the $6$ hypermultiplets by sending their masses to infinity. Let us choose $m_{N_f+1}=\cdots=m_{6}=\Lambda$, renormalize the coupling constant as $x\rightarrow -\frac{x}{\Lambda^{6-N_f}}$ and take the large $\Lambda$ limit\footnote{The minus sign is due to a subtle difference between fundamental and anti-fundamental hypermultiplets. With this sign included we get $N_f$ anti-fundamentals in the conventions of \cite{Alday:2009aq}.}. The net effect is that instead of the recursion relation (\ref{recursion}) one obtains \begin{eqnarray} H(v,u|x)=1+\sum_{r,s=1}^\infty\sum_{\sigma=\pm}\frac{(-x)^{rs} R^{(\sigma)}_{r,s}(u)}{v-\sigma v_{r,s}(u)} \,H\left(\sigma v_{r,-s}(u-3\epsilon_1\epsilon_2\,rs), u-3\epsilon_1\epsilon_2\,rs\right |x),\nonumber\\ \label{recursion_less} \end{eqnarray} where for the residues the same formula (\ref{res_v}), with the appropriate number of hypermultiplets $N_f$, is valid.
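To make the truncation logic of the recursion explicit, here is a minimal Python transcription of (\ref{recursion_less}) with the residues (\ref{res_v}). It is a sketch under simplifying assumptions (sample numerical values of $\epsilon_{1,2}$ and of the masses, generic $u$, and $v$ kept away from the poles and from the zeros of the denominators), not the code used for the checks quoted above; complex arithmetic is used since $\sqrt{4u-3\epsilon_{r,s}^2}$ may be imaginary.
\begin{verbatim}
import cmath

e1, e2 = 1.0, 1.3                # sample Omega-background parameters
masses = [0.3, 0.7, -0.4]        # sample hypermultiplet masses (N_f = 3)

def eps(i, j):
    return i * e1 + j * e2

def v_rs(r, s, u):
    e_sq = eps(r, s) ** 2
    return (3 * e_sq - u) * cmath.sqrt(4 * u - 3 * e_sq)

def residue(r, s, u, sig):
    """R^{(sig)}_{r,s}(u), transcribed from the residue formula above."""
    e_sq = eps(r, s) ** 2
    root = cmath.sqrt(4 * u - 3 * e_sq)
    res = 27 * eps(r, s) * (u - e_sq) / (-sig * root)
    for i in range(1 - r, r + 1):
        for j in range(1 - s, s + 1):
            if (i, j) != (0, 0):                 # primed product
                res /= eps(i, j)
    for i in range(1, r + 1):
        for j in range(1, s + 1):
            num = 1.0
            for m in masses:
                num *= m - eps(2 * i - r, 2 * j - s) / 2 + sig * root / 6
            res *= num / (u - e_sq + eps(i, j) * eps(r - i, s - j))
    return res

def H(v, u, order):
    """Taylor coefficients [1, c_1, ..., c_order] of H(v, u | x)."""
    c = [1.0 + 0j] + [0j] * order
    for r in range(1, order + 1):
        for s in range(1, order // r + 1):
            u_new = u - 3 * e1 * e2 * r * s
            for sig in (1, -1):
                pref = residue(r, s, u, sig) / (v - sig * v_rs(r, s, u))
                # a pole term of weight rs only needs H up to order - rs,
                # so the truncated recursion closes in finitely many steps
                sub = H(sig * v_rs(r, -s, u_new), u_new, order - r * s)
                for n, cn in enumerate(sub):
                    c[r * s + n] += (-1) ** (r * s) * pref * cn
    return c
\end{verbatim}
The same skeleton covers the $N_f=6$ case after replacing $(-x)^{rs}$ by $(-27q)^{rs}$ and restoring the prefactor of (\ref{Z_H}).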
The relation between $Z$ and $H$ becomes much simpler. Using eq. (\ref{Z_asymp}) we immediately see that for $N_f=5$ the appropriate relation is \begin{eqnarray} Z_{N_f=5}=\exp \left(\frac{x\,(18(T_1-\epsilon)-x)} {54\epsilon_1\epsilon_2}\right)H(v,u|x), \end{eqnarray} and, for $N_f=4$: \begin{eqnarray} Z_{N_f=4}=\exp \left(\frac{x}{3\epsilon_1\epsilon_2}\right)\,H(v,u|x). \end{eqnarray} Finally, in the cases $N_f=0,1,2,3$ the functions $Z$ and $H$ simply coincide. \subsubsection{Pure $SU(3)$ theory} \label{Nf=0} This is the simplest case. It is easy to realize that the partition function is even with respect to the parameter $v$, so that the expansion (\ref{recursion}) can be organized according to the poles in the variable $v^2$: \begin{eqnarray} Z(v^2,u|x)=1+\sum_{r,s=1}^\infty\frac{(-x)^{rs} R_{r,s}(u)} {v^2-v^2_{r,s}(u)} \,Z\left(v^2_{r,-s}(u-3\epsilon_1\epsilon_2\,rs), u-3\epsilon_1\epsilon_2\,rs\right |x),\qquad \label{recursion_pure} \end{eqnarray} where \begin{eqnarray} R_{r,s}=54\epsilon_{r,s}\left(u-\epsilon_{r,s}^2\right) \left(u-3\epsilon_{r,s}^2\right) \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s\epsilon_{i,j}^{-1} \prod_{i=1}^{r}\prod_{j=1}^{s}\left( u-\epsilon_{r,s}^2+\epsilon_{i,j}\epsilon_{r-i,s-j} \right)^{-1}.\nonumber\\ \label{res_v2} \end{eqnarray} \subsection{${\cal N}=2^*$ theory} \label{rec2*} The analysis of the $SU(3)$ theory with an adjoint hypermultiplet can be carried out in a similar manner. The coefficients $Z_{\vec{Y}}$ of the instanton partition function (\ref{Zinst}) in this case are given by \begin{eqnarray} Z_{\vec{Y}}=\prod_{i,j=1}^3\frac{Z_{bf}(Y_i ,a_i-m|Y_j,a_j)} {Z_{bf}(Y_i ,a_i|Y_j,a_j)}\,, \label{Z_inst_adj} \end{eqnarray} where $m$ is the mass of the adjoint hypermultiplet. The structure of poles is the same as in the previous cases. Due to the symmetry under the permutation $a_{12}\leftrightarrow a_{23}$ the partition function, as in the case of the pure theory, is a function of $v^2$. The residue of the $k=rs$ instanton charge sector of the partition function at $v^2=v^2_{r,s}$ and fixed $u$ is related to the residue in the variable $a_{12}$ at $a_{12}=\epsilon_{r,s}$ (with $a_{23}$ fixed): \begin{eqnarray} Res|_{v^2=v^2_{r,s}}=-54\,\epsilon_{r,s} (a_{23}^2-\epsilon^2_{r,s})(2\epsilon_{r,s}a_{23}+a_{23}^2) \,Res|_{a_{12}=\epsilon_{r,s}}. \label{res_va_adj} \end{eqnarray} As in the case of fundamental hypermultiplets, the residue of the $k=r s$ instanton term at $a_{12}=\epsilon_{r,s}$ receives a nonzero contribution only from the triple of Young diagrams $(Y_1,\emptyset ,\emptyset )$ with $Y_1$ being the rectangular diagram of size $r\times s$. A direct calculation using eqs. (\ref{Z_bf}), (\ref{Z_inst_adj}) shows that \begin{eqnarray} Res|_{a_{12}=\epsilon_{r,s}}\,Z_{r\cdot s}&=& \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s \frac{\epsilon_{i,j}-m}{\epsilon_{i,j}}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s} \frac{(a_{23}+\epsilon_{r-i,s-j}+m)(a_{23}+\epsilon_{i,j}-m)} {(a_{23}+\epsilon_{r-i,s-j})(a_{23}+\epsilon_{i,j})}\,. \label{res_a_adj} \end{eqnarray} The investigation of the large $v^2$ behavior in this case is simpler than in the theory with $6$ fundamentals. Computations in the first few instanton orders show that (in this section the more conventional notation $q$ instead of $x$ for the instanton counting parameter is restored) \begin{eqnarray} \epsilon_1\epsilon_2 \log Z_{{\cal N}=2^*}= -3(m-\epsilon_1)(m-\epsilon_2) \log \left(q^{-\frac{1}{24}}\,\eta(q)\right)+O(v^{-2}). \label{Z_asymp_adj} \end{eqnarray} This is a suggestive result.
Recall that in the case of the $SU(2)$ gauge group one gets the same answer, with the only difference that the overall factor $3$ is replaced by $2$ \cite{Poghossian:2009mk}. Further steps are straightforward. Introducing the function $H$ via \begin{eqnarray} Z_{{\cal N}=2^*}=\left(q^{-\frac{1}{24}}\,\eta(q)\right)^{-\frac{3(m-\epsilon_1) (m-\epsilon_2)}{\epsilon_1\epsilon_2}}\,H(v^2,u,q) \label{ZH_adj} \end{eqnarray} we get the recurrence relation \begin{eqnarray} H(v^2,u|q)=1+\sum_{r,s=1}^\infty\frac{q^{rs} R_{r,s}(u)} {v^2-v^2_{r,s}(u)} \,H\left(v^2_{r,-s}(u-3\epsilon_1\epsilon_2\,rs), u-3\epsilon_1\epsilon_2\,rs\right |q),\qquad \label{recursion_adj} \end{eqnarray} where \begin{eqnarray} R_{r,s}&=&-54\,m \epsilon_{r,s}\left(u-\epsilon_{r,s}^2\right) \left(u-3\epsilon_{r,s}^2\right) \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s \frac{\epsilon_{i,j}-m}{\epsilon_{i,j}}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s} \frac{u-\epsilon^2_{r,s}+(\epsilon_{i,j}-m)(\epsilon_{r-i,s-j}+m)} {u-\epsilon^2_{r,s}+\epsilon_{i,j}\epsilon_{r-i,s-j}}\,. \label{res_v2_adj} \end{eqnarray} This recurrence relation has been checked against direct instanton calculation up to the order $q^{10}$. \section{Recurrence relation for ${\cal W}_3$ conformal blocks} \label{chapter2} In this section, using the AGT relations \cite{Alday:2009aq,Wyllard:2009hg,Fateev:2011hq}, the recurrence relations for the ${\cal N}=2$ SYM partition functions will be translated into recurrence relations for certain ${\cal W}_3$-algebra four-point conformal blocks on the sphere (the AGT counterpart of the $N_f=6$ theory) and one-point torus blocks (the AGT dual of ${\cal N}=2^*$). These recurrence relations generalize Alexei Zamolodchikov's famous relation established for the four-point Virasoro conformal blocks \cite{Zamolodchikov:1985ie,Zamolodchikov:1987tmf}. The recurrence relation for the Virasoro $1$-point torus block was proposed in \cite{Poghossian:2009mk} (see also \cite{Hadasz:2009db}). It should be emphasised, nevertheless, that the ${\cal W}_3$ blocks considered here are not quite general: two of the four primary fields of the sphere block, as well as the insertion of the $1$-point torus block, are special. The charge vectors defining their dimensions and ${\cal W}_3$ zero-mode eigenvalues are taken to be multiples of the highest weight of the fundamental (or anti-fundamental) representation of $SU(3)$. Unfortunately, effective methods to handle generic ${\cal W}$-blocks are (to my knowledge) still lacking. \subsection{Preliminaries on $A_2$ Toda CFT} \label{Toda_prel} These are 2d CFT theories which, besides the spin $2$ holomorphic energy momentum current ${\cal W}^{(2)}(z)\equiv T(z)$, are endowed with an additional higher spin $s=3$ current ${\cal W}^{(3)}$ \cite{Zamolodchikov:1985wn,Fateev:1987zh,Bilal:1988ze}. The Virasoro central charge is conventionally parameterised as \[ c=2+24 Q^2\,, \] where the "background charge" $Q$ is given by \[ Q=b+\frac{1}{b}\,, \] and $b$ is the dimensionless coupling constant of the Toda theory. In what follows it will be convenient to represent the roots, weights and Cartan elements of the Lie algebra $A_{2}$ as $3$-component vectors satisfying the condition that the sum of the components is zero. It is assumed also that the scalar product is the usual Kronecker one. Obviously this is equivalent to the more conventional representation of these quantities as diagonal traceless $3\times 3$ matrices, with the pairing given by the trace. In this representation the Weyl vector is given by \begin{eqnarray} \boldsymbol{\rho}=\left(1,0,-1\right).
\end{eqnarray} For further reference let us quote here the explicit expressions for the highest weight $\boldsymbol{\omega}_1$ of the first fundamental representation and for its complete set of weights $\mathbf{h}_1, \mathbf{h}_2,\mathbf{h}_3$: \begin{eqnarray} &&\boldsymbol{\omega}_1=\left(\frac{2}{3}\mathbin{\raisebox{0.5ex}{,}} -\frac{1}{3}\mathbin{\raisebox{0.5ex}{,}} -\frac{1}{3}\right),\nonumber\\ &&(\mathbf{h}_l)_i=\delta_{l,i}-1/3\,. \end{eqnarray} The primary fields $V_{\boldsymbol{\alpha}}$ (in this paper we concentrate only on the left moving holomorphic parts) are parameterized by vectors $\boldsymbol{\alpha}$ with vanishing center of mass. Their conformal weights are given by \begin{eqnarray} h_{\boldsymbol{\alpha}}=\frac{(\boldsymbol{\alpha} ,2Q\boldsymbol{\rho} -\boldsymbol{\alpha})}{2}\,. \end{eqnarray} Sometimes it is convenient to parameterize primary fields (or states) in terms of the Toda momentum vector $\mathbf{p}=Q\boldsymbol{\rho} -\boldsymbol{\alpha}$ instead of $\boldsymbol{\alpha}$. In what follows a special role is played by the fields $V_{\lambda \boldsymbol{\omega}_1}$ with dimensions \begin{eqnarray} h_{\lambda \boldsymbol{\omega}_1}=\lambda \left(Q-\frac{\lambda}{3}\right)\,. \label{dim_lambda} \end{eqnarray} For generic $\lambda$ these fields admit a single null vector at the first level. Besides the dimension, the fields are characterized also by the zero mode eigenvalue of the ${\cal W}_3$ current \begin{eqnarray} w=-\frac{i}{27} \sqrt{\frac{48}{22+5c}}\,\,v\,, \end{eqnarray} where $v$ is defined in terms of the momentum vector $\mathbf{p}$ as \begin{eqnarray} v=27 p_1p_2p_3=(p_{12}-p_{23})(p_{12}+2p_{23})(2p_{12}+p_{23}) \label{v_p} \end{eqnarray} and $p_{12}=p_1-p_2$, $p_{23}=p_2-p_3$. It is convenient to introduce also the parameter \begin{eqnarray} u=p_{12}^2+p_{23}^2+p_{12}p_{23}, \label{u_p} \end{eqnarray} so that the conformal dimension (\ref{dim_lambda}) can be rewritten as \begin{eqnarray} h=Q^2-\frac{u}{3}\,. \end{eqnarray} The pair $v,u$ characterizes primary fields more faithfully than the charge vector, since these quantities are invariant under the Weyl group action. \subsubsection{Sphere $4$-point block} The object of our interest in this section will be the conformal block \begin{eqnarray} \langle V_{\boldsymbol{\alpha}_4}(\infty) V_{\lambda_3 \boldsymbol{\omega}_1}(1)V_{\lambda_2\boldsymbol{\omega}_1}(x) V_{\boldsymbol{\alpha}_1}(0) \rangle_\mathbf{p} \sim x^{h_{\boldsymbol{\alpha}}-h_1-h_2} G(v,u|x)\,, \end{eqnarray} where $\langle \cdots\rangle_{\mathbf{p}} $ denotes the holomorphic part of the correlation function with a specified intermediate state of momentum $\mathbf{p}=Q\boldsymbol{\rho} -\boldsymbol{\alpha} $. It is assumed that the function $G(v,u|x)$ is normalized so that $G(v,u|x)=1+O(x)$ (we explicitly display only the dependence on the parameters $v,u$, which specify the intermediate state). Due to the AGT relation, the function $G(v,u|x)$ is directly connected to the instanton partition function of the $SU(3)$ gauge theory with $N_f=6$ hypermultiplets discussed earlier (see Fig.\ref{figAGT}).
Here is the map between the parameters of the CFT and gauge theory (GT) sides: \begin{eqnarray} b&=&\sqrt{\frac{\epsilon_1}{\epsilon_2}};\qquad u_{CFT}= \frac{u_{GT}}{\epsilon_1\epsilon_2};\qquad v_{CFT}=\frac{v_{GT}}{(\epsilon_1\epsilon_2)^{3/2}};\qquad \\ \label{lambda_m} \lambda^{(2)}&=&\frac{3 \epsilon-m_4-m_5-m_6}{\sqrt{\epsilon_1\epsilon_2}};\qquad \lambda^{(3)}=\frac{m_1+m_2+m_3}{\sqrt{\epsilon_1\epsilon_2}};\\ \label{p_m1} \mathbf{p}^{(1)}&=&Q\boldsymbol{\rho}-\boldsymbol{\alpha}^{(1)}\nonumber\\ \qquad &=&\left( \frac{-2m_4+m_5+m_6}{\sqrt{\epsilon_1\epsilon_2}}, \frac{-2m_5+m_4+m_6}{\sqrt{\epsilon_1\epsilon_2}}, \frac{-2m_6+m_4+m_5}{\sqrt{\epsilon_1\epsilon_2}} \right);\\ \label{p_m4} \mathbf{p}^{(4)}&=&Q\boldsymbol{\rho}-\boldsymbol{\alpha}^{(4)}\nonumber\\ \qquad &=&\left( \frac{-2m_1+m_2+m_3}{\sqrt{\epsilon_1\epsilon_2}}, \frac{-2m_2+m_1+m_3}{\sqrt{\epsilon_1\epsilon_2}}, \frac{-2m_3+m_1+m_2}{\sqrt{\epsilon_1\epsilon_2}} \right)\,. \end{eqnarray} Under this identification of parameters the relation between the gauge theory (with $N_f=6$ fundamentals) partition function and the CFT conformal block is very simple: \begin{eqnarray} Z=(1-x)^{\lambda^{(3)}\left(Q-\frac{1}{3}\,\lambda^{(2)}\right)}\,G\,. \label{AGT} \end{eqnarray} Now it is quite easy to rephrase the recurrence relation for the partition function in the CFT language. Define a function $H(v,u|q)$ through \begin{eqnarray} G(v,u|x)=\left(-\frac{x}{27q}\right)^ {\frac{u}{3}} \left(\frac{\eta(q^3)}{\eta^3(q)}\right)^{3(h_1+h_4)-6Q^2} f_1(q)^{\frac{-3(h_2+h_3)+2Q^2}{2}}H(v,u|q), \label{G_H} \end{eqnarray} where $q$ and $x$ are related as in (\ref{x_q}). Then, due to (\ref{Z_H}), (\ref{recursion}), (\ref{AGT}) and (\ref{G_H}), for $H(v,u|q)$ we get essentially the same recurrence relation as (\ref{recursion}): \begin{eqnarray} H(v,u|q)=1+\sum_{r,s=1}^\infty\sum_{\sigma=\pm}\frac{(-27q)^{rs} R^{(\sigma)}_{r,s}(u)}{v-\sigma v_{r,s}(u)} \,H\left(\sigma v_{r,-s}(u-3rs), u-3rs\right |q),\nonumber\\ \label{recursion_CFT} \end{eqnarray} where, similar to (\ref{vrs}), \begin{eqnarray} v_{r,s}(u)=(3Q_{r,s}^2-u)\sqrt{4u-3Q_{r,s}^2} \label{vrs_CFT} \end{eqnarray} with (cf. (\ref{epsrs})) \begin{eqnarray} Q_{r,s}=b r+\frac{s}{b} \label{Qrs} \end{eqnarray} and the residues are given by \begin{eqnarray} R_{r,s}^{(\pm)}&=&\frac{ 27Q_{r,s}\left(u-Q_{r,s}^2\right)} {\mp \sqrt{4u-3Q_{r,s}^2}} \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^sQ_{i,j}^{-1}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s}\frac{\prod_{l=1}^{6} \left(\mu_l-\frac{1}{2}\,Q_{2i-r,2j-s}\pm\frac{1}{6}\, \sqrt{4u-3Q_{r,s}^2}\right)} {u-Q_{r,s}^2+Q_{i,j}Q_{r-i,s-j}}\,, \label{res_v_CFT} \end{eqnarray} where the CFT counterparts of the gauge theory masses, $\mu_l=m_l /\sqrt{\epsilon_1\epsilon_2}$, are related to the parameters of the inserted fields via (\ref{lambda_m})-(\ref{p_m4}). It follows from the analog of the Kac determinant for the ${\cal W}_3$-algebra \cite{Watts:1989bn} that the conformal block truncated at the order $x^k$ should have simple poles in the variable $v$ (for $u$ fixed) located at $v=\pm v_{r,s}(u)$ with $r\ge 1$, $s\ge 1$ and $r\, s\le k$. The relation \begin{eqnarray} v^2-v^2_{r,s}(u)=0 \end{eqnarray} among the parameters $v$, $u$ is the condition for the existence of a null vector at the level $r s$. This null vector gives rise to a ${\cal W}_3$-algebra representation with parameters \begin{eqnarray} u\rightarrow u-3rs\,;\qquad v\rightarrow \pm v_{r,-s}(u-3 rs).
\label{pole_parameters} \end{eqnarray} Though we arrived at the recurrence relation starting from the gauge theory side, in fact many features of this relation are transparent from the CFT side, and it is reasonable to expect that a rigorous proof may be found by generalizing the arguments of Alexei Zamolodchikov from the Virasoro to the ${\cal W}$-algebra case. Indeed, (\ref{recursion_CFT}) states that the residues at the poles $v=\pm v_{r,s}(u)$ (\ref{vrs_CFT}) are proportional to the conformal block with internal channel parameters (\ref{pole_parameters}) corresponding to the null vector at the level $rs$. The factor $R_{r,s}^{(\pm)}$ (\ref{res_v_CFT}) also has many of the expected features. Its denominator vanishes exactly when the parameter $u$ is specified so that a second independent degenerate state arises. The factors in the numerator reflect the structure of the OPE with the degenerate field (see \cite{Fateev:2007ab}), exactly as in the case of the Virasoro block considered by Alexei Zamolodchikov. It seems more subtle to justify the presence of the $u$-independent factors $Q_{i,j}^{-1}$. Our result predicts the following large $v$ behavior of the ${\cal W}_3$ block: \begin{eqnarray} G(v,u|x)\sim \left(-\frac{x}{27q}\right)^ {\frac{u}{3}} \left(\frac{\eta(q^3)}{\eta^3(q)}\right)^{3(h_1+h_4)-6Q^2} f_1(q)^{\frac{-3(h_2+h_3)+2Q^2}{2}} +O(v^{-1}). \label{G_asymp_exact} \end{eqnarray} A good starting point to prove this relation might be the deformed Seiberg-Witten curve \cite{Poghossian:2010pn,Fucito:2011pn,Nekrasov:2013xda} or, equivalently, the quasiclassical null vector decoupling equation for ${\cal W}$-blocks derived in \cite{Poghossian:2016rzb}. \subsubsection{Torus $1$-point block} The torus $1$-point block (below $\boldsymbol{\alpha}$ is the charge parameter of the intermediate states) \begin{eqnarray} {\cal F}_{\boldsymbol{\alpha}}^{\lambda}(q)= q^{\frac{c}{24}-h_{\boldsymbol{\alpha}}}\mathrm{tr}\,_{\boldsymbol{\alpha}} \left( q^{L_0-\frac{c}{24}}V_{\lambda \boldsymbol{\omega}_1}(1)\right) \end{eqnarray} is related to the partition function of the gauge theory with the adjoint hypermultiplet via \cite{He:2012bi} \begin{eqnarray} Z_{{\cal N}=2^*}= \left(q^{-\frac{1}{24}}\,\eta(q)\right)^{-\lambda(Q-\frac{\lambda}{3}) -1}\,{\cal F}_{\boldsymbol{\alpha}}^{\lambda}(q)\,. \end{eqnarray} The parameter $\lambda $ is related to the adjoint hypermultiplet mass $m$ as \begin{eqnarray} \lambda=\frac{3m}{\sqrt{\epsilon_1\epsilon_2}} \end{eqnarray} and, as earlier, the intermediate momentum parameter $\mathbf{p}=Q\boldsymbol{\rho}-\boldsymbol{\alpha}$ is related to the VEV of the vector multiplet $\mathbf{a}$ as \begin{eqnarray} p_i=\frac{a_i}{\sqrt{\epsilon_1\epsilon_2}}; \qquad i=1,2,3\,.
\end{eqnarray} Thus, comparing with (\ref{ZH_adj}), (\ref{recursion_adj}), (\ref{res_v2_adj}), we see that the function $H(v^2,u,q)$ defined by the equality \begin{eqnarray} {\cal F}_{\boldsymbol{\alpha}}^{\lambda}(q)= \left(q^{-\frac{1}{24}}\,\eta(q)\right)^{-2}\,H(v^2,u,q)\,, \label{H_torus} \end{eqnarray} ($v$ and $u$ in terms of the momentum $p$ were defined in (\ref{v_p}), (\ref{u_p})) satisfies the recurrence relation \begin{eqnarray} H(v^2,u|q)=1+\sum_{r,s=1}^\infty\frac{q^{rs} R_{r,s}(u)} {v^2-v^2_{r,s}(u)} \,H\left(v^2_{r,-s}(u-3rs)\,, u-3rs\right |q),\qquad \label{recursion_torus} \end{eqnarray} where \begin{eqnarray} R_{r,s}&=&-18\,\lambda\, Q_{r,s}\left(u-Q_{r,s}^2\right) \left(u-3Q_{r,s}^2\right) \prod_{i=1-r}^r\sideset{}{'}\prod_{j=1-s}^s \frac{Q_{i,j}-\frac{\lambda}{3}}{Q_{i,j}}\nonumber\\ &\times &\prod_{i=1}^{r}\prod_{j=1}^{s} \frac{u-Q^2_{r,s}+(Q_{i,j}-\frac{\lambda}{3})(Q_{r-i,s-j}+\frac{\lambda}{3})} {u-Q^2_{r,s}+Q_{i,j}Q_{r-i,s-j}}\,. \label{res_v2_torus} \end{eqnarray} \section{Summary and discussion} To summarize, let me list the main results of this paper: \begin{itemize} \item{the recurrence relation (see (\ref{Z_asymp_exact}), (\ref{recursion}), (\ref{res_v})) for the instanton partition function of the ${{\cal N}=2}$ $SU(3)$ gauge theory with $6$ fundamental hypermultiplets. This recurrence relation suggests an exact, all-instanton-order formula for the partition function and the prepotential of the theory in a generalized version of the special vacuum considered in \cite{Argyres:1999ty,Ashok:2015cba};} \item{recurrence relations for smaller numbers of hypermultiplets (see section \ref{nf<6}) and for the pure $N_f=0$ theory (section \ref{Nf=0});} \item{a recurrence relation for the theory with an adjoint hypermultiplet, commonly referred to as the ${\cal N}=2^*$ theory (see section \ref{rec2*});} \item{the analogs of Zamolodchikov's recurrence relations for the $4$-point sphere ${\cal W}_3$-blocks with two arbitrary and two partially degenerate insertions (see (\ref{G_H}), (\ref{recursion_CFT}), (\ref{res_v_CFT})) and for the torus ${\cal W}_3$-block with a partially degenerate insertion (see (\ref{H_torus}), (\ref{recursion_torus}), (\ref{res_v2_torus})). In both cases the recursion formulae provide explicit expressions for the large ${\cal W}_3$ zero mode limit.} \end{itemize} Though many details of the recurrence relations are transparent either from the 4d gauge theory or from the 2d CFT point of view, a full derivation is still lacking. I hope to come back to these questions in a future publication. Of course, a generalization to the generic $SU(n)$/${\cal W}_n$ case would be an interesting development. \vspace{1cm} \section*{Acknowledgments} I am grateful to G.~Bonelli, F.~Fucito, F.~Morales, A.~Tanzini for stimulating discussions and for hospitality at the University of Rome "Tor Vergata" and SISSA, Trieste during February of this year, where the initial ideas of this paper emerged. This work was partially supported by the Armenian State Committee of Science in the framework of the research project 15T-1C308.
\section{Introduction} Let $(M, g)$ be a Riemannian manifold with Levi-Civita connection $\nabla$ and curvature tensor $R$. If $V$ is an $n$-dimensional vector space identified with the tangent space at an arbitrary point in $M$, denote by ${\mathcal R}(V)$ the linear space of all tensors of type (0,4) over $V$ having the symmetries of $R$. According to the general theory of group representations \cite{W} there exists a splitting of ${\mathcal R}(V)$ into irreducible components under the action of $O(n)$. Singer and Thorpe \cite{ST} and Nomizu \cite{N} explicitly give a decomposition of ${\mathcal R}(V)$ and describe it geometrically in terms of the well-known classes of Riemannian manifolds of constant sectional curvature, Einstein manifolds and conformally flat Riemannian manifolds. In the case when $(M,g,J)$ is an almost Hermitian manifold with almost complex structure $J$ and $V$ is a $2n$-dimensional Hermitian vector space, Tricerri and Vanhecke give in \cite{TV} a complete explicit decomposition of ${\mathcal R}(V)$ under the action of $U(n)$. In this case the splitting of ${\mathcal R}(V)$ gives many new classes of almost Hermitian manifolds with respect to $R$ and leads to the problem of their geometrical description. Following this scheme of studying Riemannian manifolds, it seems natural to investigate the linear space $\nabla {\mathcal R}(V)$ of all tensors of type (0,5) over $V$ having the symmetries of the covariant derivative $\nabla R$ of the curvature tensor $R$ of a Riemannian manifold $(M,g)$. A complete explicit decomposition of $\nabla {\mathcal R}(V)$ under the action of $O(n)$ has been given by Gray and Vanhecke in \cite{GV}. The zero space of this splitting $(\nabla R = 0)$ leads to the class of locally symmetric Riemannian manifolds, and this class corresponds to the class of locally flat Riemannian manifolds, which is the zero class $(R =0)$ in the splitting of ${\mathcal R}(V)$. However, we have to mention that not much is known about the classes of Riemannian manifolds defined with respect to $\nabla R$ $(\nabla R \not = 0)$. Conformally flat Riemannian manifolds $(M, g, d\tau)$ with metric $g$ and scalar 1-form $d\tau$ ($\tau$ being the scalar curvature of $(M,g)$) have been studied in \cite{GM}. In this paper we consider the class of Riemannian manifolds whose covariant derivative $\nabla R$ of the curvature tensor is constructed only from the metric $g$ and the scalar 1-form $d\tau$. This class corresponds to the class of Riemannian manifolds of constant sectional curvature (in the splitting of ${\mathcal R}(V)$). We introduce geometrically the class of directed Riemannian manifolds of pointwise constant relative sectional curvature and prove that these manifolds form the class of Riemannian manifolds with the special covariant derivative $\nabla R$ of the curvature tensor mentioned above. We prove that any rotational hypersurface is a directed Riemannian manifold and find all rotational hypersurfaces of pointwise constant relative sectional curvature. For the special subclass of directed Riemannian manifolds of pointwise constant relative sectional curvature whose distribution is totally umbilical we prove a structural theorem and a theorem of Schur's type. \section{Directed Riemannian manifolds of pointwise constant relative sectional curvature} Let $(M,g)$ be a Riemannian manifold with Levi-Civita connection $\nabla$.
The Riemannian curvature operator $R$ is given by $R(X,Y) = [\nabla_X , \nabla_Y ] - \nabla _{[X,Y]}$ and the corresponding curvature tensor of type (0,4) is defined by $R(X,Y,Z,U) = g(R(X,Y)Z,U)$ for arbitrary differentiable vector fields $X,Y,Z,U$. Further, the algebra of all differentiable vector fields on $M$ will be denoted by ${\mathcal X}M$. The covariant derivative $\nabla R$ of the curvature tensor $R$ has the following symmetries: $$\begin{array}{l} (\nabla _{W}R)(X,Y,Z,U) = - (\nabla _{W}R)(Y,X,Z,U) = - (\nabla _{W}R)(X,Y,U,Z);\\ [2mm] \sigma _{XYZ} (\nabla _{W}R)(X,Y,Z,U) = 0;\\ [2mm] \sigma _{WXY} (\nabla _{W}R)(X,Y,Z,U) = 0, \end{array}\leqno(1)$$ where $W,X,Y,Z,U \in {\mathcal X}M$ and $\sigma$ denotes the corresponding cyclic summation. We denote by $\tau$ the scalar curvature of the manifold $(M,g)$ and by $\pi$ the tensor $$\pi (X,Y,Z,U) = g(Y,Z)g(X,U) - g(X,Z)g(Y,U); \hspace{0.5cm} X,Y,Z,U \in {\mathcal X}M.$$ We recall that a Riemannian manifold of constant sectional curvature is characterized by the equality $$ R = {\frac{\tau }{n(n-1)}}\pi, \leqno{(2)}$$ i.e. the curvature tensor of a Riemannian manifold of constant sectional curvature is constructed only from the metric $g$. Let $\omega$ be a 1-form on the Riemannian manifold $(M, g)$. We consider the tensor $$\begin{array}{l} \Pi (\omega )(W,X,Y,Z,U) = 2\omega (W)\pi (X,Y,Z,U) + \omega (X)\pi (W,Y,Z,U)\\ [2mm] + \omega (Y)\pi (X,W,Z,U) + \omega (Z)\pi (X,Y,W,U) + \omega (U)\pi (X,Y,Z,W). \end{array}$$ It is easy to check that the tensor $\Pi (\omega )$ has the symmetries (1) of the tensor $\nabla R$. Our aim in this paper is to study the class of Riemannian manifolds characterized by the condition $$ \nabla R = {\frac{1}{2(n-1)(n+2)}}\Pi (\omega ). \leqno{(3)}$$ With respect to $\nabla R$ this class formally corresponds to the class of Riemannian manifolds of constant sectional curvature. In terms of the decomposition of $\nabla R$ \cite{GV} the condition (3) means that $\nabla R$ coincides with its component in the space $\nabla {\mathcal R}_{I}$. In this section we characterize the equality (3) geometrically. Let $E = span\{X,Y\}$ be a 2-plane in the tangent space $T_{p}M$ at a point $p$ in $M$ and $\{X,Y\}$ be an orthonormal basis of $E$. The tensor $\nabla R$ generates the 1-form $\varphi _{E}$ defined on $E$ as follows: $$ \varphi _{E}(Z) = (\nabla _{Z}R)(X,Y,Y,X), \hspace{0.5cm} Z \in E. \leqno{(4)}$$ Because of the properties (1) of $\nabla R$, the 1-form $\varphi _{E}$ does not depend on the orthonormal basis of $E$. The 1-form $\varphi _{E}$ defined on $E$ by (4) is said to be {\it a sectional 1-form}. Now let $\eta $ be a unit 1-form on $(M,g)$ and $\Delta $ be the distribution of $\eta $, i.e. $$ \Delta (p) = \{ X \in T_{p}M : \eta (X) = 0 \},\; p \in M.$$ \begin{defn} A Riemannian manifold $(M,g)$ is said to be {\it directed} if there exists a unit 1-form $\eta $ on $M$ such that i) $\varphi _{E} = k(E,p)\,\eta \vert _{E}\;$ for all $\,E \not \subset \Delta;$ ii) $\varphi _{E} = 0\;$ for all $\,E \subset \Delta$. For any 2-plane $E \not \subset \Delta$ the function $k(E,p)$ is said to be {\it a relative sectional curvature}. \end{defn} The condition i) means that all sectional 1-forms $\varphi _{E},\; E \not \subset \Delta\,$ are collinear with the restriction of the 1-form $\eta $ to the 2-plane $E$. We say that $(M,g)$ {\it is directed by the 1-form $\eta $}.
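Incidentally, the statement above that $\Pi(\omega)$ carries the symmetries (1) is also easy to confirm numerically. The following Python sketch is purely illustrative: it works at a single point, takes the metric to be the standard Euclidean scalar product and $\omega$ a random covector, and checks the antisymmetries and the two cyclic identities on random vectors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
omega_vec = rng.normal(size=n)          # the 1-form omega (via the metric)
om = lambda V: omega_vec @ V

def pi(X, Y, Z, U):                     # the tensor pi built from g
    return (Y @ Z) * (X @ U) - (X @ Z) * (Y @ U)

def Pi(W, X, Y, Z, U):                  # the tensor Pi(omega)
    return (2 * om(W) * pi(X, Y, Z, U) + om(X) * pi(W, Y, Z, U)
            + om(Y) * pi(X, W, Z, U) + om(Z) * pi(X, Y, W, U)
            + om(U) * pi(X, Y, Z, W))

W, X, Y, Z, U = rng.normal(size=(5, n))
# antisymmetries in (X, Y) and in (Z, U)
assert np.isclose(Pi(W, X, Y, Z, U), -Pi(W, Y, X, Z, U))
assert np.isclose(Pi(W, X, Y, Z, U), -Pi(W, X, Y, U, Z))
# first Bianchi-type identity: cyclic sum over (X, Y, Z)
assert np.isclose(Pi(W, X, Y, Z, U) + Pi(W, Y, Z, X, U)
                  + Pi(W, Z, X, Y, U), 0)
# second Bianchi-type identity: cyclic sum over (W, X, Y)
assert np.isclose(Pi(W, X, Y, Z, U) + Pi(X, Y, W, Z, U)
                  + Pi(Y, W, X, Z, U), 0)
\end{verbatim}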
\begin{defn} A directed Riemannian manifold $(M,g)$ is said to be {\it of pointwise constant relative sectional curvature} if the relative sectional curvature $k(E,p)$ of any 2-plane $E \not \subset \Delta$ does not depend on $E$. \end{defn} In order to find a tensor characterization for the Riemannian manifolds described in Definition 2.2 we need the following \begin{lem}\label{lem 1} Let $L$ be a tensor of type $(0,5)$ satisfying the following equalities \begin{itemize} \item[i)] $L(W,X,Y,Z,U) = -L(W,Y,X,Z,U) = -L(W,X,Y,U,Z);$ \vskip 2mm \item[ii)] $\sigma _{XYZ}L(W,X,Y,Z,U) = 0;$ \vskip 2mm \item[iii)] $\sigma _{WXY}L(W,X,Y,Z,U) = 0$ \end{itemize} for all $W,X,Y,Z,U \in {\mathcal X}M$. If $L(X,X,Z,Z,X) = 0$ for arbitrary $X,Z \in {\mathcal X}M$, then $L \equiv 0.$ \end{lem} {\it Proof}. Substituting successively $X + Y$ and $X - Y$ for $X$ into the equality $$L(X,X,Z,Z,X) = 0$$ and taking into account the properties of $L$, we obtain $$ L(X,Y,Z,Z,Y) + 2L(Y,X,Z,Z,Y) = 0. \leqno{(5)}$$ This implies that $$ L(Y,X,Z,Z,Y) = L(Z,X,Y,Y,Z). \leqno{(6)}$$ Applying the condition iii) to $L(X,Y,Z,Z,Y)$ and taking into account (6), we find $$ L(X,Y,Z,Z,Y) - 2L(Y,X,Z,Z,Y) = 0.$$ The last equality combined with (5) implies $L(X,Y,Z,Z,Y) = 0$ for all $X,Y,Z \in {\mathcal X}M$. Now it follows in a standard way that $L \equiv 0$. \hfill {$\square$} We now give a tensor characterization of directed manifolds of pointwise constant relative sectional curvature. \begin{thm} Let $(M,g)$ be a Riemannian manifold directed by the unit 1-form $\eta $. Then $(M,g)$ is of pointwise constant relative sectional curvature $k(p)$ if and only if $$ \nabla R = {\frac{1}{4}}k(p)\Pi (\eta ).\leqno{(7)}$$ The function $k(p)$ satisfies the equality $$ d\tau = {\frac{(n-1)(n+2)}{2}}\, k\,\eta, \leqno{(8)}$$ where $\tau $ is the scalar curvature of the manifold. \end{thm} {\it Proof.} To prove the first implication we put $$ L = \nabla R - {\frac{1}{4}}\,k\,\Pi (\eta ).$$ Under the conditions of the theorem it is easy to check that $L (X,X,Y,Y,X) = 0$ for all $X,Y \in {\mathcal X}M$. Applying Lemma 2.3 we obtain (7). The converse is an easy verification. The equality (8) follows from (7) by two contractions. \hfill {$\square$} Theorem 2.4 implies immediately \begin{cor}\label{cor 1} Let $(M,g)$ be a directed Riemannian manifold of pointwise constant relative sectional curvature. Then $(M,g)$ is locally symmetric if and only if $d\tau = 0$. \end{cor} Considering directed Riemannian manifolds of pointwise constant relative sectional curvature $k$ and of nonconstant scalar curvature $\tau$, i.e. $d\tau \not = 0$ on $M$, we compute from (8) (up to an orientation of $\eta $) $$ k = {\frac{2\,\Vert d\tau \Vert}{(n-1)(n+2)}},\quad \eta = \frac{1}{||d\tau||}\,d \tau\,. \leqno{(9)}$$ Hence, the 1-form $\eta $ is uniquely determined by the metric $g$. \section{Examples} In this section we give examples of the manifolds introduced in the previous section among the rotational hypersurfaces. First we need some formulas. Let $(M,g)$ be a rotational hypersurface in the Euclidean space ${\bf R}^{n+1}$ with a rotational axis oriented by a unit vector $e$. We consider $M$ as a 1-parameter family of spheres $S^{n-1} (t), t \in J$, given by the equalities $$ (X - x_0(t))^{2} = r^{2}(t), \hspace{0.5cm} e(X - x_0(t)) = 0,\leqno{(10)}$$ where $x_0(t)$ and $r(t)$ are the centers and the radii of the spheres, respectively.
Further, we assume that the rotational hypersurface $M$ is also given by a vector-valued function $X(u^{1},...,u^{n-1},t)$ satisfying (10), where $\{u^{1},...,u^{n-1},t\}$ is a local coordinate system on $M$. Taking partial derivatives of (10) we find $$ (X - x_0)X_{\alpha } = 0, \hspace{5mm} eX_{\alpha } = 0; \quad \alpha = 1,...,n-1,$$ $$(X - x_0)X_{t} = rr', \hspace{0.5cm} eX_{t} = 1.$$ Then the vector $X - x_0 - rr'e \,$ is normal to $M$ at the point $X$ and we can choose the unit normal to $M$ by the equality $$ N = - {\frac{X - x_0 - rr'e}{r\sqrt {1+r'^{2}}}}.$$ Denote by $\xi $ the unit vector field tangent to $M$ and perpendicular to the parallels $S^{n-1} (t)$. Up to a sign we have $$ \xi = \sqrt {1+r'^{2}}\, e - r' N.$$ If $\nabla '$ is the standard flat connection in ${\bf R}^{n+1}$, we find the Weingarten formulas on $M$: $$ \nabla '_{x}N = {\frac{1}{r \sqrt {1+r'^2}}} x, \hspace{0.5cm} x \perp \xi;$$ $$ \nabla '_{\xi }N = {\frac{r''}{(\sqrt {1+r'^2})^3}} \xi.$$ Hence, the second fundamental tensor $h$ of $M$ has the following structure: $$ h = {\frac{1}{r \sqrt {1+r'^2}}} g - {\frac{1+r'^2+rr''}{r(\sqrt {1+r'^2})^3}}\eta \otimes \eta, \leqno{(11)} $$ where $\eta $ is the dual 1-form of the unit vector field $\xi $. Substituting $h$ from (11) into the Gauss equation, we find that the curvature tensor of any rotational hypersurface has the following form (see also \cite{GM}): $$ R = a\pi + b\Phi , \leqno{(12)} $$ where $a$ and $b$ are the functions $$ a = {\frac{1}{r^{2}(1+r'^{2})}}, \hspace{0.5cm} b = - {\frac{1+r'^{2}+rr''} {r^{2}(1+r'^{2})^{2}}} \leqno{(13)}$$ and $\Phi $ is the tensor $$ \Phi (X,Y,Z,U) = g(Y,Z)\eta (X)\eta (U) - g(X,Z)\eta (Y)\eta (U)$$ $$ +g(X,U)\eta (Y)\eta (Z) - g(Y,U)\eta (X)\eta (Z); \hspace{0.5cm} X,Y,Z,U \in {\mathcal X}M.$$ Let $\nabla $ be the Levi-Civita connection of the rotational hypersurface $(M,g)$. Applying the second Bianchi identity to (12) we obtain $$ \nabla _{x}\xi = \lambda x, \hspace{0.5cm} \lambda = {\frac{\xi (a)} {2b}}, \hspace{0.5cm} x \perp \xi; \leqno{(14)}$$ $$ (\nabla _{X} \eta )(Y) = \lambda [g(X,Y) - \eta (X)\eta (Y)], \hspace{0.5cm} X,Y \in {\mathcal X}M; \leqno{(15)}$$ $$ da = \xi (a)\,\eta = 2\lambda b\,\eta; \leqno{(16)}$$ $$ db = \xi (b)\,\eta. \leqno{(17)}$$ Taking into account (12), (15), (16) and (17), we calculate with respect to local coordinates $$ \nabla _{i}R_{jkpq} = \lambda b(2\eta _{i}\pi _{jkpq} + \eta _{j}\pi _{ikpq} + \eta _{k}\pi _{jipq} \leqno{(18)}$$ $$ + \eta _{p}\pi _{jkiq} + \eta _{q}\pi _{jkpi}) + (\xi (b) - 2b\lambda )\eta _{i}\Phi _{jkpq}.$$ If $E = span\{X,Y\}$ is an arbitrary 2-plane in $T_{p}M, p \in M$, with an orthonormal basis $\{X,Y\}$, we denote by $\gamma $ the angle between $\xi $ and $E$. Then we have $$ \cos ^2\gamma = \eta ^2(X) + \eta ^2(Y).$$ Taking into account the defining equality (4), from (18) we obtain $$ \varphi _{E} = [4\lambda b + (\xi (b) - 2b\lambda)\cos ^2\gamma ]\eta.$$ Thus, we have \begin{prop} Every rotational hypersurface is a directed Riemannian manifold. \end{prop} Now we shall find the rotational hypersurfaces of pointwise constant relative sectional curvature. As a consequence of (18), (16) and Theorem 2.4 we obtain \begin{prop} A rotational hypersurface $(M,g)$ with curvature tensor (12) is of pointwise constant relative sectional curvature iff $$ a - b = B = const. $$ \end{prop} By use of the formulas (13) we find $$ a - b = {\frac{2(1+r'^2)+rr''}{r^2(1+r'^2)^2}} = B.
\leqno{(19)}$$ Solving the differential equation (19) we obtain \begin{prop} A rotational hypersurface $(M,g)$ with meridian $t = t(r)$ is of pointwise constant relative sectional curvature iff $$ t = \int {\frac{r\sqrt{Ar^2+B}}{\sqrt{1-Ar^4-Br^2}}}dr, \hspace{0.5cm} 0 < (Ar^2+B)r^2 < 1. \leqno{(20)} $$ \end{prop} Putting $u^2 = Ar^2+B$, $m = {\frac{\sqrt {B^2+4A} - B}{2A}}$, $m' = {\frac{\sqrt {B^2+4A} + B}{2A}}$, we obtain that the meridian of the hypersurface has the equations $$ r = \sqrt {{\frac{u^2-B}{A}}}, \hspace{0.5cm} t = {\frac{1}{A}}\int {\frac{u^2}{\sqrt {(1-mu^2)(1+m'u^2)}}}du.$$ Further we consider two cases: I) $A > 0.$ Putting $u = \sqrt {{\frac{1-x^2}{m}}}, x \in (0,1)$, we find that the meridian of the rotational hypersurface has the following equations: $$ r =\sqrt {{\frac{1-x^2}{m}}-{\frac{B}{A}}}, \hspace{0.5cm} t = {\frac{-1}{Am\sqrt {m+m'}}}(J_1 - J_2), \leqno{(21)}$$ where $$ J_1 = \int {\frac {dx}{\sqrt {(1-x^2)(1-k^2x^2)}}}, \hspace{0.5cm} J_2 = \int {\frac {x^2dx}{\sqrt {(1-x^2)(1-k^2x^2)}}}, \hspace{0.5cm} (k = \sqrt {{\frac{m'}{m+m'}}} < 1)$$ are the Legendre elliptic integrals of the first and the second kind, respectively. II) $A < 0.$ Putting $u = {\frac{x}{\sqrt {-m'}}}, x \in (0,1)$, we find the equations $$ r = \sqrt {-{\frac{x^2}{m'A}}-{\frac{B}{A}}}, \hspace{0.5cm} t = -{\frac{1}{Am\sqrt {-m'}}}J_2. \leqno{(22)}$$ \section{The case of a totally umbilical distribution} Let $(M,g)$ be a Riemannian manifold with a unit vector field $\xi $. By $\eta $ and $\Delta $ we denote respectively the 1-form dual to $\xi $ and the distribution perpendicular to $\xi $. The distribution $\Delta $ is said to be {\it totally umbilical} if $$ \nabla _{x}\xi = \lambda x, \leqno{(23)}$$ where $x \in \Delta $ and $\lambda $ is a function on $M$. From (14) it follows that every rotational hypersurface has a totally umbilical distribution. If we set $\theta (X) = d\eta (\xi ,X), \hspace{0.5cm} X \in {\mathcal X}M$, then from (23) it follows that $$ (\nabla _{X}\eta )(Y) = \lambda [g(X,Y) - \eta (X)\eta (Y)] + \eta (X)\theta (Y) \leqno{(24)}$$ and $$ d\eta = \eta \wedge \theta. \leqno{(25)}$$ The last equality means that the distribution $\Delta $ is involutive. Taking into account (24), we find the Gauss formula for the distribution $\Delta $: $$ \nabla _{x}y = D_{x}y - \lambda g(x,y)\,\xi; \hspace{0.5cm} x,y \in \Delta, \leqno{(26)}$$ where $D$ is the Levi-Civita connection of the distribution $\Delta $. Next we denote by $K$ the curvature tensor of $D$ and find the Gauss equation for the distribution $\Delta $: $$ R(x,y,z,u) = K(x,y,z,u) - \lambda ^2\pi (x,y,z,u); \hspace{0.5cm} x,y,z,u \in \Delta. \leqno{(27)}$$ Taking into account (26) and (27) we calculate $$ (\nabla _{w}R)(x,y,z,u) = (D_{w}K)(x,y,z,u) + d\lambda ^2(w)\pi (x,y,z,u); \quad w,x,y,z,u \in \Delta. \leqno{(28)}$$ When $d\tau \not = 0$ on the Riemannian manifold $(M,g)$, the distribution of the 1-form $d\tau $ is said to be {\it the scalar distribution}. Now we can prove \begin{thm}\label{th 2} Let $(M,g)$ be a connected directed Riemannian manifold of pointwise constant relative sectional curvature. If the scalar distribution of the manifold is totally umbilical, then $M$ is a one-parameter family of locally symmetric submanifolds. \end{thm} {\it Proof.} Because of (25) the distribution $\Delta$ is involutive. Let $p \in M$ and $S_p $ be the maximal integral submanifold of the distribution $\Delta $ through the point $p$.
Since $R$ and $K$ satisfy the second Bianchi identity, the equality (28) implies $d\lambda ^2(w) = 0$ for all $w \in \Delta$. This means that $d\lambda ^2 = 0$ on $S_p$. Under the conditions of the theorem it follows from (7) that the restriction of $\nabla R$ onto $S_p$ is zero. Then the equality (28) implies $DK = 0$ on $S_p$, i.e. $S_p$ is a locally symmetric submanifold of $M$. \hfill {$\square$} The last question to consider is a theorem of Schur's type for the pointwise constant relative sectional curvature (9). \begin{thm}\label{th 3} Let $(M,g)$ be a directed Riemannian manifold of pointwise constant relative sectional curvature (9) and totally umbilical scalar distribution. Then the curvature function $k$ is constant on the integral submanifolds of the scalar distribution iff $\eta $ is closed ($\xi $ is geodesic). \end{thm} {\it Proof.} Writing the equality (24) in local coordinates, $$ \nabla _{i}\eta _{j} = \lambda (g_{ij} - \eta _{i}\eta _{j}) + \eta _{i}\theta _{j},$$ we find $$ \nabla _{i}\tau _{j} = \Vert d\tau \Vert _{i}\eta _{j} + \Vert d\tau \Vert \lambda (g_{ij} - \eta _{i}\eta _{j}) - \Vert d\tau \Vert \eta _{i}\theta _{j},$$ $$(dk + k \theta )\wedge \eta = 0.$$ The last equality shows that $d\ln k + \theta = 0$ on the integral submanifolds $S_{p}$ of $\Delta $. Hence, $k$ is constant on $S_{p}$ iff $\theta = 0$, i.e. $d\eta = 0$. Finally, the equalities $$ g(\nabla _{\xi }\xi ,x) + \eta (\nabla _{\xi }x) = 0;$$ $$[\xi ,x] = \nabla _{\xi }x - \lambda x;$$ $$ \theta (x) = - \eta (\nabla _{\xi }x)$$ for all $x \in \Delta$ imply that the condition $\theta = 0$ is equivalent to the condition $\nabla _{\xi }\xi = 0,$ i.e. to $\xi $ being geodesic. \hfill {$\square$} \begin{rem} In the examples given in section 3 a simple calculation shows that $$ \Vert d\tau \Vert ^2 = -{\frac{4(\tau - nB)^2 (\tau + 2B)}{(n-1)(n+2)}} + C(\tau - nB), \hspace{0.5cm} B, C = const.$$ Hence, the curvature function $k$ is constant on $S_{p}$, but it is not a global constant on $M$. Therefore Theorem 4.2 cannot be improved in this direction. It would be interesting to find examples of directed Riemannian manifolds of constant relative sectional curvature. \end{rem} \begin{rem} If $(M,g)$ is a surface with Gaussian curvature $K$ in the Euclidean space, then its sectional 1-form $\varphi $ satisfies the equality $\varphi = dK $ and consequently every surface is a directed Riemannian manifold of pointwise constant relative sectional curvature $k = \Vert dK\Vert $. Hence, the surfaces of constant relative sectional curvature are exactly the surfaces satisfying the condition $\Vert grad \,K\Vert = const$. \end{rem} \vskip 4mm The second author is partially supported by Sofia University Grant 99/2013.
\section{Introduction} The cosmic far-infrared background (CFIRB) originates from unresolved dusty star-forming galaxies at all redshifts and accounts for half of the extragalactic background light generated by galaxies. In dusty star-forming galaxies, $\sim90\%$ of the ultraviolet (UV) photons produced by recent star formation are absorbed by interstellar dust and re-emitted in the far-infrared (FIR; also known as submillimeter, hereafter submm; 100--1000 $\micron$). The FIR luminosities of galaxies are thus tracers of the star formation rate (SFR) and are complementary to UV luminosities \citep[e.g.,][]{Kennicutt98,KennicuttEvans12,MadauDickinson14}. Compared with the UV, galaxies are much less understood in the FIR/submm due to the low resolution of telescopes at these wavelengths. Despite the recent progress in resolving galaxies in the FIR/submm \citep[e.g.,][]{Casey14,Lutz14,Dunlop16,Fujimoto16,Geach16}, most of the dusty star-forming galaxies remain unresolved. Therefore, CFIRB provides a rare opportunity to study dusty star-forming galaxies below the current resolution limit. First predicted by \cite{PartridgePeebles67b} and \cite{Bond86}, CFIRB was discovered by {\em COBE}-FIRAS, which to date has provided the only absolute intensity measurement of CFIRB \citep{Puget96,Fixsen98,Hauser98,Gispert00,HauserDwek01}. Thereafter, the anisotropies of CFIRB have been measured to ever-improving accuracy by {\em Spitzer} \citep{Lagache07}, BLAST \citep{Viero09}, SPT \citep{Hall10}, {\em AKARI} \citep{Matsuura11}, ACT \citep{Hajian12}, {\em Herschel}-SPIRE \citep{Amblard11,Berta11,Viero13}, and {\em Planck}-HFI \citep{Planck11CIB, Planck13XXX}. In addition, CFIRB maps have been cross-correlated with the lensing potential observed using the cosmic microwave background \citep[CMB,][]{Planck13XVIII} and with the near-infrared background \citep[][]{Thacker15}. The CFIRB anisotropies have been interpreted mostly using phenomenological models \citep[e.g.,][]{Viero09,Amblard11,Planck11CIB,DeBernardis12,Shang12,Xia12, Addison13,Viero13,Planck13XXX}. Although these models can fit the data, they provide limited insight into the underlying galaxy evolution processes. Since galaxy evolution has been extensively studied by UV/optical surveys, it is necessary to understand whether CFIRB agrees with the current knowledge of galaxy evolution. In this work, we construct an empirical model for dusty star-forming galaxies based on recent galaxy survey results, including stellar mass functions, the star-forming main sequence, and dust attenuation. We find that, without introducing new parameters, a minimal model reproduces the observed CFIRB anisotropies and submm number counts well. Our model is the first step towards constructing a comprehensive model for UV, optical, and FIR observations, as well as building multiwavelength mock catalogues for these observations. Such a model is essential for understanding the cosmic star-formation history and for extracting the most information from multiwavelength surveys. Our approach is similar to the empirical approach adopted by \cite{BetherminDore12} and \cite{BetherminDaddi12,Bethermin13}. Our major innovations include using an $N$-body simulation and recent self-consistent compilations of stellar mass functions and the star-forming main sequence. We also adopt a minimalist approach; that is, we look for the simplest, observationally-motivated model that agrees with CFIRB observations.
In each step of our modelling, we directly use constraints from recent observations and avoid introducing new parameters or fitting the model to the data. This work is complementary to our earlier work of interpreting CFIRB using a physical gas regulator model \citep{Wu16}. This paper is organized as follows. We introduce our model in Section~\ref{sec:model} and calculate the CFIRB anisotropies in Section~\ref{sec:obs}. Section~\ref{sec:results} compares our model predictions with the observational results of {\em Planck} and {\em Herschel}. We discuss our results in Section~\ref{sec:discussions} and summarize in Section~\ref{sec:summary}. Throughout this work, we use the cosmological parameters adopted by the Bolshoi--Planck simulation (see Section~\ref{sec:BolshoiP}), the stellar population synthesis (SPS) model from \citet[][BC03]{BruzualCharlot03}, and the initial mass function (IMF) from \cite{Kroupa01}. \section{Empirical Model}\label{sec:model} We construct a model to generate the infrared (IR) spectral flux densities $S_\nu$ for a population of galaxies. Our model includes the following five steps: \begin{enumerate} \item Sampling dark matter haloes from the Bolshoi--Planck simulation (Section~\ref{sec:BolshoiP}) \item Performing abundance matching to assign stellar mass ($M_{*}$) to haloes (Section~\ref{sec:abmatch}) \item Assigning ${\rm SFR}$ to $M_{*}$ based on the star-forming main sequence (Section~\ref{sec:SFR_Ms}) \item Calculating IR luminosity ($L_{\rm IR}$) based on SFR and $M_{*}$ (Section~\ref{sec:LIR_SFR}) \item Calculating $S_\nu$ by assuming a spectral energy distribution (SED; Section~\ref{sec:SED}) \end{enumerate} Steps (ii), (iii), and (iv) are demonstrated in Figure~\ref{fig:model}. Below we describe each step in detail. \begin{figure*} \includegraphics[width=0.67\columnwidth]{plots/fit_schechter_smf.pdf} \includegraphics[width=0.67\columnwidth]{plots/SFR_Ms_Speagle.pdf} \includegraphics[width=0.67\columnwidth]{plots/LIR_SFR_Heinis.pdf} \caption[]{Key elements of our model. {Left-hand panel}: stellar mass functions from \protect\cite{Henriques15} and \protect\cite{Song16}. We fit redshift-dependent Schechter functions to the data and perform abundance matching between $M_{*}$ and $v_{\rm peak}$ (see Section~\ref{sec:abmatch} and Appendix~\ref{app:smf}). {Centre}: star-forming main sequence from \protect\cite{Speagle14}, which is used to assign ${\rm SFR}$ to $M_{*}$ (see Section~\ref{sec:SFR_Ms}). {Right-hand panel}: $L_{\rm IR}$--${\rm SFR}$ relation based on the IRX--$M_{*}$ relation from \protect\cite{Heinis14}. The dashed line corresponds to $L_{\rm IR}\propto{\rm SFR}$ (the Kennicutt relation), which overpredicts $L_{\rm IR}$ for low-mass galaxies (see Section~\ref{sec:LIR_SFR}).} \label{fig:model} \end{figure*} \subsection{Dark matter haloes from the Bolshoi--Planck simulation}\label{sec:BolshoiP} We use the public halo catalogues of the Bolshoi--Planck simulation \citep{Klypin16BolshoiP,Rodriguez-Puebla16}\footnote{\href{http://hipacc.ucsc.edu/Bolshoi/MergerTrees.html}{http://hipacc.ucsc.edu/Bolshoi/MergerTrees.html}}, which is based on a Lambda cold dark matter cosmology consistent with the {\em Planck} 2013 results \citep{Planck13cosmo}: $\Omega_\Lambda$ = 0.693; $\Omega_{\rm M}$ = 0.307; $\Omega_{\rm b}$ = 0.048; $h$ = 0.678; $n_{\rm s}$ = 0.96; and $\sigma_8$ = 0.823. The simulation has a box size of 250 $h^{-1}$Mpc and a mass resolution of $1.5\times10^{8} h^{-1}\rm M_\odot$.
The simulation is processed with the {\sc rockstar} halo finder \citep{Behroozi13rs} and {\sc consistent trees} \citep{Behroozi13tree}. Therefore, the halo catalogues include the mapping between central haloes and subhaloes, as well as the peak circular velocity of a halo over its history ($v_{\rm peak}$). In this work, we use all haloes with $v_{\rm peak} >$ 100 km s$^{-1}$ between $z=0.25$ and $5$, with a redshift interval of $\Delta z \approx 0.25$. When calculating theoretical uncertainties (see Section~\ref{sec:results}), we use 0.1\% of the haloes in the simulation ($\sim$ 6000 haloes in the $z=0.25$ snapshot) to reduce the computational cost.

\subsection{Stellar mass from abundance matching}\label{sec:abmatch}

To assign a stellar mass to each halo, we perform abundance matching between $v_{\rm peak}$ and observed stellar mass functions. The basic concept of abundance matching is to assign higher stellar masses to more massive haloes based on the number density, either monotonically or with some scatter \citep[e.g.,][]{ValeOstriker04,Shankar06,Behroozi13,Moster13}. Instead of halo mass, we use $v_{\rm peak}$, which is less affected by mass stripping and better correlated with stellar mass \citep[e.g.,][]{NagaiKravtsov05, Conroy06, Wang06, WetzelWhite10,Reddick13}.

First, we collect observed stellar mass functions from the literature. For $z\leq3$, we use the recent compilation of stellar mass functions by \citet[][see their figures 2 and A1]{Henriques15}\footnote{The data sets are publicly available at \href{http://galformod.mpa-garching.mpg.de/public/LGalaxies/figures_and_data.php}{http://galformod.mpa-garching.mpg.de/public/LGalaxies/figures$\_$and$\_$data.php}.}, which are calibrated with the {\em Planck} cosmology. Following \cite{Henriques15}, we add $\Delta M_{*}= 0.14$ to convert to the BC03 SPS model. For $z \ge 4$, we use the stellar mass functions by \citet[][see their table 2]{Song16}, which are derived from the rest-frame UV observations from CANDELS, GOODS, and HUDF, based on the BC03 SPS model.

Second, we fit the stellar mass functions using redshift-dependent Schechter functions (see Appendix~\ref{app:smf}). For $ 0 \leq z\leq 3.5$, we use a double Schechter function with constant faint-end slopes; for $ 3.5 < z \leq 6$, we use a single Schechter function with a time-dependent slope. Using the fitting functions presented in Appendix~\ref{app:smf}, we are able to interpolate smoothly between redshifts. The left-hand panel of Figure~\ref{fig:model} shows the data points and the fitting functions. Although we fit the stellar mass function out to $z=6$, we only use galaxies at $z\leq5$ in our calculations.

Third, we perform abundance matching between the stellar mass functions and the $v_{\rm peak}$ of haloes, assuming a scatter of 0.2 dex \citep[e.g.,][]{Reddick13}. In the calculation, the input stellar mass function is first deconvolved with the scatter, and then the deconvolved stellar mass function is used to assign $M_{*}$ to $v_{\rm peak}$ monotonically. We use the code provided by Y.-Y. Mao\footnote{\href{https://bitbucket.org/yymao/abundancematching}{https://bitbucket.org/yymao/abundancematching}}, which follows the implementation in \cite{Behroozi10,Behroozi13}. With this step, a stellar mass is assigned to each halo.
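To make this step concrete, the following Python sketch shows a bare-bones version of the matching: haloes are ranked by $v_{\rm peak}$, cumulative number densities are computed for the simulation volume, and $\log_{10}M_{*}$ is assigned by inverting a cumulative Schechter function. The Schechter parameters and the toy halo sample are placeholders rather than our fitted values, and the deconvolution of the stellar mass function with the scatter (handled in our calculation by the code of Y.-Y. Mao) is skipped here; the 0.2 dex scatter is simply added after the monotonic matching.
\begin{verbatim}
import numpy as np

def schechter_cumnd(logMs, phi_star=1e-3, logM_star=10.8, alpha=-1.3):
    # cumulative number density n(>M*) [Mpc^-3] of a single Schechter
    # function, integrated numerically (toy parameters)
    grid = np.linspace(logMs, 13.0, 2000)
    x = 10.0**(grid - logM_star)
    dn = np.log(10.0) * phi_star * x**(alpha + 1.0) * np.exp(-x)
    return np.trapz(dn, grid)

def abundance_match(vpeak, volume, rng, scatter_dex=0.2):
    # rank haloes by v_peak (most massive first) and convert the rank
    # to a cumulative number density within the box
    order = np.argsort(vpeak)[::-1]
    ranks = np.empty(len(vpeak), dtype=int)
    ranks[order] = np.arange(len(vpeak))
    n_cum = (ranks + 1.0) / volume
    # invert n(>M*) = n_cum on a table of log10(M*)
    logMs_tab = np.linspace(8.0, 12.5, 300)
    n_tab = np.array([schechter_cumnd(m) for m in logMs_tab])
    logMs = np.interp(n_cum, n_tab[::-1], logMs_tab[::-1])
    # add log-normal scatter (the deconvolution step is omitted)
    return logMs + rng.normal(0.0, scatter_dex, size=len(vpeak))

rng = np.random.default_rng(42)
vpeak = rng.lognormal(np.log(180.0), 0.4, size=10000)  # km/s, toy sample
logMs = abundance_match(vpeak, volume=(250.0/0.678)**3, rng=rng)
\end{verbatim}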
\subsection{SFR from the star-forming main sequence}\label{sec:SFR_Ms}

We assign an SFR to each $M_{*}$ based on the star-forming main sequence compiled by \cite{Speagle14}:
\begin{equation}\begin{aligned}
\log_{10} {\rm SFR}(M_{*}, t) =& (0.84 - 0.026 \times t) \log_{10} M_{*} \\
& - (6.51- 0.11 \times t) \ ,
\end{aligned}\end{equation}
where $t$ is the age of the universe in Gyr. This relation is shown in the central panel of Figure~\ref{fig:model}. The compilation of \cite{Speagle14} is based on the Kroupa IMF, the BC03 SPS model, and the cosmological parameters $\Omega_\Lambda$ = 0.7, $\Omega_{\rm M}$ = 0.3, and $h$ = 0.7. This cosmology is slightly different from our choice; however, these authors stated that the effect of cosmology is negligible for the main-sequence calibration.

In our calculation, for each $\log_{10}M_{*}$, $\log_{10}{\rm SFR}$ is drawn from a normal distribution with the mean given by the equation above and a scatter of 0.3 dex. We note that \cite{Speagle14} have shown that the intrinsic scatter (deconvolved with the evolution in a redshift bin) and the true scatter (excluding observational uncertainties) of the main sequence are 0.3 and 0.2 dex, respectively. We find that a scatter of 0.2 dex produces number counts and shot noise that are too low (see Section~\ref{sec:results}). In the central panel of Figure~\ref{fig:model}, we show a scatter of 0.3 dex around the mean relation at $z=1$.

\subsection{Infrared luminosity from SFR and stellar mass}\label{sec:LIR_SFR}

To calculate $L_{\rm IR}$, it is commonly assumed that $L_{\rm IR} \propto {\rm SFR}$ \citep[the Kennicutt relation; ][]{Kennicutt98,KennicuttEvans12}. However, this relation is known to break down for low-mass galaxies, which tend to have lower dust content, lower attenuation, and lower $L_{\rm IR}$ \citep[e.g.,][]{Pannella09,GarnBest10,Buat12,Hayward14}. One way to improve upon the Kennicutt relation is to assume that the photons produced by star formation are split into UV and IR,
\begin{equation}
{\rm SFR} = K_{\rm UV} L_{\rm UV} + K_{\rm IR} L_{\rm IR} \ ,
\end{equation}
and then use a relation between $L_{\rm IR}$ and $L_{\rm UV}$ \citep[e.g.,][]{Bernhard14}. The logarithm of the ratio between $L_{\rm IR}$ and $L_{\rm UV}$ is commonly referred to as the IR excess (IRX),
\begin{equation}
{\rm IRX} = \log_{10}\left(\frac{L_{\rm IR}}{L_{\rm UV}}\right)\ ,
\end{equation}
and has been calibrated observationally. Given the two equations above, we can solve for $L_{\rm IR}$:
\begin{equation}
L_{\rm IR} = \frac{{\rm SFR}}{K_{\rm IR} + K_{\rm UV} 10^{-{\rm IRX}(M_{*})}} \ .
\end{equation}
We use $K_{\rm UV} = 1.71\times10^{-10}$ and $K_{\rm IR} = 1.49\times10^{-10}$ from \cite{KennicuttEvans12}, based on the Kroupa IMF.

\cite{Heinis14} calibrated the IRX--stellar mass relation based on rest-frame UV-selected galaxies at $z\sim$ 1.5, 3, and 4 in the COSMOS field observed with {\em Herschel}-SPIRE (part of the HerMES program). They provided the fitting function
\begin{equation}
{\rm IRX}(M_{*}) = \alpha \log_{10} \left( \frac{M_{*}}{10^{10.35}\rm M_\odot} \right) + {\rm IRX}_0 \ ,
\end{equation}
where ${\rm IRX}_0 = 1.32$ and $\alpha=0.72$. We adopt $\alpha=1.5$, which agrees better with the CFIRB amplitudes and is still consistent with their observations (see below).
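For illustration, the following Python sketch chains the two relations above: it draws an SFR from the main sequence at a given redshift and converts it to $L_{\rm IR}$ through ${\rm IRX}(M_{*})$. The flat-$\Lambda$CDM age function is a simplified stand-in for the exact one used in our pipeline, the example masses are arbitrary, and the units follow the \cite{KennicuttEvans12} calibration (SFR in $\rm M_\odot\,yr^{-1}$, luminosities in $\rm L_\odot$).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

K_UV, K_IR = 1.71e-10, 1.49e-10  # Kennicutt & Evans (2012), Kroupa IMF
IRX0, ALPHA = 1.32, 1.5          # Heinis et al. (2014); alpha=1.5 adopted

def age_gyr(z, h=0.678, Om=0.307):
    # age of a flat LCDM universe at redshift z, in Gyr
    H0 = h * 100.0 / 3.086e19    # s^-1
    E = lambda zp: np.sqrt(Om*(1+zp)**3 + 1.0 - Om)
    t, _ = quad(lambda zp: 1.0/((1+zp)*E(zp)), z, np.inf)
    return t / H0 / 3.156e16     # seconds -> Gyr

def sfr_main_sequence(logMs, z, rng, scatter_dex=0.3):
    # Speagle et al. (2014) main sequence plus log-normal scatter
    t = age_gyr(z)
    mean = (0.84 - 0.026*t)*logMs - (6.51 - 0.11*t)
    return 10.0**(mean + rng.normal(0.0, scatter_dex, np.shape(logMs)))

def L_IR(sfr, logMs):
    # solve SFR = K_UV*L_UV + K_IR*L_IR with IRX = log10(L_IR/L_UV)
    irx = ALPHA*(logMs - 10.35) + IRX0
    return sfr / (K_IR + K_UV * 10.0**(-irx))

rng = np.random.default_rng(1)
logMs = np.array([9.0, 10.0, 11.0])       # toy masses
sfr = sfr_main_sequence(logMs, z=1.0, rng=rng)
print(L_IR(sfr, logMs))  # low masses fall below the Kennicutt relation
\end{verbatim}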
The right-hand panel of Figure~\ref{fig:model} demonstrates the $L_{\rm IR}$--${\rm SFR}$ relation with this ${\rm IRX}(M_{*})$, which produces lower $L_{\rm IR}$ for low-SFR galaxies compared with the Kennicutt relation \citep[][the dashed line]{KennicuttEvans12}. We show the data points from \cite{Heinis14} to demonstrate the level of observational uncertainty. In particular, we use the $M_{*}$ and ${\rm IRX}$ from their figure 3 and calculate the corresponding ${\rm SFR}$ and $L_{\rm IR}$. We note that the current observations can only constrain the brightest end, and we need to extrapolate to the faint end. As we will discuss in Section~\ref{sec:results-CFIRB}, this mass-dependent attenuation is essential for reproducing the observed CFIRB amplitudes.

\subsection{Spectral energy distribution}\label{sec:SED}

With the $L_{\rm IR}$ calculated above, we need the SED $\Theta_\nu$ to calculate the spectral flux density $S_\nu$. The spectral luminosity density is given by
\begin{equation}
L_\nu = L_{\rm IR} \Theta_\nu \ ,
\end{equation}
and $S_\nu$ at the observed frequency $\nu$ is given by
\begin{equation}
S_\nu = \frac{L_{(1+z)\nu}}{4\pi \chi^2(1+z)} \ ,
\end{equation}
where $\chi$ is the comoving distance, and $L_{(1+z)\nu}$ is evaluated at the rest-frame frequency $(1+z)\nu$.

We assume that the SED of each galaxy is given by a single-temperature modified blackbody,
\begin{equation}
\Theta_\nu \propto \nu^{\beta} B_\nu(T_{\rm d}) \ ,
\end{equation}
where $B_\nu$ is the Planck function, $T_{\rm d}$ is the dust temperature, and $\beta$ is the spectral index. The SED is normalized such that $\int {\rm d}\nu\,\Theta_\nu = 1$. We adopt $\beta = 2.1$ based on our previous work on CFIRB \citep{Wu16}, and we note that $\beta=2$ is widely used and theoretically motivated \citep{DraineLee84,MathisWhiffen89}.

To calculate the $T_{\rm d}$ of each galaxy, we adopt the relation between $T_{\rm d}$ and specific star formation rate (SSFR, ${\rm SFR}/M_{*}$) given by \cite{Magnelli14},
\begin{equation}
T_{\rm d}\,[{\rm K}] = 98 \times (1+z)^{-0.065} + 6.9 \log_{10} {\rm SSFR} \ ,
\end{equation}
and we assume a normal distribution with a scatter of 2 K around this relation (consistent with their figure 10). This relation is derived from galaxies up to $z\sim2$ from the PEP and HerMES programs of {\em Herschel} with multiwavelength observations. The stellar mass is derived from SED fitting, while the SFR is derived by combining UV and IR. These authors bin galaxies based on ${\rm SFR}$, $M_{*}$, and $z$ and calculate $T_{\rm d}$ using the stacked far-infrared flux density in each bin. They have found that the $T_{\rm d}$--SSFR relation is tighter than the $T_{\rm d}$--$L_{\rm IR}$ relation.

\section{Calculating the CFIRB angular power spectra}\label{sec:obs}

With the prediction of $S_\nu$ for each halo in the catalogues, we proceed to compute the CFIRB angular power spectra. The formalism presented below is motivated by the analytical halo model presented in \cite{Shang12}, and we make various generalizations and adjustments for our sampling approach. Since we use subhaloes from an $N$-body simulation, we expect our approach to be more accurate than a purely analytical calculation.

The CFIRB auto angular power spectrum is given by the sum of the two-halo term, the one-halo term, and the shot noise:
\begin{equation}
C^{\nu}_\ell = C^{\nu, \rm 2h}_\ell + C^{\nu, \rm 1h}_\ell + C^{\nu, \rm shot}_\ell \ .
\end{equation}
Here we present the equations for a single frequency; the generalization to two-frequency cross-spectra is straightforward.

The two-halo term corresponds to the contribution from two galaxies in distinct haloes and is given by
\begin{equation}
C^{\nu, \rm 2h}_\ell = \int \chi^2 {\rm d}\chi F^2_\nu(z) P_{\rm lin}\left(k=\frac{\ell}{\chi}, z\right) \ ,
\end{equation}
where $P_{\rm lin}(k,z)$ is the linear matter power spectrum calculated with CAMB \citep{Lewis00}, and
\begin{equation} \label{eq:K_nu_z}
F_\nu(z) = \int {\rm d}M \frac{{\rm d}n}{{\rm d}M} b(M) \left( S_\nu^{\rm cen} + \int {\rm d}M_{\rm s} \frac{{\rm d}N(M)}{{\rm d}M_{\rm s}} S_\nu^{\rm sat} \right) \ ,
\end{equation}
where $M$ is the mass of central haloes, ${\rm d}n/{\rm d}M$ and $b(M)$ are the mass function and halo bias of central haloes, $M_{\rm s}$ is the mass of subhaloes, and ${\rm d}N(M)/{\rm d}M_{\rm s}$ is the number of subhaloes in a central halo. In our sampling approach, the integration is replaced by the sum over all $b(M) S_\nu$, and for a satellite galaxy we use the $b(M)$ of its central halo. For $b(M)$, we use the fitting function of halo bias from \cite{Tinker10}, and we have verified that this fitting function agrees with the linear halo bias measured directly from the Bolshoi--Planck simulation.

The one-halo term corresponds to the contribution from two galaxies in the same halo and is given by
\begin{equation}
C^{\nu, \rm 1h}_\ell = \int \chi^2 {\rm d}\chi G_\nu\left(k=\ell/\chi,z\right) \ ,
\end{equation}
where
\begin{equation}\begin{aligned}
G_\nu(k, z) =& 2 \int {\rm d}M\frac{{\rm d}n}{{\rm d}M} S_\nu^{\rm cen} \left( \int {\rm d}M_{\rm s} \frac{{\rm d}N(M)}{{\rm d}M_{\rm s}} S_\nu^{\rm sat} \right)u(k, z) \\
&+ \int {\rm d}M\frac{{\rm d}n}{{\rm d}M} \left( \int {\rm d}M_{\rm s} \frac{{\rm d}N(M)}{{\rm d}M_{\rm s}} S_\nu^{\rm sat} \right)^2 u^2(k,z) \ .
\end{aligned}\end{equation}
Here, $u(k,z)$ is the density profile of dark matter haloes in Fourier space, and $u(k,z)\approx 1$ on the large scales discussed in this work. The first term corresponds to summing over the central--satellite pairs in a halo, and the second term corresponds to summing over the satellite--satellite pairs in a halo. We avoid self-pairs in calculating the second term.

The shot noise corresponds to self-pairs of galaxies and is given by
\begin{equation}
C^{\nu, \rm shot}_\ell = \int \chi^2 {\rm d}\chi \int {\rm d}S_\nu \frac{{\rm d}n}{{\rm d}S_\nu} S_\nu^2 \ ,
\end{equation}
where $S_\nu$ includes both central and satellite galaxies.

The cross-angular power spectrum between CFIRB and the CMB lensing potential is given by
\begin{equation}\begin{aligned}
C^{\phi\nu}_{\ell} = & \int_0^{\chi_*} \chi^2 {\rm d}\chi(1+z) F_\nu(z) \frac{3}{\ell^2}\Omega_{\rm M} H_0^2 \left(\frac{\chi_*-\chi}{\chi_* \chi}\right) \\
&\times P_{\rm lin}\left(k=\frac{\ell}{\chi}, z\right) \ ,
\end{aligned}\end{equation}
where $\chi_*$ is the comoving distance to the last-scattering surface, and $F_\nu(z)$ is given by Equation~\ref{eq:K_nu_z}.

\section{Comparison with observations}\label{sec:results}

\begin{figure*}
\vspace{-0.5cm}
\centerline{\includegraphics[width=\columnwidth]{plots/CL.pdf} \includegraphics[width=\columnwidth]{plots/lensing.pdf}}
\vspace{-1cm}
\caption[]{Comparison between our model (blue bands) and the CFIRB anisotropies observed by {\em Planck} (data points). {Left-hand panel}: CFIRB auto angular power spectra from \cite{Planck13XXX}. The red dashed curves show that the Kennicutt relation overproduces the large-scale amplitudes.
{Right-hand panel}: cross-angular power spectra between CFIRB and the CMB lensing potential from \cite{Planck13XVIII}. The dark and light blue bands correspond to the 68\% and 95\% intervals of theoretical uncertainties, respectively.}
\label{fig:CFIRB}
\end{figure*}

In this section we compare our model predictions with observational results.

\subsection{CFIRB anisotropies}\label{sec:results-CFIRB}

We compare our model with the CFIRB anisotropies observed by {\em Planck}:
\begin{itemize}
\item \cite{Planck13XXX} presents the CFIRB observed by {\em Planck}-HFI for an area of 2240 deg$^2$, for which {\sc HI} maps are available for removing the foreground Galactic dust emission. The primordial CMB, the Sunyaev--Zeldovich effect, and radio sources are also removed. We compare our model with the CFIRB angular power spectra for $187 \leq \ell \leq 2649$, presented in their table D.2.
\item \cite{Planck13XVIII} presents the first detection of the cross-correlation between CFIRB and the CMB lensing potential (the latter is extracted from the low-frequency bands of {\em Planck}). The CMB lensing potential is dominated by dark matter haloes between $z\approx1$ and $3$, and CFIRB is dominated by galaxies in the same redshift range; therefore, the cross-correlation between CFIRB and the CMB lensing potential directly probes the connection between FIR galaxies and dark matter haloes. In addition, compared with the auto-correlation of CFIRB, this cross-correlation is less affected by Galactic dust contamination.
\end{itemize}

Figure~\ref{fig:CFIRB} compares our model predictions with the observational results described above. We include the results at 353, 545, and 857 GHz (849, 550, and 350 $\micron$), and we exclude 217 GHz because the CMB dominates this band at all angular scales. In all calculations, we apply the colour-correction factors and flux cuts of \citet[][see their section 5.3 and table 1]{Planck13XXX}. The left column corresponds to the auto angular power spectra of CFIRB, $C^{\nu}_\ell$, while the right column corresponds to the cross-angular spectra between CFIRB and the CMB lensing potential, $C^{\phi\nu}_\ell$. To calculate the theoretical uncertainties, we repeat Steps (iii) to (v) in Section~\ref{sec:model} 1000 times, and we use 0.1\% of the haloes in the Bolshoi--Planck simulation to reduce the computational cost. The dark and light blue bands correspond to the 68\% and 95\% intervals of the theoretical uncertainties. As can be seen, our model captures both observational results well. We emphasize that we perform no fitting to the data, and that all the components of our model come directly from independent surveys in the UV, optical, and FIR.

The red dashed curves in the left column of Figure~\ref{fig:CFIRB} show that, if we assume the Kennicutt relation ($L_{\rm IR}\propto{\rm SFR}$) instead of the mass-dependent dust attenuation, we overpredict the large-scale amplitudes of the power spectra. The Kennicutt relation assigns overly high $L_{\rm IR}$ to low-mass galaxies, and because of the high number density of low-mass galaxies, it leads to excessive CFIRB large-scale amplitudes. In this sense, CFIRB can be used to constrain the SFR and dust content of low-mass galaxies. We note that the Kennicutt relation and the mass-dependent attenuation produce very similar small-scale auto power spectra. The reason is that the two models have very similar $L_{\rm IR}$ for massive galaxies, and the small-scale spectra are dominated by shot noise, which is contributed mostly by massive galaxies.
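To indicate how these spectra are assembled numerically, the following Python sketch evaluates the Limber integral for the two-halo term of Section~\ref{sec:obs}, $C^{\nu,\rm 2h}_\ell=\int \chi^2 {\rm d}\chi\, F^2_\nu(z) P_{\rm lin}(\ell/\chi,z)$, on a redshift grid. The emissivity $F_\nu(z)$ and the power spectrum $P_{\rm lin}$ are arbitrary toy callables here; in our actual calculation, $F_\nu$ is the $b(M)$-weighted sum of $S_\nu$ over the sampled haloes and $P_{\rm lin}$ comes from CAMB.
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

def chi_of_z(z_grid, h=0.678, Om=0.307):
    # comoving distance [Mpc] in flat LCDM, by cumulative trapezoid rule
    c = 299792.458               # km/s
    zz = np.linspace(0.0, z_grid.max(), 2000)
    invH = c / (h*100.0*np.sqrt(Om*(1+zz)**3 + 1.0 - Om))
    chi = np.concatenate([[0.0],
        np.cumsum(0.5*(invH[1:] + invH[:-1]) * np.diff(zz))])
    return np.interp(z_grid, zz, chi)

def cl_two_halo(ells, F_nu, P_lin, z_grid):
    # Limber integral, rewritten as an integral over redshift
    chi = chi_of_z(z_grid)
    dchi_dz = np.gradient(chi, z_grid)
    cls = []
    for ell in ells:
        k = ell / chi            # Limber wavenumber at each z
        integrand = chi**2 * F_nu(z_grid)**2 * P_lin(k, z_grid) * dchi_dz
        cls.append(simpson(integrand, x=z_grid))
    return np.array(cls)

# toy inputs with arbitrary normalisation, for illustration only
z = np.linspace(0.25, 5.0, 200)
F_toy = lambda zz: np.exp(-0.5*((zz - 2.0)/1.0)**2)
P_toy = lambda k, zz: 1e4 * (k/0.05)**(-1.2) / (1.0 + zz)**2
print(cl_two_halo([200, 500, 1000], F_toy, P_toy, z))
\end{verbatim}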
\subsection{Number counts}\label{sec:results-NC}

In this section, we turn to submm number counts, which are dominated by massive galaxies. We compare our model with the number counts observed by {\em Herschel}-SPIRE at 250, 350, and 500 $\micron$ (1200, 857, and 600 GHz):
\begin{itemize}
\item \cite{Bethermin12} presented the deep number counts from the HerMES survey (the COSMOS and GOODS-N fields). They performed stacked analyses based on the 24 $\micron$ sources, and they managed to extract the number counts down to $\sim$ 2 mJy.
\item \cite{Valiante16} presented the number counts from the H-ATLAS survey of an area of 161.6 deg$^2$ (the GAMA fields). At the faint end, their results agree with \cite{Bethermin12}; at the bright end, they have better statistics due to the larger survey area.
\end{itemize}

Figure~\ref{fig:NC} compares the number counts from our model with the two observational results described above. For this calculation, we use all haloes in the Bolshoi--Planck simulation to obtain enough bright galaxies. We note that the observed brightest end ($\gtrsim100$ mJy) is dominated by gravitationally lensed sources, which we do not have in our model. Our model mostly agrees with the observational results; at 250 $\micron$ (1200 GHz), however, it produces slightly higher number counts, which could result from our oversimplified assumption for the SED.

\begin{figure}
\vspace{-0.5cm}
\includegraphics[width=\columnwidth]{plots/NC.pdf}
\vspace{-0.5cm}
\caption[]{Number counts predicted from our model (colour bands) compared with the results from HerMES \protect\citep[][circles]{Bethermin12} and H-ATLAS \protect\citep[][triangles]{Valiante16}. The dark and light bands correspond to the 68\% and 95\% intervals of the theoretical uncertainties, respectively.}
\label{fig:NC}
\end{figure}

\section{Discussion}\label{sec:discussions}

In our model, we assume that all galaxies belong to the star-forming main sequence. This is a simplification, because it is known that a fraction of massive galaxies are quiescent \citep[e.g.,][]{Ilbert13,Moustakas13,Muzzin13,Tomczak14,Man16,Schreiber16}. In addition, studies have also shown that quiescent galaxies can still have significant FIR emission due to the dust heated by old stars (the so-called cirrus dust emission, e.g., \citealt{Fumagalli14,Hayward14,Narayanan15}). In \cite{Wu16}, we also found that the observed CFIRB requires substantial FIR emission from massive haloes. We have attempted to include quiescent galaxies in this work, but we find that the CFIRB data cannot distinguish between $L_{\rm IR}$ coming from star formation and from cirrus dust. When we include a fraction of quiescent galaxies with SSFR = $10^{-12} \rm yr^{-1}$ \citep[e.g.,][]{Muzzin13, Fumagalli14}, the power spectra are lowered, and we need to add cirrus dust emission to these quiescent galaxies to compensate for the lowered power. However, the fraction of quiescent galaxies and the cirrus dust emission are both highly uncertain and degenerate with each other; therefore, we do not include them in this work. Investigating the contribution from quiescent galaxies will require the modelling of old stars and cirrus dust, as well as comparisons with near-infrared observations. We will investigate this in future work.

Furthermore, it is also known that a small fraction of galaxies undergo starburst phases and have significantly higher SFR and IR luminosities \citep[e.g.,][]{Elbaz11}.
Starburst galaxies account for $\sim10\%$ of the cosmic SFR density at $z\sim2$ \citep{Rodighiero11,Sargent12} and are expected to have a negligible contribution to the CFIRB \citep[e.g.,][]{Shang12,Bethermin13}. The effect of starburst galaxies will be degenerate with that of quiescent galaxies in producing the CFIRB. Therefore, any departure from the star-forming main sequence will require constraints from multiwavelength observations, which will be explored in our future work.

In this work, we choose a minimal number of modelling steps in order to avoid degeneracies. Except for the slope of the IRX--$M_{*}$ relation, all the other parameter values are taken directly from the literature, and we neither introduce new parameters nor fit parameters to the data. In our future work, we plan to incorporate more astrophysical processes (including quiescent and starburst galaxies and realistic SEDs) into our model, combine multiwavelength observational results from UV, optical, near-IR, FIR, and radio surveys, and perform Markov chain Monte Carlo calculations to constrain model parameters.

Over the next decade, new instruments are expected to revolutionize our view of the FIR/submm sky. The Far-infrared Surveyor (Origins Space Telescope), which is currently planned by NASA, prioritizes the measurement of the cosmic SFR. The Cosmic Origins Explorer (CORE, \citealt{CORE16a}) and the ground-based CMB-S4 experiment \citep{Abazajian16} will measure CFIRB and CMB lensing to unprecedented precision. The Primordial Inflation Explorer (PIXIE, \citealt{Kogut11}) will significantly improve the accuracy of the absolute intensity measurement of CFIRB compared with {\em COBE}-FIRAS. These missions are expected to lead to a consistent picture of the cosmic star-formation history. In a companion paper, we apply a principal component approach to investigate the optimal experimental designs for constraining the cosmic star-formation history using CFIRB \citep{Wu16c}. We plan to apply the empirical approach presented in this paper to generate mock catalogues, check the consistency between models, and develop survey strategies for these observational programs.

\section{Summary}\label{sec:summary}

We present a minimal empirical model for dusty star-forming galaxies to interpret the observations of CFIRB anisotropies and submm number counts. Our model is based on the Bolshoi--Planck simulation and various results from UV/optical/IR galaxy surveys. Below we summarize our model and findings:
\begin{itemize}
\item To assign IR spectral flux densities $S_\nu$ to dark matter haloes, we model stellar mass (using abundance matching between $v_{\rm peak}$ and observed stellar mass functions), SFR (using the star-forming main sequence), $L_{\rm IR}$ (assuming a mass-dependent attenuation), and the SED (assuming a modified blackbody).
\item Given the connection between $S_\nu$ and halo mass obtained above, we apply an extended halo model to calculate the auto angular power spectra of CFIRB and the cross-angular power spectra between CFIRB and the CMB lensing potential. We find that the commonly used Kennicutt relation, $L_{\rm IR} \propto{\rm SFR}$, leads to CFIRB amplitudes that are too high. The observed CFIRB amplitudes require that low-mass galaxies have lower $L_{\rm IR}$ than expected from the Kennicutt relation. This trend has been observed previously and is related to the low dust content of low-mass galaxies.
\item Our model also produces submm number counts that agree with the observational results of {\em Herschel}.
The number counts are dominated by massive haloes, and this agreement indicates that our minimal model (star-forming main sequence only, no quiescent or starburst galaxies) is sufficient for dusty star-forming galaxies in massive haloes. We slightly overproduce the number counts at 250 $\micron$ (1200 GHz), and this may indicate that the SEDs of dust emission deviate from a simple modified blackbody.
\end{itemize}

Our results indicate that the observed CFIRB broadly agrees with the current knowledge of galaxy evolution from resolved galaxies in UV and optical surveys, under the assumption that low-mass galaxies produce IR luminosities lower than expected from the Kennicutt relation. Therefore, CFIRB provides a rare opportunity to constrain the SFR and dust production in low-mass galaxies. However, since CFIRB does not provide redshifts of galaxies, further investigations of low-mass galaxies will require the cross-correlation of CFIRB with galaxies or with the extragalactic background light observed at other wavelengths \citep[e.g.,][]{Cooray16,Serra16}.

\section*{Acknowledgements}

We thank Joanne Cohn and Martin White for helpful discussions, and we thank Yao-Yuan Mao for providing the code and assistance for the abundance matching calculation. HW\ acknowledges support from the US\ National Science Foundation (NSF) grant AST1313037. The calculations in this work were performed on the Caltech computer cluster Zwicky, which is supported by NSF MRI-R2 award number PHY-096029. OD\ acknowledges the hospitality of the Aspen Center for Physics, which is supported by NSF grant PHY-1066293. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The Bolshoi--Planck simulation was performed by Anatoly Klypin within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC) and was run on the Pleiades supercomputer at the NASA Ames Research Center.

\bibliographystyle{mnras}
\section*{Introduction}
\noindent
The ground field $\bbk$ is algebraically closed and of characteristic $0$. A commutative associative $\bbk$-algebra ${\ca}$ is a {\it Poisson algebra\/} if there is an additional anticommutative bilinear operation $\{\,\,,\,\}\!:\,\ca\times \ca \to\ca$ called a {\it Poisson bracket} such that
\[ \begin{array}{cl} \{a,bc\}=\{a,b\}c+b\{a,c\}, & \text{(the Leibniz rule)} \\ \{a,\{b,c\}\}+\{b,\{c,a\}\}+\{c,\{a,b\}\}=0 & \text{(the Jacobi identity)} \end{array} \]
for all $a,b,c\in{\ca}$. A subalgebra ${\mathcal C}\subset \ca $ is {\it Poisson-commutative} if $\{\mathcal{C},\mathcal{C}\}=0$. We also write that $\mathcal C$ is a {\sf PC}-{\it subalgebra}. The {\it Poisson centre} of ${\ca}$ is $\mathcal{ZA}=\{z\in\ca \mid \{z,a\}=0 \ \forall a\in{\ca}\}$. Two Poisson brackets on $\ca$ are said to be {\it compatible} if all their linear combinations are again Poisson brackets.

Usually, Poisson algebras occur in nature as algebras of functions on varieties (manifolds), and we only need the case where such a variety is the dual of a Lie algebra $\q$ and hence $\ca=\bbk[\q^*]=\gS(\q)$ is a polynomial ring in $\dim\q$ variables. There is a general method for constructing a ``large'' Poisson-commutative subalgebra of $\gS(\q)$ associated with a pair of compatible brackets, see e.g.~\cite[Sect.~1]{duzu}. Let $\{\,\,,\,\}'$ and $\{\,\,,\,\}''$ be compatible Poisson brackets on $\q^*$. This yields a two-parameter family of Poisson brackets $a\{\,\,,\,\}'+b\{\,\,,\,\}''$, $a,b\in\bbk$. As we are only interested in the corresponding Poisson centres, it is convenient to organise this, up to scaling, in a $1$-parameter family $\{\,\,,\,\}_t=\{\,\,,\,\}'+t\{\,\,,\,\}''$, $t\in\BP=\bbk\cup\{\infty\}$, where $t=\infty$ corresponds to the bracket $\{\,\,,\,\}''$. The {\it index\/} $\ind\{\,\,,\,\}$ of a Poisson bracket $\{\,\,,\,\}$ is defined in Section~\ref{sect:prelim}. For almost all $t\in\BP$, $\ind\{\,\,,\,\}_t$ has one and the same (minimal) value. Set $\BP_{\sf reg}=\{t\in \BP\mid \ind\{\,\,,\,\}_t \text{ is minimal}\}$ and $\BP_{\sf sing}=\BP\setminus \BP_{\sf reg}$. Let $\cz_t$ denote the Poisson centre of $(\gS(\q),\{\,\,,\,\}_t)$. The crucial fact is that the algebra $\gZ\subset \gS(\q)$ generated by $\{\cz_t\mid t\in\BP_{\sf reg}\}$ is Poisson-commutative w.r.t.\ any bracket in the family. In many cases, this construction provides a {\sf PC}-subalgebra of $\gS(\q)$ of maximal transcendence degree. A notable realisation of this scheme is the {\it argument shift method} of~\cite{mf}. It employs the Lie--Poisson bracket on $\q^*$ and a Poisson bracket $\{\,\,,\,\}_\gamma$ of degree zero associated with $\gamma\in\q^*$. Here $\{\xi,\eta\}_\gamma=\gamma([\xi,\eta])$ for $\xi,\eta\in\q$. The algebras $\gZ=\gZ_\gamma$ occurring in this approach are known nowadays as {\it Mishchenko--Fomenko subalgebras}.

Let $G$ be a connected semisimple Lie group with $\Lie(G)=\g$. In \cite{oy}, we studied compatible Poisson brackets and {\sf PC}-subalgebras related to an involution of $\g$. Our main object now is the $1$-parameter family of linear Poisson brackets on $\g^*$ related to a {\it $2$-splitting} of $\g$, i.e., a vector space sum $\g=\h\oplus\rr$, where $\h$ and $\rr$ are Lie subalgebras. This is also the point of departure for the Adler--Kostant--Symes theorems and subsequent results, see~\cite[Sect.~4.4]{AMV}, \cite[\S\,2]{t16}. But our further steps are quite different. For $x\in\g$, let $x_\rr\in\rr$ and $x_{\h}\in\h$ be the components of $x$.
Here one can contract $\g$ to either $\h\ltimes\rr^{\sf ab}$ or $\rr\ltimes \h^{\sf ab}$. Let $\{\,\,,\,\}_{0}$ and $\{\,\,,\,\}_{\infty}$ be the corresponding Poisson brackets on $\g^*$. Then
\[ \{x,y\}_0=\begin{cases} [x,y] & \text{ if } \ x,y\in \h, \\ {}[x,y]_{\rr} & \text{ if } \ x \in\h,y\in \rr, \\ \quad 0 & \text{ if } \ x,y\in \rr , \end{cases} \ \text{ and } \ \{x,y\}_\infty=\begin{cases} [x,y] & \text{ if } \ x,y\in \rr, \\ {}[x,y]_{\h} & \text{ if } \ x \in\h,y\in \rr, \\ \quad 0 & \text{ if } \ x,y\in \h. \end{cases} \]
Since $ \{\,\,,\,\}=\{\,\,,\,\}_0+\{\,\,,\,\}_\infty$ is a Poisson bracket, these two brackets are compatible, cf.~\cite[Lemma~1.1]{oy}. Consider the $1$-parameter family of Poisson brackets
\[ \{\,\,,\,\}_t=\{\,\,,\,\}_0+t\{\,\,,\,\}_\infty, \]
where $t\in \BP$. Here $\bbk^\times\subset \BP_{\sf reg}$. Note that these brackets are different from the bracket
\[ (x,y)\mapsto [x_\h,y_\h]- [x_\rr,y_\rr] \]
considered in the Adler--Kostant--Symes theory. The algebras $\g_{(0)}=\h\ltimes\rr^{\sf ab}$ and $\g_{(\infty)}=\rr\ltimes\h^{\sf ab}$ are {\it In\"on\"u--Wigner contractions} of $\g$, and a lot of information on their symmetric invariants is obtained in~\cite{contr,Y-imrn}.

Let $\gZ=\gZ_{\langle\h,\rr\rangle}$ denote the subalgebra of $\gS(\g)$ generated by all centres $\cz_t$ with $t\in \BP_{\sf reg}$. Then $\{\gZ,\gZ\} =0$ and therefore
\[ \trdeg \gZ\le \frac{1}{2}(\dim\g+\rk\g)=\bb(\g). \]
This upper bound for $\trdeg\gZ$ is attained if $\ind\{\,\,,\,\}_0=\ind\{\,\,,\,\}_{\infty}=\ind\{\,\,,\,\}=\rk\g$, i.e., $\BP=\BP_{\sf reg}$, see Theorem~\ref{thm:dim-Z}. A $2$-splitting with this property is said to be {\it non-degenerate}. We show that the $2$-splitting $\g=\h\oplus\rr$ is non-degenerate if and only if both subalgebras $\h$ and $\rr$ are {\it spherical}, see Theorem~\ref{thm:c=0} and Remark~\ref{Dima-T} for details. Therefore, we concentrate on $2$-splittings involving spherical subalgebras of $\g$. This allows us to point out many natural pairs $(\h,\rr)$ such that $\trdeg\gZ=\bb(\g)$. Furthermore, in several important cases, $\gZ$ is a polynomial algebra.

1) \ Consider the $2$-splitting $\g =\be\oplus\ut_-$, where $\be$ and $\be_-$ are two opposite Borel subalgebras, $\te=\be\cap\be_-$, and $\ut_-=[\be_-,\be_-]$. The {\sf PC}-subalgebra $\gZ=\gZ_{\langle\be,\ut_-\rangle}$ has a nice set of algebraically independent generators. Let $\{H_i \mid 1\le i\le \rk\g \}\subset\gS(\g)^{\g }$ be a set of homogeneous basic invariants and $d_i=\deg H_i$. The splitting $\g =\mathfrak b\oplus\ut_-$ leads to a bi-grading in $\gS(\g)$ and the decomposition $H_i=\sum_{j=0}^{d_i}(H_i)_{(j,d_i-j)}$. Then $\gZ_{\langle\be,\ut_-\rangle}$ is freely generated by the bi-homogeneous components $(H_i)_{(j,d_i-j)}$ with $1\le i\le \rk\g $, $1\le j \le d_i-1$ and a basis for the Cartan subalgebra $\te$, see Theorem~\ref{thm:b-n_polynomial}. It is easily seen that if ${\mathcal C}\subset\gS(\g)$ is a {\sf PC}-subalgebra and $\trdeg{\mathcal C}=\bb(\g)$, then $\mathcal C$ is complete on generic regular $G$-orbits, cf. Lemma~\ref{obvious}. Using properties of the principal nilpotent orbit in $\g\simeq\g^*$, we are able to prove that $\gZ_{\langle\be,\ut_-\rangle}$ is {\it complete} on {\bf each} regular coadjoint orbit of $G$ (Theorem~\ref{thm:b-n_complete}) and that it is a {\it maximal\/} {\sf PC}-subalgebra of $\gS(\g)$ (Theorem~\ref{max-u}).
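The following toy computation, which is not used in the sequel and in which our normalisation of the basic invariant is a matter of convenience, illustrates these notions in the smallest case. Let $\g=\mathfrak{sl}_2$ with the standard basis $\{e,h,f\}$, $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$, and take $\be=\langle e,h\rangle$, $\ut_-=\langle f\rangle$. Then
\[ \{h,e\}_0=2e,\quad \{h,f\}_0=[h,f]_{\ut_-}=-2f,\quad \{e,f\}_0=[e,f]_{\ut_-}=0, \]
while $\{e,f\}_\infty=[e,f]_{\be}=h$ and $\{h,e\}_\infty=\{h,f\}_\infty=0$. Here $\gS(\g)^\g=\bbk[H]$ with $H=h^2+4ef$ and $d_1=2$, and the bi-homogeneous decomposition is $H=(H)_{(2,0)}+(H)_{(1,1)}$ with $(H)_{(2,0)}=h^2$ and $(H)_{(1,1)}=4ef$. Hence $\gZ_{\langle\be,\ut_-\rangle}=\bbk[h,ef]$; one checks directly that $\{h,ef\}_t=0$ for all $t\in\BP$, and $\trdeg\gZ_{\langle\be,\ut_-\rangle}=2=\bb(\mathfrak{sl}_2)$.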
One can also consider a more general setting, where $\be$ is replaced with an arbitrary parabolic $\p\supset\be$, see Remark~\ref{rem:setting-p}.

2) \ Let $\sigma$ be an involution of maximal rank of $\g$, i.e., the $(-1)$-eigenspace $\g_1$ of $\sigma$ contains a Cartan subalgebra of $\g$. If $\g_0$ is the corresponding fixed-point subalgebra, then there is a Borel subalgebra $\be$ such that $\g=\be\oplus\g_0$. This $2$-splitting is non-degenerate and we show that $\gZ_{\langle\be,\g_0\rangle}$ is a polynomial algebra, see Theorem~\ref{thm:g0-b-polynomial}. At least for $\g=\sln$, this {\sf PC}-subalgebra is also maximal (Example~\ref{ex:sl-so}). It is likely that maximality holds for all simple $\g$. \\ \indent More generally, a non-degenerate $2$-splitting is associated with any involution $\sigma$ such that $\g_1\cap\g_{\sf reg}\ne \varnothing$, see Remark~\ref{rmk:any-invol}.

3) \ Consider the semisimple Lie algebra $\tilde\g=\g\times\g$ and the involution $\tau$ that permutes the summands. Here $\tilde\g_1\cap\tilde\g_{\sf reg} \ne \varnothing$ and this yields a natural non-degenerate $2$-splitting $\g\times\g=\Delta_\g\oplus\h$, which represents the famous Manin triple. The corresponding {\sf PC}-subalgebra $\gZ\subset\gS(\g\oplus\g)$ appears to be polynomial. This has a well-known counterpart over $\BR$ that involves a compact real form $\ka$ of $\g$. Namely, if $\g$ is considered as a real Lie algebra, then it has the {\it Iwasawa decomposition} $\g=\ka\oplus\rr$~\cite[Ch.\,5,\,\S 4]{t41}, where $\rr\subset \be$ is a solvable real Lie algebra. We prove that the $\BR$-algebra $\gZ_{\langle\ka,\rr\rangle}$ is also polynomial, see Section~\ref{sect:k-b}.

We refer to \cite{duzu} for generalities on Poisson varieties, Poisson tensors, symplectic leaves, etc. Our general reference for algebraic groups and Lie algebras is~\cite{t41}.

\section{Preliminaries on the coadjoint representation}
\label{sect:prelim}
\noindent
Let $Q$\/ be a connected linear algebraic group with $\Lie(Q)=\q$. Then $\gS_\bbk(\q)=\gS(\q)$ is the symmetric algebra of $\q$ over $\bbk$. It is identified with the graded algebra of polynomial functions on $\q^*$, and we also write $\bbk[\q^*]$ for it. \\ \indent Write $\q^\xi$ for the {\it stabiliser\/} in $\q$ of $\xi\in\q^*$. The {\it index of}\/ $\q$, $\ind\q$, is the minimal codimension of $Q$-orbits in $\q^*$. Equivalently, $\ind\q=\min_{\xi\in\q^*} \dim \q^\xi$. Let $\bbk(\q^*)^Q$ be the field of $Q$-invariant rational functions and $\bbk[\q^*]^Q$ the algebra of $Q$-invariant polynomial functions on $\q^*$. By the Rosenlicht theorem, one has $\ind\q=\trdeg\bbk(\q^*)^Q$. Therefore $\trdeg\bbk[\q^*]^Q\le \ind\q$. The ``magic number'' associated with $\q$ is $\bb(\q)=(\dim\q+\ind\q)/2$. Since the coadjoint orbits are even-dimensional, the magic number is an integer. If $\q$ is reductive, then $\ind\q=\rk\q$ and $\bb(\q)$ equals the dimension of a Borel subalgebra.

The Lie--Poisson bracket on $\bbk[\q^*]$ is defined on the elements of degree $1$ (i.e., on $\q$) by $\{x,y\} :=[x,y]$. The {\it Poisson centre\/} of $\gS(\q)$ is
\[ \cz\gS(\q)=\{H\in \gS(\q)\mid \{H,x\} =0 \ \ \forall x\in\q\}=\gS(\q)^\q . \]
As $Q$ is connected, we have $\gS(\q)^\q=\gS(\q)^{Q}=\bbk[\q^*]^Q$. The set of $Q$-{\it regular\/} elements of $\q^*$ is
\beq \label{eq:regul-set} \q^*_{\sf reg}=\{\eta\in\q^*\mid \dim \q^\eta=\ind\q\} . \eeq
The $Q$-orbits in $\q^*_{\sf reg}$ are also called {\it regular}. Set $\q^*_{\sf sing}=\q^*\setminus \q^*_{\sf reg}$.
We say that $\q$ has the {\sl codim}--$n$ property if $\codim \q^*_{\sf sing}\ge n$. By~\cite{ko63}, the semisimple algebras $\g$ have the {\sl codim}--$3$ property.

Let $\Omega^i$ be the $\gS(\q)$-module of differential $i$-forms on $\q^*$ and $n=\dim\q$. Then $\Omega=\bigoplus_{i=0}^n \Omega^i$ is the $\gS(\q)$-algebra of regular differential forms on $\q^*$. Likewise, $\cW=\bigoplus_{i=0}^n \cW^i$ is the graded skew-symmetric algebra of polyvector fields, which is generated by the $\gS(\q)$-module $\cW^1$ of polynomial vector fields on $\q^*$. Both algebras are free $\gS(\q)$-modules. The {\it Poisson tensor (bivector)\/} $\pi\in \operatorname{Hom}_{\gS(\q)}(\Omega^2,{\gS(\q)})$ associated with a Poisson bracket $\{\,\,,\,\}$ on $\q^*$ is defined by the equality $\pi(\textsl{d}f\wedge \textsl{d}g)=\{f,g\}$ for $f,g\in \gS(\q)$. For any $\xi\in\q^*$, $\pi(\xi)$ defines a skew-symmetric bilinear form on $T^*_\xi(\q^*)\simeq\q$. Formally, if $f,g\in\gS(\q)$, $v=\textsl{d}_\xi f$, and $u=\textsl{d}_\xi g$, then $\pi(\xi)(v,u)=\pi(\textsl{d}f\wedge \textsl{d}g)(\xi)=\{f,g\}(\xi)$. In view of the duality between differential $1$-forms and vector fields, we may regard $\pi$ as an element of $\cW^2$. Let $[[\ ,\ ]]: \cW^i\times \cW^j \to \cW^{i+j-1}$ be the Schouten bracket. The Jacobi identity for $\pi$ is equivalent to the equality $[[\pi,\pi]]=0$, see e.g.~\cite[Chapter\,1.8]{duzu}.

\begin{df} \label{def-crk} The {\it index\/} of a Poisson bracket $\{\,\,,\,\}$ on $\q^*$, denoted $\ind\{\,\,,\,\}$, is the minimal codimension of the symplectic leaves in $\q^*$. \end{df}

It is easily seen that if $\pi$ is the corresponding Poisson tensor, then \\[.4ex] \centerline{$\ind\{\,\,,\,\}=\min_{\xi\in\q^*} \dim \ker \pi(\xi)=n-\max_{\xi\in\q^*}\rk \pi(\xi)$.} \\ Recall that for a Lie algebra $\q$ and the dual space $\q^*$ equipped with the Lie--Poisson bracket $\{\,\,,\,\} $, the symplectic leaves are the coadjoint $Q$-orbits. Hence $\ind\{\,\,,\,\} =\ind\q$.

\subsection{Complete integrability on coadjoint orbits}
For $\xi\in\q^*$, let $Q{\cdot}\xi$ denote its coadjoint $Q$-orbit. If $\psi_\xi\!: T^*_\xi \q^* \to T^*_\xi(Q{\cdot}\xi)$ is the canonical projection, then $\ker\psi_\xi=\q^\xi$. Let $\pi$ be the Poisson tensor of the Lie--Poisson bracket on $\q^*$. Then $\pi(\xi)(x,y)=\xi([x,y])$ for $x,y\in\q$. The skew-symmetric form $\pi(\xi)$ is non-degenerate on $T^*_\xi(Q{\cdot}\xi)$. The algebra $\bbk[Q{\cdot}\xi]$ carries a Poisson structure inherited from $\q^*$. We have
$$\{F_1|_{Q{\cdot}\xi},F_2|_{Q{\cdot}\xi}\}=\{F_1,F_2\}|_{Q{\cdot}\xi}$$
for all $F_1,F_2\in\gS(\q)$. The coadjoint orbit $Q{\cdot}\xi$ is a smooth symplectic variety.

\begin{df} \label{com-fam} A set $\boldsymbol{F}=\{F_1,\ldots,F_m\}\subset \bbk[Q{\cdot}\xi]$ is said to be {\it a complete family in involution} if $F_1,\ldots,F_m$ are algebraically independent, $\{F_i,F_j\}=0$ for all $i,j$, and $m=\frac{1}{2}\dim (Q{\cdot}\xi)$. In the terminology of \cite[Def.~4.13]{AMV}, here $(Q{\cdot}\xi, \{\,\,,\,\}, \boldsymbol{F})$ is a {\it completely integrable system}. \end{df}

The interest in integrable systems arose from the theory of differential equations and in particular equations of motion, see e.g. \cite[Chapter~4]{AMV}. By now this theory has penetrated nearly all of mathematics and has had a definite impact on such remote fields as combinatorics and number theory. A rich interplay between Lie theory and complete integrability is well-documented, see~\cite{t16,AMV,Per}.
Applications of {\sf PC}-subalgebras of $\gS(\q)$ are among the striking examples of this interplay.

Let $\gA\subset \gS(\q)$ be a Poisson-commutative subalgebra. Then the restriction of $\gA$ to $Q{\cdot}\xi$, denoted $\gA|_{Q{\cdot}\xi}$, is Poisson-commutative for every $\xi$. We say that $\gA$ is {\it complete on\/} $Q{\cdot}\xi$ if $\gA|_{Q{\cdot}\xi}$ contains a complete family in involution. This condition is equivalent to the equality $\trdeg (\gA|_{Q{\cdot}\xi}) = \frac{1}{2}\dim (Q{\cdot}\xi)$.

\begin{lm} \label{obvious} Suppose that $\gA\subset \gS(\q)$ is Poisson-commutative, $\xi\in\q^*_{\sf reg}$, and $\dim\textsl{d}_\xi \gA=\bb(\q)$. Then $\gA$ is complete on $Q{\cdot}\xi$. \end{lm}
\begin{proof} Since $\xi$ is regular, we have $\dim\ker\psi_\xi=\ind\q$. Therefore
\[ \dim \psi_\xi(\textsl{d}_\xi \gA) \ge \bb(\q)-\ind\q=\frac{1}{2}\dim (Q{\cdot}\xi) \]
as required. \end{proof}

\section{In\"on\"u--Wigner contractions and their invariants}
\label{sect:2}
Let $\h$ be a Lie subalgebra of $\q$. Choose a complementary subspace $V$ to $\h$ in $\q$, so that $\q=\h \oplus V$ is a vector space decomposition. For any $s\in\bbk^{\times}$, define the invertible linear map $\vp_s\!: \q\to\q$ by setting $\vp_s\vert_{\h}={\mathsf{id}}$, $\vp_s\vert_{V}=s{\cdot}{\mathsf{id}}$. Then $\vp_s\vp_{s'}=\vp_{ss'}$ and $\vp_s^{-1}=\vp_{s^{-1}}$, i.e., this yields a one-parameter subgroup of ${\rm GL}(\q)$. The map $\vp_s$ defines a new Lie algebra structure $[\,\,,\,]_{(s)}$ (isomorphic to the initial one) on the same vector space $\q$ by the formula
\beq \label{eq:fi_s} [x,y]_{(s)}=\vp_s^{-1}([\vp_s(x),\vp_s(y)]). \eeq
The corresponding Poisson bracket is $\{\ ,\ \}_{(s)}$. We naturally extend $\vp_s$ to an automorphism of $\gS(\q)$. Then the centre of the Poisson algebra $(\gS(\q),\{\,\,,\,\}_{(s)})$ equals $\vp_s^{-1}(\gS(\q)^{\q})$.

The condition $[\h,\h]\subset \h$ implies that there is a limit of the brackets $[\ ,\ ]_{(s)}$ as $s$ tends to zero. The limit bracket is denoted by $[\ ,\ ]_{(0)}$ and the corresponding Lie algebra $\q_{(0)}$ is the semi-direct product $\h\ltimes V^{\sf ab}$, where $V^{\sf ab}\simeq \q/\h$ as an $\h$-module and $[V^{\sf ab},V^{\sf ab}]_{(0)}=0$. More precisely, if $x=h+v\in \q_{(0)}$ with $h\in\h$ and $v\in V$, then
\[ [h+v,h'+v']_{(0)}=[h,h']+ [h,v']_V- [h',v]_V , \]
where $z_V$ denotes the $V$-component of $z\in\q_{(0)}$. The limit algebra $\q_{(0)}$ is called an {\it In\"on\"u--Wigner} (=\,{\sf IW}) or {\it one-parameter contraction\/} of $\q$, see \cite[Ch.\,7,\S\,2.5]{t41} or~\cite[Sect.\,1]{alafe}. Below, we will repeatedly use the following

{\bf Independence principle.} {\it The {\sf IW}-contraction $\q_{(0)}$ does not depend on the initial choice of a complementary subspace $V$.} \\ Therefore, when there is no preferred choice of $V$, we write $\q_{(0)}=\h\ltimes (\q/\h)^{\sf ab}$.

By a general property of Lie algebra contractions, we have $\ind\q_{(0)}\ge \ind\q$. We need conditions on $\q$ and $\h$ under which the index of the {\sf IW}-contraction does not increase. For this reason, we switch below to the case in which $\q=\g$ is reductive and hence $G$ is a connected reductive algebraic group.

For any irreducible algebraic $G$-variety $X$, there is the notion of the {\it complexity} of $X$, denoted $c_G(X)$, see~\cite{vi86}. Namely, $c_G(X)=\dim X-\max_{x\in X}\dim B{\cdot}x$, where $B\subset G$ is a Borel subgroup. Then $X$ is said to be {\it spherical} if $c_G(X)=0$, i.e., if $B$ has a dense orbit in $X$.
In particular, for any subgroup $H\subset G$, one can consider the complexity of the homogeneous space $X=G/H$. Then $H$ (or $\h=\Lie(H)$) is said to be {\it spherical\/} if $c_G(G/H)=0$.

\begin{thm} \label{thm:c=0} Suppose that $G$ is reductive and the homogeneous space $G/H$ is quasi-affine. Then $\ind (\h\ltimes (\g/\h)^{\sf ab})=\ind \g+ 2c_G(G/H)=\rk \g+ 2c_G(G/H)$. In particular, $\ind (\h\ltimes (\g/\h)^{\sf ab})=\ind \g$ if and only if\/ $\h$ is a spherical subalgebra of\/ $\g$. \end{thm}
\begin{proof} For the affine homogeneous spaces, a proof is given in \cite[Prop.\,9.3]{p05}. Here we demonstrate that the same proof applies in the general quasi-affine setting.

Let $\h^\perp$ be the annihilator of $\h$ in the dual space $\g^*$. It is an $H$-submodule of $\g^*$ that is called the {\it coisotropy representation\/} of $H$. Here $\h^\perp$ and $\g/\h$ are dual $\h$-modules. (If $\Phi$ is a $G$-invariant bilinear form on $\g$, then one can identify $\g^*$ and $\g$ using $\Phi$, and consider $\h^\perp$ as a subspace of $\g$.) Let $\bbk(\h^\perp)^H$ denote the subfield of $H$-invariants in $\bbk(\h^\perp)$. The Ra\"\i s formula for the index of semi-direct products~\cite{rais} asserts that $\ind(\h\ltimes (\g/\h)^{\sf ab})=\trdeg\bbk(\h^\perp)^H + \ind \es$, where $\es$ is the $\h$-stabiliser of a generic point in $\h^\perp$. (Here we use the fact that $\g/\h$ and $\h^{\perp}$ are dual $H$-modules.) Since $G/H$ is quasi-affine, $\es$ is reductive \cite[Theorem\,2.2.6]{these-p}. Hence $\ind (\h\ltimes (\g/\h)^{\sf ab})=\trdeg\bbk(\h^\perp)^H+\rk\es$. Moreover, there is a formula for $c_G(G/H)$ in terms of the action $(H:\h^\perp)$. Namely, $2c_G(G/H)=\trdeg\bbk(\h^\perp)^H- \rk\g+ \rk\es$ \cite[Cor.\,2.2.9]{these-p}. Whence the conclusion. \end{proof}

\begin{rmk} \label{rem:parab-contr} If $P\subset G$ is a parabolic subgroup, then $G/P$ is {\bf not} quasi-affine. However, it is proved in~\cite[Theorem\,4.1]{alafe2} that $\ind(\p\ltimes (\g/\p)^{\sf ab})=\rk\g$. For a Borel subgroup $B$, this appeared already in~\cite[Cor.\,3.5]{alafe}. The reason is that the Ra\"\i s formula readily implies that $\ind(\p\ltimes (\g/\p)^{\sf ab})=\ind\g^e$, where $e\in \p^{\sf nil}$ is a Richardson element and $\p^{\sf nil}$ is the nilradical of $\p$. The famous {\it Elashvili conjecture} asserts that $\ind\g^e=\rk\g$ for any $e\in \g$. For the Richardson elements, a conceptual proof of the Elashvili conjecture is given in \cite{CM}. \end{rmk}

\begin{rmk} \label{Dima-T} In an earlier version of this article, we conjectured that $\ind (\h\ltimes (\g/\h)^{\sf ab})=\ind \g$ \ for {\it\bfseries any\/} spherical subalgebra $\h$. Having heard from us about this problem, D.\,Timashev informed us that, combining some results of Knop~\cite{knop}, the Elashvili conjecture, and the scheme of proof of Theorem~\ref{thm:c=0}, one can extend Theorem~\ref{thm:c=0} to {\bf arbitrary} homogeneous spaces $G/H$. This general argument is outlined below. We are grateful to Timashev for providing the necessary details.

Let $\mathsf T^*(G/H)=G\times^H\h^\perp$ be the cotangent bundle of $G/H$. The generic stabiliser for the $H$-action on $\h^\perp$ is isomorphic to a generic stabiliser for the $G$-action on $\mathsf T^*(G/H)$. Let $\es$ be such a stabiliser. (If $G/H$ is quasi-affine, then $\es$ is reductive. But this is not so in general.) A general description of $\es$ (see~\cite[Sect.\,8]{knop}) can be stated as follows. Fix a maximal torus $T\subset B$ and set $\te=\Lie(T)$.
For a generic $B$-orbit $\co$ in $G/H$, consider the parabolic subgroup $P=\{g\in G\mid g(\co)\subset\co\}\supset B$. Let $\Gamma\subset \mathfrak X(T)$ be the lattice of weights of all $B$-semi-invariants in the field $\bbk(G/H)$ and $\te_0$ the Lie algebra of $\Ker\Gamma\subset T$. The rank of $\Gamma$ is called the {\it rank\/} of $G/H$, denoted $r_G(G/H)$. Set $\ah=\te_0^\perp$, the orthogonal complement w.r.t.\ $\Phi\vert_\te$, and consider the Levi subgroup $M=Z_G(\ah)\subset G$. The weights in $\Gamma$ can be regarded as characters of $M$, and we consider the identity component of their common kernel as a subgroup of $M$, denoted $M_0$. Clearly, $M_0$ is reductive. Write $\ma_0\subset \ma$ for the Lie algebras of $M_0\subset M$. \\ \indent Let $P_-$ be the parabolic subgroup opposite to $P$ and $\p_-^{\sf nil}$ the nilradical of $\p_-=\Lie(P_-)$. Then $M_0\cap P_-$ is a parabolic subgroup of $M_0$ and Knop's description boils down to the assertion that $\es$ is the generic stabiliser for the linear action of $M_0\cap P_-$ on $\ma\cap \p_-^{\sf nil}=\ma_0\cap \p_-^{\sf nil}$. Timashev noticed that $\es\subset\ma_0$ is actually the stabiliser in $\ma_0$ of a Richardson element in $\ma_0\cap\p_-^{\sf nil}=(\ma_0\cap\p_-)^{\sf nil}$. Hence $\ind\es=\rk M_0$ by the Elashvili conjecture. By the Ra\"\i s formula, we have
\beq \label{eq:rais} \ind(\h\ltimes (\g/\h)^{\sf ab})=\trdeg\bbk(\h^\perp)^H + \ind \es . \eeq
The general theory developed in~\cite{knop,p90} implies that $\trdeg\bbk(\h^\perp)^H=2c_G(G/H)+r_G(G/H)$. The last ingredient is that, by the very construction of $M_0$, one has $\rk M_0=\rk\g- r_G(G/H)$. Gathering the above formulae, we obtain $2c_G(G/H)+\rk\g$ on the right-hand side of~\eqref{eq:rais}. \end{rmk}

Associated with the vector space sum $\g=\h\oplus V$, one has the bi-homogeneous decomposition of any homogeneous $H\in \gS(\g)$:
\[ \textstyle H=\sum_{i=0}^{d} H_{i,d-i} \ , \]
where $d=\deg H$ and $H_{i,d-i}\in \gS^i(\h)\otimes \gS^{d-i}(V)\subset \gS^{d}(\g)$. Then $(i,d-i)$ is the {\it bi-degree\/} of $H_{i,d-i}$. Let $H^\bullet$ denote the nonzero bi-homogeneous component of $H$ with maximal $V$-degree. Then $\deg_{V}\! H=\deg_{V} H^\bullet$. Similarly, $H_{\bullet}$ stands for the nonzero bi-homogeneous component of $H$ with maximal $\h$-degree, i.e., minimal $V$-degree. It is known that if $H\in \cz\gS(\g)$, then $H^\bullet\in \cz\gS(\h\ltimes V^{\sf ab})$~\cite[Prop.\,3.1]{coadj}. However, it is not always the case that $\cz\gS(\h\ltimes V^{\sf ab})$ is generated by the functions of the form $H^\bullet$ with $H\in \cz\gS(\g)$.

Let $\{H_1,\dots,H_l\}$, $l=\rk\g$, be a set of homogeneous algebraically independent generators of $\gS(\g)^\g$ and $d_i=\deg H_i$. Then $\sum_{i=1}^l d_i=\bb(\g)$.

\begin{df} \label{def:ggs0} We say that $H_1,\dots,H_l$ is an $\h$-{\it good generating system} in $\gS(\g)^\g$ (=\,$\h$-{\it {\sf g.g.s.}\/} for short) if $H_1^\bullet,\dots,H_l^\bullet$ are algebraically independent. \end{df}

The importance of {\sf g.g.s.} is readily seen in the following fundamental results.

\begin{thm}[{\cite[Theorem\,3.8]{contr}}] \label{thm:kot14} Let $H_1,\dots,H_l$ be an arbitrary set of homogeneous algebraically independent generators of\/ $\gS(\g)^\g$ and $\g=\h\oplus V$. Then
\begin{itemize}
\item[\sf (i)] \ $\sum_{j=1}^l \deg_{V}\! H_j\ge \dim V$;
\item[\sf (ii)] \ $H_1,\dots,H_l$ is an\/ $\h$-{\sf g.g.s.} if and only if\/ $\sum_{j=1}^l \deg_V \! H_j=\dim V$.
\end{itemize}
\end{thm}

Furthermore, if the contraction $\g\leadsto \g_{(0)}=\h\ltimes (\g/\h)^{\sf ab}$ has some extra properties, then the existence of an $\h$-{\sf g.g.s.} provides the generators of $\cz\gS(\g_{(0)})$. More precisely, Theorem~3.8(iii) in \cite{contr} yields the following:

\begin{thm} \label{thm:h-ggs+codim2} Suppose that $\g_{(0)}=\h\ltimes (\g/\h)^{\sf ab}$ has the {\sl codim}--$2$ property and\/ $\ind\g_{(0)}=\ind\g$. If there is an $\h$-{\sf g.g.s.} $H_1,\dots,H_l$ in $\gS(\g)^\g$, then $H_1^\bullet,\dots,H_l^\bullet$ \ freely generate $\gS(\g_{(0)})^{\g_{(0)}}$. In particular, $\gS(\g_{(0)})^{\g_{(0)}}$ is a polynomial ring. \end{thm}

\section{$2$-splittings of $\g$ and Poisson-commutative subalgebras}
\label{sect:3}
\noindent
Let $\g$ be a semisimple Lie algebra. The sum $\g=\h\oplus\rr$ is called a {\it $2$-splitting of} $\g$ if both summands are Lie subalgebras. Then $\g^*$ acquires the decomposition $\g^*=\h^*\oplus\rr^*$, where $\rr^*=\Ann\!(\h)=\h^\perp$ and $\h^*=\Ann\!(\rr)=\rr^\perp$. Given a $2$-splitting $\g=\h\oplus\rr$, one can consider two {\sf IW}-contractions. Here either subalgebra is the preferred complement to the other, so we write $\h\ltimes\rr^{\sf ab}$ and $\rr\ltimes\h^{\sf ab}$ for these contractions. The important feature of this situation is that the corresponding Poisson brackets are compatible and their non-trivial linear combinations define Lie algebras isomorphic to $\g$. If $x=x_{\h}+x_{\rr}\in \g$, then the Lie--Poisson bracket on $\g^*$ decomposes as follows:
\[ \{x,y\}=\underbrace{[x_\h,y_\h]+ [x_{\h},y_{\rr}]_{\rr} + [x_{\rr},y_{\h}]_{\rr}}_{\{x,y\}_{0}} + \underbrace{[x_{\h},y_{\rr}]_{\h} + [x_{\rr},y_{\h}]_{\h} + [x_{\rr},y_{\rr}]}_{\{x,y\}_{\infty}}. \]
Here the bracket $\{\,\,,\,\}_{0}$ (resp. $\{\,\,,\,\}_{\infty}$) corresponds to $\g_{(0)}=\h\ltimes\rr^{\sf ab}$ (resp. $\g_{(\infty)}=\rr\ltimes\h^{\sf ab}$). Using this decomposition, we introduce a $1$-parameter family of Poisson brackets on $\g^*$:
\[ \{\,\,,\,\}_{t}=\{\,\,,\,\}_{0}+t\{\,\,,\,\}_{\infty}, \]
where $t\in \BP=\bbk\cup\{\infty\}$ and we agree that $\{\,\,,\,\}_{\infty}$ is the Poisson bracket corresponding to $t=\infty$. It is easily seen that $\{\,\,,\,\}_{t}$ with $t\in \bbk^\times$ is given by the map $\vp_t$, see Eq.~\eqref{eq:fi_s}. By \cite[Lemma~1.2]{oy}, all these brackets are compatible. Write $\g_{(t)}$ for the Lie algebra corresponding to $ \{\,\,,\,\}_{t}$. Of course, we merely write $\g$ in place of $\g_{(1)}$. All Lie algebras $\g_{(t)}$ have the same underlying vector space $\g$.

{\bf Convention~1.} We often identify $\g$ with $\g^*$ via the Killing form on $\g$. We also think of $\g^*$ as the dual of any algebra $\g_{(t)}$ and usually omit the subscript `$(t)$' in $\g_{(t)}^*$. However, if $\xi\in\g^*$, then the stabiliser of $\xi$ in the Lie algebra $\g_{(t)}$ (i.e., with respect to the coadjoint representation of $\g_{(t)}$) is denoted by $\g_{(t)}^\xi$.

Let $\pi_t$ be the Poisson tensor for $\{\,\,,\,\}_{t}$ and $\pi_t(\xi)$ the skew-symmetric bilinear form on $\g\simeq T^*_\xi(\g^*)$ corresponding to $\xi\in\g^*$, cf. Section~\ref{sect:prelim}. A down-to-earth description is that $\pi_t(\xi)(x_1,x_2)=\{x_1,x_2\}_{t}(\xi)$. Set $\rk\pi_t=\max_{\xi\in\g^*}\rk\pi_t(\xi)$. If $t\ne 0, \infty$, then $\g_{(t)}\simeq \g$ and hence $\ind\g_{(t)}=\ind\g=\rk\g$. For each Lie algebra $\g_{(t)}$, there is the related singular set $\g^*_{(t),\sf sing}=\g^*\setminus \g^*_{(t),\sf reg}$\,, cf.~Eq.~\eqref{eq:regul-set}.
Then, clearly,
\[ \g^*_{(t),\sf sing}=\{\xi\in\g^* \mid \rk \pi_t(\xi)< \rk \pi_t\} , \]
which is the union of the symplectic $\g_{(t)}$-leaves in $\g^*$ of non-maximal dimension. For aesthetic reasons, we write $\g^*_{\infty,\sf sing}$ instead of $\g^*_{(\infty),\sf sing}$.

Let $\cz_t$ denote the centre of the Poisson algebra $(\gS(\g), \{\,\,,\,\}_{t})$. Formally, $\cz_t=\gS(\g_{(t)})^{\g_{(t)}}$. Then $\cz_1=\gS(\g)^{\g}$. For $\xi\in\g^*$, let $\textsl{d}_\xi F\in\g$ denote the differential of $F\in\gS(\g)$ at $\xi$. It is a standard fact that, for any $H\in\gS(\g)^\g$, $\textsl{d}_\xi H$ belongs to $\z(\g^\xi)$, where $\z(\g^\xi)$ is the centre of $\g^\xi$. \\ \indent Let $\{H_1,\dots,H_l\}$ be a set of homogeneous algebraically independent generators of $\gS(\g)^\g$. By the {\it Kostant regularity criterion\/} for $\g$, the differentials $\textsl{d}_\xi H_1,\dots,\textsl{d}_\xi H_l$ are linearly independent if and only if $\xi\in\g^*_{\sf reg}$, see~\cite[Theorem~9]{ko63}. Therefore
\beq \label{eq:ko-re-cr} \text{ $\langle \textsl{d}_\xi H_j \mid 1\le j\le l \rangle_{\bbk}=\g^\xi$ \ if and only if \ $\xi\in\g^*_{\sf reg}$.} \eeq
(Recall that $\g^\xi=\z(\g^\xi)$ if and only if $\xi\in\g^*_{\sf reg}$~\cite[Theorem\,3.3]{p03}.) For $\xi\in\g^*$, set $\textsl{d}_\xi \cz_t=\left<\textsl{d}_\xi F\mid F\in\cz_t\right>_{\bbk}$. Then $\textsl{d}_\xi \cz_t \subset \ker \pi_t(\xi)$ for each $t$. The regularity criterion obviously holds for any $t\ne 0,\infty$. That is,
\beq \label{span-dif} \text{for }\ t\ne0,\infty, \ \text{ one has }\ \xi\in\g^*_{(t),\sf reg} \ \Leftrightarrow \ \textsl{d}_\xi \cz_t =\ker \pi_t(\xi) \Leftrightarrow \ \dim \ker \pi_t(\xi)=\rk\g . \eeq

{\bf Remark.} The same property holds for $t=0$ in some particular cases considered in~\cite[Sections\,4 \& 5]{contr}, which also occur below, for instance, if $(\h,\rr)$ is either $(\be,\ut_-)$ (see Section~\ref{sect:b-n}) or $(\be,\g_0)$ (see Section~\ref{sect:g0-b}).

\subsection{The non-degenerate case}
\label{subs:2Sph}
Let us say that a $2$-splitting is {\it non-degenerate} if $\ind\g_{(0)}=\ind\g_{(\infty)}=\rk\g$ and thereby $\BP_{\sf reg}=\BP$. This is equivalent to the condition that both subalgebras $\h$ and $\rr$ are spherical, see Theorem~\ref{thm:c=0} and Remark~\ref{Dima-T}.

Clearly, $\{\cz_t,\cz_{t'}\}_t=0=\{\cz_t,\cz_{t'}\}_{t'}$ for all $t,t'\in\BP$. If $t\ne t'$, then each bracket $ \{\,\,,\,\}_s$ is a linear combination of $ \{\,\,,\,\}_{t}$ and $ \{\,\,,\,\}_{t'}$. Hence $\{\cz_t,\cz_{t'}\}_s=0$ for all $s\in\BP$. By continuity, this ensures that $\ker\pi_{t'}(\xi) = \lim_{t\to t'} \textsl{d}_{\xi} \cz_t$ for each $\xi\in\mathfrak g^*_{(t'),\sf reg}$, cf.~\cite[Appendix]{oy}. Using this, one shows that the centres $\cz_t$ ($t\in \BP$) generate a {\sf PC}-subalgebra of $\gS(\g)$ with respect to any bracket $\{\,\,,\,\}_t$, $t\in\BP$. Write $\gZ_{\langle\h,\rr\rangle}:=\mathsf{alg}\langle\cz_t\rangle_{t\in\BP}$ for this subalgebra. For each $\xi\in\mathfrak g^*$, the space $\textsl{d}_\xi\gZ_{\langle\h,\rr\rangle}$ is the linear span of the subspaces $\textsl{d}_\xi \cz_t$ with $t\in\BP$. In~\cite{bols}, Bolsinov outlined a method for estimating the dimension of such subspaces. A rigorous presentation, which is used in the following proof, is contained in the appendices of~\cite{mrl,oy}.
\begin{thm} \label{thm:dim-Z} Given a non-degenerate $2$-splitting $\g=\h\oplus\rr$, \begin{itemize} \item[\sf (1)] there is a dense open subset $\Omega\subset\g^*$ such that $\dim\ker \pi_t(\xi)=\rk\g$ for all $\xi\in\Omega$ and $t\in\BP$; \item[\sf (2)] \ for all $\xi\in\Omega$, one has $\dim \textsl{d}_\xi\gZ_{\langle\h,\rr\rangle}=\bb(\g)$ and hence $\trdeg \gZ_{\langle\h,\rr\rangle}=\bb(\g)$. \end{itemize} \end{thm} \begin{proof} {\sf (1)} Write \[ \xi=\xi_\h+\xi_\rr\in \h^*\oplus\rr^*=\g^*. \] The presence of the invertible map $\vp_t$ implies that $\xi\in\g^*_{(t),\sf sing}$ if and only if \ $\xi_\h+t^{-1}\xi_\rr\in\g^*_{\sf sing}$. Therefore, \beq \label{eq-cdt} \bigcup_{t\ne 0,\infty} \g^*_{(t),\sf sing} = \{\xi_\h+t \xi_\rr \mid \xi_\h+\xi_\rr\in\g^*_{\sf sing}, \, t\ne 0,\infty\} . \eeq Since $\codim \g^*_{(t),\sf sing}=3$ for each $t\in\bbk^\times$, the closure $Y:=\ov{\bigcup_{t\ne 0,\infty} \g^*_{(t),\sf sing}}$ is of codimension at least $2$ in $\g^*$. Then we have $\dim\ker \pi_t(\xi)=\rk\g$ for all $t\in \BP$ and all $\xi$ in the dense open subset \\ \centerline{ $\Omega=\g^*\setminus (Y\cup \g^*_{(0),\sf sing}\cup \g^*_{\infty,\sf sing})$. } \\[.5ex] {\sf (2)} \ By definition, $\textsl{d}_\xi\gZ_{\langle\h,\rr\rangle} =\sum_{t\in\BP}\textsl{d}_\xi\cz_t \subset \sum_{t\in\BP} \ker \pi_t(\xi)$. Then~\eqref{span-dif} and the hypothesis on $\xi$ imply that $\textsl{d}_\xi\gZ_{\langle\h,\rr\rangle}\supset \sum_{t\ne 0,\infty} \ker \pi_t(\xi)$. Here we have a $2$-dimensional vector space of skew-symmetric bilinear forms $a{\cdot}\pi_t(\xi)$ on $\g\simeq T^*_\xi \g^*$, where $a\in\bbk$, $t\in \BP$. Moreover, $\rk \pi_t(\xi)=\dim\g-\rk\g$ for each $t$. By~\cite[Appendix]{mrl}, $\sum_{t\ne 0,\infty} \ker \pi_t(\xi)=\sum_{t\in\BP} \ker \pi_t(\xi)$ and $\dim \sum_{t\in\BP} \ker \pi_t(\xi) = \rk\g + \frac{1}{2}(\dim\g- \rk\g)=\bb(\g)$. \end{proof} Thus, any non-degenerate $2$-splitting $\g=\h\oplus\rr$ provides a Poisson-commutative subalgebra $\gZ_{\langle\h,\rr\rangle}\subset \gS(\g)$ of maximal transcendence degree. Let $\{H_1,\dots,H_l\}$, $l=\rk\g$, be a set of homogeneous algebraically independent generators of $\gS(\g)^\g$ and $d_j=\deg H_j$. Recall that for any $H_j$, one has the bi-homogeneous decomposition: \[ H_j=\sum_{i=0}^{d_j} (H_j)_{i,d_j-i} , \] and $H_j^\bullet$ is the nonzero bi-homogeneous component of $H_j$ with maximal $\rr$-degree. Then $\deg_{\rr}\! H_j=\deg_{\rr} H_j^\bullet$. Similarly, $H_{j,\bullet}$ stands for the nonzero bi-homogeneous component of $H_j$ with maximal $\h$-degree, i.e., minimal $\rr$-degree. {\bf Convention~2.} We tacitly assume that the order of summands in the sum $\g=\h\oplus\rr$ is fixed. This means that, for a homogeneous $H\in\gS(\g)$, we write $H^\bullet$ (resp. $H_\bullet$) for the bi-homogeneous component of maximal degree w.r.t. the second (resp. first) summand. It is known that $H_j^\bullet\in \cz\gS(\h\ltimes\rr^{\rm ab})$ and $H_{j,\bullet}\in \cz\gS(\rr\ltimes\h^{\rm ab})$~\cite[Prop.\,3.1]{coadj}. \begin{thm} \label{thm:main3-1} The algebra $\gZ_{\langle\h,\rr\rangle}$ is generated by $\cz_0$, $\cz_\infty$, and the set of all bi-homogeneous components of $H_1,\dots,H_l$, i.e., \beq \label{eq:bihom} \{(H_j)_{i,d_j-i} \mid j=1,\dots,l \ \& \ i=0,1,\dots,d_j\}. \eeq \end{thm} \begin{proof} Recall that $\cz(\{\,\,,\,\}_1)=\cz\gS(\g)=\bbk[H_1,\dots,H_l]$.
By the definition of $\{\,\,,\,\}_t$, we have $\cz(\{\,\,,\,\}_t)=\vp_{t}^{-1} (\cz(\gS(\g)))$ for $t\ne 0,\infty$ and \[ \vp_t(H_j)=(H_j)_{d_j,0}+t (H_j)_{d_j-1,1}+ t^2 (H_j)_{d_j-2,2}+\dots \] Using the Vandermonde determinant, we deduce from this that all $(H_j)_{i,d_j-i}$ belong to $\gZ_{\langle\h,\rr\rangle}$ and the algebra generated by them contains $\cz_t$ with $t\in \bbk\setminus\{0\}$. \end{proof} The main difficulty in applying this theorem is that one has to know the generators of the centres $\cz_0$ and $\cz_\infty$. The problem is that these centres are not always generated by certain bi-homogeneous components of $H_1,\dots,H_l$. In the subsequent sections, we consider several nice examples of non-degenerate $2$-splittings of $\g$, describe the corresponding Poisson-commutative subalgebras of $\gS(\g)$ and point out some applications to integrable systems. \section{The Poisson-commutative subalgebra $\gZ_{\langle\be,\ut_-\rangle}$} \label{sect:b-n} Let $\g=\ut\oplus\te\oplus\ut_-$ be a fixed triangular decomposition and $\be=\ut\oplus\te$. The corresponding subgroups of $G$ are $U,T,U_-$, and $B$. In this section, we take $(\h,\rr)=(\be,\ut_-)$. Then $\g_{(0)}=\be\ltimes \ut^{\sf ab}_-$ and $\g_{(\infty)}=\ut_-\ltimes \be^{\sf ab}$. Since $G/U_-$ is quasi-affine, $\ind\g_{(\infty)}=\ind\g$, see Theorem~\ref{thm:c=0}. By a direct computation, one also obtains $\ind\g_{(0)}=\ind\g$, cf. Remark~\ref{rem:parab-contr}. Hence $\g=\be\oplus\ut_-$ is a non-degenerate $2$-splitting. In order to get explicit generators of the algebra $\gZ_{\langle\be,\ut_-\rangle}$, we first have to describe the algebras $\cz_0$ and $\cz_\infty$. Recall that $\gS(\g)^\g=\bbk[H_1,\dots,H_l]$ and $H_i^\bullet$ is the bi-homogeneous component of $H_i$ of highest degree w.r.t. $\ut_-$. The following is Theorem~3.3 in~\cite{alafe}. \begin{prop} \label{prop:gen-Z0} For $\g_{(0)}=\be\ltimes\ut^{\sf ab}_-$, the Poisson centre $\cz_0=\cz\gS(\g_{(0)})$ is freely generated by $H_1^\bullet,\dots, H_l^\bullet$. The bi-degree of $H_j^\bullet$ is $(1,d_j-1)$. \end{prop} In our present terminology, one can say that {\bf any} homogeneous generating system $H_1,\dots,H_l\in\gS(\g)^\g$ is a $\be$-{\sf g.g.s.} \begin{prop} \label{prop:gen-Zinf} For $\g_{(\infty)}=\ut_-\ltimes\be^{\sf ab}$, one has $\cz_\infty=\gS(\te)$, where $\te\subset\be=\be^{\sf ab}\subset \g_{(\infty)}$. \end{prop} \begin{proof} Since $\be$ is abelian in $\g_{(\infty)}$ and $\be\simeq \g/\ut_-$ as a $\ut_-$-module, we have $\te\subset \cz_\infty$. Since $\ind \g_{(\infty)}=l=\dim\te$, this means that $\gS(\te)\subset\cz_\infty$ is an algebraic extension. Because $\gS(\te)$ is algebraically closed in $\gS(\g_{(\infty)})$, we conclude that $\gS(\te)=\cz_\infty$. \end{proof} \begin{thm} \label{thm:b-n_polynomial} The algebra $\gZ_{\langle\be,\ut_-\rangle}$ is polynomial. It is freely generated by the bi-homogeneous components $\{(H_j)_{i,d_j-i} \mid 1\le j\le l,\ 1\le i \le d_j-1\}$ and a basis for $\te$. \end{thm} \begin{proof} By Proposition~\ref{prop:gen-Z0}, the generators of $\cz_0$ are certain bi-homogeneous components of $H_1,\dots,H_l$. Therefore, combining Theorem~\ref{thm:main3-1}, Proposition~\ref{prop:gen-Z0}, and Proposition~\ref{prop:gen-Zinf}, we obtain that $\gZ_{\langle\be,\ut_-\rangle}$ is generated by the bi-homogeneous components of all $H_j$'s and $\gS(\te)$. The bi-homogeneous component $(H_j)_{d_j,0}$ is the restriction of $H_j$ to $(\ut_-)^\perp=\be^*\subset\g^*$.
(Upon the identification of $\g$ and $\g^*$, this becomes the restriction to $\be_-=\te\oplus\ut_-$.) As $H_j$ is $G$-invariant, such a restriction depends only on $\te^*\subset \be^*$; i.e., it is a $W$-invariant element of $\gS(\te)$. Since we already have the whole of $\gS(\te)$, the functions $\{(H_j)_{d_j,0} \mid 1\le j\le l\}$ are not needed for a minimal generating system. On the other hand, $(H_j)_{0,d_j}$ is the restriction of $H_j$ to $\be^\perp=\ut^*_-\simeq \ut$. Therefore, $(H_j)_{0,d_j}= 0$ for all $j$. Thus, $\gZ_{\langle\be,\ut_-\rangle}$ is generated by the functions pointed out in the statement. The total number of these generators is $l+\sum_{j=1}^l(d_j-1)=\bb(\g)$. Because $\trdeg \gZ_{\langle\be,\ut_-\rangle}=\bb(\g)$ (Theorem~\ref{thm:dim-Z}), all these generators are nonzero and algebraically independent. \end{proof} \noindent Thus, we have constructed a polynomial Poisson-commutative subalgebra $\gZ_{\langle\be,\ut_-\rangle}\subset \gS(\g)$ of maximal transcendence degree. \begin{thm} \label{thm:b-n_complete} The Poisson-commutative algebra $\gZ_{\langle\be,\ut_-\rangle}$ is complete on every \emph{regular} coadjoint orbit of $G$. \end{thm} \begin{proof} Given an orbit $G{\cdot}x\subset \g^*_{\sf reg}$, it suffices to find $y\in G{\cdot}x$ such that $\dim\textsl{d}_y \gZ_{\langle\be,\ut_-\rangle}= \bb(\g)$. Consider first the regular nilpotent orbit $Ge'$. Let $\{e,h,f\}$ be a principal $\tri$-triple in $\g$ such that $e\in\ut $, $h\in\te$, $f\in\ut _-$. Then $y:=e+h-f\in G{\cdot}e'$. Here $e\in\mathfrak u_-^*$ and $(h-f)\in\mathfrak b^*$. We claim that $y\in\g_{(t), {\sf reg}}^*$ for every $t\in\BP$. Indeed, if $t\ne 0,\infty$, then $te+(h-f)\in\g^*_{\sf reg}$, cf. \eqref{eq-cdt}. Further, $\g_{(0)}^e=\mathfrak b^e=\g^e$ is commutative and $\dim\g^e=l$. Therefore also $\g_{(0)}^y=\g^e$ and $y\in\g^*_{(0),{\sf reg}}$. Finally, $\ad\!^*(\ut _-)(h-f)=\Ann\!(\te\oplus\ut _-)$. Hence $\dim\g_{(\infty)}^y=\dim\g_{(\infty)}^{h-f}=l$ and $y\in\g^*_{\infty,\sf reg}$. The claim is settled. Now we know that $y\in \Omega$, where $\Omega$ is the subset of Theorem~\ref{thm:dim-Z}(1). By Theorem~\ref{thm:dim-Z}(2), $\dim\textsl{d}_y \gZ_{\langle\be,\ut_-\rangle}= \bb(\g)$. This means that $\gZ_{\langle\be,\ut_-\rangle}$ is complete on the regular nilpotent orbit $G{\cdot}e=G{\cdot}e'$, see Lemma~\ref{obvious}. In general, using the theory of associated cones of Borho and Kraft~\cite{bokr}, one sees that $Ge\subset \overline{\bbk^\times {\cdot}Gx}$ for any $x\in\g^*_{\sf reg}$. Since the subalgebra $\gZ_{\langle\be,\ut_-\rangle}$ is homogeneous, we have \[ \bb(\g)\ge \max_{x'\in Gx}\dim \textsl{d}_{x'} \gZ_{\langle\be,\ut_-\rangle} \ge \max_{e'\in Ge}\dim\textsl{d}_{e'} \gZ_{\langle\be,\ut_-\rangle}=\bb(\g). \] The result follows in view of Lemma~\ref{obvious}. \end{proof} \begin{rmk} \label{rem:setting-p} Our $(\be,\ut_-)$-results can be put in a more general setting in the following way. Let $\p\supset \be$ be a standard parabolic subalgebra with Levi decomposition $\p=\el\oplus\p^{\sf nil}$. This yields the decomposition $\g=\p^{\sf nil}\oplus \el\oplus\p^{\sf nil}_-$, where $\p_-=\el\oplus\p^{\sf nil}_-$ is the opposite parabolic. Consider the $2$-splitting $\g=\p\oplus\p^{\sf nil}_-$. Here $\p$ is a spherical subalgebra, while $\p^{\sf nil}_-$ is spherical if and only if $\p=\be$. Actually, $c_G(G/P_-^{\sf nil})=\dim \ut(\el)$, where $\ut(\el)=\ut\cap\el$. Then \[ \ind\g_{(\infty)}=\ind (\p^{\sf nil}_-\ltimes\p^{\sf ab})=\dim\el , \] cf. Theorem~\ref{thm:c=0}. 
Moreover, one proves here that $\cz_\infty= \gS(\el)$, cf. Proposition~\ref{prop:gen-Zinf}. Therefore, if $\p\ne \be$, then $\BP_{\sf sing}=\{\infty\}$ and the {\sf PC}-subalgebra $\gZ_{\langle\p,\p_-^{\sf nil}\rangle}$ is generated by all $\cz_t$ with $t\ne\infty$. In this case, $\gZ_{\langle\p,\p_-^{\sf nil}\rangle}\subset \gS(\g)^\el$ and one can prove that $\trdeg \gZ_{\langle\p,\p_-^{\sf nil}\rangle}=\bb(\g)-\dim \ut(\el)$. To describe explicitly $\gZ_{\langle\p,\p_-^{\sf nil}\rangle}$, one has to know the structure and generators of $\cz_0=\cz\gS(\p\ltimes (\p_-^{\sf nil})^{\sf ab})$. However, it is not known whether $\cz_0$ is always polynomial, and generators are only known in some special cases. For instance, this is so if $\p$ is a minimal parabolic, i.e., $[\el,\el]\simeq\tri$ (see~\cite[Section\,6]{alafe2}). We hope to consider this case in detail in a forthcoming publication. \end{rmk} \section{The maximality of $\gZ_{\langle\be,\ut_-\rangle}$} \label{sect:5} \noindent Here we prove that $\gZ_{\langle\be,\ut_-\rangle}$ is a {\bf maximal} Poisson-commutative subalgebra of $\gS(\g)$. Let $\Delta$ be the set of roots of $(\g,\te)$. Then $\g_\mu$ is the root space for $\mu\in\Delta$. Let $\Delta^+$ be the set of positive roots corresponding to $\ut$. Choose nonzero vectors $e_\mu\in\g_\mu$ and $f_\mu\in\g_{-\mu}$ for any $\mu\in\Delta^+$. Let $\ap_1,\ldots,\ap_l$ be the simple roots and $\delta$ the highest root in $\Delta^+$. Write $\delta=\sum_{i=1}^l a_i \ap_i$ and set $f_i=f_{\ap_i}$. Assuming that $\deg H_j\le \deg H_i$ if $j<i$ for the basic invariants in $\gS(\g)^\g$, we have $H_l^\bullet = e_\delta \prod_{i=1}^l f_i^{a_i}$, see~\cite[Lemma~4.1]{alafe}. Recall that we have two contractions $\g_{(0)}=\be\ltimes\ut^{\sf ab}_-$ and $\g_{(\infty)}=\ut_-\ltimes\be^{\sf ab}$. As the first step towards proving the maximality of $\gZ_{\langle\be,\ut_-\rangle}$, we study the subsets $\g^*_{\infty,\sf sing}$ and $\g^*_{(0),\sf sing}$. \begin{lm} \label{lm-sing-inf} {\sf (i)} $\g^*_{\infty,\sf sing}=\bigcup_{\ap\in\Delta^+}D(\alpha)$, where $D(\alpha)=\{\xi\in\g^* \mid (\xi,\alpha)=0\}$ and $(\,\,,\,)$ is the Killing form on $\g^*\simeq \g $. \\ \indent {\sf (ii)} For any $\alpha\in\Delta^+$ and a generic $\xi\in D(\alpha)$, we have $\dim\g _{(\infty)}^\xi=l+2$. \end{lm} \begin{proof} For $\xi\in\g^*$, let $C=C_\infty(\xi)$ be the matrix of $\pi_{\infty}(\xi)|_{\ut _-\times\ut ^{\sf ab}}$. Since $[\mathfrak b,\mathfrak b]_{(\infty)}=0$, we have $\rk\pi_\infty(\xi)\ge 2\rk C$. Note that if $[e_\ap,f_\beta]_{(\infty)}\ne 0$, then either $\ap=\beta$ or $\ap-\beta\in\Delta^+$ and therefore $\ap\succcurlyeq\beta$ in the usual root order ``$\succcurlyeq$" on $\Delta^+$. Refining this partial order to a total order on $\Delta^+$ and choosing bases in $\ut $ and $\ut _-$ accordingly, one can bring $C$ into an upper triangular form with the entries $\xi([f_\ap,e_\ap])$ on the diagonal. Now it is clear that $\g^*_{\infty,\sf sing}\subset\bigcup_{\ap\in\Delta^+} D(\ap)$. Let $\xi\in D(\ap)$ be a generic point. Then $\rk C=\dim\ut -1$ and there is a nonzero $e\in\ut $ such that $\pi_\infty(\xi)(\ut _-,e)=0$. Hence $e\in\g ^\xi_{(\infty)}$. Because $\te$ is the centre of $\g _{(\infty)}$, we have $\te\subset \g ^\xi_{(\infty)}$ and $\bb(\g)-1 \ge \rk\pi_{\infty}(\xi)\ge \bb(\g)-2$. Since $\rk\pi_{\infty}(\xi)$ is an even number, it is equal to $\bb(\g)-2$ and therefore $\dim\g ^\xi_{(\infty)}=l+2$. This settles both claims.
\end{proof} \begin{lm} \label{lm-sing-0} {\sf (i)} Set $D_i=\{\xi\in\g^* \mid \xi(f_i)=0\}$ for $1\le i\le l$. Then the union of all divisors in $\g^*_{(0),\sf sing}$ is equal to\/ $\bigcup_{i:\, a_i>1} D_i$. {\sf (ii)} For any $D_i\subset \g^*_{(0),\sf sing}$\/ and generic $\xi\in D_i$, we have $\dim\g _{(0)}^\xi=l+2$. \end{lm} \begin{proof} {\sf (i)} \ By~\cite[Theorem\,5.5]{contr}, a fundamental semi-invariant of $\g_{(0)}$ is $p=\prod_{i=1}^l f_i^{a_i-1}$. The main property of $p$ is that the union of all divisors in $\g^*_{(0),\sf sing}$ is $\{\xi\in\g^*\mid p(\xi)=0\}$, see~\cite[Def.\,5.4]{contr}. Hence the assertion. {\sf (ii)} \ Take a generic $\xi\in D_i\subset \g^*_{(0),\sf sing}$. Then $\xi = y + e$, where $y\in\mathfrak b^*$ and $e\in\ut $ is a subregular nilpotent element of $\g $, cf.~\cite[Sect.\,5.2]{contr}. According to \cite[Eq.\,(5.1)]{contr}, \[ \dim\g ^\xi_{(0)}=\dim\mathfrak b^e+\ind\mathfrak b^e-l. \] On the one side, $\dim(B{\cdot}e)\le \dim\ut -1$, on the other, $\mathfrak b^e\subset\g ^e$ and $\dim\mathfrak b^e\le l+2$. If $\mathfrak b^e=\g ^e$, then $\ind\mathfrak b^e=l$ \cite[Cor.\,3.4]{p03}, if $\dim\mathfrak b^e=l+1$, then $\ind\mathfrak b^e\le l+1$. In any case $\dim\mathfrak b^e+\ind\mathfrak b^e \le 2l+2$ and hence $\dim\g ^\xi_{(0)}=l+2$. \end{proof} {\bf Remark.} Note that all $a_i=1$ if $\g$ is of type {\sf A}. That is, in that case $\codim \g^*_{(0),\sf sing}\ge 2$. We will need another technical tool, the pencil of skew-symmetric forms on $\g$ related to the family $\{\pi_t(\xi)\}_{t\in \bbk\cup \infty}$ for a given $\xi\in\g^*$. To this end, we recall some general theory presented in the Appendix to~\cite{oy}. Let $\eus P$ be a two-dimensional vector space of (possibly degenerate) skew-symmetric bilinear forms on a finite-dimensional vector space $\mathfrak v$. Set $m=\max_{A\in \eus P }\rk A$, and let $\eus P_{\sf reg}\subset \eus P$ be the set of all forms of rank $m$. Then $\eus P_{\sf reg}$ is an open subset of $\eus P$ and $\eus P_{\sf sing}:=\eus P\setminus \eus P_{\sf reg}$ is either $\{0\}$ or a finite union of lines. For each $A\in \eus P$, let $\ker A\subset \mathfrak v$ be the kernel of $A$. Our object of interest is the subspace $L:=\sum_{A\in \eus P_{\sf reg}} \ker A$. \begin{prop}[{cf. \cite[Theorem\,A.4]{oy}}] \label{prop-JK} Suppose that $\eus P_{\sf sing}= \bbk C$ with $C\ne 0$ and $\rk C=m-2$. Suppose also that $\rk(A|_{\ker C})=2$ for some $A\in\eus P$. Then {\sf (1)} \ $\dim (L\cap \ker C)=\dim\mathfrak v-m$, \ {\sf (2)} \ $\dim L = \dim \mathfrak v-\frac{m}{2} - 1$, and {\sf (3)} \ $A(\ker C, L\cap \ker C)=0$. \end{prop} \begin{proof} The first two assertions are proved in~\cite[Theorem\,A.4]{oy}. We briefly recall the relevant setup. Take non-proportional $A,B\in \eus P_{\sf reg}$. By~\cite[Theorem 1(d)]{JK}, there is the {\it Jordan--Kronecker canonical form\/} for $A$ and $B$. This means that there is a decomposition $\mathfrak v=\mathfrak v_1\oplus\ldots\oplus \mathfrak v_d$ such that $A(\mathfrak v_i,\mathfrak v_j)=0=B(\mathfrak v_i,\mathfrak v_j)$ for $i\ne j$, and the pairs $A_i=A\vert_{\mathfrak v_i}, B_i=B\vert_{\mathfrak v_i}$ have a rather special form. Namely, each pair $(A_i,B_i)$ forms either a {\it Kronecker} or a {\it Jordan block\/} (see \cite[Appendix]{oy} for more details). Assume that $\dim\mathfrak v_i>0$ for each $i$. \textbullet\quad For a Kronecker block, $\dim \mathfrak v_i=2k_i+1$, $\rk A_i=2k_i=\rk B_i$ and the same holds for every nonzero linear combination of $A_i$ and $B_i$.
\\ \indent \textbullet\quad For a Jordan block, $\dim \mathfrak v_i$ is even and both $A_i$ and $B_i$ are non-degenerate on $\mathfrak v_i$. Moreover, there is a unique $\lambda_i\in\bbk$ such that $\det (A_i+\lambda_i B_i)=0$ and hence $\rk(A_i+\lambda_i B_i)\le \dim\mathfrak v_i-2$. In particular, any Jordan block gives rise to a line $\bbk(A+\lb_i B)\subset \eus P_{\sf sing}$. Since $\eus P_{\sf sing}$ is a single line, the critical values $\lb_i$ for all Jordan blocks must be equal. Furthermore, since $\rk C=m-2$, there must be only one Jordan block, and we may safely assume that this block corresponds to $\mathfrak v_d$. Now, we are ready to prove assertion (3). It is clear that $L\subset \bigoplus_{i<d} \mathfrak v_i$ and \[ (L \cap \ker C) \subset \textstyle \bigoplus_{i<d} \ker C_i, \] where $\dim\ker C_i=1$ for each $i<d$. Since $A(\mathfrak v_i,\mathfrak v_j)=0$ for $i\ne j$, we obtain $A(\ker C,L\cap \ker C)=0$. \end{proof} Let $\gC\subset\gS(\g)$ be the subalgebra generated by $\gZ_{\langle\be,\ut_-\rangle}$, $e_\delta$, and $f_i$ with $1\le i\le l$. Recall that $H_l^\bullet\in \gZ_{\langle\be,\ut_-\rangle}$ and that $H_l^\bullet = e_\delta \prod_{i=1}^l f_i^{a_i}$ by~\cite[Lemma~4.1]{alafe}. In view of this and Theorem~\ref{thm:b-n_polynomial}, $\gC$ has a set $\{F_k\mid 1\le k\le \bb(\mathfrak g)+l\}$ of homogeneous generators such that $\{F_k\mid 1\le k\le l\}$ is a basis of $\mathfrak t$, $F_k$ is of the form $(H_j)_{i,d_j-i}$ if $l<k<\bb(\mathfrak g)$, and the last $l+1$ elements $F_k$ are root vectors. By the very construction, we have \beq \label{eq-t} \gZ_{\langle\be,\ut_-\rangle}\subset\gS(\g)^{\te}. \eeq \begin{prop} \label{prop-C} The subalgebra $\gC$ is algebraically closed in $\gS(\g)$. \end{prop} \begin{proof} For $\gamma\in\g^*$, we set $L(\gamma)= \sum_{t\ne 0,\infty} \ker \pi_t(\gamma)$ and $V(\gamma)=\textsl{d}_\gamma \gZ_{\langle\be,\ut_-\rangle}$. If $\gamma\in\g^*_{(t),\sf reg}$ for all $t\ne 0,\infty$, then $L(\gamma)\subset V(\gamma)$ in view of \eqref{span-dif}. It follows from \eqref{eq-t} that $\gamma([V(\gamma),\te])=0$. Consider the following condition on $\gamma$: \begin{itemize} \item[($\diamond$)] \qquad $\gamma$ is nonzero on at least $l$ elements among $e_\delta,f_1,\ldots,f_l$. \end{itemize} Note that condition~($\diamond$) holds on a big open subset and that the $\te$-weights of the $l$ elements involved, say $x_1,\ldots,x_l$, are linearly independent. The linear independence of the selected $l$-tuple of $\te$-weights implies that if $\gamma$ satisfies ($\diamond$), $\gamma([\te, x])=0$, and $x\in\left<x_i \mid 1\le i \le l\right>_{\bbk}$, then $x=0$. Hence, for such $\gamma$, $\dim\textsl{d}_\gamma \gC \ge \dim V(\gamma)+l$. In the proof, we compute $\dim\textsl{d}_\gamma \gC$ only at points $\gamma$ satisfying ($\diamond$). We readily obtain that $\trdeg\gC=\bb(\g)+l$ and hence the homogeneous generators $F_k$ with $1\le k\le \bb(\mathfrak g)+l$ are algebraically independent. The goal is to show that the differentials of the polynomials $F_k$ are linearly independent on a {\bf big} open subset. Note that the assertion is obvious for $\g =\tri$, because here $\gC=\gS(\g)$. Let $\Omega\subset \g^*$ be the dense open subset defined in Theorem~\ref{thm:dim-Z}. Then $\dim V(\gamma)=\bb(\g)$ for any $\gamma\in \Omega$. However, the complement of $\Omega$ may contain divisors; i.e., the divisors lying in $\g^*_{(0),\sf sing}$ or in $\g^*_{\infty,\sf sing}$, see~\eqref{eq-cdt}.
\\ \indent \textbullet\quad Concentrate first on the irreducible divisors in $\g^*_{\infty,\sf sing}$. Such a divisor $D(\ap)$ is the hyperplane defined by $\ap\in \Delta^+$, see Lemma~\ref{lm-sing-inf}{\sf (i)}. There is a non-empty open subset ${\mathcal U}\subset D(\ap)$ such that any $\tilde\gamma\in {\mathcal U}$ is regular for all $t\ne \infty$ and satisfies $\dim\g ^{\tilde\gamma}_{(\infty)}=l+2$, see~\eqref{eq-cdt} and Lemmas~\ref{lm-sing-0},~\ref{lm-sing-inf}. We have $\te\subsetneq \g ^{\tilde\gamma}_{(\infty)}$. Recall from the proof of Lemma~\ref{lm-sing-inf} that there is a nonzero $e\in \mathfrak u\cap \mathfrak g^{\tilde\gamma}_{(\infty)}$. Let $\mu$ be a maximal element in the subset $\{\beta\in\Delta^+\mid (e,f_\beta)\ne 0\}$. Then $([\mathfrak u_-,e],f_\mu)=0$. Hence $e\in\mathfrak g^{\gamma}_{(\infty)}$ for any $\gamma=\tilde\gamma+c f_{\mu}$, where $c\in\bbk$ and $f_\mu$ is regarded as a linear function on $\mathfrak g$. For $h\in\mathfrak t$ such that $[h,e_{\mu}]=e_{\mu}$, we have $\gamma([h,e])=\tilde\gamma([h,e])+c(f_\mu,e)$ and here $(f_\mu,e)\ne 0$. For a generic $c\in\bbk$, one obtains $\gamma([\mathfrak t,\mathfrak g^{\gamma}_{(\infty)}])\ne 0$ and $\gamma\in{\mathcal U}$. On the one hand, $\rk\pi(\gamma)|_{\g ^{\gamma}_{(\infty)}}\ge 2$, on the other hand, $\rk\pi(\gamma)|_{\g^{\gamma}_{(\infty)}}\le 2$ by \cite[Lemma\,A.3]{oy}. According to \cite[Lemma\,A.1]{mrl}, $L(\gamma)=\sum_{t\ne\infty} \ker \pi_t(\gamma)$. Now Proposition~\ref{prop-JK} implies that $\dim L(\gamma)= \bb(\g)-1$ and \[ \pi(\gamma)(L(\gamma)\cap \g^{\gamma}_{(\infty)}, \g^{\gamma}_{(\infty)})=0. \] By the construction $\pi(\gamma)(\te,\g^{\gamma}_{(\infty)})\ne 0$. Hence $\te\not\subset L(\gamma)$ and $\dim (L(\gamma)+\te)>\dim L(\gamma)$. For a generic $\gamma\in D(\ap)$, we have then $\dim V(\gamma)=\bb(\g)$ and $\dim\textsl{d}_{\gamma} \gC=\bb(\g)+l$. \\ \indent \textbullet\quad Consider a divisor $D_i\subset \g^*_{(0),\sf sing}$ that is defined by $f_i$ with $a_i>1$, see Lemma \ref{lm-sing-0}. We can safely assume here that $\g$ is not of type {\sf A}. Otherwise $\g^*_{(0),\sf sing}$ has no divisors, cf. \cite[Prop.\,4.3]{alafe}. Because $[\mathfrak b,f_i]_{(0)}\subset \bbk f_i$, we have $f_i\in\g ^{\tilde\gamma}_{(0)}$ for any $\tilde\gamma\in D_i$. Let $\gamma\in D_i$ be generic. Lemma~\ref{lm-sing-0} shows that $\rk\pi_0(\gamma)=\dim\g -l-2$ and that $\rk\pi_\infty(\gamma)=l$. By \cite[Lemma\,A.1]{mrl}, $L(\gamma)=\sum_{t\ne 0} \ker \pi_t(\gamma)$. The next task is to show that $\gamma$ is nonzero on $[f_i,\g ^\gamma_{(0)}]$. In order to do this, we employ considerations from \cite[Sect.~5.2]{contr}. Set $\mathfrak p=\mathfrak p_i=\mathfrak b\oplus\bbk f_i$. Then $\overline{(D_i\cap\mathfrak u)}=\mathfrak p^{\sf nil}$ is the nilpotent radical of $\mathfrak p$. Write $\gamma=y+e$, where $y\in\mathfrak b^*\simeq\mathfrak b_-$ and $e\in\mathfrak p^{\sf nil}$ is a subregular element of $\mathfrak g$, cf. the proof of Lemma~\ref{lm-sing-0}{\sf (ii)}. We may safely assume that $e$ is a Richardson element, i.e., $Pe\subset\mathfrak p^{\sf nil}$ is the dense orbit of the parabolic subgroup $P\subset G$ with $\Lie(P)=\mathfrak p$. There are two possibilities, either $[\mathfrak p,e]=\mathfrak p^{\sf nil}$ is equal to $[\mathfrak b,e]$ or not. Suppose that $\dim[\mathfrak b,e]<\dim\mathfrak p^{\sf nil}$, then there is a nonzero $f\in\mathfrak p^{\sf nil}_-$ such that $(f,[\mathfrak b,e])=0$. At the same time, $(f,[\mathfrak p,e])\ne 0$. Therefore $(f,[f_i,e])=([f,f_i],e)\ne 0$. 
Note that $\gamma([f,\mathfrak g]_{(0)})=(e,[f,\mathfrak b])=0$, i.e., $f\in\mathfrak g_{(0)}^\gamma$. We have also $\gamma([f,f_i])=(e,[f,f_i])\ne 0$. Thus $\gamma$ is nonzero on $[f_i,\mathfrak g^\gamma_{(0)}]$. Suppose now that $[\mathfrak b,e]=\mathfrak p^{\sf nil}$. In this case, $\dim\mathfrak b^e=l+1$ and $Be$ is dense and open in $\mathfrak p^{\sf nil}$. By \cite[Lemma~5.10]{contr}, $\mathfrak b^e$ is abelian. Set ${\mathcal U}_0=\{\gamma\in D_i \mid \dim\mathfrak g^\gamma_{(0)}=l+2\}$. Then $\gamma=e+y\in {\mathcal U}_0$ for any $y\in\mathfrak b_-$ in view of a direct calculation from~\cite[Lemma~4.8]{contr}. Furthermore, $\gamma([f_i,\mathfrak g_{(0)}^\gamma])=0$ if and only if $(f_i,[e+y,\mathfrak g^\gamma_{(0)}])=0$. As a point of an appropriate Grassmannian, the subspace $\g^\gamma_{(0)}$ depends on $\gamma\in{\mathcal U}_0$ continuously. Therefore it suffices to find just one point $\tilde\gamma\in{\mathcal U}_0$ such that $\tilde\gamma([f_i,\g ^{\tilde\gamma}_{(0)}])\ne 0$. Consider first the case, where $[f_i,e_\delta]\ne 0$. Set $\tilde\gamma=e+f_{\delta-\alpha_i}$. Then $e_\delta\in\mathfrak g^{\tilde\gamma}$. Here $[f_i,e_\delta]$ is a nonzero scalar multiple of $e_{\delta-\alpha_i}$, hence $\tilde\gamma([f_i,e_\delta])=(f_{\delta-\alpha_i},[f_i,e_\delta])\ne 0$. In the remaining cases, $(\delta,\alpha_i)=0$, $\mathfrak b^e$ is abelian, and still $a_i>1$. This is possible if and only if $\mathfrak g$ is of type ${\sf B}_l$ with $l\ge 3$ and $i\ge 3$, see \cite{g-co} and \cite[Prop.~5.13]{contr}. As a Richardson element in $\p^{\sf nil}$, we take $e=e_{\alpha_{i-1}+\alpha_i}+\sum_{j\ne i} e_j$; next $\beta=\delta-(\alpha_2{+}\alpha_3{+}\ldots{+}\alpha_i)$ and $y=f_\beta$. There is a standard choice of root vectors related to elementary skew-symmetric matrices. It leads, for example, to $e_{\alpha_{i-1}+\alpha_i}=[e_{i-1},e_i]$. After such a normalisation, $\xi:=e_{\beta+\alpha_i}-e_\beta\in\mathfrak b^e$. Furthermore, $\ad\!_{(0)}^*(\xi)f_\beta=-[e_\beta,f_\beta]$ and there is $$\eta\in\left< f_j, [f_{i-1},f_i] \mid j\ne i-1,i\right>_{\bbk}$$ such that $\xi+\eta\in\g^{\tilde\gamma}_{(0)}$ for $\tilde\gamma=e+y$. Finally $(e+y, [f_i,\xi+\eta])=(e,[f_i,\eta])+(f_\beta,[f_i,e_{\beta+\alpha_i}])$ is nonzero, because $([e,f_i],\eta)=0$ and $([f_\beta,f_i],e_{\beta+\alpha_i})\ne 0$. Now we know that $\pi(\gamma)(f_i,\g ^\gamma_{(0)})\ne 0$. By \cite[Lemma\,A.3]{oy}, $\rk(\pi(\gamma)|_{\g ^\gamma_{(0)}})\le 2$, hence the rank in question is equal to $2$. According to Proposition~\ref{prop-JK}, $\dim L(\gamma) = \bb(\g)-1$ and $f_i\not\in L(\gamma)$. Note that $\pi(\gamma)(f_i,\te)=0$. Furthermore, if $x\in\left< e_\delta, f_j \mid j\ne i \right>_{\bbk}$ and $\pi(\gamma)(\te,x)=0$, then $x=0$. Therefore $\dim\textsl{d}_\gamma \gC=\bb(\g)+l$. Since $\textsl{d}_\gamma \gC=\left<\textsl{d}_\gamma F_k \mid 1\le k\le \bb(\mathfrak g)+l\right>_{\bbk}$, the goal is achieved, the differentials $\textsl{d}F_k$ are linearly independent on a big open subset. According to \cite[Theorem \,1.1]{ppy}, the subalgebra $\gC$ is algebraically closed in $\gS(\g)$. \end{proof} \begin{thm} \label{max-u} The algebra $\gZ_{\langle\be,\ut_-\rangle}$ is a maximal Poisson-commutative subalgebra of\/ $\gS(\g)$. \end{thm} \begin{proof} Let $\gA\subset \gS(\g)$ be a Poisson-commutative subalgebra and $\gZ_{\langle\be,\ut_-\rangle}\subset \gA$. Since $\trdeg \gZ_{\langle\be,\ut_-\rangle}=\bb(\g)=\trdeg\gA$, each element $x\in\gA$ is algebraic over $\gZ_{\langle\be,\ut_-\rangle}$. 
Hence it is also algebraic over $\gC$ and by Proposition~\ref{prop-C}, we have $x\in\gC$. Since $\te\subset \gZ_{\langle\be,\ut_-\rangle}$, we have $\{\te,x\}=0$. The algebra of $\te$-invariants in $\gC$ is generated by $\gZ_{\langle\be,\ut_-\rangle}$ and the monomials $e_\delta^c f_1^{c_1}\ldots f_{l}^{c_l}$ such that $c_i=ca_i$. Each such monomial is a power of $H_l^\bullet$. Therefore $x\in \gZ_{\langle\be,\ut_-\rangle}$ and $\gZ_{\langle\be,\ut_-\rangle}=\gA$. \end{proof} \section{The Poisson-commutative subalgebra $\gZ_{\langle\be,\g_0\rangle}$} \label{sect:g0-b} If $\sigma$ is an involution of $\g$, then $\g=\g_0\oplus \g_1$, where $\g_i=\{x\in\g\mid \sigma(x)=(-1)^ix\}$. As is well-known, $\g_0$ is a spherical subalgebra of $\g$. Therefore, there is a Borel subalgebra $\be$ such that $\g_0+\be=\g$. An involution $\sigma$ is said to be of {\it maximal rank}, if $\g_1$ contains a Cartan subalgebra of $\g$. Then $\dim \g_1=\dim \be$, $\dim\g_0=\dim\ut$, and such $\sigma$ is unique up to $G$-conjugation. Therefore, in the maximal rank case, there is a Borel subalgebra $\be$ such that \beq \label{eq:direct-b-g0} \be\oplus\g_0=\g . \eeq Recall that (for $\bbk=\BC$) there is a bijection between the (conjugacy classes of complex) involutions of $\g$ and the real forms of $\g$, see e.g.~\cite[Ch.\,4,\ 1.3]{t41}. Under this bijection the involution of maximal rank corresponds to the split real form of $\g$. This bijection also allows us to associate the Satake diagram~\cite[Ch.\,4,\ 4.3]{t41} to any involution. In this section, we assume that $\sigma$ is of maximal rank and take $(\h,\rr)=(\be,\g_0)$ such that Eq.~\eqref{eq:direct-b-g0} holds. As in Section~\ref{sect:b-n}, to describe the generators of $\gZ_{\langle\be,\g_0\rangle}$, we need a set of generators for the Poisson centres $\cz_0=\cz\gS(\be\ltimes \g_0^{\sf ab})$ and $\cz_\infty=\cz\gS(\g_0\ltimes\be^{\sf ab})$. By the Independence Principle of Section~\ref{sect:2}, we have \[ \text{$\be\ltimes \g_0^{\sf ab}\simeq \be\ltimes\ut^{\sf ab}_-$ \quad and \quad $\g_0\ltimes\be^{\sf ab}\simeq \g_0\ltimes\g_1^{\sf ab}$.} \] Hence the structure of $\cz_0$ is already described in Prop.~\ref{prop:gen-Z0}, whereas the Poisson centre of $\gS(\g_0\ltimes\g_1^{\sf ab})$ is described in \cite{coadj}. Namely, $\cz\gS(\g_0\ltimes\g_1^{\sf ab})$ is freely generated by the bi-homogeneous components of $\{H_i\}$ of minimal degree w.r.t. $\g_0$, i.e., of maximal degree w.r.t. $\g_1$ (or $\be$). In particular, any generating system $H_1,\dots,H_l\in \gS(\g)^\g$ is a $\g_0$-{\sf g.g.s.} \begin{thm} \label{thm:g0-b-polynomial} The algebra $\gZ_{\langle\be,\g_0\rangle}$ is polynomial. It is freely generated by the bi-homogeneous components $\{(H_j)_{i,d_j-i} \mid 1\le j\le l,\ 1\le i \le d_j\}$. \end{thm} \begin{proof} It follows from the above discussion and Theorem~\ref{thm:main3-1} that $\gZ_{\langle\be,\g_0\rangle}$ is generated by the bi-homogeneous components of all $\{H_i\}$. The total number of all bi-homogeneous components equals $\sum_{j=1}^l(d_j+1)=\bb(\g)+l$. As in the proof of Theorem~\ref{thm:b-n_polynomial}, the component $(H_j)_{0,d_j}$ is the restriction of $H_j$ to $\be^\perp$. Under the identification of $\g$ and $\g^*$, we have $\be^\perp=\ut$. Therefore $(H_j)_{0,d_j}\equiv 0$ for all $j$. Thus, there remain at most $\bb(\g)$ nonzero bi-homogeneous components and, in view of Theorem~\ref{thm:dim-Z}, these components must be nonzero and algebraically independent. 
\end{proof} \noindent Thus, we have obtained a polynomial Poisson-commutative subalgebra $\gZ_{\langle\be,\g_0\rangle}$ of $\gS(\g)$ of maximal transcendence degree. \begin{ex} \label{ex:sl-so} {\sf (1)} \ If $\g=\sln$ and $\sigma$ is of maximal rank, then $\g_0=\son$. Here $\dim\g^*_{(0),\sf sing}\le \dim\g - 2$ by \cite[Theorem\,3.3]{coadj} and $\dim\g^*_{\infty,\sf sing}\le \dim\g -2$ by~\cite[Section\,4]{alafe}. In view of \eqref{eq-cdt}, this implies that the open subset $\Omega$ of Theorem~\ref{thm:dim-Z} is big. Thus, the differentials of the free generators of $\gZ_{\langle\be,\g_0\rangle}$ are linearly independent on the big open subset $\Omega$. By \cite[Theorem \,1.1]{ppy}, this means that $\gZ_{\langle\be,\g_0\rangle}$ is an algebraically closed subalgebra of $\gS(\g)$. Since $\trdeg \gZ_{\langle\be,\g_0\rangle}$ is the maximal possible among all {\sf PC}-subalgebras, $\gZ_{\langle\be,\g_0\rangle}$ is a {\bf maximal} {\sf PC}-subalgebra of $\gS(\sln)$. {\sf (2)} \ By~\cite[Theorem\,4.4]{alafe}, if $\g$ is simple, but $\g\ne\sln$, then $\dim\g^*_{\infty,\sf sing}= \dim\g -1$. Therefore, the above argument does not generalise. Still, this does not prevent $\gZ_{\langle\be,\g_0\rangle}$ from being a maximal {\sf PC}-subalgebra. Actually, we do not know yet whether $\gZ_{\langle\be,\g_0\rangle}$ is maximal for the other simple $\g$. \end{ex} \begin{rmk} \label{rmk:any-invol} If $\sigma$ is not of maximal rank, then $\dim\g_0>\dim\ut$ and the sum $\g_0+\be=\g$ {\bf cannot} be direct. Given $\g_0$, one can choose a generic ``opposite'' Borel subalgebra $\be$ such that $\dim(\be\cap\g_0)$ is the minimal possible and $\be\cap\g_0$ is closely related to a Borel subalgebra of a certain Levi subalgebra. Namely, there is a parabolic subalgebra $\p\supset\be$, with the standard Levi subalgebra $\el\subset\p$, such that $[\el,\el]\subset \p\cap \g_0\subset \el$ and $\be\cap\g_0$ is a Borel subalgebra of $\es:=\p\cap \g_0$~\cite[Chapters\,1,\,2]{these-p}. (The semisimple algebra $[\el,\el]$ corresponds to the subset of black nodes of the Satake diagram of $\sigma$.) Therefore, there is always a {\it solvable\/} subalgebra $\h\subset\be$ normalised by $\te$ such that $\h\oplus (\g_0\cap\be)=\be$ and hence $\h\oplus\g_0=\g$. Here $\p^{\sf nil}\subset\h\subset \p^{\sf nil}\oplus\z(\el)$, where $\z(\el)$ is the centre of $\el$. Hence {\it\bfseries any} involution $\sigma$ gives rise to a natural $2$-splitting of $\g$. But this $\h$ is not necessarily spherical. A sufficient condition for sphericity is that the Satake diagram of $\sigma$ has no black nodes. (This is equivalent to the condition that $\g_1\cap\g_{\sf reg}\ne\hbox {\Bbbfont\char'077}$.) Then $\p=\be$ and $\be\cap\g_0\subset \te$. Hence $\h\supset\ut=\be^{\sf nil}$ and thereby $\h$ is spherical. Thus any involution of $\g$ having the property that $\g_1\cap\g_{\sf reg}\ne\hbox {\Bbbfont\char'077}$ gives rise to a {\sl non-degenerate} $2$-splitting. \\ \indent \textbullet \ If $\g$ is simple, then such involutions that are not of maximal rank exist only for $\GR{A}{n}$, $\GR{D}{2n+1}$, and $\GR{E}{6}$. However, it is not yet clear how to describe explicitly the Poisson centre $\cz\gS(\h\ltimes\g_0^{\sf ab})$ if \ $\h\ne\be$. \\ \indent \textbullet \ Yet another similar possibility is the semisimple algebra $\g\oplus\g\simeq\g\times\g$, where $\g$ is simple and $\sigma$ is the permutation of summands. Here everything can be accomplished explicitly, see the following section.
\end{rmk} \section{Poisson-commutative subalgebras related to a $2$-splitting of $\g\times\g$} \label{sect:k-b} \noindent In this section, we consider in detail the good case mentioned at the end of Remark~\ref{rmk:any-invol} and its application to Lie algebras over $\BR$. Let $\tau$ be the involution of $\tilde\g:=\g\oplus\g\simeq\g\times\g$ such that $\tau(x_1,x_2)=(x_2,x_1)$. Then $\tilde\g_0=\Delta_\g\simeq\g $ is the usual diagonal in $\g\times\g$ and $\tilde\g_1$ is the antidiagonal $\Delta^{(-)}_\g=\{(x,-x)\mid x\in\g\}$. Here a generic opposite Borel subalgebra of $\tilde\g$ for $\Delta_\g$ is $\be\times\be_-$ and $\Delta_\g\cap(\be\times\be_-)=\Delta_\te$. It follows that a complementary solvable subalgebra for $\Delta_\g$ is \[ \h=\Delta^{(-)}_\te\oplus (\ut\times\ut_-) , \] where $\Delta^{(-)}_\te=\{(x,-x)\mid x\in\te\}$. This yields the $2$-splitting \beq \label{eq:2-twisted} \g\times\g=\h\oplus\Delta_\g , \eeq associated with $\tau$ in the sense of Remark~\ref{rmk:any-invol}. The next step is to prove that this $2$-splitting is non-degenerate and both related {\sf IW}-contractions of $\g\times\g$ have a polynomial ring of symmetric invariants. By the Independence Principle, the {\sf IW}-contraction $(\g\times\g)_{(\infty)}=\Delta_\g\ltimes\h^{\sf ab}$ is isomorphic to the {\it Takiff Lie algebra\/} $\g\ltimes\g^{\sf ab}$. A description of the symmetric invariants of $\g\ltimes\g^{\sf ab}$ is due to Takiff~\cite{takiff}, cf. also \cite{p05}. This implies that there is a good generating system here. More explicitly, let $\{H_{j,I}, H_{j,II}\mid 1\le j\le l\}$ be the obvious set of basic symmetric invariants of $\g\times\g$. Then $\{H_{j,I} \pm H_{j,II}\mid 1\le j\le l\}$ is a $(\Delta_\g)$-{\sf g.g.s.} Set $\BV=\Delta_\te\oplus(\ut _-\times\ut)\subset \g\times\g$. As $\BV$ is a complementary space to $\h$ in $\g\times\g$, it follows from the Independence Principle that \[ \q:= \h\ltimes \BV^{\sf ab}\simeq \h\ltimes\Delta_\g^{\sf ab}= (\g\times\g)_{(0)}, \] i.e., $\q$ is the {\sf IW}-contraction of $\g\times\g$ associated with $\h$. Recall that $d_j=\deg H_j$. \begin{prop} \label{prop-7-0} We have $\ind\q=2l$ and\/ $\gS(\q)^{\q}$ is freely generated by the polynomials \[ \boldsymbol{F}_j=(H_{j,I}- H_{j,II})^\bullet \ \text{ with } \ 1\le j\le l \] and a basis of \, $\Delta_\te$. \end{prop} \begin{proof} Let $\gamma=(\xi,\xi)\in\q^*$ be a linear form such that $\xi\in\te^*$ and $\dim\g^\xi=l$. Then $\dim\q^\gamma = 2l$ and thereby $\ind\q\le 2l$. Note also that $\Delta_\te$ belongs to the centre of $\q$, i.e., $[\Delta_\te,\g\times\g]_{(0)}=0$. Let $F_j=F_{j,I}$ be the highest $\ut _-$-component of $H_j\in\gS(\g)^{\g }$ w.r.t. the splitting $\g=\be\oplus \ut_-$. By~\cite[Lemma\,5.7]{contr}, we have $F_j\in\gS(\ut \oplus\ut _-)$. Recall that by \cite{alafe} the elements $\{F_j\}$ are algebraically independent and $\deg_{\ut _-} F_j=d_j-1$. Similarly, let $F_{j,II}$ be the highest $\ut$-component of $H_{j,II}$ w.r.t. the splitting $\g=\be_-\oplus \ut$. For each $j\in\{1,\dots,l\}$ and $s\in\{I,II\}$, we have $H_{j,s}^\bullet \in\gS(\Delta_\te)$. In view of this and the above paragraph, \[ \boldsymbol{F}_j= F_{j,I} - F_{j,II} + \tilde F_j, \ \text{ where } \ \tilde F_j \in \Delta_\te\gS(\q). \] We see that $\gS(\q)^{\q}$ contains $2l$ algebraically independent elements and hence $\ind\q\ge 2l$. Thereby $\ind\q=2l$. Assume that $\dim(\left<\textsl{d}_\gamma \boldsymbol{F}_j \mid 1\le j\le l\right>_{\bbk}+\Delta_\te)<2l$ for all points $\gamma$ of a divisor $D\subset\q^*$.
Then $D$ is defined by a homogeneous polynomial and $\dim(D\cap\Ann(\Delta_\te))\ge \dim\q-l-1$. If we write $\gamma=\gamma_I+\gamma_{II}$ for $\gamma\in (D\cap\Ann(\Delta_\te))$, then the differentials $\{\textsl{d}_{\gamma_s} F_{j,s}\}$ are linearly dependent at $\gamma_s$ and thus \[ \gamma_I \in (\be\ltimes\ut^{\sf ab}_-)^*_{\sf sing}, \quad \gamma_{II} \in (\be_-\ltimes\ut^{\sf ab})^*_{\sf sing} \] by \cite{contr}. The intersection $(\mathfrak b\ltimes\ut _-^{\sf ab})^*_{\sf sing} \cap \Ann (\te)$ is a proper closed subset of $\Ann\!(\te)$, cf. Lemma~\ref{lm-sing-0}{\sf (i)}. Thereby $\dim(D\cap\Ann (\Delta_\te))\le \dim\q-l-2$, a contradiction. By~\cite[Theorem \,1.1]{ppy}, the polynomials $\{\boldsymbol{F}_j\}$, together with a basis of $\Delta_\te$, generate an algebraically closed subalgebra of $\gS(\q)$. Since $\ind\q=2 l$, we are done. \end{proof} Thus, the above results show that $2$-splitting~\eqref{eq:2-twisted} is non-degenerate, and we can consider the corresponding Poisson-commutative subalgebra of $\gS(\g\times\g)$. \begin{thm} \label{thm:g-h} The algebra $\gZ_{\langle\h,\Delta_\g\rangle}\subset\gS(\g\times\g)$ is polynomial. It is freely generated by the bi-homogeneous components \[ \{(H_{j,I} + H_{j,II})_{s,d_j-s}, (H_{j,I} - (-1)^{d_j}H_{j,II})_{s',d_j-s'} \mid 1\le j\le l,\ 1\le s \le d_j, \, 1\le s' \le d_j-1 \} \] together with a basis for\/ $\Delta_\te$. \end{thm} \begin{proof} It follows from the description of $\cz_{\infty}$ and Theorem~\ref{thm:main3-1} that $\gZ_{\langle\h,\Delta_\g\rangle}$ is generated by all the bi-homogeneous components of $\{H_{j,I} \pm H_{j,II}\}$ and $\cz_0$. By Proposition~\ref{prop-7-0}, $\cz_0$ is generated by the bi-homogeneous components of the form $\{(H_{j,I}- H_{j,II})^\bullet\}$ with $j=1,\dots,l$ and a basis for $\Delta_\te$. Thus, the total number of generators is at most $2\bb(\g)+3l$. Since the components of the form $(H_{j,I} \pm H_{j,II})_{0,d_j}$ are either zero or belong to $\gS(\Delta_\te)$, they are redundant. Notice also that $(H_{j,I} - (-1)^{d_j}H_{j,II})_{d_j,0}=0$ for all $j$. Therefore, there are at most $2\bb(\g)=\bb(\g\times\g)$ nonzero generators and, in view of Theorem~\ref{thm:dim-Z}, they must be algebraically independent. \end{proof} \subsection{The real picture} Assume now that $\bbk=\BC$. Let $\ka$ be a compact real form of $\g$. Then $\be\cap \ka=i\te_{\BR}$, where $\te_{\BR}$ is a maximal torus in a split real form $\g _{\BR}\subset \g$. Set $\rr=\te_{\BR}\oplus\ut$. It is an $\BR$-subalgebra of $\be$ and we have the real $2$-splitting $\g =\rr\oplus\ka$, which is the Iwasawa decomposition of $\g$ as a real Lie algebra. The complexification of this decomposition is conjugate to the $2$-splitting of $\g\times\g$ defined by Eq.~\eqref{eq:2-twisted}. Here $(\g,\ka, \rr)$ is a Manin triple over $\BR$, see~\cite[Sect.~5.3]{duzu}. We choose the basic symmetric invariants of $\g$ such that each $H_j$ takes only real values on $\te_{\mathbb R}$. Over $\mathbb R$, $\gS(\g)^{\g }$ is generated by $\Re H_j$ and $\Im H_j$ with $1\le j\le l$. Translating Theorem~\ref{thm:g-h} to the real setting, we obtain the following result. \begin{thm} \label{thm:k-b-polynomial} Let $\gS_\BR(\g)$ be the symmetric algebra over $\BR$ of the real Lie algebra $\g$. Then the $\BR$-algebra $\gZ_{\langle\rr,\mathfrak k\rangle}\subset \gS_\BR(\g)$ is polynomial. 
It is freely generated by the bi-homogeneous components \[ \{(\Re H_j)_{s,d_j-s}, (\Im H_j)_{s',d_j-s'} \mid 1\le j\le l,\ 1\le s \le d_j, \, 1\le s' \le d_j-1 \} \] together with a basis of \ $i\te_{\mathbb R}$. \end{thm} \begin{rmk} \label{rem:real-general} We associated a $2$-splitting of $\g$ to any (complex) involution $\sigma$, see Remark~\ref{rmk:any-invol}. If $\g_{\BR,\sigma}$ is the real form of $\g$ corresponding to $\sigma$, then the Iwasawa decomposition of $\g_{\BR,\sigma}$ is just the real form of that $2$-splitting. We hope to elaborate on this relationship and related {\sf PC}-subalgebras in the future. \end{rmk}
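{\bf Example.} We close this section by making Proposition~\ref{prop-7-0} and Theorem~\ref{thm:g-h} explicit in the smallest case $\g=\mathfrak{sl}_2$, where $l=1$ and $d_1=2$. Write $\delta_x=(x,x)\in\Delta_\g$ for $x\in\{e,h,f\}$ and set $a=(h,-h)$, $u=(e,0)$, $v=(0,f)$, so that $\h=\langle a,u,v\rangle_{\bbk}$. Then $h_I=\frac12(\delta_h+a)$, $h_{II}=\frac12(\delta_h-a)$, $e_I=u$, $f_I=\delta_f-v$, $e_{II}=\delta_e-u$, $f_{II}=v$. Substituting this in $H_{1,s}=h_s^2+4e_sf_s$ for $s\in\{I,II\}$, one obtains the bi-homogeneous decompositions w.r.t. $\g\times\g=\h\oplus\Delta_\g$:
\[
H_{1,I}-H_{1,II}=a\delta_h+4u\delta_f-4\delta_e v , \qquad H_{1,I}+H_{1,II}=\bigl(\tfrac12 a^2-8uv\bigr)+4\bigl(u\delta_f+\delta_e v\bigr)+\tfrac12\delta_h^2 .
\]
In particular, $H_{1,I}-H_{1,II}$ is bi-homogeneous of bi-degree $(1,1)$, so that $\boldsymbol{F}_1=(H_{1,I}-H_{1,II})^\bullet=H_{1,I}-H_{1,II}$; moreover, $(H_{1,I}-H_{1,II})_{2,0}=0$ and $(H_{1,I}+H_{1,II})_{0,2}=\frac12\delta_h^2$ lies in $\gS(\Delta_\te)$, in agreement with the proof of Theorem~\ref{thm:g-h}. The resulting free generators of $\gZ_{\langle\h,\Delta_\g\rangle}$ are $\delta_h$, \ $a\delta_h+4u\delta_f-4\delta_e v$, \ $u\delta_f+\delta_e v$, \ and $\tfrac12 a^2-8uv$; their number is $4=2\bb(\mathfrak{sl}_2)=\bb(\g\times\g)$, as predicted by Theorem~\ref{thm:dim-Z}.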
\section{Introduction}\label{sec:intro} In the hot and dense medium arising in core-collapse supernovae (CCSN) and binary neutron star mergers (BNSM), neutrinos play a key role in transporting energy, momentum, and lepton-number. Once neutrinos are produced by weak interactions, they travel in the medium. A fraction of these neutrinos experiences scatterings with or reabsorption onto matter, which converts neutrino energy and momentum into those of matter and thereby affects the fluid dynamics. The neutrino emission and absorption can also change the electron-fraction of matter. This has a direct influence on the chemical composition, thus accounting for nucleosynthesis in the ejecta. These considerations highlight the importance of developing accurate modelling of the neutrino radiation field. Decades of progress on numerical simulations of CCSN and BNSM with Boltzmann neutrino transport or its approximate methods have improved our understanding of the roles of neutrinos in fluid dynamics, nucleosynthesis, and observational consequences such as neutrino signals. The classical treatment of neutrino kinetics, i.e., Boltzmann neutrino transport, is justified as long as neutrinos are stuck in flavor eigenstates, which was naturally expected due to the large matter potential in CCSN and BNSM environments (see, e.g., \cite{2000PhRvD..62c3007D}). On the other hand, in dense neutrino environments, neutrino-neutrino self-interactions give rise to refractive effects \cite{1992PhLB..287..128P}, indicating that the neutrino dispersion relation is modified. This potentially triggers large neutrino-flavor conversion \cite{2010ARNPS..60..569D}. It has been suggested that various types of flavor conversions emerge from neutrino self-interactions. For instance, slow neutrino-flavor conversion, which is driven by the interplay between vacuum- and self-interactions, leads to synchronized-, bipolar-, and spectral split phenomena (see \cite{2010ARNPS..60..569D} and references therein). Matter neutrino resonances may occur in BNSM or collapsar environments \cite{2012PhRvD..86h5015M,2016PhRvD..93d5021M,2016PhRvD..94j5006Z}, in which the dominance of electron-type anti-neutrinos ($\bar{\nu}_{e}$) over electron-type neutrinos ($\nu_{e}$) cancels the matter potential, which induces resonant phenomena similar to the Mikheyev-Smirnov-Wolfenstein (MSW) effect. Collisional instability is a new type of flavor-conversion instability, in which the disparity of matter interactions between neutrinos and anti-neutrinos induces the flavor conversion \cite{2021arXiv210411369J}. Fast neutrino-flavor conversion (FFC) has received increased attention from the community, since it would ubiquitously occur in both CCSN \cite{2019PhRvD.100d3004A,2019ApJ...886..139N,2020PhRvR...2a2046M,2020PhRvD.101d3016A,2020PhRvD.101b3018D,2021PhRvD.103f3033A,2020PhRvD.101f3001G,2021PhRvD.103f3013C,2021PhRvD.104h3025N,2022ApJ...924..109H} and BNSM \cite{2017PhRvD..95j3007W,2017PhRvD..96l3015W,2020PhRvD.102j3015G,2021PhRvL.126y1101L,2022arXiv220316559J}. Since the growth rate of FFC is proportional to the neutrino number density, it would offer the fastest growing mode of flavor conversion in CCSN and BNSM (see also recent reviews, e.g., \cite{2016NuPhB.908..366C,2020arXiv201101948T}). Theoretical indications of the occurrence of FFC in CCSN and BNSM imply that its feedback onto radiation-hydrodynamics and nucleosynthesis needs to be incorporated in one way or another.
Very recently, there have been some attempts to tackle this issue in simulations of BNSM remnants (see e.g., \cite{2021PhRvL.126y1101L,2022arXiv220316559J}), which certainly marked an important stepping stone towards BNSM models with quantum kinetic neutrino transport. However, there are many simplifications and approximations in these models, which would discard important features of neutrino-flavor conversion. It is, hence, necessary to consider how to bridge the current gap between CCSN/BNSM simulations and the non-linear dynamics of neutrino quantum kinetics. The numerical code which we present in this paper is designed to mediate between the two. In the last decades, considerable progress has also been achieved in the neutrino-oscillation community. Analytic approaches with simplifying assumptions and toy models have facilitated our understanding of neutrino-flavor conversion (see, e.g., \cite{2006PhRvD..74j5010H,2007PhRvD..76l5008R,2007PhRvD..75l5005D,2007PhRvD..75h3002R,2015IJMPE..2441008D,2020PhRvD.101d3009J,2022PhRvL.128l1102P}). Numerical simulations are also powerful tools to explore the non-linear behaviors while relaxing these assumptions. However, they are not yet at a stage to provide reliable astrophysical consequences of the flavor conversion. This is mainly due to the fact that there are large disparities in spatial and temporal scales between neutrino-flavor conversion and astrophysical phenomena, which exhibits the need for currently unfeasible computational power. Notwithstanding, there are many numerical simulations to study the non-linear properties of neutrino quantum kinetics. The simplest model would be the neutrino bulb model\footnote{We note that there are different levels of approximations in the light bulb model; for instance, steady-state or time-dependent, single or multi-angle, with or without including halo effects. See references for more details.} \cite{2006PhRvD..74l3004D,2006PhRvD..74j5014D,2007PhRvD..76h5013D,2007JCAP...12..010F,2011PhRvL.106i1101D,2012PhRvD..85f5008D,2017PhRvD..96b3009Y,2018PhRvD..97h3011V,2018PhRvD..98j3020Z,2021PhRvD.103f3008Z}. Although this model has many simplifying assumptions, some intriguing features of neutrino-flavor conversion on global scales have been revealed. Two-beam- \cite{2015PhRvD..92l5030D,2019PhRvL.122i1101C} and line-beamed models \cite{2015PhRvD..92f5019A,2018PhRvD..98d3014A,2019PhLB..790..545A}, both of which reduce the computational cost by limiting neutrino flight-directions (see also \cite{2015PhRvD..92b1702M}), are also powerful approaches to study the non-linear phase of flavor conversion without much computational burden. More direct simulations of flavor conversion have also been performed for homogeneous \cite{2020PhRvD.101d3009J,2020PhRvD.102j3017J,2021PhLB..82036550X,2021PhRvD.104b3011S,2021arXiv210914011S,2022PhRvL.128l1102P,2022arXiv220411873H} and inhomogeneous \cite{2017JCAP...02..019D,2019PhRvD.100b3016M,2020PhLB..80035088M,2020PhRvD.102f3018B,2021PhRvL.126f1302B,2021PhRvD.104j3003W,2021PhRvD.103f3001M,2021PhRvD.104j3023R,2021PhRvD.104h3035Z,2021PhRvD.104l3026D,2022JCAP...03..051A,2022PhRvD.105d3005S,2022arXiv220505129B} neutrino media, resolving neutrino angular distributions in momentum space. It is also worthy of note that a code-comparison across different numerical solvers was made very recently \cite{2022arXiv220506282R}, which is a rewarding effort to validate these quantum kinetics codes and to understand the strengths and weaknesses of each code.
On the other hand, time-dependent features of neutrino-flavor conversions on global scales remain an enduring mystery. It should be mentioned that collective neutrino oscillations naturally break their own temporal stationarity \cite{2015PhLB..751...43A,2015PhRvD..92l5030D}, which exhibits the importance of time-dependent simulations. General relativistic (GR) effects also need to be incorporated in global simulations, since gravity is usually strong in regions where the neutrino number density is high (e.g., in the vicinity of a neutron star). They may play non-negligible roles in flavor conversion, since gravitational redshift and light-bending effects have an influence on neutrino distributions in momentum space, which has an impact on self-interaction potentials (see, e.g. \cite{2017PhRvD..96b3009Y}). However, currently available numerical codes (see, e.g., \cite{2021PhRvD.103h3013R,2022arXiv220312866G}) that have the capability of solving the time-dependent quantum kinetic equation were designed for local simulations. More precisely speaking, their numerical approach is not suited for curvilinear coordinate systems, and the formulation is not applicable to neutrino transport in curved spacetimes. As is well established in numerical treatments of the GR Boltzmann equation, the transport operator can, in general, be written in a conservative form (see, e.g., \cite{2013PhRvD..88b3011C,2014ApJS..214...16N,2020ApJ...888...94D}), which is very useful for numerical simulations. The operator contains not only spatial components but also momentum-space ones (see Sec.~\ref{sec:basiceq} for more details), which account for geometrical effects of neutrino transport. As such, the formalism is suited for neutrino transport on global scales. Our CCSN neutrino-radiation-hydrodynamic code with full Boltzmann neutrino transport was developed with this formalism \cite{2014ApJS..214...16N,2017ApJS..229...42N,2019ApJ...878..160N,2021ApJ...909..210A}, and it has worked well in multi-dimensional (multi-D) CCSN simulations \cite{2018ApJ...854..136N,2019ApJ...880L..28N,2019ApJ...872..181H,2020ApJ...903...82I}. In this paper, we present a new numerical code GRQKNT (General-Relativistic Quantum-Kinetics Neutrino Transport), which is designed for time-dependent local and global simulations of neutrino-flavor conversion in CCSN and BNSM environments. At the moment, we are particularly interested in the dynamics of FFC, which seems to be the most relevant to, and the largest uncertainty in, the theory of CCSN and BNSM. One may wonder whether such simulations are intractable with current numerical resources. However, we can relax the computational burden by reducing the neutrino number density. Since the physical scale of FFC is determined only by the self-interaction potential, the reduction of the neutrino number density makes global simulations computationally feasible. A similar approach can be seen in other fields; for instance, the ion-to-electron mass ratio is frequently reduced in particle-in-cell simulations of plasma physics to save computational time\footnote{It is worth noting that nowadays increased computational resources allow PIC simulations with the real mass ratio (see, e.g., \cite{2015PPCF...57k3001A}).}. Realistic FFC features (i.e., without reduction of neutrino number density) can be obtained by increasing the neutrino number density, while the resolutions in neutrino phase space and the size of the computational domain are controlled in accordance with computational power.
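To put rough numbers on this rescaling, the following minimal Python sketch evaluates the characteristic FFC length scale $\sim (\sqrt{2} G_{F} n_{\nu})^{-1}$ for several attenuation factors of the neutrino number density. We emphasize that this script is not part of GRQKNT, and the fiducial number density is an assumed, order-of-magnitude value.
\begin{verbatim}
# Illustrative estimate of the fast-flavor-conversion (FFC) scale
# ~ (sqrt(2) G_F n_nu)^{-1} and of how an artificial attenuation of
# the neutrino number density relaxes the resolution requirement.
# The fiducial density below is an assumed, order-of-magnitude value.
import math

HBARC_EV_CM = 1.973e-5   # hbar*c in eV cm
GF = 1.166e-23           # Fermi constant in eV^-2 (natural units)

def ffc_length_cm(n_nu_cm3):
    """Length scale hbar*c / (sqrt(2) G_F n_nu), with n_nu in cm^-3."""
    mu_eV = math.sqrt(2.0) * GF * n_nu_cm3 * HBARC_EV_CM**3
    return HBARC_EV_CM / mu_eV

n_nu = 1.0e32  # assumed number density near the neutrinosphere [cm^-3]
for attenuation in (1.0, 1.0e2, 1.0e4):
    print(f"attenuation {attenuation:8.0e}: "
          f"FFC scale ~ {ffc_length_cm(n_nu / attenuation):9.3e} cm")
\end{verbatim}
With these assumed inputs, the FFC scale grows from $\sim 1$ cm at the unattenuated density to $\sim 10^{2}$ m for an attenuation factor of $10^{4}$, which brings global computational domains within reach; the attenuation can then be lowered in accordance with available computational resources.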
Following the above approach, we carried out time-dependent global simulations of FFC; the results are reported in a separate paper \cite{2022arXiv220604097N}. We confine the scope of this paper to describing the philosophy, design, and numerical aspects of GRQKNT. This paper is organized as follows. We describe the basic equation and the numerical formalism in Sec.~\ref{sec:basiceq}. We then describe the details of each numerical module in its own section: the transport module (Sec.~\ref{sec:transport}), the collision term (Sec.~\ref{sec:Colterm}), and the oscillation module (Sec.~\ref{sec:osc}). Finally, we summarize and conclude in Sec.~\ref{sec:summary}. We use units with $c = G = \hbar = 1$, where $c$, $G$, and $\hbar$ are the speed of light, the gravitational constant, and the reduced Planck constant, respectively. We use the metric signature of $- + + +$. \section{Basic equations}\label{sec:basiceq} In the GRQKNT code, we solve the general relativistic mean-field quantum kinetic equation (QKE), which is written as (see also \cite{2019PhRvD..99l3014R}), \begin{equation} \begin{aligned} p^{\mu} \frac{\partial \barparena{f}}{\partial x^{\mu}} + \frac{dp^{i}}{d\tau} \frac{\partial \barparena{f}}{\partial p^{i}} = - p^{\mu} u_{\mu} \barparena{S} + i p^{\mu} n_{\mu} [\barparena{H},\barparena{f}]. \end{aligned} \label{eq:basicneutrinosQKE} \end{equation} In the expression, we use the same convention as \cite{2021ApJS..257...55K}. $f$ and $\bar{f}$ denote the density matrix of neutrinos and anti-neutrinos, respectively; $x^{\mu}$ and $p^{\mu}$ are spacetime coordinates and the four-momentum of neutrinos (and anti-neutrinos); $u^{\mu}$ and $n^{\mu}$ represent the four-velocity of the fluid and the unit vector normal to the spatial hypersurface of constant time, respectively; $S$ ($\bar{S}$) represents the collision terms measured in the fluid rest frame; $H$ ($\bar{H}$) denotes the Hamiltonian operator associated with neutrino-flavor conversion. The Hamiltonian is composed of three components, \begin{equation} \barparena{H} = \barparena{H}_{\rm vac} + \barparena{H}_{\rm mat} + \barparena{H}_{\nu \nu}, \label{eq:Hdecompose} \end{equation} where \begin{equation} \begin{aligned} &\bar{H}_{\rm vac} = H^{*}_{\rm vac} , \\ &\bar{H}_{\rm mat} = - H^{*}_{\rm mat} ,\\ &\bar{H}_{\nu \nu} = - H^{*}_{\nu \nu}. \end{aligned} \label{eq:Hantineutrinos} \end{equation} $H_{\rm vac}$ denotes the vacuum Hamiltonian with the expression in the neutrino-flavor eigenstate, which can be written as \begin{equation} \begin{aligned} H_{\rm vac} = \frac{1}{2 \nu} U \begin{bmatrix} m^{2}_{1} & 0 & 0\\ 0 & m^{2}_{2} & 0 \\ 0 & 0 & m^{2}_{3} \end{bmatrix} U^{\dagger} ,\\ \end{aligned} \label{eq:Hvdef} \end{equation} where $\nu = -p^{\mu} n_{\mu} = p^{0} \alpha$; $\alpha$ denotes the lapse function associated with the space-time foliation (3+1 formalism of curved space-time); $m_{i}$ denotes the neutrino masses; $U$ denotes the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The matter potential $H_{\rm mat}$ can be written as \begin{equation} \begin{aligned} H_{\rm mat} = D \begin{bmatrix} V_e & 0 & 0\\ 0 & V_{\mu} & 0 \\ 0 & 0 & V_{\tau} + V_{\mu \tau} \end{bmatrix} ,\\ \end{aligned} \label{eq:Hmatdef} \end{equation} where $D = (-p^{\mu} u_{\mu})/\nu$ denotes the effective Doppler factor between the laboratory frame and the fluid-rest frame, i.e., representing the Lorentz boost between $\mbox{\boldmath $n$}$ and $\mbox{\boldmath $u$}$ under local flatness (see \cite{2014ApJS..214...16N,2017ApJS..229...42N} for more details).
The leading order of $V_{\ell}$ can be written as \begin{equation} V_{\ell} = \sqrt{2} G_F (n_{\ell^{-}} - n_{\ell^{+}}) , \label{eq:Velldef} \end{equation} where $G_F$ and $n_{\ell}$ represent the Fermi constant and the number density of charged leptons $(\ell = e, \mu, \tau)$, respectively. As a default setting, we assume that on-shell heavy leptons ($\mu$ and $\tau$) do not appear, i.e., $V_{\mu}$ and $V_{\tau}$ are set to be zero. It should be mentioned, however, that $V_{\mu}$ may not always be zero, since on-shell muons would appear in the vicinity of (or inside) a neutron star \cite[see, e.g.,][]{2017PhRvL.119x2702B,2020PhRvD.102l3001F}. $V_{\mu \tau}$ represents, on the other hand, the radiative correction of the neutral current \cite{1987PhRvD..35..896B,2000PhRvD..62c3007D}, which is the leading-order term distinguishing $\nu_{\mu}$ and $\nu_{\tau}$ in cases with $V_{\mu}=V_{\tau}=0$. Following \cite{2000PhRvD..62c3007D}, $V_{\mu \tau}$ can be computed as, \begin{equation} V_{\mu \tau} = V_e \frac{3 G_F m_{\tau}^2 }{2 \sqrt{2} \pi^2 Y_e} \left( \ln \frac{m_{W}^2}{m_{\tau}^{2}} - 1 + \frac{Y_n}{3} \right), \label{eq:Vmyutau} \end{equation} where $m_{\tau}$ and $m_{W}$ denote the mass of the tau lepton and the W boson, respectively. $Y_e$ and $Y_n$ represent the electron fraction and neutron fraction, respectively. $H_{\nu \nu}$ represents the neutrino self-interaction potential, which can be written as \begin{equation} H_{\nu \nu} = \sqrt{2} G_F \int \frac{d^3 q^{\prime}}{(2 \pi)^3} (1 - \sum_{i=1}^{3} \ell^{\prime}_{(i)} \ell_{(i)} ) (f(q^{\prime}) - \bar{f}^{*}(q^{\prime})), \label{eq:Hselfpotedef} \end{equation} where $d^3q$ denotes the momentum-space volume element of neutrinos, measured in the laboratory frame; $\ell_{i} (i = 1, 2, 3)$ denote the directional cosines of the neutrino propagation direction. The two angles of neutrino flight directions are measured with respect to a spatial tetrad basis $\mbox{\boldmath $e$}_{(1)}$. There are multiple options to choose $\mbox{\boldmath $e$}_{(1)}$, and we usually set it as a unit vector in the same direction as the radial coordinate basis (see e.g., \cite{2014PhRvD..89h4073S,2017ApJS..229...42N}). By using the polar- ($\theta_{\nu}$) and azimuthal angles ($\phi_{\nu}$) in neutrino momentum space, $\ell_{i} (i = 1, 2, 3)$ can be written as \begin{equation} \begin{split} &\ell_{(1)} = \cos \hspace{0.5mm} \theta_{\nu}, \\ &\ell_{(2)} = \sin \hspace{0.5mm} \theta_{\nu} \cos \hspace{0.5mm} \phi_{\nu}, \\ &\ell_{(3)} = \sin \hspace{0.5mm} \theta_{\nu} \sin \hspace{0.5mm} \phi_{\nu}. \end{split} \label{eq:el} \end{equation} There are four remarks regarding the QKE. First, we take the relativistic limit of neutrinos in the expression; the energy of neutrinos is much larger than the rest-mass energy, which is a reasonable approximation for CCSN and BNSM\footnote{The typical energy of neutrinos in CCSN and BNSM is of the order of $10$ MeV, whereas the current upper bound on the neutrino mass is $\lesssim 0.1$ eV \cite{2019PhRvL.123h1301L}.}. Hence, we treat the neutrinos as massless particles in the transport equation (the left hand side of Eq.~\ref{eq:basicneutrinosQKE}) and the collision term (the first term on the right hand side of Eq.~\ref{eq:basicneutrinosQKE}). On the other hand, we keep the leading term of $\nu \times (m/\nu)^2$ in the Hamiltonian operator (see Eq.~\ref{eq:Hvdef}). Second, we define the Hamiltonian operator in the laboratory frame, although the choice of the frame is arbitrary (see, e.g., \cite{2019PhRvD..99l3014R}).
Third, the GRQKNT code is also compatible with the two-flavor approximation. In simulations under the two-flavor approximation, we change the size of the density matrix and Hamiltonian operators from $3 \times 3$ to $2 \times 2$ in GRQKNT. In the two-flavor case, the vacuum oscillation parameters are also changed; they are determined according to the problem. Fourth, Eq.~\ref{eq:basicneutrinosQKE} corresponds to the mean-field approximation, i.e., the one-body density-matrix description obtained from the first truncation of the BBGKY hierarchy (see \cite{2015IJMPE..2441009V} for more details). Under this assumption, the neutrino self-interaction is treated as an interaction between each neutrino and the mean-field neutrino medium in its vicinity. However, there may be astrophysical regimes where the mean-field approximation is inappropriate (see, e.g., \cite{2019PhRvD..99l3013P,2019PhRvD.100h3001C}), which may lead to different astrophysical consequences \cite{2018PhRvD..98h3002B}. Thus, GRQKNT is not capable of capturing all features of neutrino-flavor conversion. We leave the task of incorporating these many-body corrections into GRQKNT to future work. Following \cite{2014PhRvD..89h4073S}, we cast the QKE in a conservative form. This is a useful formalism for numerical simulations, since neutrino-number conservation can be ensured up to machine precision. It can be written as, \begin{equation} \begin{split} &\frac{1}{\sqrt{-g}} \left. \frac{\partial}{\partial x^{\alpha}} \right|_{q_{i}} \Biggl[ \Bigl( n^{\alpha} + \sum^{3}_{i=1} \ell_{i} e^{\alpha}_{(i)} \Bigr) \sqrt{-g} \barparena{f} \Biggr] \\ & - \frac{1}{\nu^2} \frac{\partial}{\partial \nu}( \nu^3 \barparena{f} \omega_{(0)} ) + \frac{1}{\sin\theta_{\nu}} \frac{\partial}{\partial \theta_{\nu}} ( \sin\theta_{\nu} \barparena{f} \omega_{(\theta_{\nu})} ) \\ & + \frac{1}{ \sin^2 \theta_{\nu}} \frac{\partial}{\partial \phi_{\nu}} (\barparena{f} \omega_{(\phi_{\nu})}) = D \barparena{S} - i [\barparena{H},\barparena{f}], \end{split} \label{eq:conformQKE} \end{equation} where $g$ and $x^{\alpha}$ are the determinant of the four-dimensional metric and the spacetime coordinates, respectively, and $e^{\alpha}_{(i)} (i = 1, 2, 3)$ denote a set of (spatial) tetrad bases normal to $n$. The factors $\omega_{(0)}, \omega_{(\theta_{\nu})}, \omega_{(\phi_{\nu})}$ are given as, \begin{equation} \begin{split} & \omega_{(0)} \equiv \nu^{-2} p^{\alpha} p_{\beta} \nabla_{\alpha} n^{\beta}, \\ & \omega_{(\theta_{\nu})} \equiv \sum^{3}_{i=1} \omega_{i} \frac{ \partial \ell_{(i)} }{\partial \theta_{\nu} }, \\ & \omega_{(\phi_{\nu})} \equiv \sum^{3}_{i=2} \omega_{i} \frac{ \partial \ell_{(i)} }{\partial \phi_{\nu} }, \\ &\omega_{i} \equiv \nu^{-2} p^{\alpha} p_{\beta} \nabla_{\alpha} e^{\beta}_{(i)}, \end{split} \label{eq:Omega} \end{equation} which can also be expressed with the Ricci rotation coefficients \cite{2014PhRvD..89h4073S}.
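The practical benefit of the conservative form can be illustrated with a one-dimensional toy problem: when the transport operator is discretized as a difference of interface fluxes, the total particle number is conserved to machine precision by construction, since each flux is added to one cell and subtracted from its neighbor. Below is a minimal sketch with a first-order upwind flux and periodic boundaries; this is only an illustration, not the WENO discretization actually used in GRQKNT (Sec.~\ref{sec:transport}).
\begin{verbatim}
# Toy 1D advection in flux (conservative) form: df/dt + d(c f)/dx = 0.
# Each interface flux is added to one cell and subtracted from its
# neighbour, so sum(f) is conserved to machine precision by construction.
import numpy as np

n = 128
dx, c = 1.0 / n, 1.0
dt = 0.5 * dx / c                                   # CFL number 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.exp(-100.0 * (x - 0.3)**2)                   # initial profile

def step(f):
    flux = c * f                                    # upwind flux for c > 0
    return f - dt / dx * (flux - np.roll(flux, 1))  # periodic boundary

total0 = f.sum()
for _ in range(1000):
    f = step(f)
print(abs(f.sum() - total0) / total0)               # ~1e-16
\end{verbatim}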
Spherical polar coordinates are often employed in solving the Boltzmann equation, and we choose a set of tetrad bases, $\mbox{\boldmath $e$}_{(i)}$, having the following coordinate components, \begin{equation} \begin{split} & e^{\alpha}_{(1)} = (0, \gamma^{-1/2}_{rr}, 0, 0 ) \\ & e^{\alpha}_{(2)} = \Biggl(0, -\frac{\gamma_{r \theta}}{\sqrt{\gamma_{rr} (\gamma_{rr} \gamma_{\theta \theta} - \gamma^2_{r \theta})}}, \sqrt{ \frac{\gamma_{rr}}{ \gamma_{rr} \gamma_{\theta \theta} - \gamma^2_{r \theta} } }, 0 \Biggr) \\ & e^{\alpha}_{(3)} = \Biggl(0, \frac{\gamma^{r \phi}}{\sqrt{\gamma^{\phi \phi}}} , \frac{\gamma^{\theta \phi}}{\sqrt{\gamma^{\phi \phi}}}, \sqrt{\gamma^{\phi \phi}} \Biggr), \end{split} \label{eq:polartetrad} \end{equation} where $\mbox{\boldmath $\gamma$}$ denotes the induced metric on each spatial hypersurface. Here, we explicitly write down the QKE in flat spacetime with spherical polar coordinates, which is also useful for seeing geometrical effects. Eq.~\ref{eq:conformQKE} can be rewritten in flat spacetime as, \begin{equation} \begin{split} & \frac{\partial \barparena{f}}{\partial t} + \frac{1}{r^2} \frac{\partial}{\partial r} ( r^2 \cos \theta_{\nu} \barparena{f} ) + \frac{1}{r \sin \theta} \frac{\partial}{\partial \theta} ( \sin \theta \sin \theta_{\nu} \cos \phi_{\nu} \barparena{f} ) \\ & + \frac{1}{r \sin \theta} \frac{\partial}{\partial \phi} ( \sin \theta_{\nu} \sin \phi_{\nu} \barparena{f} ) - \frac{1}{r \sin \theta_{\nu}} \frac{\partial}{\partial \theta_{\nu}} ( \sin^2 \theta_{\nu} \barparena{f}) \\ & - \frac{\cot \theta}{r} \frac{\partial}{\partial \phi_{\nu}} ( \sin \theta_{\nu} \sin \phi_{\nu} \barparena{f} ) = D \barparena{S} - i [\barparena{H},\barparena{f}]. \end{split} \label{eq:flatQKE} \end{equation} Compared to the QKE in Cartesian coordinates, there are two points that deserve attention. First, the Jacobian determinant of three-dimensional real space ($r^2 \sin \theta$) appears in the spatial transport terms (the second to fourth terms on the left-hand side of Eq.~\ref{eq:flatQKE}); it is directly related to $\sqrt{-g}$ in Eq.~\ref{eq:conformQKE}. Second, Eq.~\ref{eq:flatQKE} has transport terms in momentum space (the fifth and sixth terms on the left-hand side of the equation). This is attributed to the fact that $\mbox{\boldmath $e$}_{(i)}$ is not spatially uniform but rather rotates with $\theta$. As a result, $\omega_{i}$ becomes non-zero even in flat spacetime (see Eq.~\ref{eq:Omega}). The neutrino advection in the angular directions of momentum space can also be interpreted more intuitively as follows. Neutrinos traveling straight in space experience a changing directional cosine with respect to $\mbox{\boldmath $e$}_{(1)}$, except for those flying in the same direction as $\mbox{\boldmath $e$}_{(1)}$. Outgoing neutrinos at a finite angle to $\mbox{\boldmath $e$}_{(1)}$ become more forward-peaked with increasing radius. This is essentially the same geometrical effect as that discussed in the light-bulb model; in fact, the light-bulb model can be recovered by solving Eq.~\ref{eq:flatQKE} in spherical symmetry with outgoing neutrinos injected from a certain radius. It is worth noting that Eq.~\ref{eq:flatQKE} does not compromise the applicability of GRQKNT to local simulations.
As mentioned above, the transport terms in the angular directions of momentum space are associated with variations of the coordinate basis, indicating that they are negligible if we make the simulation box sufficiently small that the coordinate curvature can be safely neglected. In this case, the Jacobian determinant appearing in the spatial transport terms can also be dropped, showing that the QKE in Cartesian coordinates is recovered. Here, we provide an example of such a numerical setup. Let us consider a three-dimensional box in space covering the region $R \le r \le R+\Delta R$, $\Theta - \Delta \theta/2 \le \theta \le \Theta + \Delta \theta/2$, and $\Phi - \Delta \phi/2 \le \phi \le \Phi + \Delta \phi/2$ in the radial, zenith, and azimuthal directions, respectively. The simulation box becomes cubic when we choose the parameters as $\Delta R/R = \Delta \theta = \Delta \phi \ll 1$, $\Theta = \pi/2$, and $\Phi = 0$. In Sec.~\ref{sec:FFC}, we demonstrate 1D local simulations following this numerical setup. In the current version of the GRQKNT code, we can run simulations of neutrino-flavor conversion in three representative spacetimes: flat spacetime, a Schwarzschild black hole, and a Kerr black hole. In Schwarzschild spacetime, we employ the Schwarzschild coordinates. The line element can be written as, \begin{equation} \begin{split} ds^2 =& - \Bigl(1 - \frac{2M}{r} \Bigr) dt^2 + \Bigl(1 - \frac{2M}{r} \Bigr)^{-1} dr^2 \\ & + r^2 d\theta^2 + r^2 \sin^2 \theta d\phi^2, \end{split} \label{eq:lineSchw} \end{equation} where $M$ denotes the black hole mass. Using the set of tetrad bases described in Eq.~\ref{eq:polartetrad}, the resultant QKE can be written as, \begin{equation} \begin{split} & \frac{\partial }{\partial t} \biggl[ \Bigl(1 - \frac{2M}{r} \Bigr)^{-1/2} \barparena{f} \biggr] + \frac{1}{r^2} \frac{\partial}{\partial r} \biggl[ r^2 \cos \theta_{\nu} \Bigl(1 - \frac{2M}{r} \Bigr)^{1/2} \barparena{f} \biggr] \\ & + \frac{1}{r \sin \theta} \frac{\partial}{\partial \theta} ( \sin \theta \sin \theta_{\nu} \cos \phi_{\nu} \barparena{f} ) \\ & + \frac{1}{r \sin \theta} \frac{\partial}{\partial \phi} ( \sin \theta_{\nu} \sin \phi_{\nu} \barparena{f} ) \\ & - \frac{1}{\nu^2} \frac{\partial}{\partial \nu} \biggl[ \frac{M}{r^2} \Bigl( 1 - \frac{2M}{r} \Bigr)^{-1/2} \nu^3 \cos \theta_{\nu} \barparena{f} \biggr] \\ &- \frac{1}{\sin \theta_{\nu}} \frac{\partial}{\partial \theta_{\nu}} \biggl[ \sin^2 \theta_{\nu} \frac{r-3M}{r^2} \Bigl(1 - \frac{2M}{r} \Bigr)^{-1/2} \barparena{f} \biggl] \\ & - \frac{\cot \theta}{r} \frac{\partial}{\partial \phi_{\nu}} ( \sin \theta_{\nu} \sin \phi_{\nu} \barparena{f} ) = D \barparena{S} - i [\barparena{H},\barparena{f}]. \end{split} \label{eq:SchQKE} \end{equation} We note that Eq.~\ref{eq:SchQKE} becomes identical to its flat-spacetime counterpart (Eq.~\ref{eq:flatQKE}) in the limit $M \to 0$.
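To make the geometric source terms in Eq.~\ref{eq:SchQKE} concrete, the following minimal sketch evaluates the radial profiles of the coefficients of the energy-advection (gravitational redshift) and $\theta_{\nu}$-advection terms. The angular coefficient is proportional to $r-3M$ and vanishes at the photon sphere $r=3M$, and both coefficients recover their flat-spacetime limits as $M \to 0$.
\begin{verbatim}
# Coefficients of the energy- and theta_nu-advection terms in Eq. (SchQKE),
# in geometrized units (G = c = 1, so M has dimensions of length).
import numpy as np

def coeff_energy(r, M):
    """Redshift term coefficient: (M/r^2) (1 - 2M/r)^{-1/2}."""
    return M / r**2 / np.sqrt(1.0 - 2.0 * M / r)

def coeff_angle(r, M):
    """Angular term coefficient: ((r - 3M)/r^2) (1 - 2M/r)^{-1/2}.
    It vanishes at the photon sphere r = 3M and tends to 1/r as M -> 0,
    recovering the flat-spacetime term in Eq. (flatQKE)."""
    return (r - 3.0 * M) / r**2 / np.sqrt(1.0 - 2.0 * M / r)

M = 1.0
for r in (2.5 * M, 3.0 * M, 4.0 * M, 10.0 * M):
    print(r, coeff_energy(r, M), coeff_angle(r, M))
\end{verbatim}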
In Kerr spacetimes, we employ Kerr-Schild coordinates; the line element can be written as, \begin{equation} \begin{split} ds^2 =& - \Bigl(1 - \frac{2Mr}{\Sigma} \Bigr) dt^2 + \frac{4Mr}{\Sigma} dt dr + \Bigl(1 + \frac{2Mr}{\Sigma} \Bigr) dr^2 \\ & + \Sigma d\theta^2 + \frac{\sin^2 \theta}{\Sigma} \biggl[ (r^2 + a^2)^2 - \Delta a^2 \sin^2 \theta \biggr] d\phi^2 \\ & - 2 a \sin^2 \theta \Bigl(1 + \frac{2Mr}{\Sigma} \Bigr) d\phi dr \\ & - \frac{4Mar}{\Sigma} \sin^2 \theta d\phi dt, \end{split} \label{eq:lineKerrSchild} \end{equation} where \begin{equation} \begin{split} & \Delta \equiv r^2 - 2Mr + a^2, \\ & \Sigma \equiv r^2 + a^2 \cos^2 \theta, \\ \end{split} \label{eq:kerrvali} \end{equation} and $a$ denotes the Kerr parameter (the angular momentum of the black hole per unit mass). The explicit expression of the conservative form of the QKE is quite lengthy; hence we omit it here. It can, however, be straightforwardly derived from Eqs.~\ref{eq:conformQKE}-\ref{eq:polartetrad} by following the procedure outlined in this section. \section{Transport module}\label{sec:transport} \subsection{Design}\label{sec:des_transport} Consistent treatment of the transport and collision terms in multi-D Boltzmann neutrino transport was a technical challenge for the discrete-ordinate (S$_n$) method. As described in Secs. 2 and 3 of \cite{2014ApJS..214...16N}, the main source of difficulty is solving neutrino transport while interacting with moving matter through iso-energetic scatterings. The problem was, however, resolved by a mixed-frame approach with a two-energy-grid technique \cite{2014ApJS..214...16N}. We make good use of a Lagrangian-remapping grid (LRG) and a laboratory-fixed grid (LFG); the former is advantageous for treating the collision term, and the latter is useful for handling the transport operator. This technique has already been extended to cases with curved spacetimes \cite{2017ApJS..229...42N,2021ApJ...909..210A}; hence, we adopt the same technique in the GRQKNT code. As described in Sec.~\ref{sec:basiceq}, we choose $\mbox{\boldmath $n$}$ as a tetrad basis in the transport terms of the QKE, meaning that the neutrino transport is solved in the laboratory frame. We use the LFG to evaluate the neutrino advection in both space and momentum space except for the energy direction\footnote{We handle the neutrino advection in the energy direction with the LRG, since no technical issues arise there.}. It should be mentioned that we employ an explicit time-integration scheme in GRQKNT, whereas a semi-implicit method is implemented in our Boltzmann code \cite{2014ApJS..214...16N}. This makes the implementation of neutrino advection in the GRQKNT code much simpler than in the Boltzmann solver; more precisely, no matrix inversions are involved in GRQKNT. We also note that the two-energy-grid technique is not necessary in cases where fluid velocities are neglected or for gray neutrino transport (i.e., the energy-integrated QKE); in these cases, the LFG is set to be the same as the LRG. We refer readers to \cite{2014ApJS..214...16N} for the details of the numerical implementation of the two-energy-grid technique. Apart from the two-energy-grid technique, solving Eq.~\ref{eq:conformQKE} is quite straightforward. We apply a well-established hyperbolic solver, the 5th-order weighted essentially non-oscillatory (WENO) scheme \cite{1994JCoPh.115..200L}, with some extensions.
The WENO scheme is easy to implement and well suited to multi-D simulations; in fact, another neutrino-flavor conversion solver recently developed in \cite{2022arXiv220312866G} also employs 7th-order WENO. Below, we describe the WENO scheme implemented in our GRQKNT code. The numerical implementation of WENO is essentially the same as that used in \cite{2018AcMSn..34...37H}. The advantage of this method is a simple and computationally cheap implementation of WENO on non-uniform grids. It should be mentioned that non-uniform grids are frequently used in global simulations of CCSN and BNSM, since they are useful for reducing computational costs without compromising accuracy. On the other hand, WENO schemes on non-uniform grids require, in general, computations of complicated weight functions (see, e.g., \cite{2008JCoPh.227.2977C}), which increases the CPU time. In \cite{2018AcMSn..34...37H}, a new method was proposed to save CPU time while sustaining accuracy; we hence adopt this method in our GRQKNT code\footnote{There is a caveat, however. Highly non-uniform grids would reduce the accuracy of the solver to second order, which is a weak point of the method of \cite{2018AcMSn..34...37H}. However, such highly non-uniform grids are not necessary in simulations of CCSN and BNSM environments; hence this limitation does not compromise the usability of the GRQKNT code.}. For the basic part of the WENO scheme, we refer readers to \cite{2018AcMSn..34...37H}, and we describe only our two extensions of the original scheme. First, we implement a five-stage fourth-order strong-stability-preserving TVD Runge-Kutta scheme following \cite{spiteri2002new,2020JOUC...19..747G}. It should be mentioned that fourth-order accuracy is required to follow the time evolution of collective neutrino oscillations (see \cite{2021PhRvD.103h3013R,2021ApJS..257...55K}). Second, we modify the computation of the weight function $\Omega_k$ that is used to reconstruct a physical quantity at each cell interface\footnote{In our GRQKNT code, the primitive variable corresponds to each matrix element of $\barparena{f}$.}. $\Omega_{k}$ is defined as, \begin{equation} \Omega_k = \frac{\alpha_k}{\alpha_0 + \alpha_1 + \alpha_2}, \label{eq:compomega_k} \end{equation} where $k$ runs from 0 to 2. In the original WENO scheme, $\alpha_k$ is computed as, \begin{equation} \alpha_k = \frac{C_k}{(\epsilon + IS_k)^p}, \label{eq:compalpha_k_orig} \end{equation} with $\epsilon=10^{-6}$ and $p=2$. The $C_k$ are $C_0 = 0.1, C_1=0.6,$ and $C_2=0.3$. $IS_k$ denotes a smoothness measure; its explicit description can be found in Eqs.~10, 12, and 15 of \cite{2018AcMSn..34...37H}. In our test computations, however, we find that this weight function is not sufficient to sustain stability. This is mainly due to the fact that the density matrix of neutrinos has, in general, order-of-magnitude variations, as has been frequently observed in CCSN simulations with full Boltzmann neutrino transport. For such large variations, $\epsilon$ does not work well as a limiter in determining $\alpha_k$ and consequently leads to numerical instabilities. We resolve the issue by introducing a normalization factor $Q$ and another limiter $\epsilon_2$ in the evaluation of $\alpha_k$: \begin{equation} \alpha_k = \frac{C_k}{Z}, \label{eq:compalpha_k_ours} \end{equation} where \begin{equation} \begin{split} &Z = \max \biggl( \epsilon_2, (\epsilon \hspace{0.5mm} Q + IS_k)^p \biggr), \\ &Q = \frac{(|q^0| + |q^1| + |q^2|)^2}{9}.
\end{split} \label{eq:Qequiv} \end{equation} $q^k$ denotes the interfacial states of the physical quantity (i.e., each element of the density matrix of neutrinos); the exact expressions can be found in Sec. 2.2 of \cite{2018AcMSn..34...37H}. $Q$ is a normalization factor that makes the limiter $\epsilon$ work properly. It should be noted that both $Q$ and $IS_k$ vanish when the density matrix is zero everywhere, leading to division by zero in the computation of $\alpha_k$. We thus introduce another limiter, $\epsilon_2$, which is set to $10^{-50}$; the full weight computation is illustrated in the sketch below. \subsection{Code test}\label{sec:tests_transport} \begin{figure*} \includegraphics[width=\linewidth]{graph_LagLabo.eps} \caption{Transport test to check the capability of the two-energy-grid technique. Left: comparison of the energy spectrum of outgoing neutrinos ($\cos \theta_{\nu}=1$) between the inner (black line) and outer (red) regions. The neutrinos are emitted from matter at rest in the inner region. In the outer region, we assume that the fluid has a velocity of $- 0.2 c$, where $c$ denotes the speed of light. Since the LRG is defined in the fluid rest frame, the energy spectrum is blueshifted. Right: Same as the left panel, but with the neutrino energy corrected by the Doppler factor, i.e., the spectra are measured in the laboratory frame. The two spectra match each other well, showing that the fluid-velocity dependence is properly handled. See the text for more details. } \label{graph_LagLabo} \end{figure*} \begin{figure} \includegraphics[width=\linewidth]{graph_Radi_vs_Numdenratio_kerrcheck.eps} \caption{Transport test in Kerr spacetime. The plots show the number density of neutrinos as a function of radius at $t = 0.6$ ms. The vertical axis is normalized by the value at the inner boundary ($R=30 {\rm km}$). Color distinguishes different resolutions: low (red), medium (blue), and high (green). The vertical black dashed line denotes the radius reached by the initially injected neutrinos, obtained by solving a geodesic equation; see the text for details. } \label{graph_Radi_vs_Numdenratio_kerrcheck} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{graph_Radi_vs_angular_kerrcheck.eps} \caption{Transport test in Kerr spacetime. Top: radius vs. neutrino angles (in momentum space). Bottom: radius vs. neutrino energy. Color denotes the energy-integrated $f_{ee}$ normalized by its maximum at each radius. The black solid line denotes the neutrino trajectory obtained by solving the geodesic equation. From left to right: low, medium, and high resolutions. } \label{graph_Radi_vs_angular_kerrcheck} \end{figure*} \begin{figure} \includegraphics[width=\linewidth]{graph_GeoComp.eps} \caption{Radius vs. neutrino angles (in momentum space) for three different spacetimes: flat (red), Schwarzschild black hole (green), and Kerr black hole (black). For a fair comparison, we use Kerr-Schild coordinates with $a=0$ for the Schwarzschild spacetime. The results are obtained by integrating geodesic equations. We show each line up to the radius that the neutrino can reach by $t= 0.6$ ms. } \label{graph_GeoComp} \end{figure} We present results of some basic tests to assess the capabilities of the transport module in GRQKNT. In these tests, we set the collision and oscillation terms to zero. As a result, the transport equation becomes identical among all species of neutrinos; thus, we focus only on $\nu_e$ in this section. We compare the results to those obtained by solving the geodesic equation, which provides the neutrino trajectory in phase space.
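For concreteness, the following minimal sketch shows how the modified weights of Eqs.~\ref{eq:compalpha_k_ours}-\ref{eq:Qequiv} enter a 5th-order WENO reconstruction. For brevity, the candidate interface states and smoothness indicators are written here in their standard uniform-grid (Jiang-Shu) form, whereas GRQKNT uses the non-uniform-grid expressions of \cite{2018AcMSn..34...37H}.
\begin{verbatim}
# Minimal sketch of the modified WENO5 weights (uniform grid, left-biased
# reconstruction at the i+1/2 interface). eps, p, and C_k as in the text;
# the normalization Q and the floor eps2 are the GRQKNT modifications.
import numpy as np

C = np.array([0.1, 0.6, 0.3])
EPS, P, EPS2 = 1e-6, 2, 1e-50

def weno5_interface(v):
    """v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2]) cell averages."""
    # Candidate interface states q^k on the three sub-stencils.
    q = np.array([(2*v[0] - 7*v[1] + 11*v[2]) / 6.0,
                  ( -v[1] + 5*v[2] +  2*v[3]) / 6.0,
                  (2*v[2] + 5*v[3] -    v[4]) / 6.0])
    # Jiang-Shu smoothness indicators IS_k.
    IS = np.array([
        13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2,
        13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2,
        13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2])
    # Modified weights: Q rescales eps so it stays effective across
    # order-of-magnitude variations of f; eps2 avoids division by zero.
    Q = np.abs(q).sum()**2 / 9.0
    Z = np.maximum(EPS2, (EPS * Q + IS)**P)
    alpha = C / Z
    Omega = alpha / alpha.sum()
    return (Omega * q).sum()      # reconstructed interface value
\end{verbatim}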
It should be mentioned that we also check more complicated situations later in this paper, in which neutrino transport and fast neutrino-flavor conversion (FFC) are coupled to each other. We shall discuss the details of that test in Sec.~\ref{sec:osc}. We carried out a suite of transport tests in flat, Schwarzschild, and Kerr spacetimes. Here, we present only the essentials, focusing on two novelties compared to other schemes: the two-energy-grid technique and neutrino transport in Kerr spacetime. The tests are essentially the same as those carried out in our previous studies \cite{2014ApJS..214...16N,2021ApJ...909..210A}; hence, we refer readers to them for more details. To check the numerical implementation of the two-energy-grid technique, we inject neutrinos in the outgoing direction ($\cos \theta_{\nu} = 1$) at a certain radius where the fluid is at rest. The energy spectrum of the injected neutrinos is assumed to be a Fermi-Dirac distribution with zero chemical potential and a temperature of $5$~MeV. In the outer region, the fluid has a radial velocity of $- 0.2 c$, where $c$ denotes the speed of light. We set a discontinuity of the fluid velocity in the middle of the computational domain. It should be noted that, since the LRG is defined such that the energy mesh is isotropic in the fluid-rest frame, the energy spectrum should be shifted on the LRG. We carry out a spherically symmetric simulation with $20$ energy grid points, discretized logarithmically from 1 MeV to 100 MeV. We employ 12 radial grid points in this simulation, i.e., the discontinuity of the fluid velocity is located at the cell edge of the $6$th radial grid point. In the left panel of Fig.~\ref{graph_LagLabo}, we show the energy spectrum on the LRG in the inner and outer regions. As expected, the energy shift of the spectrum is confirmed\footnote{The fluid-rest-frame to laboratory-frame transformation of the energy spectrum is straightforward. The neutrino energies in the laboratory and fluid-rest frames are transformed into one another by the Doppler factor. We also note that $f_{ee}$ is a Lorentz scalar. See \cite{2014ApJS..214...16N} for more details.}. In the right panel, on the other hand, we show the energy spectrum measured in the laboratory frame. This panel shows that the energy spectrum is in good agreement with the injected one, illustrating that the two-energy-grid technique works well; the expected shift can also be computed directly, as illustrated in the sketch below. We now turn our attention to neutrino transport in Kerr spacetime. We set $M=5 M_{\rm sun}$ and $a = 0.5 M$, where $M_{\rm sun}$ denotes the mass of the sun. Neutrinos are injected from a certain radius on the equatorial plane, with the flight direction specified such that they remain bound to the equatorial plane. This computational setup makes the simulation a 1 (time) + 1 (radial direction in space) + 2 ($\theta_{\nu}$ and $\nu$ in momentum space) problem (see also \cite{2021ApJ...909..210A}). We choose $\phi_{\nu} = 3 \pi/2$ to maximize the frame-dragging effect of the Kerr black hole. The radius, $\theta_{\nu}$-direction, and energy of the injected neutrinos are set to $r=30 {\rm km}$, $\cos{\theta_{\nu}}=0.655$, and $\nu=25$~MeV, respectively\footnote{It should be mentioned that the injected neutrino energy is not exactly monochromatic, and similarly there is a finite width in the angular distribution of neutrinos. This is due to the finite-volume method in GRQKNT. Although this is one of the sources of error in the comparison to results obtained by solving the geodesic equation, the deviation should be reduced with increasing resolution in momentum space.}.
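Returning briefly to the two-energy-grid test, a minimal sketch of the cross-check mentioned above: since $f_{ee}$ is a Lorentz scalar, the fluid-frame (LRG) spectrum follows from evaluating the laboratory-frame distribution at the Doppler-shifted energy, with $D = \gamma (1 - v \cos \theta_{\nu})$ being the special-relativistic limit of the Doppler factor $D$ defined in Sec.~\ref{sec:basiceq}. The grid values follow the setup above.
\begin{verbatim}
# Expected spectral shift in the two-energy-grid test: f_ee is a Lorentz
# scalar, so f^F(nu_F) = f^lab(nu_F / D), D = gamma (1 - v cos(theta_nu)).
import numpy as np

T = 5.0                        # injected Fermi-Dirac temperature [MeV]
v, costh = -0.2, 1.0           # infalling fluid, outgoing neutrinos
gamma = 1.0 / np.sqrt(1.0 - v**2)
D = gamma * (1.0 - v * costh)  # ~1.22, i.e., a blueshift in the fluid frame

def f_lab(nu):                 # laboratory-frame occupation, zero chem. pot.
    return 1.0 / (np.exp(nu / T) + 1.0)

nu_F = np.geomspace(1.0, 100.0, 20)   # logarithmic LRG energy grid [MeV]
f_F = f_lab(nu_F / D)                 # fluid-frame (LRG) spectrum
# The spectrum on the LRG is shifted by the factor D, as in the left
# panel of Fig. (graph_LagLabo).
\end{verbatim}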
In these tests, we solve the neutrino transport in a spatial region of $30 {\rm km} \le r \le 100 {\rm km}$, where $r$ denotes the radius measured with the coordinate basis\footnote{In general, it is necessary to determine a yardstick to measure spatial scales in curved spacetimes. In this paper, we measure them based on the coordinate basis used in each simulation.}. We deploy $N_{r}$ grid points uniformly in the radial direction. The neutrino angular direction $\theta_{\nu}$ (the lateral angular direction in momentum space) is discretized uniformly with respect to the cosine of the angle by $N_{\theta_{\nu}}$ grid points over $0^{\circ} \le \theta_{\nu} \le 180^{\circ}$. The energy grid is also discretized uniformly by $N_{\nu}$ grid points over the range $0 {\rm MeV} \le \nu \le 50 {\rm MeV}$. The simulations are performed at three different resolutions: low, $288 (N_r) \times 128(N_{\theta_{\nu}}) \times 20 (N_{\nu})$; medium, $576 \times 256 \times 40$; and high, $1152 \times 512 \times 80$. In Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}, we show the number density of neutrinos, measured in the laboratory frame, as a function of radius at $t=0.6$ ms. To compare the results on an equal footing among the three models, we normalize the density by its value at $r=30$ km\footnote{In this test, we constantly inject neutrinos in time by setting $f_{ee}=0.1$ at the corresponding grid point in momentum space at the inner boundary. Since the angular and energy grids in neutrino momentum space are not identical among the different resolution models, the number density varies as well. We hence normalize the density by its value at $r=30$ km.}. Two important conclusions can be drawn from Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}. First, our transport module does not suffer from any numerical oscillations. We emphasize that this stability is not trivial, since the problem involves strong discontinuities in both real space and momentum space: neutrinos are injected at a single grid point in phase space, implying strong discontinuities of the $f_{ee}$ distribution in its vicinity. Numerical viscosity plays a role in the stabilization; in fact, we find some numerical diffusion around the neutrino front position (see $r \sim 90$ km in Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}). Our result suggests that the limiter of our WENO scheme works properly. In Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}, we also compare the result to the geodesic equation. The forefront radius of neutrinos at $t=0.6$ ms, which is $r = 88.5$ km, is displayed as a vertical dashed line in Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}. We confirm that this is consistent with our results, and that the deviation decreases with increasing resolution. To see the resolution dependence of the neutrino distributions in momentum space, we display two different color maps of $f_{ee}$ in Fig.~\ref{graph_Radi_vs_angular_kerrcheck}. In the top panels, we show the energy-integrated $f_{ee}$ as a function of radius and neutrino angle ($\theta_{\nu}$). In the bottom panels, we display the angle-integrated $f_{ee}$ as a function of radius and neutrino energy ($\nu$). For visualization purposes, we normalize $f_{ee}$ by its maximum over all angles (top panels) or energies (bottom panels) at the same radius. We find that the neutrino trajectory obtained in the GRQKNT simulations is in good agreement with that obtained by solving the geodesic equation (black lines in each panel).
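The number densities plotted in Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck} are zeroth-order momentum-space moments of the distribution function; the following minimal sketch shows the corresponding quadrature, $n = (2\pi)^{-3}\int \nu^2 d\nu \, d\Omega \, f_{ee}$, on a momentum grid uniform in $\nu$, $\cos\theta_{\nu}$, and $\phi_{\nu}$. This is a generic midpoint-rule illustration (with a hypothetical $\phi_{\nu}$ grid size), not the exact quadrature weights of the code.
\begin{verbatim}
# Laboratory-frame number density as a zeroth momentum-space moment:
# n = (2 pi)^{-3} \int nu^2 dnu dOmega f_ee, midpoint quadrature on a
# grid uniform in nu, cos(theta_nu), and phi_nu (natural units, MeV^3).
import numpy as np

def number_density(f_ee, nu, costh, phi):
    """f_ee[i_nu, i_th, i_ph]: distribution on cell-centered grids."""
    dnu = nu[1] - nu[0]
    dmu = costh[1] - costh[0]
    dph = phi[1] - phi[0]
    w = nu[:, None, None]**2 * dnu * dmu * dph   # nu^2 dnu dOmega
    return (w * f_ee).sum() / (2.0 * np.pi)**3

# Cell-centered grids loosely following the low-resolution run above.
nu = np.linspace(1.25, 48.75, 20)                # 0-50 MeV, N_nu = 20
costh = np.linspace(-1 + 1/128, 1 - 1/128, 128)  # N_theta = 128
phi = np.linspace(np.pi/64, 2*np.pi - np.pi/64, 64)  # hypothetical N_phi
f = np.zeros((20, 128, 64)); f[10, 100, 48] = 0.1    # injected beam
print(number_density(f, nu, costh, phi))
\end{verbatim}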
We confirm that numerical diffusion occurs in both the angular and energy directions, and that it is reduced with increasing resolution. These results demonstrate the correct implementation of neutrino transport in our code. Before closing this section, we offer some remarks on the effects of curved spacetimes. Aside from the gravitational redshift (shown in the bottom panels of Fig.~\ref{graph_Radi_vs_angular_kerrcheck}), there are remarkable effects of curved spacetimes on the neutrino angular advection, which can be seen in Fig.~\ref{graph_GeoComp}. Compared to the case of flat spacetime (red line), the neutrino angular distributions are less forward-peaked in black hole spacetimes. We also note that the forefront radius of neutrinos at $t=0.6$ ms strongly depends on the choice of spacetime. Since we inject the neutrinos in the retrograde direction with respect to the angular momentum of the Kerr black hole, the frame-dragging effect adds an attractive force (gravity becomes effectively stronger); consequently, the forefront radius of neutrinos becomes smaller than in the case of a Schwarzschild black hole with the same mass. As shown in Fig.~\ref{graph_Radi_vs_Numdenratio_kerrcheck}, the neutrino forefront radii obtained in our simulations are in good agreement with those obtained from the geodesic equation, supporting that frame-dragging effects are also correctly captured in our transport module. \section{Collision term}\label{sec:Colterm} Neutrino-matter interactions contributing to the collision term ($S$ in Eq.~\ref{eq:basicneutrinosQKE}) naturally change the neutrino distributions in momentum space. It has been demonstrated that the interplay between neutrino transport and the collision term leads to energy-, angle-, and flavor-dependent neutrino dynamics in CCSN and BNSM. There remain, however, large uncertainties in the roles of the collision term in neutrino-flavor conversion. This issue has attracted a great deal of attention, and some important progress has been made in the last decade. The collisional instability proposed by \cite{2021arXiv210411369J} represents a case in which the collision term directly drives flavor conversion. The so-called neutrino halo effect, which is associated with momentum-exchange scatterings, potentially plays a dominant role in inducing slow neutrino-flavor conversion \cite{2012PhRvL.108z1104C,2013PhRvD..87h5037C,2018JCAP...11..019C,2020JCAP...06..011Z}. FFC can also be triggered by various mechanisms through the interplay between neutrino transport and matter interactions in CCSN (see, e.g., \cite{2019PhRvL.122i1101C,2020PhRvR...2a2046M,2021PhRvD.104h3025N,2021PhRvD.104f3014N,2022JCAP...03..051A}). In the last few years, it has been demonstrated that neutrino-matter interactions, in particular momentum-exchange scatterings, affect flavor conversion in the non-linear phase \cite{2021PhRvD.103f3002S,2021arXiv210914011S,2021PhRvD.103f3001M,2021ApJS..257...55K,2022PhRvD.105d3005S,2022arXiv220411873H}. In these numerical models, we have witnessed that the effect (enhancement or suppression of flavor conversion) depends not only on the weak processes but also on the initial condition (e.g., the angular distributions of neutrinos) and the numerical setup (homogeneous or inhomogeneous). This demonstrates the importance of a self-consistent treatment of neutrino transport, the collision term, and flavor conversion, i.e., the need for global simulations. This motivates us to incorporate the collision term into the GRQKNT code.
The collision term is implemented into GRQKNT following the same approach as \cite{2019PhRvD..99l3014R} (see also \cite{1993PhRvL..70.2363R,1996slfp.book.....R,2000PhRvD..62i3026Y,2014PhRvD..89j5004V,2016PhRvD..94c3009B} for more general discussions of the treatment of the collision term). In the current version, four major weak processes relevant to CCSN and BNSM are implemented; we describe the essence below. Incorporating more reactions and including higher-order corrections to each weak process are postponed to future work. \subsection{Emission and absorption}\label{sec:emisabs} \begin{figure*} \includegraphics[width=\linewidth]{graph_Emisabstest.eps} \caption{Time evolution of the $ee$ component of the density matrix in a test of the emission and absorption processes. Left: electron capture by free protons (and the inverse process, $\nu_e$ absorption). Right: positron capture by free neutrons (and its inverse process, $\bar{\nu}_e$ absorption). We show the results for 1 MeV, 3.8 MeV, and 14.3 MeV neutrinos traveling in the outgoing ($\cos \theta_{\nu}=1$) and incoming ($\cos \theta_{\nu}=-1$) directions. Black solid lines and red dashed lines are the results computed by our Boltzmann code and the GRQKNT code, respectively. } \label{graph_Emisabstest} \end{figure*} \begin{figure*} \includegraphics[width=\linewidth]{graph_Scattest.eps} \caption{Same as Fig.~\ref{graph_Emisabstest} but for nucleon scatterings (left) and coherent scatterings with heavy nuclei (right). Since there is no flavor dependence in these two weak processes (higher-order corrections are neglected), we only show the result for $f_{ee}$. } \label{graph_Scattest} \end{figure*} The two emission processes, electron capture by free protons and positron capture by free neutrons, and their inverse reactions, i.e., neutrino absorption, are the dominant charged-current reactions in CCSN and BNSM. The neutrino production ($\barparena{R}_{emis}$) and extinction ($\barparena{R}_{abs}$) rates in the classical Boltzmann equation can be given as, \begin{equation} \begin{split} &\barparena{R}_{emis} = \barparena{j}_{e} (1 - \barparena{f}_{ee}), \\ &\barparena{R}_{abs} = \barparena{\kappa}_{e} \barparena{f}_{ee}, \end{split} \label{eq:ColClaEmisAbs} \end{equation} where $\barparena{j}_{e}$ and $\barparena{\kappa}_{e}$ denote the emissivity and absorption opacity, respectively. Given the neutrino energy and the chemical composition of electrons (positrons), protons, and neutrons\footnote{To obtain these thermodynamic quantities, we need to specify an equation of state (EOS). In the current version of GRQKNT, we employ the nuclear EOS of \cite{2017JPhG...44i4001F}, which has been used in our CCSN simulations with full Boltzmann neutrino transport (see, e.g., \cite{2019ApJS..240...38N,2019ApJ...880L..28N}).}, $\barparena{\kappa}_{e}$ can be computed from $\barparena{j}_{e}$ with a detailed-balance relation. The emissivity and absorption opacity are computed based on \cite{1985ApJS...58..771B}, which ignores higher-order corrections (such as recoil effects) but captures the essential properties of these reactions. As shown in \cite{2019PhRvD..99l3014R}, these emission and absorption processes of charged-current reactions can be extended to quantum kinetic treatments.
It can be written as, \begin{equation} \barparena{S}_{ab} = \barparena{j}_{a} \delta_{ab} - \biggl( \langle \barparena{j} \rangle_{ab} + \langle \barparena{\kappa} \rangle_{ab} \biggr) \barparena{f}_{ab}, \label{eq:ColQKEEmisAbs} \end{equation} where the bracket is defined as \begin{equation} \langle A \rangle_{ab} \equiv \frac{A_a + A_b}{2}. \label{eq:defbracket} \end{equation} In the above expression, the indices ($a$ and $b$) specify neutrino flavors. Since we ignore charged-current processes for heavy-leptonic neutrinos, we can set $\barparena{j}_{\mu}=\barparena{j}_{\tau}=\barparena{\kappa}_{\mu}=\barparena{\kappa}_{\tau}=0$. The computation of these charged-current reactions is straightforward and computationally cheap, since no integral operations in momentum space are involved. To check the correctness of the numerical implementation, we perform a comparison study against our Boltzmann code. In this test, the transport and oscillation operators are switched off. To determine the reaction rates, we assume a matter state with $\rho=2 \times 10^{12} {\rm g/cm^3}$, $Y_e=0.3$, and $T=10 {\rm MeV}$, where $\rho$, $Y_e$, and $T$ denote the baryon mass density, electron fraction, and temperature, respectively. As the initial condition for the neutrino distributions, we set $\barparena{f}$ as \begin{equation} \barparena{f} = \frac{1 + 0.5 \cos \theta_{\nu}}{10} . \label{eq:fini_coltest} \end{equation} It should be mentioned that there is no energy dependence in this initial condition for $\barparena{f}$. We perform the test simulations for three different neutrino energies: $1$ MeV, $3.8$ MeV, and $14.3$ MeV. The results are displayed in Fig.~\ref{graph_Emisabstest}. Except for $\bar{\nu}_e$ with an energy of 1 MeV in the positron capture reaction (see the right panel of Fig.~\ref{graph_Emisabstest}), all neutrinos approach the equilibrium state, i.e., the Fermi-Dirac distribution. We also note that the energy-dependent features of each charged-current reaction are properly captured; for instance, higher-energy neutrinos settle into the equilibrium state earlier. It should be noted that the emissivity of the positron capture process for $1 {\rm MeV}$ $\bar{\nu}_e$ is zero due to the energy threshold of the reaction; consequently, $\bar{f}_{ee}$ is constant in time. We confirm that the Boltzmann and GRQKNT codes give identical results (the black solid lines and red dashed lines overlap), ensuring that the emission and absorption terms in GRQKNT are correctly implemented. \subsection{Scattering}\label{sec:scattering} We implement two momentum-exchange scattering processes in GRQKNT: nucleon scattering and coherent scattering with heavy nuclei. We assume that the scatterings are elastic, which is a reasonable assumption in CCSN and BNSM. The resultant collision term has a similar form to that of the classical Boltzmann equation and can be written as, \begin{equation} \begin{split} \barparena{S}_{ab} (\nu^{F},\Omega^{F}) & = - \frac{(\nu^{F})^{2}}{(2 \pi)^3} \int d \Omega^{{\rm \prime F}} R (\nu^{F},\Omega^{F},\Omega^{{\rm \prime F}}) \\ & \times \Bigl( f^{F}_{ab} (\nu^{F},\Omega^{F}) - f^{F}_{ab} (\nu^{F},\Omega^{{\rm \prime F}}) \Bigr), \end{split} \label{eq:ColElaSca} \end{equation} where the superscript F is attached to variables measured in the fluid-rest frame. For the nucleon scattering processes, the energy dependence of the reaction kernel $R$ can be dropped under our assumptions (no recoil or weak magnetism). We compute these reaction rates following \cite{1985ApJS...58..771B}.
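As an illustration of Eq.~\ref{eq:ColQKEEmisAbs}, the following minimal sketch evaluates the quantum-kinetic emission/absorption term for a single momentum bin; the rates are placeholders. Only $\nu_e$ carries non-zero charged-current emissivity and opacity here, so the flavor off-diagonal (coherence) components are damped at the averaged rate.
\begin{verbatim}
# Quantum-kinetic emission/absorption term, Eq. (ColQKEEmisAbs):
# S_ab = j_a delta_ab - (<j>_ab + <kappa>_ab) f_ab, <A>_ab = (A_a + A_b)/2.
import numpy as np

def S_emis_abs(f, j, kappa):
    """f: 3x3 density matrix; j, kappa: per-flavor rates."""
    j_avg = 0.5 * (j[:, None] + j[None, :])          # <j>_ab
    k_avg = 0.5 * (kappa[:, None] + kappa[None, :])  # <kappa>_ab
    return np.diag(j) - (j_avg + k_avg) * f

j     = np.array([1.0, 0.0, 0.0])   # placeholder emissivity (nu_e only)
kappa = np.array([0.5, 0.0, 0.0])   # placeholder opacity
f = np.full((3, 3), 0.1)
# df/dt = S drives f_ee -> j_e / (j_e + kappa_e) (detailed balance),
# while the coherence f_emu decays at the rate (j_e + kappa_e) / 2.
print(S_emis_abs(f, j, kappa))
\end{verbatim}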
We perform a test simulation similar to that used for the emission and absorption processes. We employ the same matter background (to determine the reaction rates) and initial neutrino distributions as those used in Sec.~\ref{sec:emisabs}. In this test, we focus only on $\nu_e$, since the two scattering processes do not depend on the neutrino species under our assumptions. The results are summarized in Fig.~\ref{graph_Scattest}. As expected, $f_{ee}$ evolves towards an isotropic distribution. It should be noted that the initial angular distribution does not depend on the neutrino energy; hence, the isotropic distribution is the same for all neutrino energies, which is why all lines in the figure converge to the same value. On the other hand, the time evolution of $f_{ee}$ is energy dependent: higher-energy neutrinos achieve isotropic distributions earlier. We confirm that the result of GRQKNT is in good agreement with the Boltzmann simulation, illustrating the correct implementation. As another test related to the collision term, we perform a homogeneous simulation of FFC with scatterings. This is a representative example for assessing the capability of GRQKNT for problems coupling neutrino oscillations with scatterings; we shall present the result in the next section. \section{Oscillation module}\label{sec:osc} The neutrino oscillation module is the most important upgrade over our classical Boltzmann solver. Aside from the requirement of a high-order time-integration scheme, the numerical treatment of neutrino oscillation is straightforward. All we need to do is the matrix computation of $[\barparena{H},\barparena{f}]$, implying that no numerical instabilities occur\footnote{We note, however, that coarse resolutions in momentum space may generate spurious modes (see, e.g., \cite{2012PhRvD..86l5020S}). This issue should be kept in mind for any numerical simulation of collective neutrino oscillations.}. In this section, we only highlight some representative tests. Most of them are the same as those performed in our previous paper \cite{2021ApJS..257...55K}. We measure the capability of GRQKNT by comparing to analytic solutions or by reproducing the results obtained in previous studies. We also perform inhomogeneous simulations of FFC, in which both neutrino transport and flavor conversion are taken into account. This test shows the applicability of GRQKNT to local simulations, as discussed in Sec.~\ref{sec:basiceq}. We note that the purpose of this paper is to present the capability of the GRQKNT code, and therefore we do not enter into the details of the physical aspects of each flavor conversion. We refer readers to other references for physics-based discussions of each test. \subsection{Vacuum Oscillation}\label{sec:vacosc} \begin{figure*} \includegraphics[width=\linewidth]{graph_VacOSC.eps} \caption{Time evolution of each flavor component of the density matrix for vacuum oscillation. Black solid lines denote the analytic solution, while red dashed lines show the result obtained from the GRQKNT simulation. Time is normalized by the vacuum frequency $\omega$, defined as $\omega \equiv \Delta m^2/2 \nu$. } \label{graph_VacOSC} \end{figure*} We start with a check of vacuum oscillation. In this test, we assume the normal ordering of neutrino masses.
The oscillation parameters are set as $\Delta m^2 = 2.45 \times 10^{-15} {\rm MeV^2}$ and $\sin^2 \theta_0 = 2.24 \times 10^{-2}$, where $\Delta m^2$ and $\theta_0$ denote the neutrino squared-mass difference and the mixing angle under the two-flavor approximation, respectively. The neutrino energy is assumed to be $20$ MeV. Our parameter choice is the same as that used in \cite{2021ApJS..257...55K} (see Sec. 5 of that paper). Fig.~\ref{graph_VacOSC} portrays the time evolution of each component of the density matrix of neutrinos. As a reference, we also show the analytic solution in each panel (see Eqs.~14-17 in \cite{2021ApJS..257...55K}). As shown in the figure, our results are in good agreement with it. \subsection{MSW resonance}\label{sec:MSWreso} \begin{figure*} \includegraphics[width=\linewidth]{graph_MSW.eps} \caption{Time evolution of $f_{ee}$ for MSW neutrino oscillation. Black solid lines denote the analytic solutions. Color distinguishes our simulation models with varying electron number density, where $n_{e0}$ denotes the resonance density (see Eq.~\ref{eq:ne0}). } \label{graph_MSW} \end{figure*} To check the implementation of the matter Hamiltonian, we carry out a simulation of the MSW effect. Assuming a homogeneous electron distribution, the solution can be derived analytically (see Sec. 6 in \cite{2021ApJS..257...55K}). Under the two-flavor approximation with the normal mass hierarchy, the resonant electron-number density can be written as, \begin{equation} n_{e0} = \frac{\Delta m^2 \cos 2 \theta_0}{2 \sqrt{2} G_F \nu}. \label{eq:ne0} \end{equation} Varying the electron-number density over 0.1 $n_{e0}$, 0.5 $n_{e0}$, 0.8 $n_{e0}$, and $n_{e0}$, we solve the QKE with GRQKNT using the same oscillation parameters as in the vacuum oscillation test. As shown in Fig.~\ref{graph_MSW}, the results are in good agreement with the analytic solutions, demonstrating the correct implementation of the matter Hamiltonian. \subsection{Fast neutrino-flavor conversion (FFC)}\label{sec:FFC} \begin{figure} \includegraphics[width=\linewidth]{graph_FFChomo.eps} \caption{Time evolution of $P_{ex}$ (see Eq.~\ref{eq:defPex}) for FFC simulations. The red (blue) line represents the result of FFC without (with) iso-energetic scatterings. } \label{graph_FFChomo} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{graph_FFC_inhomo_radi_vs_numden.eps} \caption{Radial profile of the number density of $\nu_x$ ($n_{\nu_x}$) normalized by the flavor-integrated one ($n_{\nu_e} + n_{\nu_x}$) for the inhomogeneous FFC simulations. Color distinguishes the models: red (fixed boundary) and blue (periodic boundary). We display the result at $t=10^{-8}$ s. } \label{graph_FFC_inhomo_radi_vs_numden} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{graph_Radi_vs_angular_inhomo.eps} \caption{Radial and angular distributions of neutrinos for the two local inhomogeneous simulations of FFC. The color map displays $f_{\nu_x}/(f_{\nu_e} + f_{\nu_x})$. $R_0$ is set to $50$ km, and the computational domain is $\Delta R = 100$ cm. The left panel shows the result of the simulation with a fixed boundary, in which the neutrino angular distribution in the $\cos \theta_{\nu} > 0$ flight directions at the inner boundary ($R-R_{0} = 0$) is held constant in time. The right panel shows the result for the case with a periodic boundary condition. We display the results at $t=10^{-8}$ s. At that time, the neutrino distributions have already settled into a quasi-steady state.
} \label{graph_Radi_vs_angular_inhomo} \end{figure*} We assess the capability of GRQKNT for problems in which the self-interaction potential plays a dominant role in flavor conversion. As a representative example, we adopt fast neutrino-flavor conversion (FFC) in this test. We demonstrate homogeneous multi-angle simulations of FFC with and without iso-energetic scatterings. Since there are no analytic solutions for FFC problems in which neutrinos at all angles of momentum space are coupled, we compare our results to those obtained in previous studies \cite{2021PhRvD.103f3002S,2021ApJS..257...55K}. We also carry out simulations of FFC with transport terms, which demonstrates the applicability of GRQKNT to local inhomogeneous simulations. The initial condition for the homogeneous simulations is set as follows. The angular distributions of $f_{ee}$ and $\bar{f}_{ee}$ are set as (see also \cite{2021PhRvD.103f3002S,2021ApJS..257...55K}), \begin{equation} \begin{split} & f_{ee} = 0.5 C \\ & \bar{f}_{ee} = \biggl[ 0.47 + 0.05 \exp( -(\cos \theta_{\nu}-1)^2 ) \biggr] C, \end{split} \label{eq:initialFFChomo} \end{equation} where $C$ denotes a normalization factor, which is determined such that $\mu = 10^5 {\rm km^{-1}}$ ($\mu \equiv \sqrt{2} G_F n_{\nu}$, where $n_{\nu}$ denotes the number density of $\nu_e$). We assume that the other components of the density matrix are zero. We trigger FFC by adding a vacuum potential, whose parameters are set as $\theta_0 = 10^{-6}$ and $\Delta m^2 = 2.5 \times 10^{-6} {\rm eV^2}$. The neutrino energy is assumed to be $50$ MeV. In this simulation, we deploy $128$ angular grid points uniformly with respect to $\cos \theta_{\nu}$. With the same set of oscillation parameters and initial conditions, we perform another homogeneous simulation incorporating iso-energetic scattering. The inverse mean free path of the scattering is set to $1 {\rm km^{-1}}$, isotropically. Figure~\ref{graph_FFChomo} summarizes the results of the two homogeneous FFC simulations. To measure the degree of flavor conversion, we use $P_{ex}$, defined as \begin{equation} P_{ex} = 1 - \frac{n_{\nu_e}}{n_{\nu_{e0}}}, \label{eq:defPex} \end{equation} where $n_{\nu_e}$ and $n_{\nu_{e0}}$ denote the number density of $\nu_e$ and its initial value, respectively (a minimal sketch of this setup is given below). These results are in good agreement with previous studies (\cite{2021PhRvD.103f3002S,2021ApJS..257...55K}, and private communication), ensuring the reliability of our module computing the neutrino self-interaction potential. Next, we turn our attention to inhomogeneous simulations. As described in Sec.~\ref{sec:basiceq}, GRQKNT is applicable to local simulations by setting up a small spatial box located away from the origin of the spherical polar coordinates. In this test, we set $R_0 = 50$ km (the distance from the coordinate origin) and $\Delta R = 100$ cm (the computational domain) to meet this requirement. We also assume spherical symmetry, flat spacetime, and gray neutrino transport. The numbers of radial and neutrino angular grid points are $N_r = 3072$ and $N_{\theta_{\nu}}=256$, respectively. It should be mentioned that we obtain essentially the same results in lower-resolution simulations ($N_r=1536$ and $N_{\theta_{\nu}}=128$), suggesting that the adopted resolutions are high enough to capture the essential features. In this test, we solve the QKE under the two-flavor approximation, and we set $\nu_x=\bar{\nu}_x=0$ in the initial condition.
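A minimal sketch of the homogeneous setup and diagnostics above (Eqs.~\ref{eq:initialFFChomo} and \ref{eq:defPex}); the normalization $C$ is set to unity here for illustration, whereas in the actual runs it is fixed by the target self-interaction strength $\mu$.
\begin{verbatim}
# Initial condition of the homogeneous FFC test (Eq. initialFFChomo) and
# the conversion measure P_ex (Eq. defPex). C = 1 for illustration; in the
# runs it is fixed by mu = sqrt(2) G_F n_nu = 1e5 / km.
import numpy as np

n_ang = 128
mu = np.linspace(-1 + 1/n_ang, 1 - 1/n_ang, n_ang)  # cos(theta_nu) centers

C = 1.0
f_ee    = 0.5 * C * np.ones(n_ang)
fbar_ee = (0.47 + 0.05 * np.exp(-(mu - 1.0)**2)) * C

# The ELN angular distribution changes sign, i.e., it has a crossing,
# which is the condition for fast flavor conversion.
eln = f_ee - fbar_ee
print("ELN crossing present:", eln.min() < 0.0 < eln.max())

def P_ex(n_nue, n_nue0):
    """Degree of flavor conversion, Eq. (defPex)."""
    return 1.0 - n_nue / n_nue0
\end{verbatim}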
We run the simulation up to $t=10^{-8}$ s, which is $\sim 3$ times longer than the light-crossing time of the simulation box for outgoing neutrinos. We observe that the system achieves a quasi-steady state by the end of our simulation. We set the initial angular distributions of $\nu_e$ and $\bar{\nu}_e$ through a newly proposed analytic function, which is simple but capable of capturing essential characteristics of neutrino angular distributions in CCSN and BNSM. In this model, we focus only on outgoing neutrinos and assign a negligible population to incoming neutrinos. We consider a situation in which outgoing neutrinos dominate over incoming ones, and in which an electron-neutrino lepton number (ELN) crossing appears in the outgoing directions; this would occur in CCSN (e.g., Type-II ELN crossings found in \cite{2021PhRvD.104h3025N}) and BNSM (see, e.g., \cite{2017PhRvD..95j3007W}). The analytic function is written as, \begin{equation} f_i = \begin{cases} \langle f_i \rangle \biggl( 1 + \beta_i ( \cos \theta_{\nu} - 0.5 ) \biggr) & \hspace{4mm} \cos \theta_{\nu } \ge 0, \\ \langle f_i \rangle \times \eta & \hspace{4mm} \cos \theta_{\nu } < 0, \end{cases} \label{eq:anaAngdistri} \end{equation} where the subscript $i$ denotes the neutrino flavor. We set $\eta=10^{-6}$ so that incoming neutrinos contribute negligibly to the self-interaction potential. There are two parameters characterizing the angular distribution: $\langle f_i \rangle$ and $\beta_i$. The former is directly associated with the number density of neutrinos, and the latter characterizes the anisotropy of the neutrino distribution. Since we need to determine the $\nu_e$ and $\bar{\nu}_e$ angular distributions, we have four parameters in total. Below, we describe how we determine them. In the CCSN environment, it has been suggested that ELN crossings tend to occur in regions where the ratio of the number density of $\bar{\nu}_e$ to that of $\nu_e$ approaches unity \cite{2019PhRvD.100d3004A,2019ApJ...886..139N,2021PhRvD.104h3025N}. Hence, we set $\langle f_{ee} \rangle = \langle \bar{f}_{ee} \rangle$ in this test. We also note that $\bar{\nu}_e$ tends to have a more forward-peaked angular distribution than $\nu_e$, i.e., $0<\beta_{\nu_e} < \beta_{\bar{\nu}_e}$, which is mainly due to the disparity of neutrino-matter interactions between $\nu_e$ and $\bar{\nu}_e$. As a simple case, we set $\beta_{\nu_e} = 0$ and $\beta_{\bar{\nu}_e} = 1$ in this test. We note that the ELN crossing is located at $\cos \theta_{\nu} = 0.5$ when we set $\langle f_{ee} \rangle = \langle \bar{f}_{ee} \rangle$, regardless of the choice of $\beta$ for both species. We determine $\langle f_{ee} \rangle$ such that the number density of $\nu_e$ becomes $6 \times 10^{32} {\rm cm^{-3}}$. This number density corresponds to the case in which $\nu_e$ has a luminosity of $\sim 4 \times 10^{52} {\rm erg/s}$ with an average energy of $\sim 12 {\rm MeV}$ at $R = 50$~km. These are typical values at a few hundred milliseconds after bounce in CCSN. A systematic study of the parameter dependence is very interesting, but is postponed to future work. We carry out two simulations with different boundary conditions. One uses a fixed boundary condition at $R=R_{0}$, in which the neutrino distributions are frozen; more precisely, the initial value of the density matrix of neutrinos with $\cos \theta_{\nu} > 0$ at the first radial grid point is restored at each time step.
For incoming neutrinos ($\cos \theta_{\nu} < 0$), on the other hand, we employ a copy boundary. As the outer boundary condition, we set a copy boundary for $\cos \theta_{\nu} > 0$, while we inject dilute neutrinos ($\eta \times \langle f_i \rangle$, see Eq.~\ref{eq:anaAngdistri}) in the incoming directions. We start our simulations by setting anisotropic neutrinos (following Eq.~\ref{eq:anaAngdistri}) homogeneously in space. We find that strong flavor conversion occurs in both cases. Fig.~\ref{graph_FFC_inhomo_radi_vs_numden} displays the radial profile of $n_{\nu_x}/(n_{\nu_e} + n_{\nu_x})$ at the end of our simulation\footnote{This ratio is an appropriate quantity for measuring the degree of flavor conversion in our models. Since $\nu_x$ are not injected in the simulations, $n_{\nu_x}$ would be zero if no flavor conversion occurred.}. It should be mentioned that the weak flavor conversion in the region $0~{\rm cm} \le R-R_{0} \lesssim 20~{\rm cm}$ that appears in the fixed-boundary simulation (red line in the figure) is an expected result, since the neutrino conversion is artificially suppressed at $R=R_{0}$. Outside this region, the flavor states nearly reach equipartition ($n_{\nu_x}/(n_{\nu_e} + n_{\nu_x}) = 0.5$) in both models. On the other hand, there is a distinct difference in the angular distributions of neutrinos between the two models, which can be seen in Fig.~\ref{graph_Radi_vs_angular_inhomo}. We measure the degree of flavor conversion in this figure by the variable $f_{\nu_x}/(f_{\nu_e} + f_{\nu_x})$. In the fixed-boundary simulation, the flavor conversion is not prominent for neutrinos with $\cos \theta_{\nu} \gtrsim 0.8$ (see the left panel), whereas strong flavor conversion emerges regardless of the neutrino flight direction in the case with the periodic boundary condition (right panel). This result suggests that the boundary condition affects the non-linear evolution of FFC. We postpone the detailed analysis of how the boundary condition impacts the angular distributions of FFC to future work, since it requires a systematic study with varying neutrino angular distributions and computational domains, which is clearly beyond the scope of this code paper. \section{Summary}\label{sec:summary} In this paper, we describe the design and implementation of our new neutrino transport code GRQKNT, together with minimal but essential tests. It corresponds to an upgraded solver of full Boltzmann neutrino transport; indeed, we transplanted several modules of our Boltzmann solver into GRQKNT (e.g., the two-energy-grid technique). Below, we briefly summarize its capabilities. \begin{enumerate} \item The GRQKNT code is capable of solving the time-dependent QKE in the full phase space (six dimensions). The transport operator is written in a conservative form of the general relativistic QKE (see Sec.~\ref{sec:basiceq}). In the current version, neutrino transport in three different spacetimes (flat spacetime, Schwarzschild black hole, and Kerr black hole) is implemented. The two-energy-grid technique is included to treat the fluid-velocity dependence self-consistently (see Sec.~\ref{sec:transport}). \item Major weak processes (neutrino emission, absorption, and scattering) contributing to the collision term are implemented in GRQKNT: electron capture by free protons (and its inverse reaction, $\nu_e$ absorption), positron capture by free neutrons (and its inverse reaction, $\bar{\nu}_e$ absorption), nucleon scattering, and coherent scattering with heavy nuclei.
Collision terms for the flavor off-diagonal components are also taken into account, following \cite{2019PhRvD..99l3014R} (see Sec.~\ref{sec:Colterm}). \item The vacuum, matter, and self-interaction Hamiltonians are implemented. The tests presented in Sec.~\ref{sec:osc} lend confidence to our numerical treatment of these oscillation modules. \end{enumerate} The versatile design of GRQKNT allows us to study many features of neutrino kinetics, and it will therefore contribute to filling the gap between the astrophysics and neutrino-oscillation communities. As a first demonstration, we perform time-dependent global simulations of FFC using GRQKNT, which are reported in another paper \cite{2022arXiv220604097N}. This is an important step towards understanding the astrophysical consequences of FFC in CCSN and BNSM, and we will extend the work to more realistic situations in the future. It should be mentioned, however, that work remains to be done in the development of GRQKNT. Improving input physics such as neutrino-matter interactions is necessary to study more detailed features of neutrino quantum kinetics. Another shortcoming of the GRQKNT code is that it is only applicable to problems with a frozen matter background; the feedback on matter dynamics is completely neglected. This means that radiation-hydrodynamic features with quantum kinetic neutrino transport cannot be investigated with the current version of GRQKNT. Although the numerical technique for linking to a hydrodynamic solver has already been established, as demonstrated in our full Boltzmann neutrino transport code, the huge disparity of length and time scales between neutrino-flavor conversion and other input physics is a major obstacle. We intend to overcome this issue by introducing sub-grid models or developing better methods and approximations in the future, although technical and algorithmic innovations are indispensable to achieving this goal. Nevertheless, the present study marks an important advance towards first-principles numerical modeling of CCSN and BNSM. We will tackle many unresolved issues surrounding neutrino quantum kinetics with this code. \section{Acknowledgments} H.N. is grateful to Chinami Kato, Lucas Johns, Sherwood Richers, George Fuller, Taiki Morinaga, Masamichi Zaizen, and Shoichi Yamada for useful comments and discussions. This research used the high-performance computing resources of "Flow" at Nagoya University ICTS through the HPCI System Research Project (Project ID: 210050, 210051, 210164, 220173, 220047), and XC50 of CfCA at the National Astronomical Observatory of Japan (NAOJ). For providing high-performance computing resources, the Computing Research Center, KEK, and JLDG on SINET of NII are acknowledged. This work is supported by the Particle, Nuclear and Astro Physics Simulation Program (No. 2022-003) of the Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK).
\section{Introduction} Recent work in meta reinforcement learning (RL) has begun to tackle the challenging problem of automatically discovering general-purpose RL algorithms~\citep{kirsch2019improving,alet2020meta,oh2020discovering}. These methods learn to reinforcement learn by optimizing for earned reward over the lifetimes of many agents in multiple environments. If the discovered learning principles are sufficiently general-purpose, then the learned algorithms should generalise to novel environments. Depending on the structure of the learned algorithm, these methods can be partitioned into backpropagation-based methods, which learn to use the backpropagation algorithm to reinforcement learn, and black-box-based methods, in which a single (typically recurrent) neural network jointly specifies the agent and RL algorithm~\citep{wang2016learning,duan2016rl}. While backpropagation-based methods are more prevalent due to their relative ease of implementation and theoretical guarantees, black-box methods can be more expressive and have the potential to avoid some of the issues with backpropagation-based optimization, such as memory requirements, catastrophic forgetting, and differentiability. Unfortunately, black-box methods have not yet been successful at discovering general-purpose RL algorithms. In this work, we show that black-box methods exploit fewer symmetries than backpropagation-based methods. We hypothesise that introducing more symmetries to black-box meta-learners can improve their generalisation capabilities. We test this hypothesis by introducing a number of symmetries into an existing black-box meta learning algorithm, including (1) the use of the same learned learning rule across all nodes of the neural network (NN), (2) the flexibility to work with any input, output, and architecture sizes, and (3) invariance to permutations of the inputs and outputs (for dense layers). Permutation invariance implies that for any permutation of inputs and outputs the learning algorithm produces the same policy. We refer to such agents as \emph{symmetric learning agents} (SymLA). To introduce these symmetries, we build on variable shared meta learning (VSML)~\citep{kirsch2020meta}, which we adapt to the RL setting. VSML arranges multiple RNNs like weights in a NN and performs message passing between these RNNs. We then perform meta training and meta testing similar to black-box MetaRNNs~\citep{wang2016learning,duan2016rl}. We experimentally validate SymLA on bandits, classic control, and grid worlds, comparing generalisation capabilities to MetaRNNs. SymLA improves generalisation when varying action dimensions, permuting observations and actions, and significantly changing tasks and environments. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{assets/rl-figure.pdf} \caption{ The architecture for the proposed \emph{symmetric learning agents} (SymLA) that we use to investigate black-box learning algorithms with symmetries. Weights in a neural network are replaced with small parameter-shared RNNs. Activations in the original network correspond to messages passed between RNNs, both in the forward $\protect\fmsg$ and backward $\protect\bmsg$ directions in the network. These messages may contain external information such as the environment observation, previously taken actions, and rewards from the environment.
} \label{fig:architecture} \end{figure*} \section{Preliminaries}\label{sec:background} \subsection{Reinforcement Learning} The RL setting in this work follows the standard (PO)MDP formulation. At every time step $t = 1,2,\ldots$ the agent receives a new observation $o_t \in \mathcal{O}$ generated from the environment state $s_t \in \mathcal{S}$ and performs an action $a_t \in \mathcal{A}$ sampled from its (recurrent) policy $\pi_\theta = p(a_t|o_{1:t},a_{1:t-1})$. The agent receives a reward $r_t \in \mathcal{R} \subset \mathbb{R}$ and the environment transitions to the next state. This transition is defined by the environment dynamics $e = p(s_{t+1}, r_t|s_t, a_t)$. The initial environment state $s_1$ is sampled from the initial state distribution $p(s_1)$. The goal is to find the optimal policy parameters $\theta^*$ that maximise the expected return $R = \mathbb{E}[\sum_{t=1}^{T} \gamma^t r_t]$, where $T$ is the episode length and $0 < \gamma \leq 1$ is a discount factor ($T = \infty$, $\gamma < 1$ for non-episodic MDPs). \subsection{Meta Reinforcement Learning}\label{sec:mrl} The meta reinforcement learning setting is concerned with discovering novel agents that learn throughout their multi-episode lifetime ($L \geq T$) by making use of rewards $r_t$ to update their behavior. This can be formulated as maximizing $\mathbb{E}_{e \sim p(e)}[\mathbb{E}[\sum_{t=1}^{L} \gamma^t r_t]]$ where $p(e)$ is a distribution of meta-training environments. The objective itself is similar to a multi-task setting. In this work, we discuss how the structure of the agent influences the degree to which it \emph{learns} and \emph{generalises} in novel tasks and environments. We seek to discover \emph{general-purpose} learning algorithms that generalise outside the meta-training distribution. We can think of an agent that learns throughout its lifetime as a history-dependent map $a_t, h_t = f(h_{t-1}, o_t, r_{t-1}, a_{t-1})$ that produces an action $a_t$ and new agent state $h_t$ given its previous state $h_{t-1}$, an observation $o_t$, environment reward $r_{t-1}$, and previous action $a_{t-1}$. In the case of backpropagation-based learning, $f$ is decomposed into: (1) a \emph{stationary} policy $\pi_\theta^{(s)}$ that maps the current state into an action, $a_t = \pi_\theta^{(s)}(o_t)$; and (2) a backpropagation-based update rule that optimizes a given objective $J$ by propagating the error signal backwards and updating the policy in fixed intervals (e.g. after each episode). In its simplest form, for any dense layer $k \in \{1, \ldots, K\}$ of a NN policy with size $A^{(k)} \times B^{(k)}$, inputs $x^{(k)}$, outputs $x^{(k+1)}$, and weights $w^{(k)} \subset \theta$, the backpropagation update rule is given by \begin{align} x^{(k+1)}_b &= \sum_a x^{(k)}_a w^{(k)}_{ab} & \textrm{(forward pass)} \label{eq:fwd_backprop} \\ \delta^{(k-1)}_a &= \sum_b \delta_b^{(k)} w_{ab}^{(k)} & \textrm{(backward pass)} \label{eq:bwd_backprop} \\ \Delta w_{ab}^{(k)} &= -\alpha\frac{\partial J}{\partial w_{ab}^{(k)}} = -\alpha x_a^{(k)} \delta_b^{(k)} & \textrm{(update)}\label{eq:meta-rl-backprop} \end{align} where $a \in \{1, \ldots, A^{(k)}\}$, $b \in \{1, \ldots, B^{(k)}\}$, $\alpha$ is the learning rate, $\delta$ are error terms, and the agent state $h$ corresponds to the parameters $\theta$. The initial error is given by the gradient at the NN outputs, $\delta^{(K)} = \frac{\partial J}{\partial x^{(K+1)}}$. Transformations such as non-linearities are omitted here. 
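To make these layer-local rules concrete, the following minimal NumPy sketch (our illustration under simplifying assumptions, not code from this paper) implements Equations \ref{eq:fwd_backprop}--\ref{eq:meta-rl-backprop} for a single linear layer; the sizes, random seed, and learning rate are arbitrary choices made only for the example.
\begin{verbatim}
# Sketch of the layer-local backpropagation rules for one linear dense layer.
import numpy as np

rng = np.random.default_rng(0)
A, B = 4, 3                     # layer input/output sizes A^(k), B^(k)
alpha = 0.1                     # learning rate
w = rng.normal(size=(A, B))     # weights w^(k), initialized i.i.d.

def forward(x, w):
    # forward pass: x^(k+1)_b = sum_a x^(k)_a w^(k)_ab
    return x @ w

def backward(delta, w):
    # backward pass: delta^(k-1)_a = sum_b delta^(k)_b w^(k)_ab
    return w @ delta

def update(x, delta, alpha):
    # update: Delta w^(k)_ab = -alpha x^(k)_a delta^(k)_b
    return -alpha * np.outer(x, delta)

x = rng.normal(size=A)          # layer input
delta = rng.normal(size=B)      # error signal arriving from the layer above
w += update(x, delta, alpha)    # the same outer-product rule at every (a, b)
\end{verbatim}
Note that the identical outer-product rule acts at every index pair $(a,b)$; this locality is precisely what gives rise to the symmetries discussed next.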
Works in meta-reinforcement learning that take this approach parameterise the objective $J_\phi$ and meta-learn its parameters \citep{kirsch2019improving,oh2020discovering}. In contrast, black-box meta RL~\citep{duan2016rl,wang2016learning} meta-learns $f$ directly in the form of a single \emph{non-stationary} policy $\pi_\theta$ with memory. The parameters of $f$ represent the learning algorithm (no explicit $J_\phi$), while the state $h$ represents the policy. In the simplest form of an RNN representation of $f$, given a current hidden state $h$ and inputs $o,r,a$ (concatenated $[\cdot]$), updates to the policy take the form \begin{equation}\label{eq:meta-rl-blackbox} a_b, h_b \leftarrow f_{\theta}(h, o, r, a)_{b} = \sigma(\sum_i [h, o, r, a]_i v_{ib}), \end{equation} with parameters $\theta = v$ and activation function $\sigma$, omitting the bias term. We refer to this as the MetaRNN. The inputs must include, beyond the observation $o$, the previous reward $r$ and action $a$, so that the meta-learner can learn to associate past actions with rewards~\citep{schmidhuber1992steps,wang2016learning}. Further, black-box systems do not reset the state $h$ across episode boundaries, so that the learning algorithm can accumulate knowledge throughout the agent's lifetime. \section{Symmetries in Meta RL} In this section, we demonstrate how the learning dynamics in backpropagation-based systems (Equation \ref{eq:meta-rl-backprop}) differ from the learning dynamics in black-box systems (Equation \ref{eq:meta-rl-blackbox}), and how this affects the generalisation of black-box methods to novel environments. \subsection{Symmetries in backpropagation-based Meta RL}\label{sec:symmetries_backprop} We first identify three symmetries that backpropagation-based systems exhibit and discuss how they affect the generalisability of the learned learning algorithms. \begin{enumerate} \item \textbf{Symmetric learning rule.} In Equation \ref{eq:meta-rl-backprop}, each parameter $w_{ab}$ is updated by the same update rule based on information from the forward and backward pass. Meta-learning an objective $J_\phi$ affects the updates of each parameter symmetrically through backpropagation. \item \textbf{Flexible input, output, and architecture sizes.} Because the same rule is applied everywhere, the learning algorithm can be applied to arbitrarily sized neural networks, including variations in input and output sizes. This involves varying $A$ and $B$ and the number of layers, affecting how often the learning rule is applied and how many parameters are being learned. \item \textbf{Invariance to input and output permutations.} Given a permutation of inputs and outputs in a layer, defined by the bijections $\rho: \N \to \N$ and $\rho': \N \to \N$, the learning rule is applied as $x^{(k+1)}_{\rho'(b)} = \sum_a x^{(k)}_{\rho(a)} w^{(k)}_{ab}$, $\delta^{(k-1)}_{\rho(a)} = \sum_b \delta^{(k)}_{\rho'(b)} w^{(k)}_{ab}$, and $\Delta w^{(k)}_{ab} = -\alpha x^{(k)}_{\rho(a)} \delta^{(k)}_{\rho'(b)}$. Let $w'$ be a weight matrix with $w'^{(k)}_{\rho(a)\rho'(b)} = w^{(k)}_{a,b}$; then we can equivalently write $x^{(k+1)}_{\rho'(b)} = \sum_a x^{(k)}_{\rho(a)} w'^{(k)}_{\rho(a)\rho'(b)}$, $\delta^{(k-1)}_{\rho(a)} = \sum_b \delta^{(k)}_{\rho'(b)} w'^{(k)}_{\rho(a)\rho'(b)}$, and $\Delta w'^{(k)}_{\rho(a)\rho'(b)} = -\alpha x^{(k)}_{\rho(a)} \delta^{(k)}_{\rho'(b)}$. If all elements of $w'^{(k)}$ are initialized i.i.d., we can interchangeably use $w$ in place of $w'$ in the above updates. 
By doing so, we recover the original learning rule equations for any $a, b$. Thus, the learning algorithm is invariant to input and output permutations. \end{enumerate} While backpropagation has inherent symmetries, these symmetries would be violated if the objective function $J_\phi$ were asymmetric. Formally, when permuting the NN outputs $y = x^{(K+1)}$ such that $y'_b = y_{\rho'(b)}$, $J_\phi$ should satisfy that the gradient under the permutation is also a permutation \begin{equation} \frac{\partial J_\phi(y')}{\partial y'_b} = \left[ \frac{\partial J_\phi(y)}{\partial y} \right]_{\rho'(b)} \end{equation} where the environment accepts the action permuted by $\rho'$ in the case of $J_\phi(y')$. This is the case for policy gradients, for instance, if the action selection $\pi(a|s)$ is permuted according to $\rho'$. When meta-learning objective functions, prior work carefully designed the objective function $J_\phi$ to be symmetric. In MetaGenRL~\citep{kirsch2019improving}, taken actions were processed element-wise with the policy outputs and sum-reduced by the loss function. In LPG~\citep{oh2020discovering}, taken actions and policy outputs were not directly fed to $J_\phi$; instead only the log probability of the action distribution was used. \subsection{Insufficient Symmetries in Black-box Meta RL} Black-box meta-learning methods are appealing as they require few hard-coded biases and are flexible enough to represent a wide range of possible learning algorithms. We hypothesize that this comes at the cost of a tendency to overfit to the given meta-training environment(s), resulting in overly specialized learning algorithms. Learning dynamics in backpropagation-based systems (Equation \ref{eq:meta-rl-backprop}) differ significantly from learning dynamics in black-box systems (Equation \ref{eq:meta-rl-blackbox}). In particular, meta-learning $J_\phi$ is significantly more constrained, since $J_\phi$ can only indirectly affect each policy parameter $w_{ab}^{(k)}$ through the \emph{same} learning rule from Equation \ref{eq:meta-rl-backprop}. In contrast, in black-box systems (Equation \ref{eq:meta-rl-blackbox}), each policy state $h_b$ is directly controlled by \emph{unique} meta-parameters (the vector $v_{\cdot b}$), thereby encouraging the black-box meta-learner to construct specific update rules for each element of the policy state. This results in sensitivity to permutations of inputs and outputs. Furthermore, the input and output spaces must retain the same size, as those are directly tied to the number of RNN parameters. As an example, consider a meta-training distribution of two-armed bandits where the expected payout of the first arm is much larger than that of the second. If we meta-train a MetaRNN on these environments, then at meta-test time the MetaRNN will have learned to immediately increase the probability of pulling the first arm, independently of any observed rewards. If instead the action probability is adapted using REINFORCE or a meta-learned symmetric objective function, then, due to the implicit symmetries, the learning algorithm could not differentiate between the two arms to favor one over the other. While the MetaRNN behavior is optimal when meta-testing on the same meta-training distribution, it completely fails to generalise to other distributions. Thus, the MetaRNN results in a non-learning, biased solution, whereas the backpropagation-based approach results in a learning solution.
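To make the contrast concrete, the following minimal NumPy sketch (our illustration, not the authors' implementation) spells out the MetaRNN update of Equation \ref{eq:meta-rl-blackbox} for such a two-armed bandit; the hidden size, observation size, and initialization scale are arbitrary choices made only for the example.
\begin{verbatim}
# Sketch of one MetaRNN step: a single dense recurrent map from the
# concatenated inputs [h, o, r, a] to a new hidden state and action logits.
import numpy as np

rng = np.random.default_rng(0)
H, O, NA = 8, 1, 2                   # hidden size, obs dim, number of arms
V = 0.1 * rng.normal(size=(H + O + 1 + NA, H + NA))  # meta-parameters theta = v

def metarnn_step(h, o, r, a_prev):
    inp = np.concatenate([h, o, [r], np.eye(NA)[a_prev]])
    out = np.tanh(inp @ V)           # sigma(sum_i [h, o, r, a]_i v_ib)
    return out[:H], out[H:]          # new hidden state, action logits

# Each policy-state unit is driven by its own column of V; nothing ties the
# columns for arm 0 and arm 1 together, so swapping the arms at meta-test
# time changes the behavior of the meta-learned update rule.
h = np.zeros(H)
h, logits = metarnn_step(h, o=np.zeros(O), r=0.0, a_prev=0)
\end{verbatim}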
In the former (MetaRNN) case, the learning algorithm is overfitted to produce a fixed policy that always samples the first arm. In the latter case, the learning algorithm is unbiased and will learn a policy from observed rewards to sample the first arm. Beyond bandits, for reasonably sized meta-training distributions, we may have any number of biases in the data that a MetaRNN will inherit, impeding generalisation to unseen tasks and environments. \section{Adding Symmetries to Black-box Meta RL} A solution to the illustrated over-fitting problem with black-box methods is the introduction of symmetries into the parameterisation of the policy. This can be achieved by generalising the forward pass (Equation \ref{eq:fwd_backprop}), backward pass (Equation \ref{eq:bwd_backprop}), and element-wise update (Equation \ref{eq:meta-rl-backprop}) to parameterized versions. We further subsume the loss computation into these parameterized update rules. Together, they form a single recurrent policy with additional symmetries. Prior work on variable shared meta learning (VSML)~\citep{kirsch2020meta} used similar principles to meta-learn supervised learning algorithms. In the following, we extend their approach to deal with the RL setting. \subsection{Variable Shared Meta Learning} VSML describes neural architectures for meta learning with parameter sharing. This can be motivated by meta-learning how to update weights~\citep{bengio1992optimization,schmidhuber1993reducing}, where the update rule is shared across the network. Instead of designing a meta network that defines the weight updates explicitly, we arrange small parameter-shared RNNs (LSTMs) like weights in a NN and perform message passing between them. In VSML, each weight $w_{ab}$ with $w \in \mathbb{R}^{A \times B}$ in a NN is replaced by a small RNN with parameters $\theta$ and hidden state $h_{ab} \in \R^{N}$. We restrict ourselves to dense NN layers here, where $w$ corresponds to the weights of that layer with input size $A$ and output size $B$. This can be adapted to other architectures such as CNNs if necessary. All these RNNs share the same parameters $\theta$, defining both what information propagates in the neural network and how states are updated to implement learning. Each RNN with state $h_{ab}$ receives the analogue of the previous activation, here called the vectorized forward message $\fmsg_a \in \R^{\Fmsg}$, and the backward message $\bmsg_b \in \R^{\Bmsg}$ for information flowing backwards in the network (asynchronously). The backward message may contain information relevant to credit assignment, but is not constrained to this. The RNN update equation (compare Equations \ref{eq:meta-rl-backprop} and \ref{eq:meta-rl-blackbox}) is then given by \begin{equation}\label{eq:vsml_state_update} h_{ab}^{(k)} \leftarrow f_{\textrm{RNN}}(h_{ab}^{(k)}, \fmsg_a^{(k)}, \bmsg_b^{(k)}) \end{equation} for layer $k$, where $k \in \{1, \ldots, K\}$ and $a \in \{1, \ldots, A^{(k)}\}, b \in \{1, \ldots, B^{(k)}\}$. Similarly, new forward messages are created by transforming the RNN states using a function $f_{\fmsg}: \R^N \to \R^{\Fmsg}$ (compare Equation \ref{eq:fwd_backprop}) such that \begin{equation}\label{eq:gen_fwd_msg} \fmsg_b^{(k+1)} = \sum_a f_{\fmsg}(h_{ab}^{(k)}) \end{equation} defines the new forward message for layer $k + 1$ with $b \in \{1, \ldots, B^{(k)} = A^{(k+1)}\}$. 
The backward message is given by $f_{\bmsg}: \R^N \to \R^{\Bmsg}$ (compare Equation \ref{eq:bwd_backprop}) such that \begin{equation}\label{eq:gen_bwd_msg} \bmsg_a^{(k-1)} = \sum_b f_{\bmsg}(h_{ab}^{(k)}) \end{equation} and $a \in \{1, \ldots, A^{(k)} = B^{(k-1)}\}$. For simplicity, we use $\theta$ below to denote all of the VSML parameters, including those of the RNN and the forward and backward message functions. In the following, we derive a black-box meta reinforcement learner based on VSML (visualized in Figure \ref{fig:architecture}). \subsection{RL Agent Inputs and Outputs} At each time step in the environment, the agent's inputs consist of the previously taken action $a_{t-1}$, the current observation $o_t$, and the previous reward $r_{t-1}$. We feed $r_{t-1}$ as an additional input to each RNN, the observation $o_t \in \R^{A^{(1)}}$ to the first layer ($\fmsg_{\cdot1}^{(1)} := o_{t}$), and the action $a_{t-1} \in \{0,1\}^{B^{(K)}}$ (one-hot encoded) to the last layer ($\bmsg_{\cdot1}^{(K)} := a_{t-1}$). The index $1$ refers to the first dimension of the $\Fmsg$- or $\Bmsg$-dimensional message. We interpret the agent's output message $y = \fmsg_{\cdot1}^{(K+1)}$ as the unnormalized logits of a categorical distribution over actions. While we focus on discrete actions only in our present experiments, this can be adapted for probabilistic or deterministic continuous control. \subsection{Architecture Recurrence and Reward Signal} Instead of using multiple layers ($K > 1$), in this paper we use a single layer ($K = 1$). In Equation \ref{eq:vsml_state_update}, RNNs in the same layer cannot coordinate directly, as their messages are only passed to the next and previous layers. To give that single layer sufficient expressivity for the RL setting, we make it `recurrent' by processing the layer's own messages $\fmsg_b^{(k+1)}$ and $\bmsg_a^{(k-1)}$. The network thus has two levels of recurrence: (1) each RNN that corresponds to a weight of a standard NN, and (2) messages that are generated according to Equations \ref{eq:gen_fwd_msg} and \ref{eq:gen_bwd_msg} and fed back into the same layer. Furthermore, each RNN receives the current reward signal $r_{t-1}$ as input. The update equation is given by \begin{equation}\label{eq:state_update} h_{ab}^{(k)} \leftarrow f_{\textrm{RNN}}(h_{ab}^{(k)}, \underbrace{\fmsg_a^{(k)}, \bmsg_b^{(k)}, r_{t-1}}_{\textrm{environment inputs}}, \underbrace{\fmsg_b^{(k+1)}, \bmsg_a^{(k-1)}}_{\textrm{from previous step}}) \end{equation} where $a \in \{1, \ldots, A^{(k)}\}, b \in \{1, \ldots, B^{(k)}\}$. As we only use a single layer, $k = 1$, we apply the update multiple times (multiple micro ticks) for each step in the environment. This can also be viewed as multiple layers with shared parameters, where parameters correspond to states $h$. For pseudo code, see Algorithm \ref{alg:meta_training} in the appendix. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{assets/inner-outer-loop.pdf} \caption{In SymLA, the inner loop recurrently updates all RNN states $h_{ab}(t)$ for agent steps $t \in \{1, \ldots, L\}$ starting with randomly initialized states $h_{ab}$. Based on feedback $r_t$, RNN states can be used as memory for learning. 
The learning algorithm encoded in the RNN parameters $\theta$ is updated in the outer loop by meta-training using ES.} \label{fig:inner-outer} \end{figure} \subsection{Symmetries in SymLA} By incorporating the above changes to inputs, outputs, and architecture, we arrive at a black-box meta RL method with symmetries, here represented by our proposed \emph{symmetric learning agents} (SymLA). By construction, SymLA exhibits the same symmetries as those described in Section \ref{sec:symmetries_backprop}, despite not using the backpropagation algorithm. \begin{enumerate} \item \textbf{Symmetric learning rule.} The learning rule as defined by Equation \ref{eq:state_update} is replicated across $a \in \{1, \ldots, A\}$ and $b \in \{1, \ldots, B\}$ with the same parameters $\theta$. \item \textbf{Flexible input, output, and architecture sizes.} Changes in $A$, $B$, and $K$ correspond to input, output, and architecture size. This does not affect the number of meta-parameters, and therefore these quantities can also be varied at meta-test time. \item \textbf{Invariance to input and output permutations.} When permuting messages using bijections $\rho$ and $\rho'$, the state update becomes $h_{ab}^{(k)} \leftarrow f_{\textrm{RNN}}(h_{ab}^{(k)}, \fmsg_{\rho(a)}^{(k)}, \bmsg_{\rho'(b)}^{(k)}, r_{t-1}, \fmsg_{\rho'(b)}^{(k+1)}, \bmsg_{\rho(a)}^{(k-1)})$, and the message transformations are $ \fmsg_{\rho'(b)}^{(k+1)} = \sum_a f_{\fmsg}(h_{ab}^{(k)})$ and $\bmsg_{\rho(a)}^{(k-1)} = \sum_b f_{\bmsg}(h_{ab}^{(k)})$. Similarly to backpropagation, when the RNN states $h_{ab}$ are initialized i.i.d., we can use $h_{\rho(a),\rho'(b)}$ in place of $h_{ab}$ to recover the original Equations \ref{eq:gen_fwd_msg}, \ref{eq:gen_bwd_msg}, and \ref{eq:state_update}. \end{enumerate} \subsection{Learning / Inner Loop} Learning corresponds to updating the RNN states $h_{ab}$ (see Figure \ref{fig:inner-outer}). This is the same as in the MetaRNN~\citep{wang2016learning,duan2016rl}, but with a more structured neural model. For fixed RNN parameters $\theta$, which encode the learning algorithm, we randomly initialize all states $h_{ab}$. Next, the agent steps through the environment, updating $h_{ab}$ in each step. If the environment is episodic with $T$ steps, the agent is run for a lifetime of $L \geq T$ steps with environment resets in between, carrying the agent state $h_{ab}$ over. \subsection{Meta Learning / Outer Loop} Each outer-loop step unrolls the inner loop for $L$ environment steps to update $\theta$. The SymLA objective is to maximize the agent's lifetime sum of rewards, i.e. $\sum_{t=1}^L r_t(\theta)$. We optimize this objective using evolution strategies~\citep{wierstra2008natural,salimans2017evolution} by following the gradient \begin{equation}\label{eq:evolution_strategies} \nabla_\theta \E_{\phi \sim \N(\phi|\theta, \Sigma)}[\E_{e \sim p(e)}[\sum_{t=1}^L r_t^{(e)}(\phi)]] \end{equation} with some fixed diagonal covariance matrix $\Sigma$ and environments $e \sim p(e)$. We chose evolution strategies due to their ability to optimize over long inner-loop horizons without the memory constraints that arise in backpropagation-based meta-optimization. Furthermore, it was shown that meta-loss landscapes are difficult to navigate and that the search distribution helps to smooth them~\citep{metz2019understanding}. \section{Experiments} Equipped with a symmetric black-box learner, we now investigate how its learning properties differ from those of a standard MetaRNN. 
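Before turning to the individual experiments, the following minimal NumPy sketch (our reconstruction under simplifying assumptions, not the released implementation) illustrates one inner-loop step of the single-layer agent: the shared update of Equation \ref{eq:state_update} followed by the message transformations of Equations \ref{eq:gen_fwd_msg} and \ref{eq:gen_bwd_msg}. A plain tanh cell stands in for the LSTM, linear-plus-tanh read-outs stand in for $f_{\fmsg}$ and $f_{\bmsg}$, and all sizes are arbitrary.
\begin{verbatim}
# Sketch of one SymLA step with a single layer (K = 1): one tiny RNN state
# h_ab per "weight", all driven by the same shared parameters.
import numpy as np

rng = np.random.default_rng(0)
A, B = 4, 2          # observation dimension (inputs) and number of actions
N, F, Bk = 8, 2, 2   # RNN state size, forward/backward message sizes

# Shared parameters theta, reused at every position (a, b).
W_rnn = 0.1 * rng.normal(size=(N + F + Bk + 1 + F + Bk, N))  # f_RNN
W_f = 0.1 * rng.normal(size=(N, F))                          # f_fm read-out
W_b = 0.1 * rng.normal(size=(N, Bk))                         # f_bm read-out

def symla_step(h, obs, r, a_prev, fm_next, bm_prev):
    fm = np.zeros((A, F)); fm[:, 0] = obs                # obs into first dim
    bm = np.zeros((B, Bk)); bm[:, 0] = np.eye(B)[a_prev] # one-hot action
    h_new = np.empty_like(h)
    for a in range(A):        # shared state update, replicated over all (a, b)
        for b in range(B):
            inp = np.concatenate([h[a, b], fm[a], bm[b], [r],
                                  fm_next[b], bm_prev[a]])
            h_new[a, b] = np.tanh(inp @ W_rnn)
    fm_next = np.tanh(h_new @ W_f).sum(axis=0)           # sum over inputs a
    bm_prev = np.tanh(h_new @ W_b).sum(axis=1)           # sum over outputs b
    return h_new, fm_next, bm_prev, fm_next[:, 0]        # last entry: logits

h = rng.normal(size=(A, B, N))                           # random initial states
fm_next, bm_prev = np.zeros((B, F)), np.zeros((A, Bk))
h, fm_next, bm_prev, logits = symla_step(h, rng.normal(size=A), 0.0, 0,
                                         fm_next, bm_prev)
\end{verbatim}
Because the read-out and update parameters are shared across all $(a,b)$ and the messages are sum-reduced, permuting the observation entries or the actions merely permutes the i.i.d.-initialized states; this is the invariance exploited in the experiments below.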
Firstly, we learn to learn on bandits from \citet{wang2016learning}, where the meta-training environments are similar to the meta-test environments. Secondly, we demonstrate generalisation to unseen action spaces, applying the learned algorithm to bandits with varying numbers of arms at meta-test time---something that MetaRNNs are not capable of. Thirdly, we demonstrate how symmetries improve generalisation to unseen observation spaces by creating permutations of observations and actions in classic control benchmarks. Fourthly, we show how permutation invariance leads to generalisation to unseen tasks by learning about states and their associated rewards at meta-test time. Finally, we demonstrate how symmetries result in better learning algorithms for unseen environments, generalising from a grid world to CartPole. Hyper-parameters are given in Appendix \ref{app:hyperparameters}. \begin{figure} \centering \includegraphics[width=\columnwidth]{assets/wang-bandits.pdf} \caption{We compare SymLA to a standard MetaRNN on a set of bandit benchmarks from \citet{wang2016learning}. We train (y-axis) and test (x-axis) on two-armed bandits of varying difficulties. We report expected cumulative regret across 3 meta-training and 100 meta-testing runs with 100 arm-pulls (smaller is better). We observe that SymLA tends to perform comparably to the MetaRNN. } \label{fig:wang_bandits} \end{figure} \subsection{Learning to Learn on Similar Environments} We first compare SymLA and the MetaRNN on the two-armed (dependent) bandit experiments from \citet{wang2016learning}, where there is no large variation in the meta-test environments. These consist of five different settings of varying difficulty that we use for meta-training and meta-testing (see Appendix \ref{app:wang_bandits}). There are no observations (no context), only two arms, and a meta-training distribution where each arm has the same marginal distribution of payouts. Thus, we expect the symmetries from SymLA to have no significant effect on performance. We meta-train for an agent lifetime of $L = 100$ arm-pulls and report the expected cumulative regret at meta-test time in Figure \ref{fig:wang_bandits}. We meta-train on each of the five settings and meta-test across all settings. The performance of the MetaRNN reproduces the average performance of \citet{wang2016learning}, here trained with ES instead of A2C. When using symmetries (as in SymLA), we recover performance similar to that of the MetaRNN. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{assets/varying-arms-bandits.pdf} \caption{We meta-train and meta-test SymLA on varying numbers of independent arms to measure generalisation performance on unseen configurations. We do this by adding or removing RNNs to accommodate the additional output units. The number of meta-parameters remains constant. We report expected cumulative regret across 3 meta-training and 100 meta-testing runs with 100 arm-pulls (smaller is better). Particularly relevant are the out-of-distribution scenarios (off-diagonal). } \label{fig:varying_arms_bandits} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=0.8\textwidth]{assets/classic-control-permute_errband_font.pdf} \caption{SymLA's architecture is inherently permutation invariant. When meta-training on standard CartPole, Acrobot, and MountainCar, the performance of the MetaRNN and SymLA is comparable. We then meta-test on a setting where both the observations and actions are shuffled. 
In this setting SymLA still performs well, as it has meta-learned to identify observations and actions at meta-test time. In contrast, the MetaRNN fails to do so. Standard deviations are over 3 meta-training and 100 meta-testing runs. } \label{fig:classic_control_permute} \end{figure*} \begin{figure*}[h] \centering \raisebox{-0.4\height}{\includegraphics[width=0.3\textwidth]{assets/grid-world.pdf}} \raisebox{-0.5\height}{\includegraphics[width=0.6\textwidth]{assets/grid-concept-invariance_font.pdf}} \caption{ We extend the permutation invariance property to concepts: varying the rewards associated with different object types (+1 and -1) in a grid world environment (left). SymLA is forced to learn about the rewards of object types at meta-test time (starting at near-zero reward and increasing the reward intake over time). When switching the rewards and running the same learner, the MetaRNN collects the wrong rewards, whereas SymLA still infers the correct relationships. Standard deviations are over 3 meta-training and 100 meta-testing runs. } \label{fig:grid_concept_invariance} \end{figure*} \subsection{Generalisation to Unseen Action Spaces} In contrast to the MetaRNN, in SymLA we can vary the number of arms at meta-test time. The architecture of SymLA allows the network size to be changed arbitrarily by replicating existing RNNs, thus adding or removing arms at meta-test time while retaining the same meta-parameters from meta-training. In Figure \ref{fig:varying_arms_bandits} we train on different numbers of arms and test on seen and unseen configurations. The arms' payout probabilities $p_i$ are drawn independently from the uniform distribution $U[0, 1]$. We observe that SymLA works well within-distribution (diagonal) and generalises to unseen numbers of arms (off-diagonal). We also observe that for two arms a more specialized solution can be discovered, impeding generalisation when training only on this configuration. \subsection{Generalisation to Unseen Observation Spaces} In the next experiments we specifically analyze the permutation invariance created by our architecture. In the previous bandit environments, actions occurred in all permutations in the training distribution. In contrast, RL environments usually have some structure to their observations and actions. For example, in CartPole the first observation is usually the pole angle and the first action describes moving to the left. Human-engineered learning algorithms are usually invariant to permutations and thus generalise to new problems with different structure. The same should apply to our black-box agent with symmetries. We demonstrate this property on the classic control tasks \emph{CartPole}, \emph{Acrobot}, and \emph{MountainCar}. We meta-train on each environment respectively with the original observation and action order. We then meta-test on either (1) the same configuration or (2) a permuted version. The results are visualized in Figure \ref{fig:classic_control_permute}. Due to the built-in symmetries, the performance does not degrade in the shuffled setting. Instead, our method quickly learns about the ordering of the relevant observations and actions at meta-test time. In comparison, the MetaRNN baseline fails on the permuted setting it was not trained on, indicating over-specialization. Thus, symmetries help to generalise to observation permutations that were not encountered during meta-training. \subsection{Generalisation to Unseen Tasks} The permutation invariance has further-reaching consequences. 
It extends to learning about tasks at meta-test time, which enables generalisation to unseen tasks. We construct a grid world environment (see Figure \ref{fig:grid_concept_invariance}) with two object types: a trap and a heart. The agent and the two objects (one of each type) are randomly positioned in every episode. Collecting the heart gives a reward of +1, whereas the trap gives -1. All other rewards are zero. The agent observes its own position and the position of both objects. The observation is constructed as an image with binary channels for the position and each object type. After meta-training on this environment, we observe at meta-test time in Figure \ref{fig:grid_concept_invariance} that the MetaRNN learns to directly collect hearts in each episode throughout its lifetime. This is due to having overfitted to the association of hearts with positive rewards. In comparison, SymLA starts with near-zero rewards and learns through interaction which actions need to be taken upon receiving particular observations to collect the heart instead of the trap. With sufficiently many environment interactions $L$, we would expect SymLA to eventually (after sufficient learning) match the average reward per time step of the MetaRNN in the non-shuffled grid world. Next, we swap the rewards of the trap and the heart, i.e. the trap now gives a positive reward, whereas the heart gives a negative reward. This is equivalent to swapping the input channels corresponding to the heart and the trap. We observe that SymLA still generalises, learning at meta-test time about observations and their associated rewards. In contrast, the MetaRNN now collects the wrong item, receiving negative rewards. These results show that black-box meta RL with symmetries discovers a more general update rule that is less specific to the training tasks than typical MetaRNNs. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{assets/unseen-environments-with-std_font.pdf} \caption{ Generalisation capabilities of SymLA from GridWorld to CartPole. We meta-train the learning algorithm on GridWorld. We then meta-test on GridWorld and CartPole and report the mean rewards and standard error of the mean (100 seeds) relative to a random policy; this highlights the learning process. While SymLA generalises from GridWorld to CartPole, the MetaRNN does not. } \label{fig:unseen_environments} \end{figure} \subsection{Generalisation to Unseen Environments} \input{sections/exp_unseen_envs} \section{Related Work} \input{sections/related_work} \section{Conclusion} In this work, we identified symmetries that exist in backpropagation-based methods for meta RL but are missing from black-box methods. We hypothesized that these symmetries lead to better generalisation of the resulting learning algorithms. To test this, we extended to the meta RL setting a black-box meta-learning method~\citep{kirsch2020meta} that exhibits these same symmetries. This resulted in SymLA, a flexible black-box meta RL algorithm that is less prone to over-fitting. We demonstrated generalisation to varying numbers of arms in bandit experiments (unseen action spaces), permuted observations and actions with no degradation in performance (unseen observation spaces), and observed the tendency of the meta-learned RL algorithm to learn about states and their associated rewards at meta-test time (unseen tasks). Finally, we showed that the discovered learning behavior also transfers from a grid world to (unseen) classic control environments. 
\section*{Acknowledgements} We thank Nando de Freitas, Razvan Pascanu, and Luisa Zintgraf for helpful comments. Funded by DeepMind.
\section{Introduction} The focus of the present paper is the asymptotic shape of positive global solutions of parabolic systems with competition on bounded radial domains with Neumann boundary conditions. The problem which mainly motivates our study is the following Lotka-Volterra system of two equations: \begin{equation}\label{Lotka:Volterra:system} \begin{aligned} (u_1)_t-\mu_1\Delta u_1 &= a_1(t) u_1 - b_1(t) u_1^2 -\alpha_1(t) u_1u_2,&&\quad x \in B,\ t>0,\\ (u_2)_t-\mu_2\Delta u_2 &= a_2(t) u_2 - b_2(t) u_2^2 -\alpha_2(t) u_1u_2,&&\quad x\in B,\ t>0,\\ \frac{\partial u_i}{\partial\nu}&=0 &&\quad \text{on $\partial B \times (0,\infty)$,}\\ u_i(x,0)&=u_{0,i}(x)\geq 0&&\quad \text{for $x\in B,$ $i=1,2$.} \end{aligned} \end{equation} Here and in the remainder of the paper, $B$ denotes a ball or an annulus in $\mathbb R^N$ with $N\geq 2$, and $\nu$ denotes the unit outer normal on $\partial B$. Moreover, $\mu_1$ and $\mu_2$ are positive constants and \begin{equation}\label{Lotka:coefficients} \begin{aligned} &\text{$a_i,b_i, \alpha_i \in L^\infty((0,\infty))$ satisfy}\\ &\text{$a_i(t),b_i(t) \geq 0$ for $t>0$ \ \ and\ \ $\inf_{t>0} \alpha_i(t)>0$ for }i=1,2. \end{aligned} \end{equation} The Lotka-Volterra system is commonly used to model the competition between two different species, and the coefficients $\mu_i, a_i, b_i, \alpha_i$ represent diffusion, birth, saturation, and competition rates, respectively (see \cite{holmes}). In the literature, the system is mostly considered with constant coefficients for matters of simplicity, whereas it is more natural to assume time-dependence, as e.g. in \cite{cantrell,langa,Mierczynski}, in order to model the effect of different time periods on the birth rates, the movement, or the aggressiveness of the species. Even in the case of constant coefficients, the possible dynamics of the system have a very rich structure and depend strongly on the relationships between these constants, see e.g. \cite{cantrell,crooks,dancer,lou,dancer:zhang}. In the case of time-dependent coefficients, a full understanding of the asymptotic dynamics is out of reach, but one may still expect that the shape of the underlying domain has some effect on the shape of solutions for large positive times. In the present paper we study this question on a radial bounded domain $B$. More precisely, for a solution $u=(u_1,u_2)$ of (\ref{Lotka:Volterra:system}), we study symmetry and monotonicity properties of elements in the associated omega limit set, which is defined as \begin{align*} \omega(u)&= \omega(u_1,u_2):=\{(z_1,z_2)\in C(\overline{B})\times C(\overline{B})\mid \\& \max_{i=1,2}\lim_{n\to\infty}\|u_i(\cdot,t_n)-z_i\|_{L^\infty(B)}=0 \text{ for some sequence } t_n\to\infty\}.\nonumber \end{align*} For global solutions which are uniformly bounded and have equicontinuous semiorbits $\{u_i(\cdot,t):t\geq 1\}$, the set $\omega(u)$ is nonempty, compact, and connected. The equicontinuity can be obtained under mild boundedness and regularity assumptions on the equation and using boundary and interior H\"{o}lder estimates (see Lemma \ref{regularity} below). To present our results we need to introduce some notation. Let $\Sn=\{x\in\mathbb R^N: |x|=1\}$ be the unit sphere in $\R^N,$ $N\geq 2$. For a vector $e\in \Sn$, we consider the hyperplane $H(e):=\{x\in \mathbb R^N: x\cdot e=0\}$ and the half domain $B(e):=\{x\in B: x\cdot e>0\}.$ We also write $\sigma_e: \overline{B}\to \overline{B}$ to denote reflection with respect to $H(e),$ i.e. 
$\sigma_e(x):=x-2(x\cdot e)e$ for each $x\in B.$ Following \cite{smets}, we say that a function $u\in C(B)$ is \textit{foliated Schwarz symmetric with respect to some unit vector $p\in \Sn$} if $u$ is axially symmetric with respect to the axis $\mathbb R p$ and nonincreasing in the polar angle $\theta:= \operatorname{arccos}(\frac{x}{|x|}\cdot p)\in [0,\pi].$ We refer the reader to the survey article \cite{wethsurvey} for a broad discussion of symmetry properties of this type. Our main result concerning (\ref{Lotka:Volterra:system}) is the following. \begin{theo}\label{corollary:Lotka:Volterra} Suppose that \eqref{Lotka:coefficients} holds, and let $u=(u_1,u_2)$ be a classical solution of \eqref{Lotka:Volterra:system} such that $\|u_i\|_{L^\infty(B \times (0,\infty))} < \infty$ for $i=1,2$. Moreover, assume that $$ \leqno{\rm (h0)}\quad u_{0,1} \geq u_{0,1} \circ \sigma_e, \:u_{0,2} \leq u_{0,2}\circ \sigma_e \quad \text{in $B(e)$}\ \ \left \{ \begin{aligned} &\text{for some $e\in \Sn$ with}\\ &\text{$u_{0,i}\not\equiv u_{0,i}\circ \sigma_e$ for $i=1,2$.} \end{aligned} \right. $$ Then there is some $p\in \Sn$ such that every $(z_1,z_2)\in \omega(u)$ has the property that $z_1$ is foliated Schwarz symmetric with respect to $p$ and $z_2$ is foliated Schwarz symmetric with respect to $-p.$ \end{theo} This theorem is a direct consequence of a more general result which we state in Theorem~\ref{main:theorem:neumann} below. Note that the inequality condition (h0) does not seem very strong, but it is a key assumption in order to obtain the result. Indeed, for general positive initial data, foliated Schwarz symmetry cannot be expected, as one may see already by looking at equilibria (i.e., stationary solutions) in special cases. Consider e.g. the elliptic system \begin{equation}\label{Lotka:Volterra:system-elliptic} \begin{aligned} -\Delta u_1 &= \lambda u_1 - u_1u_2&&\qquad \text{ in } B,\\ -\Delta u_2 &= \lambda u_2 - u_1u_2&&\qquad \text{ in } B,\\ \partial_\nu u_1&=\partial_\nu u_2=0&&\qquad \text{ on $\partial B.$} \end{aligned} \end{equation} Using bifurcation theory, one can detect values $\lambda>0$ and $\eps>0$ such that (\ref{Lotka:Volterra:system-elliptic}) admits positive solutions in the annulus $B= \{x \in \R^2\::\: 1-\eps < |x|<1\}$ such that the angular derivatives of $u_1,u_2$ change sign multiple times and therefore neither of the components is foliated Schwarz symmetric, see Theorem~\ref{thm:local:maxima:system} below in the appendix. Theorem~\ref{corollary:Lotka:Volterra} is somewhat related to our previous work \cite{saldana:weth} on scalar nonlinear parabolic equations under Dirichlet boundary conditions. The main idea of both \cite{saldana:weth} and the present paper is to obtain the symmetry of elements in the omega limit set by a rotating plane argument. However, different tools are required to set up the method under Neumann boundary conditions. In particular, the main result of \cite{saldana:weth} does not extend in a straightforward way to the scalar nonlinear Neumann problem \begin{equation}\label{model:scalar} \begin{aligned} u_t-\mu(|x|,t)\Delta u &=f(t,|x|,u)&&\qquad \text{in $B \times (0,\infty)$,}\\ \partial_\nu u&=0&&\qquad \text{on $\partial B \times (0,\infty)$,}\\ u(x,0)&=u_0(x)&& \qquad \text{for $x\in B$.} \end{aligned} \end{equation} It therefore seems appropriate to include a symmetry result for (positive and sign changing) solutions of (\ref{model:scalar}) in the present paper. 
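For illustration, we record an elementary observation on these notions; it is included here only for the reader's convenience and is not taken from the works cited above. The reflection $\sigma_e$ is an involutive isometry of $\overline{B}$: since $\sigma_e(x)\cdot e = x\cdot e-2(x\cdot e)|e|^2 = -(x\cdot e)$, we have
\begin{align*}
\sigma_e(\sigma_e(x)) = \sigma_e(x)-2(\sigma_e(x)\cdot e)e = x-2(x\cdot e)e+2(x\cdot e)e = x \qquad \text{and} \qquad |\sigma_e(x)|=|x|.
\end{align*}
Moreover, for any $h,g\in C([0,\infty))$ with $g\geq 0$, the function
\begin{align*}
u(x)=h(|x|)+g(|x|)\cos\theta, \qquad \theta= \operatorname{arccos}\Big(\frac{x}{|x|}\cdot p\Big)\in[0,\pi],
\end{align*}
is foliated Schwarz symmetric with respect to $p$: it is axially symmetric with respect to the axis $\mathbb R p$, and $\theta\mapsto \cos\theta$ is nonincreasing on $[0,\pi]$.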
The symmetry result for \eqref{model:scalar} will be easier to prove than Theorem~\ref{corollary:Lotka:Volterra}. We need the following hypotheses on the nonlinearity $f$ and the diffusion coefficient $\mu$. In the following, we put $I_B:=\{|x| : x\in \overline{B}\}$. \begin{enumerate} \item[(H1)] The nonlinearity $f:[0,\infty)\times I_B \times \R\to\R,$ $(t,r,u)\mapsto f(t,r,u)$ is locally Lipschitz continuous in $u$ uniformly in $r$ and $t,$ i.e., $$ \sup_{\genfrac{}{}{0pt}{}{\scriptstyle{r\in I_B, t>0,}}{\scriptstyle{u,\bar u\in K, u\neq\bar u}}}\!\!\!\frac{|f(t,r,u)-f(t,r,\bar u)|}{|u-\bar u|}< \infty\ \ \ \text{for any compact subset $K\subset \R$.} $$ \item[(H2)] $\sup\limits_{r\in I_B, t>0} |f(t,r,0)|<\infty.$ \item[(H3)] $\mu\in C^1(I_B\times(0,\infty))$ and there are constants $\mu^* \ge \mu_* >0$ such that $\|\mu\|_{C^1(I_B\times(0,\infty))}\le \mu^*$ and $\mu(r,t)\geq \mu_*$ for all $r\in I_B,$ $t>0.$ \end{enumerate} The following is our main result on (\ref{model:scalar}). \begin{theo}\label{main:theorem:scalar} Assume that (H1)-(H3) are satisfied, and let $u\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be a classical solution of \eqref{model:scalar} such that \begin{equation} \label{eq:15} \|u\|_{L^\infty(B\times(0,\infty))}<\infty. \end{equation} Suppose furthermore that \begin{enumerate} \item [(H4)] there is $e\in \Sn$ such that $u_0 \geq u_0 \circ \sigma_e$ in $B(e)$ and $u_0\not\equiv u_0\circ \sigma_e$. \end{enumerate} Then there is some $p\in \Sn$ such that every element of the omega limit set $$ \omega(u):= \{z \in C(\overline{B})\mid \lim_{n\to\infty}\|u(\cdot,t_n)-z\|_{L^\infty(B)}=0 \text{ for some sequence } t_n\to\infty\} $$ is foliated Schwarz symmetric with respect to $p$. \end{theo} We now turn to a general class of two-component nonlinear competitive systems which includes (\ref{Lotka:Volterra:system}). More precisely, we consider, for $i=1,2$, \begin{equation}\label{model:competitive:neumann} \begin{aligned} (u_i)_t-\mu_i(|x|,t)\Delta u_i &=f_i(t,|x|,u_i)-\alpha_i(|x|,t) u_1u_2, &&\; \text{$x\in B,\,t>0$,}\\ \partial_\nu u_i&=0,&&\; \text{$x\in \partial B,\ t>0$,}\\ u_i(x,0)&=u_{0,i}(x) \ge 0 && \; \text{for $x\in B$.} \end{aligned} \end{equation} On the data, we assume the following. \begin{itemize} \item[(h1)] For $i=1,2,$ the function $f_i:[0,\infty)\times I_B\times [0,\infty)\to\R,$ $(t,r,v)\mapsto f_i(t,r,v)$ is locally Lipschitz continuous in $v$ uniformly with respect to $r$ and $t,$ i.e. $$ \sup_{\genfrac{}{}{0pt}{}{\scriptstyle{r\in I_B, t>0,}}{\scriptstyle{v,\bar v\in K, v\neq\bar v}}}\!\!\!\frac{|f_i(t,r,v)-f_i(t,r,\bar v)|}{|v-\bar v|}< \infty\ \ \ \text{for any compact subset $K\subset [0,\infty)$}. $$ Moreover $f_1(t,r,0)=f_2(t,r,0)=0$ for all $r\in I_B, t>0.$ \item[(h2)] $\mu_i\in C^{2,1}(I_B\times(0,\infty))$ and there are constants $\mu^* \ge \mu_*>0$ such that $\|\mu_i\|_{C^{2,1}(I_B\times(0,\infty))}\le \mu^*$ and $\mu_i(r,t)\geq \mu_*$ for all $r\in I_B,$ $t>0,$ and $i=1,2.$ \item[(h3)] $\alpha_i\in L^\infty(I_B\times(0,\infty))$ and there are constants $\alpha^* \ge \alpha_*>0$ such that $\alpha_* \le \alpha_i(r,t)\leq \alpha^*$ for all $r\in I_B,$ $t>0,$ and $i=1,2.$ \end{itemize} Then we have the following result. 
\begin{theo}\label{main:theorem:neumann} Let $(h1)$--$(h3)$ be satisfied, and let $u_1,u_2\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be functions such that $u=(u_1,u_2)$ solves \eqref{model:competitive:neumann} and \begin{equation} \label{eq:12} \|u_i\|_{L^\infty(B\times(0,\infty))}<\infty \qquad \text{for $i=1,2$.} \end{equation} Suppose furthermore that assumption $(h0)$ of Theorem~\ref{corollary:Lotka:Volterra} holds. Then there is some $p\in \Sn$ such that every $(z_1,z_2)\in \omega(u)$ has the property that $z_1$ is foliated Schwarz symmetric with respect to $p$ and $z_2$ is foliated Schwarz symmetric with respect to $-p.$ \end{theo} As mentioned before, Theorem~\ref{corollary:Lotka:Volterra} is an immediate consequence of Theorem~\ref{main:theorem:neumann}. As far as we know, there is no previous result on the asymptotic symmetry of competition-diffusion parabolic systems. For a related class of Dirichlet problems for elliptic competing systems with a variational structure, Tavares and the second author recently proved in \cite{weth:tavares} that the ground state solutions are foliated Schwarz symmetric with respect to antipodal points. Note that, in contrast, the elliptic counterpart of (\ref{Lotka:Volterra:system}) has no variational structure which could lead to symmetry information. More is known in the case of Dirichlet problems for {\em cooperative} systems. In particular, for a class of parabolic cooperative systems, F\"{o}ldes and Pol\'{a}\v{c}ik \cite{polacik:systems} proved that, in the case where the underlying domain is a ball, positive solutions are asymptotically radially symmetric and radially decreasing. Moreover, for elliptic cooperative systems with variational structure and some convexity properties of the data, Damascelli and Pacella \cite{pacella} proved foliated Schwarz symmetry of solutions having Morse index less than or equal to the dimension of the domain. To prove Theorems \ref{main:theorem:scalar} and \ref{main:theorem:neumann}, we follow the strategy of our previous work \cite{saldana:weth} on a scalar Dirichlet problem, using a rotating plane argument. However, the proofs in \cite{saldana:weth} rely strongly on parabolic maximum principles for small domains due to Pol\'{a}\v{c}ik \cite{polacik}, and these are only available under Dirichlet boundary conditions. In the present paper, we replace this tool by a Harnack-Hopf type estimate, Lemma~\ref{hopf:lemma:Neumann} below, which yields information up to the nonsmooth part of the boundary of cylinders over half balls and half annuli. With the help of this tool we show a stability property of reflection inequalities with respect to small perturbations of a hyperplane, see Lemma~\ref{perturbationlemma} below. The adjustment of the rotating plane method to systems gives rise to a further difficulty. When dealing with the so-called semi-trivial limit profiles, that is, elements of $\omega(u_1,u_2)$ of the form $(z,0)$ and $(0,z),$ the perturbation argument within the rotating plane method cannot be carried out directly. To overcome this obstacle, we apply a new normalization procedure and distinguish different cases for the asymptotics of the normalized profile. We remark that a similar normalization argument can be made for the Dirichlet problem version of system \eqref{model:competitive:neumann}. In this case, the estimates given in \cite{huska:polacik:safonov} play a decisive role, and the argument is somewhat more technical. 
To keep this paper short we do not include the Dirichlet case here. We note that the occurrence and nature of semi-trivial limit profiles have been studied extensively in recent years, see e.g. \cite{cantrell,langa,dancer,lou,dancer:zhang}. It is natural to ask whether similar symmetry properties are available for the cooperative version of problem \eqref{model:competitive:neumann}, i.e., \begin{equation}\label{model:cooperative:neumann} \begin{aligned} (u_i)_t-\mu_i(|x|,t)\Delta u_i &=f_i(t,|x|,u_i)+\alpha_i(|x|,t) u_1u_2 && \text{in $B \times (0,\infty)$},\\ \partial_\nu u_i&=0&& \text{on $\partial B \times (0,\infty)$},\\ u_i(x,0)&=u_{0,i}(x) \ge 0&& \text{for $x\in B,$ $i=1,2$.} \end{aligned} \end{equation} Indeed, the proof of Theorem~\ref{main:theorem:neumann} can easily be adjusted to deal with \eqref{model:cooperative:neumann}. More precisely, we have the following result. \begin{theo}\label{main:theorem:neumann-cooperative} Let $(h1)$--$(h3)$ be satisfied, and let $u_1,u_2\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be functions such that $u=(u_1,u_2)$ solves \eqref{model:cooperative:neumann} and satisfies \eqref{eq:12}. Suppose furthermore that $$ \leqno{\rm (h0)'} \qquad u_{0,i} \geq u_{0,i} \circ \sigma_e\ \ \ \text{ in }B(e) \ \ \ \ \left\{ \begin{aligned} &\text{for some $e \in \Sn$ with }\\ &\text{ $u_{0,i}\not\equiv u_{0,i}\circ \sigma_e$ for $i=1,2$.} \end{aligned} \right. $$ Then there is some $p\in \Sn$ such that every $(z_1,z_2)\in \omega(u)$ has the property that $z_1,z_2$ are foliated Schwarz symmetric with respect to $p$. \end{theo} The paper is organized as follows. In Section \ref{technical:lemmas} we collect some preliminary tools which are rather easy consequences of already established results. In Section~\ref{hopflemma} we derive a Harnack-Hopf type estimate for scalar equations in a half cylinder under mixed boundary conditions, a related version of the Hopf Lemma for cooperative systems and a perturbation lemma for hyperplane reflection inequalities. In Section \ref{results:scalar:equations} we complete the proof of Theorem~\ref{main:theorem:scalar}, and in Section~\ref{normalization:argument} we complete the (more difficult) proof of Theorem~\ref{main:theorem:neumann}. In Section~\ref{sec:other:problems} we first provide the proof of Theorem~\ref{main:theorem:neumann-cooperative} and then briefly discuss further classes of competitive and cooperative systems (see \eqref{cubic:system} and \eqref{general:cooperative:model} below).\\ \noindent \textbf{Acknowledgements:} The work of the first author is supported by a joint grant from CONACyT (Consejo Nacional de Ciencia y Tecnolog\'{\i}a - Mexico) and DAAD (Deutscher Akademischer Austausch Dienst - Germany). The authors would like to thank Nils Ackermann, Sven Jarohs, Filomena Pacella, Peter Pol\'{a}\v{c}ik, and Hugo Tavares for helpful discussions related to the paper. The authors also wish to thank the referee for his/her helpful comments. \section{Preliminaries} \label{technical:lemmas} First we fix some notation. Throughout the paper, we assume that $B$ is a ball or an annulus in $\R^N$ centered at zero, and we fix $0 \le A_1 <A_2< \infty$ such that \begin{equation}\label{B:definition} B:= \begin{cases} \{x\in\mathbb R^N: A_1<|x|<A_2\}, & \text{ if } A_1>0,\\ \{x\in\mathbb R^N: |x|<A_2\}, & \text{ if } A_1=0. \end{cases} \end{equation} Note that $I_B=[A_1,A_2]$. 
For $\Omega\subset\mathbb R^N,$ we let $\Omega^\circ$ denote the interior of $\Omega.$ For two sets $\Omega_1,\Omega_2 \subset \R^N$, we put $\dist(\Omega_1,\Omega_2):= \inf \{|x-y|\::\: x \in \Omega_1,\,y \in \Omega_2\}$. If $\Omega_1= \{x\}$ for some $x \in \R^N$, we simply write $\dist(x,\Omega_2)$ in place of $\dist(\{x\},\Omega_2)$. We will need equicontinuity properties of uniformly bounded global solutions of (\ref{model:scalar}) and (\ref{model:competitive:neumann}) and their gradients. These properties are derived from standard uniform regularity estimates as collected in the following lemma. \begin{lemma}\label{regularity} Let $\Omega \subset \mathbb R^{N}$ be a smooth bounded domain, $I\subset \R$ open, $\mu\in C^{1}(\Omega\times I),$ $g\in L^\infty(\Omega\times I),$ and let $v\in C^{2,1}(\overline{\Omega}\times I)\cap C(\overline{\Omega \times I})$ be a classical solution of \begin{equation*} \begin{aligned} v_t-\mu(x,t)\Delta v&=g(x,t) &&\qquad \text{in $\Omega \times I$},\\ \partial_\nu v&=0 &&\qquad \text{on $\partial \Omega \times I$.} \end{aligned} \end{equation*} Suppose moreover that \begin{equation*} \begin{aligned} &\mu_*:= \inf_{\Omega \times I} \mu(x,t)>0,\\ &K:= \|v\|_{L^\infty(\Omega\times I)}+\|\mu\|_{C^1(\Omega\times I)}+\|g\|_{L^\infty(\Omega\times I)}<\infty. \end{aligned} \end{equation*} Let $h\in \{v,v_{x_j}:j=1,\ldots,N\}$ and ${\cal I}\subset I$ with $\operatorname{dist}({\cal I},\partial I)\geq 1$. Then there exist positive constants $C$ and $\gamma,$ depending only on $\Omega,$ $\mu_*,$ and $K,$ such that \begin{align}\label{equicontinuity:h} \sup_{\genfrac{}{}{0pt}{}{\scriptstyle{x,\bar x\in \overline{\Omega},\, t,\bar t\in[t_0,t_0+1],}}{\scriptstyle{x \not= \bar x,\, t \not= \bar t,\, {t_0\in {\cal I}}}}} \frac{|h(x,t)-h(\bar x,\bar t)|}{|x-\bar x|^\gamma+|t-\bar t|^{\frac{\gamma}{2}}}< C. \end{align} \end{lemma} \begin{proof} Fix $t_0\in {\cal I}$ and set $Q:=\overline{\Omega}\times[t_0,t_0+1].$ Then, by \cite[Theorem 7.35, p.185]{lieberman} there is a constant $C>0$, depending only on $\Omega,$ $\mu_*,$ and $K,$ such that \begin{align*} \|D^2 v\|_{L^{N+3}(Q)}+\|v_t\|_{L^{N+3}(Q)}&\leq C(\|g\|_{L^{N+3}(Q)}+\|v\|_{L^{N+3}(Q)})\leq 2 C|\Omega|K. \end{align*} In particular, there is some constant $\tilde K>0$ independent of $t_0$ such that \begin{align*} \|v\|_{W_{N+3}^{2,1}(Q)}\leq \tilde K. \end{align*} Next, fix $\gamma$ with $0 < \gamma < 1-\frac{N+2}{N+3} = \frac{1}{N+3}.$ By Sobolev embeddings (see, for example, \cite[embedding (1.2)]{quittner-souplet}) there exists a constant $\tilde C>0$ which only depends on $\Omega$ such that \begin{equation} \label{regularity-new-equation} \|v\|_{C^{1+\gamma,(1+\gamma)/2}(Q)}\leq \tilde C \|v\|_{W_{N+3}^{2,1}(Q)}\leq \tilde C \tilde K, \end{equation} where $$ \|u\|_{C^{1+\gamma,(1+\gamma)/2}(Q)}:= \|u\|_{L^\infty(Q)}+ |u|_{\gamma;Q}+ \|\nabla u\|_{L^\infty(Q)}+ |\nabla u|_{\gamma;Q} $$ and $$ |v|_{\gamma;Q}:= \sup\bigg\{ \frac{|v(x,t)-v(y,s)|}{|x-y|^\gamma+|t-s|^\frac{\gamma}{2}}\::\: (x,t),(y,s)\in \overline{Q},\ (x,t)\neq (y,s)\bigg\} $$ for functions $v: Q \to \R$ resp. $v: Q \to \R^N$. Since the constant $\tilde C \tilde K$ in (\ref{regularity-new-equation}) does not depend on the choice of $t_0,$ we obtain \eqref{equicontinuity:h}. 
\end{proof} \begin{remark}\label{equicontinuity} If $u=(u_1,u_2)$ is a nonnegative solution of \eqref{model:competitive:neumann} with $u_1,u_2\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$, then $u_i$ satisfies \begin{equation*} (u_i)_t-\mu_i(|x|,t)\Delta u_i = g_i(x,t),\qquad x\in B,\ t>0, \end{equation*} with $g_i: B \times (0,\infty) \to \R$ given by $$ g_i(x,t)=f_i(t,|x|,u_i(x,t))-\alpha_i(|x|,t)u_1(x,t)u_2(x,t)\qquad \text{for $i=1,2$.} $$ If, moreover, (h1)-(h3) and (\ref{eq:12}) are satisfied, then we have $\|u_i\|_{L^\infty(B \times (0,\infty))}<\infty$ and $\|g_i\|_{L^\infty(B \times (0,\infty))}< \infty$ for $i=1,2$. Since also the diffusion coefficients $\mu_i$ satisfy the assumptions of Lemma~\ref{regularity}, we conclude that $u_i$ and $\partial_j u_i$ for all $j=1,\dots,N,$ $i=1,2,$ satisfy \eqref{equicontinuity:h} with $\cI=(1,\infty).$ As a consequence of \eqref{equicontinuity:h}, the semiorbits $\{u_i(\cdot,t):t\geq 1\}$ are precompact sets for $i=1,2$. Hence, by a standard compactness argument, the omega limit sets of $u=(u_1,u_2)$ and its components are related as follows: \begin{equation} \label{eq:3} \omega(u_i)= \{z_i \::\: z=(z_1,z_2) \in \omega(u)\} \qquad \text{for $i=1,2$.} \end{equation} \end{remark} \medskip Next we define extensions of solutions to second order Neumann problems on $B$ to a larger domain via inversion at the boundary. Recalling (\ref{B:definition}), we define \begin{equation} \label{eq:14} \widetilde B:= \begin{cases} \{x\in\mathbb R^N: \frac{A_1^2}{A_2}<|x|<\frac{A_2^2}{A_1}\}, & \text{ if } A_1>0,\\ \{x\in\mathbb R^N: |x|<2A_2\}, & \text{ if } A_1=0, \end{cases} \end{equation} and for $x \in \widetilde B \setminus B$ we put $$ \hat x := \begin{cases} \frac{A_2^2}{|x|^2}x, & \text{ if $|x| \ge A_2$,}\\ \frac{A_1^2}{|x|^2}x, & \text{ if $|x| \le A_1$.} \end{cases} $$ \begin{lemma} \label{extension:lemma} Let $I \subset \R$ be an open interval, $\mu,g: B\times I\to \R$ be given functions and let $u\in C^{2,1}(\overline{B}\times I)\cap C(\overline{B \times I})$ be a solution of $$ \left \{ \begin{aligned} u_t-\mu(x,t)\Delta u &= g(x,t)&& \qquad \text{in $B \times I$,}\\ \partial_\nu u &=0 &&\qquad \text{on $\partial B \times I.$} \end{aligned} \right. $$ Then the function \begin{equation} \label{eq:16} \tilde u: \widetilde B \to \R,\qquad \tilde u(x,t):= \begin{cases} u(x,t), & x\in \overline{B},\ t\in I,\\\vspace{.1cm} u(\hat x,t), & x\in \widetilde B \setminus \overline {B},\ t\in I, \end{cases} \end{equation} satisfies that $\tilde u\in W^{2,1}_{p,loc}(\widetilde B\times I)\cap C^{1}(\widetilde B\times I)$ for any $p\geq 1$, and it is a strong solution of the equation \begin{align} \label{extended:neumann:system} {\tilde u}_t-\tilde \mu(x,t)\Delta \tilde u-\tilde b(x,t) \partial_r \tilde u &= \tilde g(x,t) \qquad \text{ in } \widetilde B\times I. \end{align} Here $\partial_r = \frac{1}{|x|} \sum \limits_{j=1}^N x_j \partial_j$ is the radial derivative and \begin{align*} \tilde \mu(x,t)&:=\begin{cases} \mu(x,t), & \qquad x\in B,\ t\in I, \\\vspace{.1cm} \frac{|x|^2}{|\hat x|^2}\mu(\hat x,t\big), &\qquad x\in\widetilde B \setminus B,\ t\in I, \end{cases}\\ \tilde b(x,t)&:=\begin{cases} 0, & x\in B,\ t\in I\\\vspace{.1cm} \frac{(4-2N)|x|}{|\hat x|^2} \mu(\hat x,t), & x\in\widetilde B \setminus B,\ t\in I, \end{cases}\\ \tilde g(x,t)&:=\begin{cases} g(x,t), & \qquad \qquad x\in B,\ t\in I,\\\vspace{.1cm} g(\hat x,t), & \qquad \qquad x\in\widetilde B \setminus B,\ t\in I. 
\end{cases} \end{align*} \end{lemma} \begin{proof} As a consequence of the Neumann boundary conditions we have $\tilde u\in C^{1}(\widetilde{B} \times I)$. Fix $p\geq 1.$ By assumption, $\|u\|_{W_{p}^{2,1}(B\times J)}<\infty$ for any subinterval $J\subset\subset I.$ Since the map $x \mapsto \hat x$ has uniformly bounded first and second derivatives in $\tilde B \setminus B$, it follows that $\|\tilde u\|_{W_{p}^{2,1}(\tilde B\times J)}<\infty.$ Finally, it is easy to check by direct calculation that \eqref{extended:neumann:system} holds for $x\in \widetilde B \setminus \partial B$ and $t\in I$. Combining these facts, we find that $\tilde u$ is a strong solution of (\ref{extended:neumann:system}). \end{proof} \begin{remark}\label{extension:remark} A similar extension property is valid in half balls and half annuli under mixed boundary conditions. More precisely, let $B_+:= \{x \in \overline B\::\:x_N>0\}$, let $I \subset \R$ be an open interval, let $\mu,g:B_+\times I\to \R$ be given functions, and let $u\in C^{2,1}(\overline{B_+}\times I)\cap C(\overline{B_+\times I})$ be a solution of $$ u_t-\mu(x,t)\Delta u = g(x,t) \qquad \text{in $B_+^\circ \times I$,} $$ satisfying $u=0$ on $\Sigma_1\times I$ and $\partial_\nu u =0$ on $\Sigma_2\times I,$ where \begin{equation} \label{eq:19} \Sigma_1 := \{x\in\partial B_+ : x_N=0\}, \qquad \Sigma_2 := \{x\in\partial B_+ : x_N>0\}. \end{equation} Let $\widetilde {B_+}:= \{x \in \widetilde B\::\: x_N>0\}$ and define $\tilde u: \widetilde {B_+} \to \R$ by (\ref{eq:16}) for $x \in \widetilde {B_+}$. Then $\tilde u\in W^{2,1}_{p,loc}(\widetilde {B_+}\times I)\cap C^{1}(\widetilde {B_+} \times \overline{I})$ for any $p>N+2$, and it is a strong solution of (\ref{extended:neumann:system}) in $\widetilde {B_+}$ with coefficients defined analogously as in Lemma~\ref{extension:lemma}. \end{remark} The final preliminary tool we need is a geometric characterization of a set of foliated Schwarz symmetric functions. We first recall the following result from \cite[Proposition 3.2]{saldana:weth}. \begin{prop} \label{sec:char-foli-schw-1} Let $\cU$ be a set of continuous functions defined on a radial domain $B\subset \mathbb R^N,$ $N\geq 2,$ and suppose that there exists \begin{equation} \label{eq:5} \tilde e\in \cM_\cU:=\{e\in \Sn \mid z(x) \ge z(\sigma_e(x)) \text{ for all }x\in B(e) \text{ and } z \in \cU\}. \end{equation} If for all two-dimensional subspaces $P\subseteq \mathbb R^N$ containing $\tilde e$ there are two different points $p_1, \ p_2$ in the same connected component of $\cM_\cU\cap P$ such that $z \equiv z \circ \sigma_{p_1}$ and $z \equiv z \circ \sigma_{p_2}$ for every $z \in \cU$, then there is $p \in \Sn$ such that every $z \in \cU$ is foliated Schwarz symmetric with respect to $p$. \end{prop} Instead of applying this Proposition directly, we will instead use the following corollary. \begin{coro} \label{sec:symm-char} Let $\cU$ be a set of continuous functions defined on a radial domain $B\subset \mathbb R^N,$ $N\geq 2,$ and suppose that the set $\cM_\cU$ defined in (\ref{eq:5}) contains a nonempty subset $\cN$ with the following properties: \begin{itemize} \item[(i)] $\cN$ is relatively open in $\Sn$; \item[(ii)] For every $e \in \partial \cN$ and $z \in \cU$ we have $z \le z \circ \sigma_e$ in $B(e)$. Here $\partial \cN$ denotes the relative boundary of $\cN$ in $\Sn$. \end{itemize} Then there is $p \in \Sn$ such that every $z \in \cU$ is foliated Schwarz symmetric with respect to $p$. 
\end{coro} \begin{proof} By assumption, there exists $\tilde e \in \cN \subset \cM_\cU$. Let $P\subseteq \mathbb R^N$ be a two-dimensional subspace containing $\tilde e$, and let $L$ denote the connected component of $\overline {\cN \cap P}$ containing $\tilde e$. Since $\cM_\cU$ is closed, $L$ is a subset of the connected component of $\cM_\cU \cap P$ containing $\tilde e$. By Proposition~\ref{sec:char-foli-schw-1}, it suffices to show that there are different points $p_1,p_2 \in L$ such that $z \equiv z \circ \sigma_{p_1}$ and $z \equiv z \circ \sigma_{p_2}$ for every $z \in \cU$. We distinguish two cases. If $L= \Sn \cap P$, then we have $z \equiv z \circ \sigma_p$ in $B$ for every $p \in L$, $z \in \cU$ by the definition of $\cM_\cU$ and since $L \subset \cM_\cU$. If $L \not = \Sn \cap P$, then there exist two different points $p_1,p_2$ in the relative boundary of $L$ in $\Sn \cap P$. Since $\cN$ is relatively open in $\Sn$, these points are contained in $\partial \cN \subset \cM_\cU$, and by assumption and the definition of $\cM_\cU$ we have $z \equiv z \circ \sigma_{p_1}$ and $z \equiv z \circ \sigma_{p_2}$ in $B$ for every $z \in \cU$, as required. \end{proof} \section{A Harnack-Hopf type lemma and related estimates}\label{hopflemma} The first result of this section is an estimate related to a linear parabolic boundary value problem on a (parabolic) half cylinder. The estimate can be seen as an extension of both the Harnack inequality and the Hopf lemma since it also gives information on a ``tangential'' derivative at corner points. A somewhat related (but significantly weaker) result for supersolutions of the Laplace equation was given in \cite[Lemma A.1]{girao:weth}. \begin{lemma}\label{hopf:lemma:Neumann} Let $a,b \in \R$, $a<b$, $I:=(a,b),$ $B_+:=\{x\in \overline B\::\: x_N>0\}.$ Suppose that $v\in C^{2,1}(\overline{B_+}\times I)\cap C(\overline{B_+\times I})$ satisfies \begin{equation*} \begin{aligned} v_t-\mu\Delta v - c v&\geq 0 &&\qquad \text{in $B^\circ_+ \times I$,}\\ \frac{\partial v}{\partial \nu}&= 0 &&\qquad \text{on $\Sigma_2 \times I$,}\\ \hspace{2.7cm} v &= 0 && \qquad \text{on $\Sigma_1 \times I$,}\\ \hspace{2.7cm} v(x,a) &\geq 0&&\qquad \text{for $x\in B_+$,} \end{aligned} \end{equation*} where the sets $\Sigma_i$ are given in (\ref{eq:19}) and the coefficients satisfy \begin{equation*} \frac{1}{M} \leq \mu(x,t)\leq M \quad \text{and}\quad |c(x,t)|\leq M \qquad \text{ for } (x,t)\in B_+\times I \end{equation*} with some positive constant $M>0.$ Then $v\geq 0$ in $B_+\times (a,b)$. Moreover, if $v(\cdot,a) \not\equiv 0$ in $B_+$, then \begin{equation}\label{hopf:conclusion:general:notation} v>0 \text{ in } B_+\times I\qquad \text{and}\qquad \frac{\partial v}{\partial e_N}>0 \text{ on } \Sigma_1\times I. \end{equation} Furthermore, for every $\delta_1>0,$ $\delta_2 \in (0,\frac{b-a}{4}]$, there exist $\kappa>0$ and $p>0$ depending only on $\delta_1,$ $\delta_2$, $B$ and $M$ such that \begin{equation} \label{eq:1} v(x,t) \ge x_N \,\kappa \bigg(\int_{Q(\delta_1,\delta_2)} v^p \ dxdt\bigg)^{\frac{1}{p}}\ \ \ \text{for every $x \in B_+,\: t \in [a+3\delta_2,a+4\delta_2],$} \end{equation} with $Q(\delta_1,\delta_2):= \{ (x,t) \::\: x \in B_+,\: x_N \ge \delta_1, \: a+ \delta_2 \le t \le a+2\delta_2\}.$ \end{lemma} \begin{proof} We begin by showing that $v\geq 0$ in $B_+\times I$. The argument is standard.
Let $\varepsilon>0$ and define $\varphi(x,t):=e^{-2M t}v(x,t)+\varepsilon$ for $x\in \overline{B_+}$ and $t\in \overline{I}.$ Then \begin{align*} \varphi_t-\mu\Delta \varphi-(c-2M)\varphi&\ge \varepsilon(2M-c)\geq \eps M, &&x\in B^\circ_+, t\in I,\\ \frac{\partial \varphi}{\partial \nu}(x,t) &= 0, &&x\in\Sigma_2, t\in I,\\ \varphi(x,t) &= \varepsilon, &&x\in \Sigma_1, t\in I,\\ \varphi(x,a) &\geq \varepsilon, &&x\in B_+. \end{align*} Suppose by contradiction that $\bar t := \sup \{t \in [a,b)\::\: \text{$\varphi> 0$ in $B_+\times [a,t)$}\} \: <\: b.$ By continuity, we have $\bar t>a$, $\varphi(\cdot,\bar t)\geq 0$ in $B_+$ and $\varphi(\bar x,\bar t)=0$ for some $\bar x\in B_+$. As a consequence of the Neumann boundary conditions on $\Sigma_2 \times I$ and the boundary point lemma (see for example \cite[Lemma 2.8]{lieberman}), we find that $\bar x\in {B_+}^\circ$. But then \begin{align*} 0\geq \varphi_t(\bar x,\bar t)-\mu\Delta \varphi(\bar x,\bar t)-(c-2M)\varphi(\bar x,\bar t) \ge \eps M > 0, \end{align*} a contradiction. Therefore $\varphi>0$ in $B_+\times I$. Since $\eps>0$ was chosen arbitrarily, we conclude that $v\geq 0$ in $B_+\times I$. Then the first claim in \eqref{hopf:conclusion:general:notation} follows by the strong maximum principle and the boundary point lemma (see e.g. \cite[Theorem 2.7 and Lemma 2.8]{lieberman}). Next we note that the second claim in \eqref{hopf:conclusion:general:notation} is a consequence of the first claim and the inequality (\ref{eq:1}) (for suitably chosen $\delta_1,\delta_2$). It thus remains to prove (\ref{eq:1}). Let $\delta_1>0$, $\delta_2 \in (0,\frac{b-a}{4}]$ and consider $\widetilde B, \widetilde {B_+}$ as defined in (\ref{eq:14}) and Remark~\ref{extension:remark}. Without loss, we may assume that \begin{equation} \label{eq:2} \delta_1 < \min \Bigl \{\frac{\delta_2}{2}, \frac{\dist(B,\partial \widetilde B)}{3}\Bigr \}. \end{equation} By Remark \ref{extension:remark}, there exists an extension $\tilde v\in W_{N+1,loc}^{2,1}(\widetilde {B_+} \times I)$ of $v$ which satisfies $\cL(t,x)\tilde v\geq 0$ in $\widetilde {B_+} \times I$ in the strong sense. Here the linear differential operator $\cL$ is given by \begin{align*} \cL(t,x) w:= w_t-\tilde \mu(x,t)\Delta w- \tilde b(x,t)\partial_r w-\tilde c(x,t)w \end{align*} with $\tilde \mu,\: \tilde b$ given as in Lemma \ref{extension:lemma} and \begin{equation*} \tilde c(x,t):= \begin{cases} c(x,t), &\quad x\in B_+,\ t\in I,\\\vspace{.1cm} c(\hat x,t), &\quad x\in\widetilde B_+ \setminus B_+,\ t\in I. \end{cases} \end{equation*} Moreover, there is a positive constant $\beta_0$ which only depends on $B$ and $M$ such that $\tilde \mu,$ $\tilde b,$ and $\tilde c$ are uniformly bounded by $\beta_0,$ and $\tilde\mu$ is bounded below by $\beta_0^{-1}.$ Next, we define the compact sets \begin{align*} K_{\delta_1}&:= \{x \in B_+\::\: x_N \ge \frac{\delta_1}{2}\}\quad \text{and}\\ \tilde K_{\delta_1}&:= \{x \in \widetilde {B_+} \::\: x_N\geq \frac{\delta_1}{2},\:\dist(x,\partial \widetilde B) \ge \delta_1 \}.
\end{align*} By the parabolic Harnack inequality given in \cite[Lemma 3.5]{polacik}, there exist $\kappa_1>0$ and $p>0$, depending only on $\delta_1, \delta_2$, $B$ and $M$, such that \begin{align}\label{eq1:hopf-0} \inf_{\genfrac{}{}{0pt}{}{\scriptstyle{x\in \tilde K_{\delta_1}}}{\scriptstyle{t\in[a+\frac{5}{2}\delta_2,a+4\delta_2]}}}\tilde v(x,t) \nonumber &\geq \kappa_1 \bigg(\int_{\tilde K_{\delta_1} \times [a+\delta_2,a+2\delta_2]} {\tilde v}^p\ dxdt\bigg)^{\frac{1}{p}}\\ &\geq \kappa_1 \bigg(\int_{Q(\delta_1,\delta_2)} v^p\ dxdt\bigg)^{\frac{1}{p}}. \end{align} Here we used in the last step that $$ Q(\delta_1,\delta_2) \;\subset \; K_{\delta_1} \times [a+\delta_2,a+2\delta_2] \;\subset \; \tilde K_{\delta_1} \times [a+\delta_2,a+2\delta_2]. $$ Next, we define \begin{align*}\begin{split} D&:=\{(x,t)\::\:t<0,\: x_N < \frac{\delta_1}{2},\: |x-\delta_1 e_N|^2 + t^2 < \delta_1^2 \};\\ \Gamma_1&:= \{(x,t)\::\: t \le 0,\: x_N < \frac{\delta_1}{2},\: |x-\delta_1 e_N|^2 + t^2 = \delta_1^2 \};\\ \Gamma_2&:= \{(x,t)\::\:t \le 0,\: |x-\delta_1 e_N|^2 + t^2 \le \delta_1^2, \: x_N = \frac{\delta_1}{2} \}. \end{split} \end{align*} Note that $\Gamma_1 \cup \Gamma_2$ equals $\partial_P D$, the parabolic boundary of $D$. Let $x_0\in \Sigma_1$ and $t_0 \in [a+3\delta_2,a+4 \delta_2]$. By construction and (\ref{eq:2}), we then have $$ \{(x_0+x,t_0+t)\::\: (x,t) \in D\} \subset \widetilde {{B_+}}^\circ\times [a+\frac{5}{2}\delta_2,a + 4 \delta_2] $$ and \begin{equation} \label{eq:17} \{(x_0+x,t_0+t)\::\: (x,t) \in \Gamma_2\} \subset \tilde K_{\delta_1} \times [a+\frac{5}{2}\delta_2,a + 4 \delta_2]. \end{equation} Next we fix $k>0$ such that \begin{align*} k\geq \frac{2\beta_0[{\delta_1}+\beta_0N(1+{\delta_1})]}{{\delta_1}^2}. \end{align*} Moreover, we define the function \begin{equation*} z:\overline D\to\mathbb R,\qquad z(x,t):= \Bigl(e^{-k(|x-{\delta_1} e_N|^2+t^2)}-e^{-k{\delta_1}^2}\Bigr)e^{-\beta_0 t}. \end{equation*} Let also $$ \eps:= \frac{\min \limits_{(x,t) \in \Gamma_2} \tilde v(x_0+x,t_0+t)}{\max \limits_{(x,t) \in \Gamma_2} z(x,t)} >0 $$ and consider \begin{align*} w: \overline D\to\mathbb R,\qquad w(x,t):=\tilde v(x_0+x,t_0+t)-\varepsilon z(x,t). \end{align*} Then $w \geq 0$ on $\Gamma_2$ and also $w\geq 0$ on $\Gamma_1$, since $z\equiv 0$ on $\Gamma_1$.
Moreover, for $(x,t)\in D$ we have \begin{align*} &\cL(t_0+t,x_0+x)z(x,t)\\ = &[-\beta_0 - \tilde c(t+t_0,x_0+x)]z(x,t)\\ & + 2k\, e^{-k(|x-{\delta_1} e_N |^2+t^2)-\beta_0 t}\Bigl[\tilde \mu(t_0+t,x_0+x)(N-2k|x-{\delta_1} e_N|^2) \\&-t -\tilde b(t_0+t, x_0+x)\frac{x_0+x}{|x_0+x|}\cdot (x-{\delta_1} e_N)\Bigr]\\ \leq& 2k\, e^{-k(|x-{\delta_1} e_N|^2+t^2)-\beta_0t}\Bigl[{\delta_1}- \frac{2k}{\beta_0} \Bigl(\frac{{\delta_1}}{2}\Bigr)^2 + \beta_0 N(1 + {\delta_1})\Bigr]\leq 0, \end{align*} by the definition of $k.$ Therefore we have $$ \cL(t_0+t,x_0+x)w(x,t) \geq 0 \;\: \text{for $(x,t)\in D$}\quad \text{and}\quad w\geq 0\;\: \text{on $\partial_P D = \Gamma_1\cup \Gamma_2$.} $$ By the maximum principle for strong solutions, we conclude that $w\geq 0$ in $\overline D$ and thus in particular $$ \tilde v(x_0+s e_N,t_0) \ge \eps z(s e_N,0) \qquad \text{for $s \in (0,\frac{{\delta_1}}{2})$.} $$ Since moreover $$ z(s e_N,0)= e^{-k(s-{\delta_1})^2}-e^{-k{\delta_1}^2} \ge s\, \eps_1\, \max \limits_{(x,t) \in \Gamma_2} z(x,t) \qquad \text{for $s \in (0,\frac{{\delta_1}}{2})$} $$ with a constant $\eps_1 \in (0,\frac{1}{\operatorname{diam}(B)})$ depending only on the function $z$ and on $B$, it follows that $$ \tilde v(x_0+s e_N,t_0) \ge s \eps_1\, \eps \max \limits_{(x,t) \in \Gamma_2} z(x,t) = \eps_1 s \min \limits_{(x,t) \in \Gamma_2} \tilde v(x_0+x,t_0+t) \quad \text{for $s \in (0,\frac{{\delta_1}}{2})$.} $$ By (\ref{eq:17}) and since $x_0 \in \Sigma_1$, $t_0 \in [a+3\delta_2,a+4 \delta_2]$ were chosen arbitrarily, we conclude that $$ v(x,t) \:\ge\: \eps_1 x_N \!\!\!\!\inf_{\genfrac{}{}{0pt}{}{\scriptstyle{y \in \tilde K_{\delta_1}}}{\scriptstyle{\tau \in [a+\frac{5}{2}\delta_2, a+4\delta_2]}}}\!\!\!\!\tilde v(y,\tau) \qquad \left\{ \begin{aligned} &\text{for $x \in B_+$ with $x_N < \frac{{\delta_1}}{2}$}\\ &\text{and $t \in [a+3\delta_2,a+4 \delta_2].$} \end{aligned} \right. $$ By definition of $K_{\delta_1}$ and since $0 \le \eps_1 x_N \le 1$ for $x \in B_+$, the latter estimate holds also without the restriction $x_N < \frac{{\delta_1}}{2}$. Combining this fact with (\ref{eq1:hopf-0}), we obtain that \begin{align*} v(x,t) \ge \kappa_1 \eps_1 x_N \bigg(\int_{Q(\delta_1,\delta_2)} v^p \ dxdt\bigg)^{\frac{1}{p}}\ \ \ \ \text{for $x \in B_+$ and $t \in [a+3\delta_2,a+4 \delta_2],$} \end{align*} so that (\ref{eq:1}) holds with $\kappa:= \kappa_1 \eps_1$. \end{proof} Next, we prove a related but weaker Hopf lemma for a class of cooperative systems under mixed boundary conditions. The argument is essentially the same as in the scalar case, but we include it for completeness since we could not find the result in this form in the literature. We use the notation of Lemma \ref{hopf:lemma:Neumann}. \begin{lemma}\label{hopf:lemma:Neumann:systems} Let $a,b \in \R$, $a<b$, $I:=(a,b),$ $J:=\{1,2,\ldots,n\}$ for some $n\in\mathbb N,$ and $w=(w_1, w_2,\ldots, w_n)$ with $w_i\in C^{2,1}(\overline{B_+}\times I)\cap C(\overline{B_+\times I})$ be a classical solution of \begin{equation*} \begin{aligned} (w_i)_t-\mu_i\Delta w_i&=\sum_{j\in J}c_{ij}w_j&&\qquad \text{in $B_+^\circ \times I$,}\\ \frac{\partial w_i}{\partial \nu}&= 0 &&\qquad \text{on $\Sigma_2 \times I$,}\\ \hspace{2.7cm} w_i &= 0 &&\qquad \text{on $\Sigma_1 \times I$,}\\ \hspace{2.7cm} w_i(x,a) &\geq 0&&\qquad \text{for $x\in B_+$,} \end{aligned} \end{equation*} for $i\in J$ with coefficient functions $\mu_i, c_{ij}\in L^\infty(B_+\times I)$.
Suppose moreover that $\inf\limits_{B_+ \times I}\mu_i>0$ and $\inf\limits_{B_+ \times I}c_{ij}\geq 0$ for $i,j \in J$, $i\neq j$. Then \begin{equation} \label{eq:10} w_i\geq 0 \qquad \text{in $B_+\times I$ for $i\in J$.} \end{equation} Moreover, if $w_i(\cdot,a) \not\equiv 0$ in $B_+$ for some $i\in J,$ then \begin{equation}\label{hopf:conclusion:systems} w_i>0 \text{ in } B_+\times I\ \ \ \ \text{ and }\ \ \ \ \frac{\partial w_i}{\partial e_N}>0 \text{ on } \Sigma_1\times I. \end{equation} \end{lemma} \begin{proof} To prove (\ref{eq:10}), we fix \mbox{$\lambda > \max \limits_{i\in J} \sum \limits_{j\in J}\|c_{ij}\|_{L^\infty(B_+\times I)}$} and let $\varepsilon>0$. We define $v_i(x,t):=e^{-\lambda t}w_i(x,t)+\varepsilon$ for $x\in \overline{B_+}, t\in \overline{I}$ and $i \in J$. Then \begin{align*} (v_i)_t-\mu_i\Delta v_i-(c_{ii}-\lambda)v_i&>\sum_{j\in J\backslash\{i\}}c_{ij}v_j &&\qquad \text{in ${B_+}^\circ \times I$,}\\ \frac{\partial v_i}{\partial \nu} & \equiv 0 &&\qquad \text{on $\Sigma_2 \times I,\quad$ and}\\ v_i &\ge \varepsilon>0&& \qquad \text{on $\Sigma_1 \times I \;\cup\; B_+ \times \{a\}$.} \end{align*} As in the proof of Lemma \ref{hopf:lemma:Neumann}, we show that $v_i> 0$ in $B_+\times I$ for all $i\in J.$ Suppose by contradiction that $$ \bar t := \sup \{t \in [a,b)\::\: \text{$v_i> 0$ in $B_+\times [a,t)$ for all $i\in J$}\}\;<\;b. $$ By continuity, we have $\bar t>a$, $v_i(\cdot,\bar t)\geq 0$ in $B_+$ for all $i \in J$ and $v_j(\bar x,\bar t)=0$ for some $\bar x\in B_+$ and some $j \in J$. The Neumann boundary conditions on $\Sigma_2 \times I$ and the boundary point lemma (see for example \cite[Lemma 2.8]{lieberman}) then imply that $\bar x\in {B_+}^\circ$. But then \begin{align*} 0\geq (v_j)_t(\bar x,\bar t)-\mu_j\Delta v_j(\bar x,\bar t)-(c_{jj}-\lambda)v_j(\bar x,\bar t)>\sum_{l\in J\backslash\{j\}}c_{jl}v_l(\bar x,\bar t)\geq 0, \end{align*} a contradiction. Therefore $v_i>0$ in $B_+\times I$ for all $i \in J$. Since $\eps>0$ was chosen arbitrarily, we conclude that (\ref{eq:10}) holds. Consequently, the non-negativity of $c_{ij}$ for $i \not=j$ implies that \begin{align*} (w_i)_t-\mu_i(x,t)\Delta w_i- c_{ii}(x,t)w_i&=\sum_{j\in J\backslash\{i\}} c_{ij}w_j \geq 0 \qquad \text{in $B_+\times I$ for $i\in J.$} \end{align*} Hence (\ref{hopf:conclusion:systems}) follows from Lemma~\ref{hopf:lemma:Neumann}. \end{proof} For the last lemma of this section, we need to fix additional notation. For $e\in \Sn,$ let $\sigma_e:\overline{B}\to \overline{B}$ and $B(e) \subset B$ be defined as in the introduction. We also put \begin{equation} \label{eq:11} \Sigma_1(e) := \{x\in\partial B(e) : x\cdot e=0\} \;\: \text{and}\;\: \Sigma_2(e) := \{x\in\partial B(e) : x\cdot e>0\}. \end{equation} For a subset $I\subset \R$ and a function $v:\overline{B}\times I \to \R$, we define $$ v^e: \overline{B} \times I \to \R, \qquad v^e(x,t):= v(x,t)-v(\sigma_e(x),t). $$ To implement the rotating plane technique for the boundary value problems considered in our main results, we need to analyze under which conditions positivity of $v^e(\cdot,t)$ in $B(e)$ at some time $t \in I$ induces positivity of $v^{e'}(\cdot,t')$ in $B(e')$ for a slightly perturbed direction $e'$ at a later time $t'>t$. The following perturbation lemma is sufficient for our purposes.
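For later orientation, we note an elementary converse relationship between reflection inequalities and foliated Schwarz symmetry; it is not used in the proofs below. If $z\in C(\overline B)$ is foliated Schwarz symmetric with respect to some $p\in\Sn$, i.e., $z$ is axially symmetric with respect to the axis $\R p$ and nonincreasing in the polar angle $\theta(x):=\arccos\bigl(\frac{x}{|x|}\cdot p\bigr)$, then for every $e \in \Sn$ with $e \cdot p \ge 0$ and every $x \in B(e)$ we have
$$
\sigma_e(x)\cdot p \,=\, x\cdot p-2(x\cdot e)(e\cdot p)\,\le\, x\cdot p \qquad \text{and}\qquad |\sigma_e(x)|=|x|,
$$
hence $\theta(\sigma_e(x))\ge \theta(x)$ and therefore $z(x)\ge z(\sigma_e(x))$ for all $x \in B(e)$. Thus the set $\cM_{\{z\}}$ defined in \eqref{eq:5} contains the closed half sphere $\{e\in \Sn \::\: e\cdot p\ge 0\}$, and the rotating plane technique may be viewed as a way of reversing this implication for omega limit sets.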
\begin{lemma}\label{perturbationlemma} Let $I= (0,1)$, let $v \in C^{2,1}(\overline{B\times I})$, and consider a function \mbox{$\chi :[0,\sqrt{1+\operatorname{diam}(B)^2}\,] \to [0,\infty)$} such that $$ \leqno{(E\chi)} \qquad \left\{ \begin{aligned} &\text{$\lim \limits_{\vartheta \to 0} \chi(\vartheta)=0$ and}\\ &|v(x,t)-v(y,s)|+|\nabla v(x,t)-\nabla v(y,s)|\leq \chi(|(x,t)-(y,s)|)\\ &\text{for all $(x,t),(y,s)\in \overline {B}\times I.$} \end{aligned} \right. $$ Moreover, let $d,k,M>0$ be given constants. Then there exists $\rho>0$, depending only on $B$, $d$, $k$, $M,$ and the function $\chi,$ with the following property: If $e \in \Sn$ is such that \begin{enumerate} \item[(i)] the function $v^e$ satisfies \begin{equation*} v^e_t-\mu(x,t)\Delta v^e - c(x,t) v^e \geq 0 \qquad \text{in $B(e) \times I$} \end{equation*} with some coefficient functions $\mu, c$ satisfying \begin{equation*} \frac{1}{M} \leq \mu(x,t)\leq M \quad \text{and}\quad |c(x,t)|\leq M \qquad \text{ for } (x,t)\in B(e) \times I, \end{equation*} and \begin{equation*} \text{$\frac{\partial v^e}{\partial \nu}= 0$ on $\Sigma_2(e) \times I$, $\ \ \; v^e = 0$ on $\Sigma_1(e) \times I,$ $\ \ \; v^e \geq 0$ on $B(e) \times \{0\}$, } \end{equation*} \item[(ii)] $\quad \sup \{v^e(x,\frac{1}{4})\::\: x \in B(e),\: x \cdot e \ge d \} \ge k,$ \end{enumerate} then \begin{equation*} v^{e'}(\cdot,1)>0 \quad \text{in $B(e')\quad$ for all $e'\in \Sn$ with $|e-e'|<\rho$.} \end{equation*} \end{lemma} \begin{remark} \label{sec:harnack-hopf-type} The result obviously remains true if $v^e$ is replaced by $-v^e$, and we will use this fact later on. \end{remark} \begin{proof} Let $e\in \Sn$ be such that $(i)$ and $(ii)$ are satisfied, and let $\kappa>0$ and $p>0$ be the constants given by Lemma~\ref{hopf:lemma:Neumann} applied to $a=0$, $b=1$, $\delta_1=d$ and $\delta_2=\frac{1}{4}.$ We first note that condition $(E\chi)$ and hypothesis $(ii)$ imply that there exists $C_1>0$, depending only on $B,$ $d$, $k$, $M,$ and $\chi,$ such that $$ \kappa\bigg(\int_{Q^e} (v^e)^p\,dx\,dt\bigg)^{\frac{1}{p}} \ge C_1, $$ where $Q^e:= \{(x,t)\::\: x \in B(e),\: x \cdot e \ge d,\: \frac{1}{4} < t < \frac{1}{2} \}$. Then, by Lemma~\ref{hopf:lemma:Neumann}, it follows that \begin{align*} |\nabla v^e(x,1)|= \nabla v^e(x,1) \cdot e \geq C_1\qquad \text{ for all } x\in \Sigma_1(e). \end{align*} By condition $(E\chi)$, there is some $\rho_0>0$, depending only on $B,$ $d$, $k$, $M,$ and $\chi,$ such that \begin{equation}\label{eq2:hopf-new1} |\nabla v^{e'}(x,1)|= \nabla v^{e'}(x,1) \cdot e' \geq \frac{3}{4}C_1 \quad \left\{ \begin{aligned} &\text{for $e'\in \Sn$ with}\\ &\text{$|e-e'|<\rho_0$ and $x \in \Sigma_1(e')$.} \end{aligned} \right. \end{equation} Again by $(E\chi)$, we then find $\rho_1 \in (0,\rho_0)$, depending only on $B,$ $d$, $k$, $M,$ and $\chi,$ such that \begin{equation}\label{eq2:hopf} \nabla v^{e'}(x,1)\cdot e'\geq \frac{C_1}{2} \quad \left\{ \begin{aligned} &\text{for $e'\in \Sn$ and $x\in \overline{B}$ }\\ &\text{ with $|e-e'|<\rho_0$ and $|x \cdot e'|\le \rho_1.$} \end{aligned} \right. 
\end{equation} By Lemma~\ref{hopf:lemma:Neumann}, there is some $\eta_1>0$ which only depends on $B,$ $d$, $k$, $M,$ and $\chi,$ such that \begin{align*} v^e(x,1)\geq \eta_1 \qquad\text{for $x \in \overline{B(e)}$ with $x \cdot e \ge \frac{\rho_1}{2}$.} \end{align*} Again by $(E\chi),$ we may fix $\rho\in(0,\rho_1)$, depending only on $B,$ $d$, $k$, $M$ and $\chi,$ such that for all $e'\in \Sn$ with $|e-e'|<\rho,$ \begin{align} v^{e'}(x,1) \geq \frac{\eta_1}{2}\qquad \text{for $x \in \overline{B(e')}$ with $x \cdot e' \ge \frac{\rho_1}{2}$.} \label{eq3:hopf_2} \end{align} For fixed $e'\in \Sn$ with $|e-e'|<\rho$, \eqref{eq2:hopf} ensures that $$ v^{e'}(x,1) = v(x,1) - v(\sigma_{e'}(x),1) >0 \qquad \text{for $x \in B(e')$ with $x \cdot e' \le \frac{\rho_1}{2}$.} $$ Combining this with (\ref{eq3:hopf_2}), we find that $$ v^{e'}(x,1) >0 \qquad \text{for $x \in B(e')$,} $$ as claimed. \end{proof} \section{The scalar Neumann problem}\label{results:scalar:equations} This section is devoted to the proof of Theorem \ref{main:theorem:scalar}. Let $u\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be a (possibly sign changing) solution of \eqref{model:scalar} such that the hypotheses (H1)-(H4) and (\ref{eq:15}) of Theorem \ref{main:theorem:scalar} are fulfilled. We first note that \begin{equation*} \begin{aligned} u_t-\mu(|x|,t)\Delta u -c(x,t) u&=f(t,|x|,0) &&\qquad \text{in $B \times (0,\infty),$}\\ \partial_\nu u&=0 && \qquad \text{on $\partial B \times (0,\infty)$}\\ \end{aligned} \end{equation*} with $$ c(x,t):= \left \{ \begin{aligned} &\frac{f(t,|x|,u(x,t))-f(t,|x|,0)}{u(x,t)},&&\quad \text{ if } u(x,t) \not=0,\\ &0,&&\quad \text{ if } u(x,t)=0 \end{aligned} \right. $$ for $x \in \overline B$, $t>0$. By (H1) and (\ref{eq:15}) we have $c\in L^\infty(B\times(0,\infty))$, and thus (H2) and Lemma \ref{regularity} imply that the functions \begin{equation} \label{eq:6} \overline {B} \times [0,1] \to \R, \qquad (x,t) \mapsto u(x,\tau+t),\qquad \tau \ge 1 \end{equation} and $\overline {B} \times [0,1] \to \R^N,$ $(x,t) \mapsto \nabla u(x,\tau+t)$, $\tau \ge 1$ are uniformly equicontinuous. Hence there exists a function $\chi:[0,\sqrt{1+\operatorname{diam}(B)^2}\,] \to [0,\infty)$ with $\lim \limits_{\vartheta \to 0}\chi(\vartheta)=0$ and such that $(E\chi)$ of Lemma~\ref{perturbationlemma} holds for all of the functions in (\ref{eq:6}). Next, we set $$ u^e(x,t):=u(x,t)-u(\sigma_e(x),t) \qquad \text{for $x \in \overline B,\ t >0,$ and $e\in \Sn.$} $$ We wish to apply Corollary~\ref{sec:symm-char} to the sets $\cU:= \omega(u)$ and \begin{align*} \cN:=\{e\in \Sn \mid \exists \ \ T>0 \text{ such that } u^e(x,t)>0 \text{ for all }x\in B(e), \ t>T\}. \end{align*} With $\cM_\cU$ defined as in (\ref{eq:5}), it is obvious that $\cN \subset \cM_{\cU}.$ We note that the function $u^e$ satisfies \begin{equation*} \begin{aligned} u^e_t-\mu(|x|,t)\Delta u^e &=c^e(x,t) u^e && \qquad \text{in $B(e) \times (0,\infty)$},\\ \frac{\partial u^e}{\partial \nu} &= 0 &&\qquad \text{on $\Sigma_2(e) \times (0,\infty)$},\\ u^e &= 0 &&\qquad \text{on $\Sigma_1(e) \times (0,\infty)$},\\ \end{aligned} \end{equation*} with $\Sigma_i(e)$ as defined in (\ref{eq:11}) and $$ c^e(x,t):= \left \{ \begin{aligned} &\frac{f(t,|x|,u(x,t))-f(t,|x|,u(\sigma_e(x),t))}{u^e(x,t)},&&\quad \text{ if } u^e(x,t) \not=0,\\ &0, &&\quad \text{ if } u^e(x,t)=0. \end{aligned} \right.
$$ By (H1), there exists $M>0$ with $$ \|c^e\|_{L^\infty(B\times(0,\infty))} \le M \qquad \text{for all $e \in \Sn$.} $$ Moreover, by making $M$ larger if necessary and using (H3), we may also assume that $$ \frac{1}{M} \le \mu(|x|,t) \le M \qquad \text{for all $x \in B$, $t>0$.} $$ By (H4), there exists $\tilde e\in \Sn$ such that $u^{\tilde e}(\cdot,0)\ge 0$, $u^{\tilde e}(\cdot,0)\not \equiv 0$ on $B(\tilde e)$ and thus $u^{\tilde e}>0$ in $B(\tilde e)\times(0,\infty)$ by Lemma \ref{hopf:lemma:Neumann}, so that $\tilde e \in \cN$. Moreover, it easily follows from Lemmas~\ref{hopf:lemma:Neumann} and \ref{perturbationlemma} that $\cN$ is a relatively open subset of $\Sn$. By Corollary~\ref{sec:symm-char}, it therefore only remains to prove that $z \le z\circ \sigma_e$ in $B(e)$ for every $z \in \omega(u)$ and $e \in \partial \cN$. We argue by contradiction. Assume there is $\hat e \in \partial \cN$ and $z \in \omega(u)$ such that $z \not \le z\circ \sigma_{\hat e}$ in $B(\hat e)$. Define \begin{align*} z^e:\overline{B}\to\R, \qquad z^e(x):= z(x)-z(\sigma_e(x)) \end{align*} for $e\in\Sn.$ Then there exist constants $d,k>0$ such that \begin{equation*} \sup\{ z^{\hat e}(x)\::\: x \in B,\: x \cdot \hat e \ge d \} > k. \end{equation*} We now let $\rho>0$ be given by Lemma~\ref{perturbationlemma} corresponding to the choices of $d$, $k$, $M$ and $\chi$ made above. By continuity and since $\hat e \in \partial \cN$, there exists $e \in \cN$ such that \begin{align}\label{rho1} |e-\hat e|<\rho \end{align} and \begin{equation} \label{eq:8} \sup\{ z^{e}(x)\::\: x \in B,\: x \cdot e \ge d \} > k. \end{equation} Let $(t_n)_{n} \subset (0,\infty)$ be a sequence with $t_n \to \infty$ and $u(\cdot,t_n) \to z$ in $L^\infty(\overline B)$. By (\ref{eq:8}), there exists $n_0 \in \mathbb N$ such that \begin{equation*} \sup\{u^{e}(x,t_n)\::\: x \in B,\: x \cdot e \ge d \} > k \qquad \text{for all $n \ge n_0$.} \end{equation*} Moreover, by the definition of $\cN$ there exists $T>0$ such that $u^{e}(\cdot,t)>0$ in $B(e)$ for $t \ge T$. Next, fixing $n \in \mathbb N$ such that $t_n \ge \max\{T+\frac{1}{4},t_{n_0}\}$ and applying Lemma~\ref{perturbationlemma} to the function $$ \overline{B} \times [0,1]\to \R,\qquad (x,t) \mapsto u(x,t_n-\frac{1}{4}+t), $$ we find, using (\ref{rho1}), that $u^{\hat e}(x,t_n+\frac{3}{4})>0$ for all $x\in B(\hat e)$. Hence $\hat e \in \cN$. Since $\cN$ is relatively open in $\Sn$, this contradicts the fact that $\hat e \in \partial \cN$. The proof of Theorem~\ref{main:theorem:scalar} is thus finished. \section{Proof of the main result for competitive systems} \label{normalization:argument} In this section we will complete the proof of Theorem~\ref{main:theorem:neumann}. For the remainder of this section, let $u_1,u_2\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be functions such that $u=(u_1,u_2)$ solves (\ref{model:competitive:neumann}) and such that assumptions (h0)--(h3), (\ref{eq:12}) from the introduction are fulfilled. A key ingredient of the proof is the following quotient estimate which compares the values of the components of $u$ at different times. Similar estimates were obtained by J. H\'{u}ska, P. Pol\'{a}\v{c}ik, and M. V. Safonov in \cite[Corollary 3.10]{huska:polacik:safonov} for positive solutions of scalar parabolic Dirichlet problems. We point out that the Neumann boundary conditions on $\partial B$ allow us to obtain a stronger result in the present setting with a much simpler proof.
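Before stating this estimate, we illustrate it in the simplest possible situation; the following example is included for orientation only. If $u$ solves the linear heat equation $u_t=\Delta u$ in $B$ with Neumann boundary conditions and continuous initial data $u_0 \ge 0$, $u_0 \not\equiv 0$, then $u(\cdot,t)\to \frac{1}{|B|}\int_B u_0\,dx>0$ uniformly as $t \to \infty$, and therefore the quotient $u(x,t)/\|u(\cdot,\tau)\|_{L^\infty(B)}$ is bounded from above and below by positive constants for $x\in B$, $t\in[\tau-3,\tau+3]$ and all sufficiently large $\tau$. The content of the next lemma is that a two-sided bound of this type holds, uniformly in $\tau$, for the components of a solution of the nonlinear problem \eqref{model:competitive:neumann}, even though these components may decay to zero or undergo large variations in amplitude as $t\to\infty$.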
In the following, we sometimes omit the arguments $(x,t)$ and $(|x|,t)$ for brevity. \begin{lemma} \label{normalization:lemma} There exists a constant $\eta>1$ such that \begin{equation*} \frac{1}{\eta}\le \frac{u_i}{\|u_i(\cdot,\tau)\|_{L^\infty(B)}} \le \eta \ \ \ \text{in $B\times[\tau -3, \tau+3]$} \end{equation*} for all $\tau\geq 5$ and $i=1,2.$ \end{lemma} \begin{proof} We only prove the estimate for $i=1$; the proof for $i=2$ is the same. For simplicity, we write $u$ in place of $u_1$, and we note that \begin{equation*} u_t-\mu_1 \Delta u = c\, u \qquad \text{in $B \times (0,\infty)$} \end{equation*} with $$ c(x,t):= -\alpha_1(|x|,t)u_2(x,t) + \left\{ \begin{aligned} &\frac{f_1(t,|x|,u(x,t))}{u(x,t)},&&\quad \text{ if } u(x,t) \not=0,\\ &0,&&\quad \text{ if } u(x,t)=0. \end{aligned} \right. $$ By (h1), (h3), and (\ref{eq:12}), we have that $c \in L^\infty(B\times(0,\infty))$. Let $\tilde u$ denote the extension of $u$ to $\widetilde B$ as defined in (\ref{eq:16}). Then Lemma~\ref{extension:lemma} implies that $\tilde u$ is a strong solution of \begin{equation*} \begin{aligned} (\tilde u)_t- \tilde \mu\, \Delta \tilde u-\tilde b\,\partial_r\tilde u =\tilde c\, \tilde u \qquad \text{in $\widetilde B \times (0,\infty)$.} \end{aligned} \end{equation*} Here $\tilde \mu, \tilde b \in L^\infty(\widetilde B \times(0,\infty))$ are defined as in Lemma~\ref{extension:lemma} with $\mu$ replaced by $\mu_1$, and $\tilde c \in L^\infty(\widetilde B \times(0,\infty))$ is defined by \begin{equation*} \tilde c(x,t):= \begin{cases} c(x,t), &\quad x\in B,\ t\in (0,\infty),\\\vspace{.1cm} c(\hat x,t), &\quad x\in \widetilde B \setminus B,\ t\in (0,\infty). \end{cases} \end{equation*} We also note that $\inf \limits_{\widetilde B \times (0,\infty)} \tilde \mu >0$ as a consequence of $(h2)$. Next, we fix $\tau\geq 5,$ and we apply the Harnack inequality for strong solutions given in \cite[Lemma 3.5]{polacik} (with $p=\infty$, $U=\widetilde B,$ $D=B,$ and $v=\tilde u$). The application yields $\kappa_1>0$ independent of $\tau$ such that \begin{align}\label{1} \inf_{B\times(\tau-3,\tau+3)}u\geq \kappa_1 \|u(\cdot,\tau-4)\|_{L^\infty(B)}, \end{align} since $\tilde u$ coincides with $u$ on $B \times (0,\infty)$. Moreover, by the maximum principle (see for example \cite[Lemma 7.1]{lieberman}) and the uniform bounds on the coefficients, there exists $\kappa_2>\kappa_1$ independent of $\tau$ such that \begin{equation}\label{2} \|u(\cdot,s)\|_{L^\infty(B)}\leq \kappa_2 \|u(\cdot,\tau-4)\|_{L^\infty(B)} \quad \text{for $s \in [\tau-3,\tau+3]$.} \end{equation} Let $x\in B$ and $t\in[\tau-3,\tau+3].$ Then, by \eqref{1} and \eqref{2}, \begin{align*} \frac{u(x,t)}{\|u(\cdot,\tau)\|_{L^\infty(B)}}\geq \frac{\kappa_1\|u(\cdot,\tau-4)\|_{L^\infty(B)}}{\|u(\cdot,\tau)\|_{L^\infty(B)}}\geq \frac{\kappa_1}{\kappa_2}, \end{align*} and \begin{align*} \frac{u(x,t)}{\|u(\cdot,\tau)\|_{L^\infty(B)}} \leq\frac{\kappa_2\|u(\cdot,\tau-4)\|_{L^\infty(B)}}{\|u(\cdot,\tau)\|_{L^\infty(B)}} \leq \frac{\kappa_2}{\kappa_1}. \end{align*} Thus the claim follows with $\eta= \frac{\kappa_2}{\kappa_1}$. \end{proof} Next, we slightly change some notation used in previous sections in order to deal with competitive systems of two equations.
For $e\in\Sn,$ a radial domain $B\subset \mathbb R^N$, $I\subset \R$ and a pair $v=(v_1,v_2)$ of functions $v_i: \overline{B}\times I\to\R,$ $i=1,2$, we set \begin{equation}\label{difference:function:definition1} \begin{aligned} v_1^e(x,t)&:=v_1(x,t)-v_1(\sigma_e(x),t),\ x\in \overline{B},\ t\in I,\\ v_2^e(x,t)&:=v_2(\sigma_e(x),t)-v_2(x,t),\ x\in \overline{B},\ t\in I. \end{aligned} \end{equation} The same notation is used if the functions do not depend on time. More precisely, for a pair $z=(z_1,z_2)$ of functions $z_i: \overline{B} \to\R,$ $i=1,2$, we set \begin{equation}\label{difference:function:definition2} \begin{aligned} z_1^e(x)&:=z_1(x)-z_1(\sigma_e(x)),\ x\in \overline{B},\\ z_2^e(x)&:=z_2(\sigma_e(x))-z_2(x),\ x\in \overline{B}. \end{aligned} \end{equation} Since $u=(u_1,u_2)$ solves (\ref{model:competitive:neumann}), for fixed $e \in \Sn$ we have \begin{equation*} \begin{aligned} (u_1^e)_t-\mu_1\Delta u_1^e - \hat c_1^e(x,t)u_1^e=&\alpha_1 [\hat u_1 \hat u_2 -u_1u_2]= \alpha_1 [u_1 u_2^e - \hat u_2 u_1^e],\\ (u_2^e)_t-\mu_2\Delta u_2^e - \hat c_2^e(x,t)u_2^e=&\alpha_2 [u_1 u_2 -\hat u_1 \hat u_2]= \alpha_2 [u_2 u_1^e -\hat u_1 u_2^e] \end{aligned} \end{equation*} in $B \times (0,\infty)$ with $\hat u_i(x,t):= u_i(\sigma_e(x),t)$ and \begin{equation*} \hat c_i^e(x,t):= \left\{ \begin{aligned} &\frac{f_i(t,|x|,u_i(x,t))-f_i(t,|x|,u_i(\sigma_e(x),t))}{u_i(x,t)-u_i(\sigma_e(x),t)},&&\quad \text{ if } u_i^e(x,t) \not=0,\\ &0,&&\quad \text{ if } u_i^e(x,t)=0 \end{aligned} \right. \end{equation*} for $i=1,2$. Setting \begin{align*} c^e_1(x,t)&:=\hat c_1^e(x,t)- \alpha_1(|x|,t) u_2(\sigma_e(x),t),\\ c^e_2(x,t)&:=\hat c_2^e(x,t)- \alpha_2(|x|,t) u_1(\sigma_e(x),t) \end{align*} for $x\in B$, $t>0$, we thus obtain the system \begin{equation}\label{linear:neumann} \begin{aligned} (u^e_1)_t-\mu_1\Delta u_1^e -c^e_1 u_1^e&= \alpha_1 u_1 u_2^e\\ (u^e_2)_t-\mu_2\Delta u_2^e -c^e_2 u_2^e & = \alpha_2 u_2 u_1^e \end{aligned} \qquad \text{in $B(e) \times (0,\infty)$} \end{equation} together with the boundary conditions \begin{equation}\label{linear:neumann-boundary} \frac{\partial u^e_i}{\partial \nu}= 0 \quad \text{on $\Sigma_2(e) \times (0,\infty)$},\qquad u^e_i= 0 \quad \text{on $\Sigma_1(e) \times (0,\infty)$,} \end{equation} where the sets $\Sigma_i(e)$ are given as in (\ref{eq:11}) for $i=1,2$. As a consequence of (h1), (h3), and (\ref{eq:12}), we have \begin{equation}\label{coefficient:estimates} \|c^e_1\|_{L^\infty(B \times (0,\infty))} \le M \quad \text{and}\quad \|c^e_2\|_{L^\infty(B \times (0,\infty))} \le M \qquad \text{for all $e\in \Sn$} \end{equation} with some constant $M>0$. Moreover, by making $M$ larger if necessary and using (h2), we may also assume that \begin{equation} \label{eq:18} \frac{1}{M} \le \mu_i(|x|,t) \le M \qquad \text{for $x \in B$, $t>0,$ and $i=1,2$.} \end{equation} We note that, by (h3) and since $u_1,u_2\geq 0$ in $B \times (0,\infty),$ system \eqref{linear:neumann} is a (weakly coupled) cooperative parabolic system. For these systems a variety of estimates are available (see for example \cite{protter} and \cite{polacik:systems}). In particular, Lemma~\ref{hopf:lemma:Neumann:systems} can be applied to study the boundary value problem (\ref{linear:neumann}),~(\ref{linear:neumann-boundary}).
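For the reader's convenience, we record the elementary algebra behind the right hand sides in \eqref{linear:neumann}: with $\hat u_i=u_i\circ\sigma_e$ as above, the definitions $u_1^e=u_1-\hat u_1$ and $u_2^e=\hat u_2-u_2$ yield
$$
u_1 u_2^e-\hat u_2 u_1^e = u_1\hat u_2-u_1u_2-\hat u_2 u_1+\hat u_2\hat u_1=\hat u_1\hat u_2-u_1u_2, \qquad u_2 u_1^e-\hat u_1 u_2^e = u_1u_2-\hat u_1\hat u_2.
$$
In particular, the sign reversal built into the definition of $u_2^e$ in \eqref{difference:function:definition1} is precisely what turns the competitive coupling in \eqref{model:competitive:neumann} into the cooperative coupling in \eqref{linear:neumann}.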
To prove Theorem~\ref{main:theorem:neumann}, we wish to apply Corollary~\ref{sec:symm-char} to the sets \begin{equation} \label{eq:7} \cU:=\omega(u_1) \cup -\omega(u_2) = \{z_1,-z_2\::\: z \in \omega(u)\} \end{equation} and \begin{equation} \label{eq:21} \cN:=\{e\in \Sn\::\: \text{$\exists\ T>0$ s.t. $u^e_i> 0$ in $B(e) \times [T,\infty)$ for $i=1,2$} \}. \end{equation} Note that the equality in (\ref{eq:7}) is a consequence of (\ref{eq:3}). In this case the associated set $\cM_{\cU}$, defined in (\ref{eq:5}), can also be written as $$ \cM_\cU=\{e\in \Sn \::\: z_i^e \ge 0 \quad\text{in $B(e)$ for all $z \in \omega(u)$, $i=1,2$}\}. $$ Thus we obviously have $\cN\subset \cM_{\cU}.$ Moreover, for $e\in \Sn$ as in (h0), we have \begin{align*} u_i^e(\cdot,0)\geq 0,\; u_i^e(\cdot,0)\not\equiv 0 \qquad \text{in $B(e)$ for $i=1,2$.} \end{align*} Lemma~\ref{hopf:lemma:Neumann:systems} then implies that $u_i^e> 0$ in $B(e) \times (0,\infty)$ for $i=1,2$, so that $e \in \cN$ and thus $\cN$ is nonempty. We also note the following. \begin{lemma} \label{M:open} $\cN$ is relatively open in $\Sn$. \end{lemma} \begin{proof} Let $e\in \cN.$ Then $(u_1^e,u_2^e)$ is a solution of \eqref{linear:neumann}, and there is $T>0$ such that $u^e_1$ and $u^e_2$ are positive in $B(e)\times(T,\infty).$ Thus \begin{align*} (u_1^e)_t-\mu_1\Delta u_1^e- c_1^e u_1^e &=\alpha_1 u_1 u_2^e\geq 0,\ \ x\in B(e),\ t>T,\\ (u_2^e)_t-\mu_2\Delta u_2^e- c_2^e u_2^e &=\alpha_2 u_2 u^e_1\geq 0,\ \ x\in B(e),\ t>T, \end{align*} since $\alpha_1$ and $\alpha_2$ are non-negative by hypothesis (h3). Applying Lemma \ref{perturbationlemma} and Remark~\ref{sec:harnack-hopf-type} to the functions $$ \overline {B} \times [0,1] \to \R, \qquad (x,t) \mapsto u_i(x,T+t),\qquad i=1,2, $$ we find that there exists $\rho>0$ such that $u_i^{e'}(\cdot,T+1)>0$ in $B(e')$ for $e'\in\Sn$ with $|e'-e|<\rho.$ Hence, by Lemma \ref{hopf:lemma:Neumann:systems}, $e'\in \cN$ for $e'\in\Sn$ with $|e'-e|<\rho$, and thus $\cN$ is open. \end{proof} In order to apply Corollary~\ref{sec:symm-char}, it now suffices to prove the following. \begin{lemma}\label{normalization:lemma:2} For every $e\in \partial \cN$ and every $z \in \omega(u)$ we have $z^e_1 \equiv z^e_2 \equiv 0$ in $B(e)$. \end{lemma} \begin{proof} Let $z=(z_1,z_2) \in\omega(u)$, and consider an increasing sequence $t_n\to\infty$ with $t_1>5$ and such that $u_i(\cdot,t_n)\to z_i$ uniformly in $\overline B$ for $i=1,2.$ We will only show that $z_2^e \equiv 0$ in $B(e)$ for all $e \in \partial \cN$, since the same argument shows that $z_1^e\equiv 0$ in $B(e)$ for all $e \in \partial \cN$. Since, as noted in Remark \ref{equicontinuity}, $u_2$ and its first derivatives satisfy the H{\"o}lder condition (\ref{equicontinuity:h}), there exists a function $\chi:[0,\sqrt{1+\operatorname{diam}(B)^2}\,] \to [0,\infty)$ with $\lim \limits_{\vartheta \to 0}\chi(\vartheta)=0$ and such that the equicontinuity condition $(E\chi)$ of Lemma~\ref{perturbationlemma} holds for all of the functions \begin{equation} \label{eq:13} \overline B \times [0,1] \to \R, \qquad (x,t) \mapsto u_2(x,\tau +t),\qquad \tau \ge 1. \end{equation} Arguing by contradiction, we now assume that $z_2^{\hat e}\not\equiv 0$ in $B(\hat e)$ for some $\hat e \in \partial \cN$.
By the equicontinuity of the functions in (\ref{eq:13}), there are $\zeta \in(0,\frac{1}{4})$, a nonempty open subset $\Omega \subset \subset B(\hat e),$ and $k_1>0$ such that, after passing to a subsequence, \begin{equation}\label{contradiction:hypothesis} u_2^{\hat e} \ge k_1 \quad \text{on $\Omega \times [t_n-\zeta,t_n+\zeta]\quad$ for all $n \in \mathbb N$.} \end{equation} We now apply a normalization procedure for $u_1$, since we cannot exclude the possibility that $u_1(\cdot,t_n) \to 0$ as $n \to \infty$. Define, for $n\in\mathbb N,$ \begin{equation*} I_n:=[t_n-2,t_n+2] \subset \R,\qquad \beta_n:=\|u_1(\cdot,t_n)\|_{L^\infty(B)} \end{equation*} and the functions $$ v_n : \overline{B} \times I_n \to \R, \qquad v_n(x,t)= \frac{u_1(x,t)}{\beta_n}. $$ By Lemma \ref{normalization:lemma}, there exists $\eta>1$ such that \begin{equation} \frac{1}{\eta} \:\le\: v_n \: \leq\: \eta \quad \text{in $B\times I_n$}\qquad \text{for all $n\in\mathbb N.$} \label{eta} \end{equation} Moreover, we have \begin{align}\label{hoeldercontinuos:1} \sup_{\genfrac{}{}{0pt}{}{\scriptstyle{x,\bar x\in \overline{B},\, t,\bar t\in[s,s+1],}}{{\scriptstyle{x \not= \bar x,\, t \not= \bar t,\, s\in[-1,1]}}} } \frac{|v_n(x,t_n+t)-v_n(\bar x,t_n+\bar t)|}{|x-\bar x|^\gamma+|t-\bar t|^{\frac{\gamma}{2}}}< K, \end{align} and \begin{align*} \sup_{\genfrac{}{}{0pt}{}{\scriptstyle{x,\bar x\in \overline{B},\, t,\bar t\in[s,s+1],}}{{\scriptstyle{x \not= \bar x,\, t \not= \bar t,\, s\in[-1,1]}}} } \frac{|\nabla v_n(x,t_n+t)-\nabla v_n(\bar x,t_n+\bar t)|}{|x-\bar x|^\gamma+|t-\bar t|^{\frac{\gamma}{2}}}< K \end{align*} for all $n\in\mathbb N$ with positive constants $\gamma$ and $K$. This follows from Lemma~\ref{regularity} and the fact that $v_n$ satisfies \begin{equation*} \begin{aligned} (v_n)_t-\mu_1\Delta v_n &= c v_n - \alpha_1 v_n u_2 &&\qquad \text{in $B \times I_n$},\\ \partial_\nu v_n&=0 &&\qquad \text{on $\partial B \times I_n$} \end{aligned} \end{equation*} with $$ c \in L^\infty(B\times (0,\infty)),\qquad c(x,t):= \left\{ \begin{aligned} &\frac{f_1(t,|x|,u_1(x,t))}{u_1(x,t)},&&\quad \text{ if }u_1(x,t) \not=0,\\ &0,&&\quad \text{ if }u_1(x,t)=0. \end{aligned} \right. $$ As a consequence, by adjusting the function $\chi$ above, we may also assume that all of the functions \begin{equation*} \overline B \times [0,1] \to \R, \; (x,t) \mapsto v_n(x,\tau+t),\quad \text{$|t_n - \tau|\le 1$ for some $n \in \mathbb N$} \end{equation*} satisfy the equicontinuity condition $(E\chi)$ of Lemma~\ref{perturbationlemma}. For $e \in \Sn$, $n \in \mathbb N$ we also consider $$ v_n^e : \overline{B(e)} \times I_n \to \R,\qquad v_n^e(x,t):= v_n(x,t)-v_n(\sigma_e(x),t), $$ and we note that \begin{equation}\label{normalized:equations} \begin{aligned} (v_n^e)_t-\mu_1\Delta v_n^e-c_1^e v_n^e &=\alpha_1 v_n u_2^e &&\qquad \text{in $B(e) \times I_n$,}\\ (u_2^e)_t-\mu_2\Delta u_2^e- c_2^e u_2^e &=\alpha_2 \beta_n u_2 v^e_n &&\qquad \text{in $B(e) \times I_n$,}\\ \partial_\nu v_n^e=\partial_\nu u_2^e &= 0 &&\qquad \text{on $\Sigma_2(e) \times I_n$,}\\ v^e_n(x,t)=u_2^e(x,t) &= 0 &&\qquad \text{on $\Sigma_1(e) \times I_n$} \end{aligned} \end{equation} with $\Sigma_i(e)$ as defined in (\ref{eq:11}). We now distinguish two cases. \begin{align*} \hspace{-4cm}\text{ \underline{Case 1:} }\ \ \ \limsup_{n\to\infty}\|v_n^{\hat e}\|_{L^\infty(B(\hat e)\times [t_n-\zeta,t_n+\zeta])}>0.
\end{align*} In this case, by \eqref{hoeldercontinuos:1}, there are $d \in (0,1)$, $k_2>0,$ and $t^*\in[-\zeta,\zeta]$ such that, after passing to a subsequence, $$ \sup \{v^{\hat e}_n(x,t_n + t^*) : x \in B({\hat e}),\: x\cdot {\hat e} \ge d\} \ge k_2 \qquad \text{for $n \in \mathbb N$.} $$ Without loss, we may assume that $d< \min \{x \cdot {\hat e}\::\: x \in \Omega\}$, so that also $$ \sup \{u^{\hat e}_2(x,t_n + t^*) : x \in B({\hat e}),\: x\cdot {\hat e} \ge d\} \ge k_1 \qquad \text{for $n \in \mathbb N$} $$ by (\ref{contradiction:hypothesis}). Next, let $k:= \frac{1}{2}\min\{k_1,k_2\},$ and let $\rho>0$ be the constant given by Lemma~\ref{perturbationlemma} for $M$ satisfying (\ref{coefficient:estimates}),~(\ref{eq:18}) and $d$, $k$, $\chi$ as chosen above. Since $\hat e \in \partial \cN$, there exists $e \in \cN$ such that $|e-\hat e|<\frac{\rho}{2}$ and, by equicontinuity, \begin{align*} &\sup \{v^e_n(x,t_n + t^*) : x \in B(e),\: x\cdot e \ge d\} \ge k,\\ &\sup \{u^e_2(x,t_n + t^*) : x \in B(e),\: x\cdot e \ge d\} \ge k \end{align*} for all $n \in \mathbb N$. Since $e\in\cN$ we can fix $n\in\mathbb N$ such that \begin{align*} v_n^e(x,t_n+t^*-\frac{1}{4})\geq 0,\quad u_2^e(x,t_n+t^*-\frac{1}{4})\geq 0 \qquad \text{for all }x\in B(e). \end{align*} Then applying Lemma~\ref{perturbationlemma} to the functions $$ \overline B \times [0,1] \to \R, \qquad (x,t) \mapsto u_2(x,t_n +t^*-\frac{1}{4} +t),\qquad (x,t) \mapsto v_n(x,t_n +t^*-\frac{1}{4} +t), $$ we conclude that $$ u_{2}^{\bar e}(\cdot,t_{n}+t^*+\frac{3}{4})>0 \quad \text{and}\quad v_{n}^{\bar e}(\cdot,t_{n}+t^*+\frac{3}{4})>0 \qquad \text{in $B(\bar e)$} $$ for all $\bar e\in \Sn$ with $|\bar e-e|<\rho$, and thus in particular for $\bar e = \hat e$. This yields $u_i^{\hat e}(\cdot,t_{n}+t^*+\frac{3}{4})>0$ in $B(\hat e)$ for $i=1,2$, and thus $\hat e \in \cN$ by Lemma \ref{hopf:lemma:Neumann:systems}. Since $\cN \subset \Sn$ is relatively open by Lemma \ref{M:open}, this contradicts the hypothesis that $\hat e \in \partial \cN$. \begin{align}\label{normalization:goes:to:zero} \hspace{-4cm}\text{ \underline{Case 2:} }\ \ \ \lim_{n\to\infty}\|v_n^{\hat e}\|_{L^\infty(B({\hat e})\times [t_n-\zeta,t_n+\zeta])}=0. \end{align} In this case we fix a nonnegative function $\varphi\in C_c^\infty(B({\hat e}) \times (-\zeta,\zeta))$ with $\varphi \equiv 1$ on $\Omega \times (-\frac{\zeta}{2},\frac{\zeta}{2})$. Moreover, we let $$ \Omega_n:= B({\hat e}) \times (t_n-\zeta,t_n+\zeta)\qquad \text{and}\qquad \varphi_n \in C_c^\infty(\Omega_n),\quad \varphi_n(x,t):= \varphi(x,t-t_n). $$ Setting $(u_2^{\hat e})^+:=\max\{u_2^{\hat e},0\}$ and $(u_2^{\hat e})^-:=-\min\{u_2^{\hat e},0\},$ we find by (h3), (\ref{contradiction:hypothesis}) and \eqref{eta} that \begin{align*} A_n:= \int_{\Omega_n} &\alpha_1 v_n u_2^{\hat e}\varphi_n d(x,t) =\int_{\Omega_n} \alpha_1 v_n [(u_2^{\hat e})^+-(u_2^{\hat e})^-]\varphi_n d(x,t)\nonumber\\ &\geq \frac{\alpha_*}{\eta} \int_{\Omega_n} (u_2^{\hat e})^+\varphi_n d(x,t)- \alpha^* \eta \,\|(u_2^{\hat e})^-\|_{L^\infty(\Omega_n)}\,\|\varphi_n\|_{L^1(\Omega_n)}\nonumber\\ &\geq \frac{\alpha_*}{\eta} k_1 |\Omega| \zeta \;-\;\alpha^* \eta\, \|(u_2^{\hat e})^-\|_{L^\infty(\Omega_n)}\,\|\varphi\|_{L^1(\Omega \times (-\zeta,\zeta))} \end{align*} for $n \in \mathbb N$; moreover, $\lim \limits_{n \to \infty}\|(u_2^{\hat e})^-\|_{L^\infty(\Omega_n)}=0$ since ${\hat e}\in \partial \cN \subset \cM_\cU$. Hence $\liminf \limits_{n \to \infty} A_n>0$.
On the other hand, integrating by parts, we have by \eqref{normalized:equations} that \begin{align*} A_n &=\int_{\Omega_n}\!\![(v_n^{\hat e})_t-\mu_1\Delta v_n^{\hat e}- c_1^{\hat e} v_n^{\hat e}]\varphi_n d(x,t)\\ &=-\int_{\Omega_n}\!\![v_n^{\hat e}(\varphi_n)_t +v_n^{\hat e} \Delta (\mu_1\varphi_n) + c_1^{\hat e} v_n^{\hat e}\varphi_n ]d(x,t)\\ &\leq \|v^{\hat e}_n\|_{L^\infty(\Omega_n)} \int_{\Omega_n}\Bigl(|(\varphi_n)_t |+ |\Delta(\mu_1 \varphi_n)| + M \varphi_n\Bigr) d(x,t) \end{align*} for $n \in \mathbb N$. Invoking (h2) and \eqref{normalization:goes:to:zero}, we conclude that $\limsup \limits_{n\to\infty} A_n\le0.$ So we have obtained a contradiction again, and thus the claim follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{main:theorem:neumann} (completed)] By Lemmas~\ref{M:open} and \ref{normalization:lemma:2} and the remarks before Lemma~\ref{M:open}, the assumptions of Corollary~\ref{sec:symm-char} are satisfied with $\cU$ and $\cN$ as defined in (\ref{eq:7}) and (\ref{eq:21}). Consequently, there exists $p \in \Sn$ such that every $z \in \cU$ is foliated Schwarz symmetric with respect to $p$. By definition of $\cU$, this implies that every $z=(z_1,z_2) \in \omega(u)$ has the property that $z_1$ is foliated Schwarz symmetric with respect to $p$ and $z_2$ is foliated Schwarz symmetric with respect to $-p$. \end{proof} \section{The cooperative case and other problems}\label{sec:other:problems} In this section we first complete the \begin{proof}[Proof of Theorem~\ref{main:theorem:neumann-cooperative}] Let $u_1,u_2\in C^{2,1}(\overline{B}\times(0,\infty))\cap C(\overline{B}\times[0,\infty))$ be functions such that $u=(u_1,u_2)$ solves \eqref{model:cooperative:neumann}, and suppose that $(h0)'$, $(h1)$--$(h3)$ and (\ref{eq:12}) are satisfied. The proof is almost exactly the same as the one of Theorem~\ref{main:theorem:neumann} with only two changes. The first change concerns the definitions of $v_2^e$ and $z_2^e$ in (\ref{difference:function:definition1}) and (\ref{difference:function:definition2}). More precisely, we now set $v_i^e(x,t)= v_i(x,t)-v_i(\sigma_e(x),t)$ and $z_i^e(x)= z_i(x)-z_i(\sigma_e(x))$ for $i=1,2$. With this change, we again arrive at the linearized system~(\ref{linear:neumann}). Considering now the sets \begin{equation*} \cU:=\omega(u_1) \cup \omega(u_2) = \{z_1,z_2\::\: z \in \omega(u)\} \end{equation*} in place of (\ref{eq:7}) and \begin{equation*} \cN:=\{e\in \Sn\::\: \text{$\exists\ T>0$ s.t. $u^e_i> 0$ in $B(e) \times [T,\infty)$ for $i=1,2$} \}, \end{equation*} we may now validate the assumptions of Corollary~\ref{sec:symm-char} in exactly the same way as in Section~\ref{normalization:argument}. Hence the proof is complete. \end{proof} \begin{remark} (i) Note that both in Theorem~\ref{main:theorem:neumann} and in Theorem~\ref{main:theorem:neumann-cooperative} we assume that the components $u_i$ are non-negative, and this assumption is essential for the cooperativity of the linearized system~(\ref{linear:neumann}). 
Without the sign restriction, systems~(\ref{model:competitive:neumann}) and (\ref{model:cooperative:neumann}) arise from each other by replacing $u_i$ by $-u_i$ for $i=1,2$ and adjusting $f$ accordingly.\\ (ii) As a further example, we wish to mention the cubic system \begin{equation}\label{cubic:system} \begin{aligned} (u_1)_t-\Delta u_1 &=\lambda_1u_1+\gamma_1u_1^3-\alpha_1u^{2}_2 u_1 &&\qquad \text{in $B \times (0,\infty)$,}\\ (u_2)_t-\Delta u_2 &=\lambda_2u_2+\gamma_2u_2^3-\alpha_2u_1^2 u_2&&\qquad \text{in $B \times (0,\infty)$,}\\ \partial_\nu u_1&=\partial_\nu u_2=0&&\qquad \text{on $\partial B \times (0,\infty)$},\\ u_i(x,0)&=u_{0,i}(x) \ge 0&& \qquad \text{for $x\in B$, $i=1,2$,} \end{aligned} \end{equation} where $\lambda_i,\gamma_i,$ and $\alpha_i$ are positive constants. The elliptic counterpart of this system is being studied extensively due to its relevance in the study of binary mixtures of Bose-Einstein condensates, see \cite{esry}. The asymptotic symmetry of uniformly bounded classical solutions of this problem satisfying the initial reflection inequality condition $(h0)$ can be characterized in the same way as in Theorem~\ref{main:theorem:neumann}. To see this, minor adjustments are needed in the proof of Theorem~\ref{main:theorem:neumann} to deal with a slightly different linearized system. Details will be given in \cite{saldana-phd}. Symmetry aspects of the elliptic counterpart of (\ref{cubic:system}) have been studied in \cite{weth:tavares}.\\ (iii) Our method breaks down if the coupling term has different signs in the components, as e.g. in a predator-prey type system \begin{equation*} \begin{aligned} (u_1)_t-\mu_1\Delta u_1 &=f_1(t,|x|,u_1)+\alpha_1 u_1u_2&&\qquad \text{in $B \times (0,\infty)$},\\ (u_2)_t-\mu_2\Delta u_2 &=f_2(t,|x|,u_2)-\alpha_2 u_1u_2&&\qquad \text{in $B \times (0,\infty)$}. \end{aligned} \end{equation*} In this case, there seems to be no way to derive a cooperative linearized system of the type (\ref{linear:neumann}) for difference functions related to hyperplane reflections. The asymptotic shape of solutions for this system (satisfying Dirichlet or Neumann boundary conditions) remains an interesting open problem.\\ (iv) Consider general systems of the form \begin{equation}\label{general:cooperative:model} \begin{aligned} (u_i)_t-\Delta u_i &=f_i(t,|x|,u)&&\qquad \text{in $B \times (0,\infty)$},\\ \partial_{\nu} u_i&=0&&\qquad \text{on $\partial B \times (0,\infty)$},\\ \end{aligned} \end{equation} for $i=1,2$, where the nonlinearities $f_i:[0,\infty)\times I_B \times \mathbb R^2 \to \mathbb R$ are locally Lipschitz in $u=(u_1,u_2)$ uniformly with respect to $r\in I_B$ and $t>0$. We call (\ref{general:cooperative:model}) an {\em irreducible cooperative system} if for every $m>0$ there is a constant $\sigma>0$ such that $$ \frac{\partial f_{i}(t,r,u)}{\partial u_j} \: \geq \: \sigma\quad \left \{ \begin{aligned} &\text{for every $i,j \in \{1,2\}$, $i \not=j$, $r\in I_B$, $t>0,$ $|u|\le m$}\\ &\text{such that the derivative exists.} \end{aligned} \right. $$ For this class of systems a symmetry result similar to Theorem~\ref{main:theorem:neumann-cooperative} can be derived {\em even for sign changing solutions}, and in fact the proof is simpler. The precise statement and detailed arguments are given in \cite{saldana-phd}, while we only discuss the key aspects here. 
We first note that, for a given uniformly bounded classical solution $u=(u_1,u_2)$ of (\ref{general:cooperative:model}) and $e \in \Sn$, we can use the Hadamard formulas as in \cite{polacik:systems} to derive a cooperative system for the functions $(x,t) \mapsto u^e_i(x,t):=u_i(x,t)-u_i(\sigma_e(x),t)$. This system has the form \begin{align*} (u_i^e)_t-\Delta u_i^e = \sum_{j=1}^2 c^e_{ij} u^e_j \qquad \text{in $B(e)\times (0,\infty)$} \end{align*} with functions $c^e_{ij} \in L^\infty(B\times (0,\infty))$, $i,j=1,2$ such that $$ \inf_{B(e) \times (0,\infty)}c^e_{ij}>0 \qquad \text{for $i\neq j$.} $$ With the help of the latter property, one can prove that for every sequence of positive times $t_n$ with $t_n \to \infty$ and every $e \in \Sn$ we have the equivalence $$ \lim_{n \to \infty}\|u_1^e(\cdot,t_n)\|_{L^\infty(B(e))}= 0 \qquad \Longleftrightarrow \qquad \lim_{n \to \infty}\|u_2^e(\cdot,t_n)\|_{L^\infty(B(e))}= 0. $$ As a consequence, semitrivial limit profiles $(z_1,0), (0,z_2) \in \omega(u)$ have the property that the nontrivial component must be a radial function, and hence no normalization procedure as in Section~\ref{normalization:argument} is needed to deal with these profiles. This is the reason why the positivity of components is not needed in this case. Details are given in \cite{saldana-phd}. Note that the cooperative system (\ref{model:cooperative:neumann}) is {\em not} irreducible. \\ (v) The arguments and results for irreducible cooperative systems sketched in (iv) also apply to a corresponding system with $n\geq 3$ equations. On the other hand, one may also consider cooperative systems of the form \begin{equation}\label{model:cooperative:neumann_1} (u_i)_t-\mu_i(|x|,t)\Delta u_i =f_i(t,|x|,u_i)+\sum_{\stackrel{j=1}{j \not=i}}^n \alpha_{ij}(|x|,t) u_iu_j, \quad i=1,\dots,n, \end{equation} with $n \ge 3$ equations which are not irreducible. Assume that $(h1)$ and $(h2)$ hold for $f_i$ and $\mu_i$, $i=1,\dots,n$, and that $\alpha_{ij} \in L^\infty(I_B \times (0,\infty))$ are nonnegative functions for $i,j=1,\dots,n$, $i \not = j$. It is then an open question which additional positivity assumptions on the coefficients $\alpha_{ij}$ are required for the corresponding generalization of Theorem~\ref{main:theorem:neumann-cooperative}. Similar arguments as in Section 5 apply in the case where $\alpha_{ij} \ge \alpha_*>0$ for $i,j=1,\dots,n$, $i \not = j$, but we do not think that this assumption is optimal. We thank the referee for pointing out this question. \end{remark} \section{Appendix} Here we show the existence of positive solutions of the elliptic system~(\ref{Lotka:Volterra:system-elliptic}) without foliated Schwarz symmetric components. More precisely, we have the following result. \begin{theo}\label{thm:local:maxima:system} Let $k\in\mathbb N$. Then there exist $\eps, \lambda >0$ such that (\ref{Lotka:Volterra:system-elliptic}) admits a positive classical solution $(u_1,u_2)$ in $B:= B_\eps= \{x \in \R^2\::\: 1 - \eps < |x| <1\} \subset \R^2$ such that the angular derivatives $\frac{\partial u_i}{\partial \theta}$ of the components change sign at least $k$ times on every circle contained in $\overline B_\eps$. \end{theo} \begin{proof} We apply a classical bifurcation result of Crandall and Rabinowitz, see \cite[Lemma 1.1]{crandall.rabinowitz}.
Let $\tilde Y$ denote the space of functions $u \in C(\overline{B_\eps})$ which are symmetric with respect to reflection at the $x_1$-axis and $\tilde X$ the space of all $u \in \tilde Y \cap C^2(\overline {B_\eps})$ with $\partial_\nu u=0$ on $\partial B_\eps$. Then $\tilde X$ and $\tilde Y$ are Banach spaces with respect to the norms of $C^2(\overline{B_\eps})$, $C(\overline {B_\eps})$, respectively. Let $X:= \tilde X \times \tilde X$, $Y:= \tilde Y \times \tilde Y$, and let $F: (0,\infty) \times X \to Y$ be given by $$ F(\lambda, u)= {\Delta u_1 + \lambda(u_1+\lambda) - (u_1+\lambda)(u_2+\lambda) \choose \Delta u_2+ \lambda(u_2+\lambda) - (u_1+\lambda)(u_2+\lambda)} = {\Delta u_1 -(u_1+\lambda) u_2 \choose \Delta u_2 - (u_2+\lambda)u_1} $$ Then we have $F(\lambda,0)= 0$ for all $\lambda >0$. Moreover, $u=(u_1,u_2) \in X$ solves (\ref{Lotka:Volterra:system-elliptic}) if and only if $F(\lambda, u_1-\lambda,u_2-\lambda) = 0$. We consider the partial derivative $$ \partial_u F : (0,\infty) \times X \to \cL(X,Y),\qquad \partial_u F(\lambda,u)v= { \Delta v_1 -u_2 v_1 - (u_1+\lambda)v_2 \choose \Delta v_2 - u_1v_2 - (u_2+\lambda) v_1}. $$ For $\lambda>0$ we put $$ A_\lambda:= \partial_u F(\lambda,0) \in \cL(X,Y),\qquad A_\lambda v = {\Delta v_1-\lambda v_2 \choose \Delta v_2 - \lambda v_1}, $$ and we let $N(A_\lambda)$ resp. $R(A_\lambda)$ denote the kernel and the image of $A_\lambda$, respectively. If $v \in N(A_\lambda)$, then $c:= v_1+v_2$ satisfies $-\Delta c +\lambda c=0$ in $B_\eps$ and $\partial_\nu c= 0$ on $\partial B_\eps$, which easily implies that $c \equiv 0$ since $\lambda > 0$. Consequently, $v \in N(A_\lambda)$ if and only if $v_2=-v_1$ and $$ -\Delta v_1 = \lambda v_1 \quad \text{in $B_\eps$,} \qquad \partial_\nu v_1= 0 \quad \text{on $\partial B_\eps$.} $$ By separation of variables, there exists $k \in {\mathbb {N}} \cup \{0\}$ such that in polar coordinates we have $v_1(r,\theta)=\varphi(r) \cos(k \theta)$, where $\varphi \in C^2([1-\eps,1])$ satisfies \begin{equation} \label{eq:23} -\Delta_r \varphi +\frac{k^2}{r^2} \varphi= \lambda \varphi \quad \text{in $(1-\eps,1)$,} \qquad \partial_r \varphi(1-\eps)= \partial_r \varphi(1)=0 \end{equation} with $\Delta_r= \partial_{rr} + \frac{1}{r}\partial_r$. Let $\lambda_j(k, \eps) \ge 0$ denote the $j$-th eigenvalue of (\ref{eq:23}), counted with multiplicity in increasing order. By Sturm-Liouville theory, these eigenvalues are simple. It is easy to see that, for fixed $k \in {\mathbb{N}} \cup \{0\}$, we have $\lambda_1(k, \eps) \to k^2$ and $\lambda_{j}(k,\eps) \to \infty$ for $j \ge 2$ as $\eps \to 0$. Moreover, $\lambda_j(k,\eps)$ is strictly increasing in $k$ for fixed $\eps>0$. We now fix $k \in {\mathbb{N}}$, and we choose $\eps= \eps(k)>0$ small enough such that $\lambda_2(0,\eps) >\lambda_1(k,\eps)$. We then set $\lambda_*= \lambda_1(k,\eps)>0$, and we let $\varphi$ denote the unique positive eigenfunction of (\ref{eq:23}) for $\lambda= \lambda_*$ with $\|\varphi\|_\infty=1$. It then follows that $N(A_{\lambda_*})$ is spanned by $(\psi,-\psi) \in X$ with $\psi(r,\theta)= \varphi(r) \cos(k \theta)$. Moreover, it easily follows from integration by parts that $\int_{B_\eps} \psi (v_1 - v_2)\,dx = 0$ for every $v=(v_1,v_2) \in R(A_{\lambda_*})$. Since $A_{\lambda_*}$ is a Fredholm operator of index zero, we thus conclude that $$ R(A_{\lambda_*})= \Bigl \{v \in Y\::\: \int_{B_\eps} \psi (v_1 - v_2)\,dx = 0 \Bigr\}. 
$$ In particular, since $\frac{d}{d \lambda} A_\lambda \,v = {-v_2 \choose -v_1}$ for $v \in X$ and $\lambda>0$, we find that $\frac{d}{d \lambda} A_\lambda \Big|_{\lambda=\lambda_*}(\psi,-\psi)=(\psi,-\psi) \not \in R(A_{\lambda_*})$. Hence the assumptions of \cite[Lemma 1.1]{crandall.rabinowitz} are satisfied, and thus there exist $\delta>0$ and $C^1$-functions $\lambda: (-\delta,\delta) \to (0,\infty)$ and $u: (-\delta,\delta) \to X$ such that $\lambda(0)= \lambda_*$, $F(\lambda(t),u(t))=0$ for all $t \in (-\delta,\delta)$ and $u(t)= t (\psi,-\psi) + o(t)$ in $X$. Hence, fixing $t \in (-\delta,\delta) \setminus \{0\}$ sufficiently close to zero and considering $\lambda:= \lambda(t)$, we find that $u=(u_1,u_2)$ with $u_1= u_1(t)+ \lambda(t)$, $u_2=u_2(t)+\lambda(t)$ is a positive solution of (\ref{Lotka:Volterra:system-elliptic}) such that the angular derivatives $\frac{\partial u_i}{\partial \theta}$ of the components change sign at least $k$ times on every circle contained in $\overline B_\eps$. \end{proof}
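The choice of $\eps=\eps(k)$ in the preceding proof rests on the asymptotics $\lambda_1(k,\eps)\to k^2$ and $\lambda_j(k,\eps)\to\infty$ for $j\ge 2$ as $\eps\to 0$ in the eigenvalue problem (\ref{eq:23}). The following short numerical sketch makes this behaviour visible. It is included for illustration only; the discretization (a second order finite difference scheme with ghost point reflection for the Neumann conditions) and all parameter values are ad hoc choices and not part of the preceding argument.
\begin{verbatim}
import numpy as np

def lambda1(k, eps, M=400):
    # Smallest eigenvalue of  -phi'' - phi'/r + (k^2/r^2) phi = lam * phi
    # on (1-eps, 1) with phi'(1-eps) = phi'(1) = 0, cf. problem (eq:23).
    h = eps / M
    r = 1.0 - eps + h * np.arange(M + 1)
    A = np.zeros((M + 1, M + 1))
    for j in range(M + 1):
        A[j, j] = 2.0 / h**2 + k**2 / r[j]**2
        if j > 0:
            A[j, j - 1] = -1.0 / h**2 + 1.0 / (2 * h * r[j])
        if j < M:
            A[j, j + 1] = -1.0 / h**2 - 1.0 / (2 * h * r[j])
    A[0, 1] = -2.0 / h**2      # Neumann at r = 1 - eps: ghost point phi_{-1} = phi_1
    A[M, M - 1] = -2.0 / h**2  # Neumann at r = 1:       ghost point phi_{M+1} = phi_{M-1}
    return min(np.linalg.eigvals(A).real)

for eps in (0.2, 0.1, 0.05, 0.01):
    print(eps, lambda1(3, eps))  # tends to k^2 = 9 as eps -> 0
\end{verbatim}
For $k=3$ the printed values approach $k^2=9$ as $\eps$ decreases, in line with the limits used above to arrange $\lambda_2(0,\eps)>\lambda_1(k,\eps)$.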
\section{Introduction}\label{Sec:intro} \input{01-tacnuc-intro.tex} \section{Example 1: Concept lattices and poset bicompletions}\label{Sec:FCA} \input{02-tacnuc-FCA.tex} \section{Example 2: Nuclei in linear algebra}\label{Sec:lin} \input{03-tacnuc-lin.tex} \section{Example 3: Nuclear Chu spaces}\label{Sec:chu} \input{04-tacnuc-chu.tex} \section{Example $\infty$: Nuclear adjunctions, monads, comonads} \label{Sec:cat} \input{05-tacnuc-cat.tex} \section{Theorem}\label{Sec:Theorem} \input{06-tacnuc-thm.tex} \section{Propositions}\label{Sec:props} \input{07-tacnuc-props.tex} \section{Simple nucleus}\label{Sec:simple} \input{08-tacnuc-simple.tex} \section{Little nucleus}\label{Sec:little} \input{09-tacnuc-little.tex} \section{Example 0: The Kan adjunction}\label{Sec:HT} \input{10-tacnuc-kan.tex} \section{What?} \label{Sec:what} \input{11-tacnuc-what.tex} \bibliographystyle{plain} \subsection{Nuclear adjunctions and the adjunction nuclei} \subsubsection{Definition} We say that an adjunction $F = (\adj F:{\mathbb B}\to {\mathbb A})$ is \emph{nuclear}\/ when the right adjoint $\radj F$ is monadic and the left adjoint $\ladj F$ is comonadic. This means that the categories ${\mathbb A}$ and ${\mathbb B}$ determine one another, and can be reconstructed from each other: \begin{itemize} \item $\radj F$ is monadic when ${\mathbb B}$ is equivalent to the category $\Emm {\mathbb A} F$ of algebras for the monad $\lft F = \radj F \ladj F:{\mathbb A}\to {\mathbb A}$, whereas \item $\ladj F$ is comonadic when ${\mathbb A}$ is equivalent to the category $\Emc {\mathbb B} F$ of coalgebras for the comonad $\rgt F = \ladj F \radj F:{\mathbb B}\to {\mathbb B}$. \end{itemize} The situation is reminiscent of Maurits Escher's \emph{``Drawing hands''} in Fig.~\ref{Fig:escher}. \begin{figure}[!ht] \begin{center} \hspace{-2em} \raisebox{2cm}{$\begin{tikzar}[row sep=2.5cm,column sep=3cm] {\mathbb A} \arrow[phantom]{d}[description]{\dashv} \arrow[loop, out = 135, in = 45, looseness = 4]{}[swap]{\lft F} \arrow[bend right = 13]{d}[swap]{\ladj F} \arrow{r}[description]{\mbox{\Huge$\simeq$}} \& \Emc{\mathbb B} F \arrow[phantom]{d}[description]{{\dashv}} \arrow[bend right = 13]{d}[swap]{\lnadj{F}} \\ {\mathbb B} \arrow[loop, out = -45, in=-135, looseness = 6]{}[swap]{\rgt F} \arrow{r}[description]{\mbox{\Huge$\simeq$}} \arrow[bend right = 13]{u}[swap]{\radj F} \& \Emm {\mathbb A} F \arrow[bend right = 13]{u}[swap]{\rnadj{F}} \end{tikzar}$} \hspace{4em} \includegraphics[height=4cm,angle=90,origin=c]{escher-hands.eps} \caption{An adjunction $\left(\adj F\right)$ is nuclear when ${\mathbb A} \simeq \Emc {\mathbb B} F$ and ${\mathbb B} \simeq \Emm {\mathbb A} F$. } \label{Fig:escher} \end{center} \end{figure} \subsubsection{Result} The nucleus construction $\NucL$ extracts from any adjunction $F$ its nucleus $\NucL F$ \begin{equation}\label{eq:NucL} \prooftree F = (\adj F:{\mathbb B}\to {\mathbb A}) \justifies \NucL F = \left(\nadj F\colon \Emm {\mathbb A} F \to \Emc {\mathbb B} F \right) \endprooftree \end{equation} The functor $\rnadj F$ is formed by composing the forgetful functor $\Emm {\mathbb A} F \to {\mathbb A}$ with the comparison functor ${\mathbb A} \to \Emc {\mathbb B} F$, whereas $\lnadj F$ is the composite of the forgetful functor $\Emc {\mathbb B} F \to {\mathbb B}$ with the comparison ${\mathbb B}\to \Emm {\mathbb A} F$. This gives the left-hand square in Fig.~\ref{Fig:nucadj}.
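Before developing this in full, it may help to see the construction in its simplest, posetal incarnation, anticipating the concept lattice example of Sec.~\ref{Sec:FCA}. A binary relation between two sets induces an antitone Galois connection between their powersets; both composites of the two derivation operators are closure operators, and the nucleus boils down to the complete lattice of \emph{formal concepts}: the pairs fixed by both operators. The following toy computation enumerates this nucleus by brute force; the ``context'' of objects, attributes and incidences is invented purely for illustration.
\begin{verbatim}
from itertools import chain, combinations

# A toy formal context: objects X, attributes Y, incidence R (invented).
X = {"dove", "hen", "owl"}
Y = {"flies", "nocturnal", "domestic"}
R = {("dove", "flies"), ("hen", "domestic"),
     ("owl", "flies"), ("owl", "nocturnal")}

def up(S):    # S <= X  |->  attributes shared by all objects in S
    return {y for y in Y if all((x, y) in R for x in S)}

def down(T):  # T <= Y  |->  objects having every attribute in T
    return {x for x in X if all((x, y) in R for y in T)}

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

# Concepts = pairs (extent, intent) fixed by the closure S |-> down(up(S)).
concepts = {(frozenset(down(up(set(S)))), frozenset(up(set(S))))
            for S in powerset(X)}
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "|", sorted(intent))
\end{verbatim}
The five concepts printed for this context form the nucleus of the induced Galois adjunction, and the restricted adjunction between the two fixed-point lattices is an isomorphism; this is the posetal shadow of the nuclearity established below.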
\begin{figure}[!ht] \[\begin{tikzar}[row sep=2.5cm,column sep=3cm] {\mathbb A} \arrow[phantom]{d}[description]{\dashv} \arrow[loop, out = 135, in = 45, looseness = 4]{}[swap]{\lft F} \arrow[bend right = 13]{d}[swap]{\ladj F} \arrow{r} \& \Emc{\mathbb B} F \arrow[loop, out = 135, in = 45, looseness = 2.5]{}[swap]{\Lft F} \arrow[phantom]{d}[description]{{\dashv}} \arrow[bend right = 13]{d}[swap]{\lnadj{F}} \arrow{r}[description]{\mbox{\Huge$\simeq$}} \& \left(\Emm{\mathbb A} F\right)^{\Rgt F} \arrow[bend right = 13]{d}[swap]{\lnnadj F} \arrow[phantom]{d}[description]{\dashv} \\ {\mathbb B} \arrow[loop, out = -45, in=-135, looseness = 6]{}[swap]{\rgt F} \arrow{r} \arrow[bend right = 13]{u}[swap]{\radj F} \& \Emm {\mathbb A} F \arrow[loop, out = -45, in=-135, looseness = 6]{}[swap]{\Rgt F} \arrow[bend right = 13]{u}[swap]{\rnadj{F}} \arrow{r}[description]{\mbox{\Huge$\simeq$}} \& \left(\Emc{\mathbb B} F\right)^{\Lft F}\arrow[bend right = 13]{u}[swap]{\rnnadj F} \end{tikzar}\] \caption{The nucleus construction induces an idempotent monad on adjunctions.} \label{Fig:nucadj} \end{figure} We show that the functors $\lnadj F$ and $\rnadj F$ are adjoint, which means that we can iterate the nucleus construction $\NucL$ in \eqref{eq:NucL} and induce a tower of adjunctions \begin{equation}\label{eq:towadj} F\ \to\ \NucL F\ \to\ \NucL \NucL F\ \to\ \NucL\NucL\NucL F\ \to\ \cdots \end{equation} We show that $\NucL F = \left(\nadj F\right)$ is a \emph{nuclear}\/ adjunction, which means that the right-hand square in Fig.~\ref{Fig:nucadj} is an equivalence of adjunctions. The tower in \eqref{eq:towadj} thus settles at the second step. The $\NucL$-construction is an \emph{idempotent monad}\/ on adjunctions. Since the adjunctions form a 2-category, $\NucL$ is a 2-monad. We emphasize that its idempotence is \emph{strict}, i.e.\ up to a natural family of equivalences, and not \emph{lax}, i.e.\ up to a natural family of adjunctions. While lax idempotence is frequently encountered and well-studied in categorical algebra \cite{Kelly-Lack:property,KockA:zoeberlein,StreetR:fib-bicat,Zoeberlein}\footnote{Monads over 2-categories and bicategories have been called \emph{doctrines} \cite{LawvereFW:doctrine}, and the lax idempotent ones are often called the \emph{Kock-Z\"oberlein doctrines}\/ \cite{StreetR:fib-bicat}.}, strictly idempotent categorical constructions are relatively rare, and occur mostly in the context of absolute completions. The nucleus construction suggests a reason \cite{PavlovicD:bicompletions}. \subsubsection{Upshot} The fact that the adjunction $\nadj F$ is nuclear means that, for any adjunction $F = \left(\adj F\right)$, the category of algebras $\Emm {\mathbb A} F$ and the category of coalgebras $\Emc {\mathbb B} F$ can be reconstructed from one another: $\Emm {\mathbb A} F$ as a category of algebras over $\Emc {\mathbb B} F$, and $\Emc {\mathbb B} F$ as a category of coalgebras over $\Emm {\mathbb A} F$. They are always an instance of the Escher situation in Fig.~\ref{Fig:escher}. Simplifying these mutual reconstructions provides a new view of the final resolutions of monads and comonads, complementing the original Eilenberg-Moore construction \cite{Eilenberg-Moore}. It was described in \cite{PavlovicD:LICS17} as a programming tool, and was used as a mathematical tool in \cite{PavlovicD:bicompletions}.
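The Escher situation can already be tasted at the toy level of posets, where everything can be checked mechanically. The following Python sketch is an illustration of ours, not part of the formal development: the Galois connection between two finite chains is ad hoc, and all names are our own. It computes the induced closure and interior operators, their fixed points, and confirms that iterating the construction changes nothing further, i.e.\ that the tower \eqref{eq:towadj} settles at the second step.
\begin{verbatim}
# A toy nucleus (our ad hoc illustration): a Galois connection f -| g
# between two finite chains, its closure/interior, and its idempotence.
A = range(5)                          # chain 0 <= 1 <= ... <= 4
B = range(3)                          # chain 0 <= 1 <= 2
f = lambda x: min(x, 2)               # left adjoint  f : A -> B
g = lambda y: y if y < 2 else 4       # right adjoint g : B -> A

# the adjunction law:  f(x) <= y  iff  x <= g(y)
assert all((f(x) <= y) == (x <= g(y)) for x in A for y in B)

closure  = lambda x: g(f(x))          # monad on A:   x <= c(x) = c(c(x))
interior = lambda y: f(g(y))          # comonad on B: y >= k(y) = k(k(y))
assert all(x <= closure(x) and closure(x) == closure(closure(x)) for x in A)
assert all(y >= interior(y) and interior(y) == interior(interior(y)) for y in B)

A1 = [x for x in A if closure(x) == x]    # "algebras":   closed elements
B1 = [y for y in B if interior(y) == y]   # "coalgebras": open elements

# the nucleus: f and g restrict to inverse bijections A1 <-> B1, so
# iterating the construction on (A1, B1) changes nothing
assert all(g(f(x)) == x for x in A1) and all(f(g(y)) == y for y in B1)
\end{verbatim}
In the posetal case the tower settles because the closure and the interior restrict to identities on their fixed points; the general theorem asserts that the same happens, up to equivalence, for arbitrary adjunctions.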
Presenting algebras and coalgebras as idempotents provides a rational reconstruction of monadicity (and comonadicity) in terms of idempotent splitting, echoing Par\'e's explanations in terms of absolute colimits \cite{PareR:absolute-coeq,PareR:absolute}, and contrasting with Beck's fascinating but somewhat mysterious proof of his fundamental theorem in terms of split coequalizers \cite{BarrM:ttt,BeckJ:thesis}. Concrete applications of nuclei spread in many directions; some are indicated in the examples, which had to be trimmed, in some cases radically. \subsubsection{Background} Nuclear adjunctions have been studied since the early days of category theory, albeit without a name. The problem of characterizing the situations when the left adjoint of a monadic functor is comonadic is the topic of Michael Barr's paper in the proceedings of the legendary Battelle conference \cite{BarrM:algCoalg}. From a different direction, in his seminal work on the formal theory of monads, Ross Street identified the 2-adjunction between the 2-categories of monads and of comonads \cite[Sec.~4]{StreetR:monads}. This adjunction leads to a formal view of the nucleus construction on either side, as a 2-monad. We show that this construction is idempotent in the strong sense. On the side of applications, the quest for comonadic adjoints of monadic functors continued in descent theory, and an important step towards characterizing them was made by Mesablishvili in \cite{mesablishvili2006monads}. Coalgebras over algebras, and algebras over coalgebras, have also been regularly used for a variety of modeling purposes in semantics of computation (see e.g. \cite{KurzA:algcoalg,jacobs1994coalgebras,JacobsB:bases}, and the references therein). As the vanishing point of monadic descent, nuclear adjunctions arise in many branches of geometry, tacitly or explicitly. In abstract homotopy theory, they appear tacitly in \cite{KanD:adj,QuillenD:book}, and explicitly in \cite{Applegate-Tierney:models}. There are, however, different ways in which monad-comonad couplings may arise. In \cite{Applegate-Tierney:models}, Applegate and Tierney formed such couplings on the two sides of comparison functors and their adjoints, and they found that such monad-comonad couplings generally induce further monad-comonad couplings along the further comparison functors, and may form towers of transfinite length. We describe this in more detail in Sec.~\ref{Sec:HT}. Confusingly, the Applegate-Tierney towers of monad-comonad couplings \emph{formed by comparison functor adjunctions}\/ left a false impression that the monad-comonad couplings \emph{formed by the adjunctions between categories of algebras over coalgebras, of coalgebras over algebras, etc.}\/ also lead to towers of transfinite length. This impression blended into folklore, and the towers of alternating monads over coalgebras and comonads over algebras, extending out of sight, persist in the categorical literature.\footnote{There is an interesting exception outside the categorical literature. In a fax message sent to Paul Taylor on 9/9/99 \cite{Lack-Taylor}, a copy of which was kindly provided after the present paper appeared on arXiv, Steve Lack set out to determine the conditions under which the tower of coalgebras over algebras, which ``a priori continues indefinitely'', settles to equivalence at a finite stage. Within 7 pages of diagrams, the question was reduced to splitting a certain idempotent.
While the argument is succinct, it does seem to prove a claim which, together with its dual, implies our Prop.~\ref{Prop:two}. The claim was, however, not pursued in further work. This amusing episode from the early life of the nucleus underscores its message: that a concept is technically within reach whenever there is an adjunction, but it does need to be spelled out and applied to be recognized.} \subsubsection{Terminology} Despite all of their roles and avatars, adjunctions where the right adjoint is monadic and the left adjoint is comonadic were not given a name. We call them nuclear because of the link with nuclear operators on Banach spaces, which generalize the spectral decomposition of hermitian operators and the singular value decomposition of matrices, and lift them all the way to linear operators on topological vector spaces. This was the subject of Grothendieck's thesis, where the terminology was introduced \cite{GrothendieckA:memAMS}. We describe this conceptual link in Sec.~\ref{Sec:lin}, for the very special case of finite-dimensional Hilbert spaces. \subsubsection{Schema} Fig.~\ref{Fig:Nuc} maps the paths that lead to the nucleus. \begin{figure}[!ht] \begin{center} \begin{tikzar}[row sep = 4em,column sep = 1em] \mbox{\textit{\textbf{matrices}}} \&\& \mbox{\textit{\textbf{extensions}}} \& \mbox{\textit{\textbf{localizations}}} \& \mbox{\textit{\textbf{nuclei}}} \\[-7.5ex] \&\&\& {\mathcal M}{\mathcal N}{\mathcal D} \ar[bend right=15,tail]{dl}[swap]{{\sf EM}} \ar[bend left=15,two heads]{dr}{{\sf MN}} \ar[phantom]{dr}[rotate = -45]{\top} \\ \mathsf{Mat} \ar{rr}{{\sf MA}} \&\& {\mathcal A}{\mathcal D}{\mathcal J} \ar[bend right=15,two heads]{ur}[swap]{{\sf AM}} \ar[phantom]{ur}[rotate = 45]{\top} \ar[bend left=15,two heads]{dr}{{\sf AC}} \ar[phantom]{dr}[rotate = -45]{\top} \&\& \mathsf{Nuc} \ar[bend left=15,tail]{ul}{{\sf NM}} \ar[bend right=15,tail]{dl}[swap]{{\sf NC}} \\ \&\&\& {\mathcal C}{\mathcal M}{\mathcal N} \ar[bend left=15,tail]{ul}{{\sf KC}} \ar[bend right=15,two heads]{ur}[swap]{{\sf CN}} \ar[phantom]{ur}[rotate = 45]{\top} \end{tikzar} \caption{The nucleus setting} \label{Fig:Nuc} \end{center} \end{figure} We will follow it as an itinerary, first through familiar examples and special cases in Sections~\ref{Sec:FCA}--\ref{Sec:chu}, and then as a general pattern. Most definitions are in Sec.~\ref{Sec:cat}. Some readers may wish to skip the rest of the present section, have a look at the examples, and come back as needed. For others we provide here an informal overview of the terminology, mostly just naming names. \para{Who is who.} While the production line of mathematical tools is normally directed from theory to applications, ideas often flow in the opposite direction. The idea of the nucleus is familiar, in fact central, in data mining and concept analysis, albeit without a name; in its general form, it has remained elusive \cite{PavlovicD:CALCO15}. Data analysis usually begins from data \emph{matrices}, which we view as objects of an abstract category $\mathsf{Mat}$. To be analyzed, data matrices are usually completed or \emph{extended}\/ into some sort of \emph{adjunctions}, which we view as objects of an abstract category ${\mathcal A}{\mathcal D}{\mathcal J}$. The functor ${\sf MA}:\mathsf{Mat}\to{\mathcal A}{\mathcal D}{\mathcal J}$ represents this extension.
The adjunctions are then \emph{localized}\/ along the functors \mbox{${\sf AM}:{\mathcal A}{\mathcal D}{\mathcal J} \to {\mathcal M}{\mathcal N}{\mathcal D}$} and \mbox{${\sf AC}:{\mathcal A}{\mathcal D}{\mathcal J}\to {\mathcal C}{\mathcal M}{\mathcal N}$} at \emph{monads}\/ and \emph{comonads}, which form the categories ${\mathcal M}{\mathcal N}{\mathcal D}$ and ${\mathcal C}{\mathcal M}{\mathcal N}$. In some areas and periods of category theory, a functor is called a localization when it has a full and faithful adjoint. The functors ${\sf AM}$ and ${\sf AC}$ in Fig.~\ref{Fig:Nuc} have both left and right adjoints, both full and faithful. We display only the right adjoint ${\sf EM}:{\mathcal M}{\mathcal N}{\mathcal D}\to {\mathcal A}{\mathcal D}{\mathcal J}$ of ${\sf AM}$, which maps a monad to the adjunction induced by its (Eilenberg-Moore) category of algebras, and the left adjoint \mbox{${\sf KC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$}, which maps a comonad to the adjunction induced by its (Kleisli) category of cofree coalgebras. The nucleus construction is composed of such couplings. Alternatively, it can be composed of the left adjoint \mbox{${\sf KM}:{\mathcal M}{\mathcal N}{\mathcal D}\to {\mathcal A}{\mathcal D}{\mathcal J}$} of ${\sf AM}$ and the right adjoint \mbox{${\sf EC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$} of ${\sf AC}$. There is, in general, an entire gamut of different adjunctions localized along \mbox{${\sf AM}:{\mathcal A}{\mathcal D}{\mathcal J} \to {\mathcal M}{\mathcal N}{\mathcal D}$} at the same monad. We call them the \emph{resolutions}\footnote{This terminology was proposed by Jim Lambek. Although it does not seem to have caught on, it is convenient in the present context, and naturally extends from its roots in algebra.} of the monad. Dually, the adjunctions localized along \mbox{${\sf AC}:{\mathcal A}{\mathcal D}{\mathcal J} \to {\mathcal C}{\mathcal M}{\mathcal N}$} at the same comonad are the resolutions of that comonad. For readers unfamiliar with monads and comonads, we note that monads over posets are called closure operators, whereas comonads over posets are called interior operators. In general, the (Kleisli) cofree coalgebra construction \mbox{${\sf KC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$} in Fig.~\ref{Fig:Nuc} (and the free algebra construction \mbox{${\sf KM}:{\mathcal M}{\mathcal N}{\mathcal D}\to {\mathcal A}{\mathcal D}{\mathcal J}$} that is not displayed) captures the \emph{initial resolutions}\/ of comonads (resp. monads); whereas the (Eilenberg-Moore) algebra construction ${\sf EM}:{\mathcal M}{\mathcal N}{\mathcal D}\to {\mathcal A}{\mathcal D}{\mathcal J}$ (and the coalgebra construction ${\sf EC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$ that is not displayed) captures the \emph{final resolutions}\/ of monads (resp. comonads). For closure operators and interior operators over posets, and more generally for idempotent monads and comonads over categories, the initial and the final resolutions coincide. In any case, the categories ${\mathcal M}{\mathcal N}{\mathcal D}$ and ${\mathcal C}{\mathcal M}{\mathcal N}$ are embedded in ${\mathcal A}{\mathcal D}{\mathcal J}$ fully and faithfully; idempotent monads and comonads are mapped to their unique resolutions, whereas monads and comonads in general are embedded in two extremal ways, with a gamut of resolutions in-between.
The composites of these extremal resolution functors from ${\mathcal M}{\mathcal N}{\mathcal D}$ and ${\mathcal C}{\mathcal M}{\mathcal N}$ to ${\mathcal A}{\mathcal D}{\mathcal J}$ with the localizations from ${\mathcal A}{\mathcal D}{\mathcal J}$ to ${\mathcal M}{\mathcal N}{\mathcal D}$ and ${\mathcal C}{\mathcal M}{\mathcal N}$ induce the idempotent monad ${\lft{\sf EM}} = {\sf EM}\circ{\sf AM}$ over ${\mathcal A}{\mathcal D}{\mathcal J}$, which maps any adjunction to the Eilenberg-Moore resolution of the induced monad, and the idempotent comonad ${\rgt{\sf KC}} = {\sf KC}\circ {\sf AC}$, still over ${\mathcal A}{\mathcal D}{\mathcal J}$, which maps any adjunction to the Kleisli resolution of the induced comonad. Just as there is a category of categories, there is thus a monad of monads, and a comonad of comonads; and both happen to be idempotent. Since the subcategories fixed by idempotent monads or comonads are usually viewed as localizations, we view monads and comonads as localizations of adjunctions; and we call all the adjunctions that induce a given monad (or comonad) its resolutions. The resolution functors not displayed in Fig.~\ref{Fig:Nuc} induce a comonad ${\rgt{\sf KM}} = {\sf KM}\circ {\sf AM}$, mapping adjunctions to the Kleisli resolutions of the induced monads, and a monad ${\lft{\sf EC}} = {\sf EC}\circ {\sf AC}$, mapping adjunctions to the Eilenberg-Moore resolutions of the induced comonads. They are all spelled out in Sec.~\ref{Sec:cat}. The category $\mathsf{Nuc}$ of nuclei is the intersection of ${\mathcal M}{\mathcal N}{\mathcal D}$ and ${\mathcal C}{\mathcal M}{\mathcal N}$, as embedded into ${\mathcal A}{\mathcal D}{\mathcal J}$ along their resolutions in Fig.~\ref{Fig:Nuc}. However, we will see in Sec.~\ref{Sec:props} that any other resolutions will do, as long as the last one is final. The nucleus of an adjunction can thus be viewed as the joint resolution of the induced monad and comonad. \subsection{The Street monad} The composites $\ladj \CmnL = {\sf AM}\circ {\sf KC}$ and $\radj \CmnL = {\sf AC}\circ {\sf EM}$ in Fig.~\ref{Fig:Nuc} are adjoint to one another, and thus form a monad $\lft\CmnL = \radj \CmnL \circ \ladj \CmnL$ on the category ${\mathcal C}{\mathcal M}{\mathcal N}$ of comonads, and a comonad $\rgt\CmnL = \ladj \CmnL \circ \radj \CmnL$ on the category ${\mathcal M}{\mathcal N}{\mathcal D}$ of monads. The initial (Kleisli) resolution ${\sf KM}$ of monads and the final (Eilenberg-Moore) resolution ${\sf EC}$ of comonads give the adjoints $\ladj \MndL = {\sf AC}\circ {\sf KM}$ and $\radj \MndL = {\sf AM}\circ {\sf EC}$, which form a monad $\lft\MndL = \radj \MndL \circ \ladj \MndL$ on the category ${\mathcal M}{\mathcal N}{\mathcal D}$ of monads, and a comonad $\rgt\MndL = \ladj \MndL \circ \radj \MndL$ on the category ${\mathcal C}{\mathcal M}{\mathcal N}$ of comonads. All of this is summarized in Figures \ref{Fig:AdjMndCmn} and \ref{Fig:street}. In Ross Street's paper on the \emph{Formal theory of monads}, the latter adjunction between monads and comonads is spelled out directly \cite[Thm.~11]{StreetR:monads}. This was the main result of that seminal analysis, and remains the central theorem of the theory. We prove that Street's monad is strictly idempotent. This added wrinkle steers the theory towards practice: the adjunctions, and their monads and comonads, are not just the foundation of the categorical analysis, but also a convenient tool for mining concepts from it.
The nucleus of a monad, or of a comonad, displays its conceptual content. \subsection{A simplifying assumption} The claimed results have been verified for the general 2-categories of adjunctions, monads and comonads, and the earlier drafts of this paper attempted to present the claims in full generality. The present version presents them under the simplifying assumption that \textbf{\emph{the 2-cell components of the morphisms of adjunctions, monads, and comonads are invertible}}. This restriction cuts the length of the paper by half. While suppressing the general 2-cells simplifies some of the verifications, it does not eliminate or modify any of the presented structures, since all 2-categorical equipment of the nucleus construction already comes with invertible 2-cells. The general 2-categorical theory of nuclei is thus a \emph{conservative}\/ extension of the simplified theory presented here: it does not introduce any additional structure or side-conditions, but only a more general domain of validity and verification. The suppressed 2-cell chasing is, of course, interesting and important on its own; yet it does not seem to provide any information specific to the nucleus construction itself. Our efforts to present the result in its full 2-categorical generality therefore seemed to be at the expense of the main message. We hope that suitable presentation tools under development\footnote{Our hopes have been vested in the framework of string diagrams for 2-categories, where the 2-cells are the vertices, the 1-cells are the edges, and the 0-cells are the faces of the underlying graphs. The project of drawing a sufficient supply of diagrams for the present paper remained beyond our reach, but it might soon come within reach \cite{Hinze-Marsden}.} will soon make results of this kind communicable with a more reasonable overhead. \subsection{Overview of the paper} We begin with simple and familiar examples of the nucleus, and progress towards the general construction. In the posetal case, the nucleus construction boils down to the fixed points of a Galois connection. It is familiar and intuitive as the posetal method of Formal Concept Analysis, which is presented in Sec.~\ref{Sec:FCA}. The spectral methods of concept analysis, based on Singular Value Decomposition of linear operators, are perhaps even more widely known from their broad applications on the web. They are also subsumed under the nucleus construction, this time in linear algebra. This is the content of Sec.~\ref{Sec:lin}. Sec.~\ref{Sec:chu} moves up to the level of an abstract categorical version of the nucleus, which emerged in the framework of $\ast$-autonomous categories and semantics of linear logic, as the separated-extensional core of the Chu construction. We discuss a modification that combines the separated-extensional core with the spectral decomposition of matrices and refers back to the conceptual roots in early studies of topological vector spaces. In Sec.~\ref{Sec:cat}, we introduce the general categorical framework for the nucleus of adjoint functors, and we state the main theorem in Sec.~\ref{Sec:Theorem}. The proof of the main theorem is built in Sec.~\ref{Sec:props}, through a series of lemmas, propositions, and corollaries.
As the main corollary, Sec.~\ref{Sec:simple} presents a simplified version of the nucleus, which provides alternative presentations of categories of algebras for a monad in terms of a corresponding comonad, and analogously of coalgebras for a comonad in terms of a corresponding monad. These presentations are used in Sec.~\ref{Sec:little} to present a weaker version of the nucleus construction, obtained by applying the Kleisli construction at the last step, where the Eilenberg-Moore construction is applied in the stronger version. Although the resulting weak nuclei are equivalent to strong nuclei only in degenerate cases, the categories of strong nuclei and of weak nuclei turn out to be equivalent. In Sec.~\ref{Sec:HT} we discuss how the nucleus approach compares and contrasts with the standard localization-based approaches to homotopy theory, from which the entire conceptual apparatus of adjunctions, extensions, and localizations originally emerged. In the final section of the paper, we discuss the problems that remain open. \subsection{From context matrices to concept lattices, intuitively} Consider a market with $A$ buyers and $B$ sellers. Their interactions are recorded in an adjacency matrix $A\times B \tto \Phi 2$, where $2$ is the set $\{0,1\}$, and the entry $\Phi_{ab}$ is 1 if the buyer $a\in A$ at some point bought goods from the seller $b\in B$; otherwise it is 0. Equivalently, a matrix $A\times B \tto \Phi 2$ can be viewed as the binary relation $\widehat \Phi = \{<a,b>\in A\times B\ |\ \Phi_{ab} = 1\}$, in which case we write $a\widehat \Phi b$ instead of $\Phi_{ab} = 1$. In Formal Concept Analysis \cite{Carpineto-Romano:book,FCA-book,FCA-foundations}, such matrices or relations are called \emph{contexts}, and are used to extract some relevant \emph{concepts}. The idea is illustrated in Fig.~\ref{Fig:FCA}. \begin{figure}[!ht] \begin{center} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-net} \end{center} \end{minipage} \vspace{.5\baselineskip} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-net-1} \end{center} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-net-2} \end{center} \end{minipage} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-net-3} \end{center} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-net-4} \end{center} \end{minipage} \vspace{2\baselineskip} \begin{minipage}{.3\linewidth} \begin{center} \input{PIC/trust-FCA-decomp} \end{center} \end{minipage} \caption{A context $\Phi$, its four concepts, and their concept lattice} \label{Fig:FCA} \end{center} \end{figure} The binary relation $\widehat \Phi \subseteq A\times B$ is displayed as a bipartite graph.
If buyers $a_0$ and $a_4$ have farms, and sellers $b_1$, $b_2$ and $b_3$ sell farming equipment, but seller $b_0$ does not, then the sets $X = \{a_0, a_4\}$ and $Y = \{b_1, b_2, b_3\}$ form a complete subgraph $<X,Y>$ of the bipartite graph $\widehat \Phi$, which corresponds to the concept \emph{``farming''}. If the buyers from the set $X'=\{a_0, a_1,a_2,a_3\}$ have cars, but the buyer $a_4$ does not, and the sellers $Y' = \{b_0, b_1, b_2\}$ sell car accessories, but the seller $b_3$ does not, then $<X',Y'>$ is another complete subgraph, corresponding to the concept \emph{``car''}. The idea is thus that a context is viewed as a bipartite graph, and the concepts are then extracted as its complete bipartite subgraphs. \subsection{Formalizing concept analysis} A pair $<U,V>\in \mbox{\Large $\wp$} A\times \mbox{\Large $\wp$} B$ forms a complete subgraph of a bipartite graph $\widehat \Phi \subseteq A\times B$ if \[ U \ = \ \bigcap_{v\in V} \{x\in A\ |\ x\widehat \Phi v\} \qquad\qquad \qquad V = \bigcap_{u\in U} \{y\in B\ |\ u\widehat \Phi y\}\] It is easy to see that such pairs are ordered by the relation \begin{eqnarray}\label{eq:cutorder} <U,V>\leq <U',V'> & \iff & U\subseteq U' \ \wedge \ V\supseteq V' \end{eqnarray} and that they in fact form a lattice, which is a retract of the lattice $\Wp A\times \Wpp B$, where $\Wp A$ is the set of subsets of $A$ ordered by the inclusion $\subseteq$, while $\Wpp B$ is the set of subsets of $B$ ordered by the reverse inclusion $\supseteq$. This is the \emph{concept lattice}\/ $\Cut\Phi$ induced by the \emph{context matrix} $\widehat \Phi \subseteq A\times B$, along the lines of Fig.~\ref{Fig:Nuc}. In general, the sets $A$ and $B$ may already carry partial orders, e.g.\/ from earlier concept analyses. The category of context matrices is thus \begin{eqnarray}\label{eq:matrp} |\mathsf{Mat}_0 | & = & \coprod_{A,B\in \mathsf{Pos}} \mathsf{Pos}(A^{o} \times B, [1]) \\[1.5ex] \mathsf{Mat}_0(\Phi, \Psi) & = & \{<h,k>\in \mathsf{Pos}(A,C)\times \mathsf{Pos}(B,D) \ |\ \Phi(a, b) = \Psi (ha,kb)\}\notag \end{eqnarray} where $\Phi \in \mathsf{Pos}(A^{o} \times B, [1])$ and $\Psi \in \mathsf{Pos}(C^{o} \times D, [1])$ are matrices with entries from the poset $[1]= \{0 < 1\}$. When working with matrices in general, it is often necessary or convenient to use their \emph{comprehensions}, i.e.
to move along the correspondence \begin{eqnarray}\label{eq:compreh-pos} \mathsf{Pos} (A^{o} \times B , [1]) & \begin{tikzar}[row sep = 4em]\hspace{.1ex} \ar[bend left]{r}{\eh{(-)}} \ar[phantom]{r}[description]{\cong} \& \hspace{.1ex} \ar[bend left]{l}{\chi} \end{tikzar} & \Sub\diagup A\times B^{o} \\[1ex] \Phi & \mapsto & \eh{\Phi} = \{<x,y> \in A\times B^{o} \ |\ \Phi(x,y) = 1\} \notag \\[2ex] \notag \chi_S(x,y) = {\scriptstyle \left.\begin{cases} 1 & \mbox{ if } <x,y>\in S\\ 0 & \mbox{ otherwise}\end{cases} \right\}} &\mathrel{\reflectbox{\ensuremath{\mapsto}}} & \Big(S\subseteq A\times B^{o}\Big) \end{eqnarray} A comprehension $\eh \Phi$ of a matrix $\Phi$ is thus lower-closed in the first component, and upper-closed in the second: \begin{eqnarray}\label{eq:monotone} a \leq a' \ \wedge\ a' \widehat \Phi b'\ \wedge\ b' \leq b & \implies & a\widehat \Phi b \end{eqnarray} To extract the concepts from a context $\widehat \Phi \subseteq A\times B$, we thus need to explore the candidate lower-closed subsets of $A$, and the upper-closed subsets of $B$, which form the complete semilattices $(\Do A, \bigvee)$ and $(\Up B, \bigwedge)$, where \begin{eqnarray} \Do A & = & \{L\subseteq A\ |\ a \leq a' \in L \implies a \in L\} \label{eq:DoA-pos}\\ \Up B & = & \{U\subseteq B\ |\ U\ni b' \leq b \implies U \ni b\} \label{eq:UpB-pos} \end{eqnarray} so that $\bigvee$ in $\Do A$ and $\bigwedge$ in $\Up B$ are both set union. It is easy to see that the embedding $A\tto \blacktriangledown \Do A$, mapping $a\in A$ into the lower set $\blacktriangledown a = \{x\in A\ |\ x\leq a\}$, is the join completion of the poset $A$, whereas $B\tto\blacktriangle \Up B$, mapping $b\in B$ into the upper set $\blacktriangle b = \{y\in B\ |\ b\leq y\}$, is the meet completion of the poset $B$. These semilattice completions support the context matrix extension $\overline \Phi\subseteq \Do A \times \Up B$ defined by \begin{eqnarray} L\overline \Phi U & \iff & \forall a\in L\ \forall b \in U.\ a\widehat \Phi b \end{eqnarray} As a matrix between complete semilattices, $\overline \Phi$ is representable in the form \begin{equation}\label{eq:adjunctions-pos} \ladj \Phi L \supseteq U\ \ \iff L\overline \Phi U \ \ \iff \ \ L \subseteq \radj \Phi U\end{equation} where the adjoints now capture the \emph{complete-bipartite-subgraph}\/ idea from Fig.~\ref{Fig:FCA}: \begin{equation}\label{eq:galois-pos} \begin{tikzar}{} L \ar[mapsto]{dd} \& \Do A \ar[bend right=15]{dd}[swap]{\ladj \Phi}\ar[phantom]{dd}[description]\dashv \& \displaystyle \bigcap_{y\in U}\ _\bullet \Phi y \\ \\ \displaystyle \bigcap_{x\in L} \ x \Phi_\bullet \& \Up B \ar[bend right=15]{uu}[swap]{\radj \Phi} \& U \ar[mapsto]{uu} \end{tikzar} \end{equation} Here $_\bullet \Phi y =\{x\in A\ |\ x\widehat \Phi y\}$ and $x \Phi_\bullet = \{y\in B\ |\ x\widehat \Phi y\}$ define the transposes $_\bullet \Phi :B\to \Do A$ and $ \Phi_\bullet : A\to \Up B$ of $\Phi:A^{o} \times B \to [1]$. Poset adjunctions like \eqref{eq:galois-pos} are often also called \emph{Galois connections}.
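Since everything here is finite and mechanical, the derivation adjoints and the representability \eqref{eq:adjunctions-pos} can be spot-checked in a few lines of Python. The sketch below is our own illustration: the $3\times 3$ context is arbitrary, and the posets $A$ and $B$ are taken discrete, so that every subset is a lower (resp.\ upper) set.
\begin{verbatim}
# Spot-check of (eq:adjunctions-pos) on an arbitrary 3x3 context,
# with discrete posets, so that all subsets are lower/upper sets.
from itertools import chain, combinations

A, B = [0, 1, 2], ["x", "y", "z"]
rel  = {(0, "x"), (0, "y"), (1, "y"), (2, "y"), (2, "z")}

phi_left  = lambda L: {y for y in B if all((x, y) in rel for x in L)}
phi_right = lambda U: {x for x in A if all((x, y) in rel for y in U)}

subsets = lambda S: map(set, chain.from_iterable(
    combinations(S, n) for n in range(len(S) + 1)))

# ladj(L) >= U   iff   L Phi-bar U   iff   L <= radj(U)
assert all((phi_left(L) >= U) == (L <= phi_right(U))
           for L in subsets(A) for U in subsets(B))
\end{verbatim}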
They form the category \begin{eqnarray}\label{eq:adjp} |{\mathcal A}{\mathcal D}{\mathcal J}_0 | & = & \coprod_{A,B\in \mathsf{Pos}} \left\{<\ladj \Phi, \radj \Phi> \in \mathsf{Pos}(A, B) \times \mathsf{Pos}(B,A)\ |\ \ladj \Phi x \leq y \iff x\leq \radj \Phi y \right\} \\[1.5ex] {\mathcal A}{\mathcal D}{\mathcal J}_0(\Phi, \Psi) & = & \{<H,K>\in \mathsf{Pos}(A,C)\times \mathsf{Pos}(B,D) \ |\ K\ladj \Phi = \ladj \Psi H\ \wedge\ H \radj \Phi = \radj \Psi K\}\notag \end{eqnarray} The first step of concept analysis is thus the matrix extension \begin{eqnarray} \MA_0 : \mathsf{Mat}_0 & \to & {\mathcal A}{\mathcal D}{\mathcal J}_0\\ \Phi & \mapsto & \left(\adj \Phi :\Up B\to \Do A\right) \mbox{ as in \eqref{eq:galois-pos}}\notag \end{eqnarray} To complete the process of concept analysis, we use the full subcategories of ${\mathcal A}{\mathcal D}{\mathcal J}_0$ spanned by the closure and the interior operators, respectively: \begin{eqnarray}\label{eq:mndp} {\mathcal M}{\mathcal N}{\mathcal D}_0 & = & \{\left(\adj \Phi\right) \in {\mathcal A}{\mathcal D}{\mathcal J}_0\ |\ \ladj\Phi \radj \Phi = \mathrm{id}\} \\[1.5ex] {\mathcal C}{\mathcal M}{\mathcal N}_0 & = & \{\left(\adj \Phi\right) \in {\mathcal A}{\mathcal D}{\mathcal J}_0\ |\ \radj\Phi \ladj \Phi = \mathrm{id}\} \label{eq:cmnp} \end{eqnarray} It is easy to see that \begin{itemize} \item ${\mathcal M}{\mathcal N}{\mathcal D}_0$ is equivalent to the category of posets $A$ equipped with closure operators, i.e. monotone maps $A\tto{\lft \Phi} A$ such that $x \leq \lft \Phi x = \lft \Phi\lft \Phi x$, for $\lft \Phi = \radj\Phi \ladj \Phi$; while \item ${\mathcal C}{\mathcal M}{\mathcal N}_0$ is equivalent to the category of posets $B$ equipped with interior operators, i.e. monotone maps $B\tto{\rgt \Phi} B$ such that $y \geq \rgt \Phi y = \rgt \Phi\rgt \Phi y$, for $\rgt \Phi = \ladj\Phi \radj \Phi$.\end{itemize} The functors $\AM_0: {\mathcal A}{\mathcal D}{\mathcal J}_0 \twoheadrightarrow {\mathcal M}{\mathcal N}{\mathcal D}_0$ and $\AC_0: {\mathcal A}{\mathcal D}{\mathcal J}_0 \twoheadrightarrow {\mathcal C}{\mathcal M}{\mathcal N}_0$ are thus the evident localizations, and their resolutions are \begin{eqnarray} \EM_0 : {\mathcal M}{\mathcal N}{\mathcal D}_0 & \rightarrowtail & {\mathcal A}{\mathcal D}{\mathcal J}_0 \\ \Bigg(A\tto{\lft \Phi} A\Bigg) & \mapsto & \Bigg( \begin{tikzar}{} \Do A \ar[bend right=20,two heads]{r} \ar[phantom]{r}[description]{\top} \& \Emm{\Do A}\Phi \ar[bend right=20,hook]{l} \end{tikzar}\Bigg)\notag \\ && \hspace{2em}\mbox{where}\hspace{0.8em} \Emm{\Do A}\Phi = \{U \in \Do A\ |\ U = \lft \Phi U\}\notag\\[3ex] \KC_0 :{\mathcal C}{\mathcal M}{\mathcal N}_0 & \rightarrowtail & {\mathcal A}{\mathcal D}{\mathcal J}_0 \\ \Bigg(B\tto{\rgt \Phi} B\Bigg) & \mapsto & \Bigg(\begin{tikzar}{} \Emc{\Up B}\Phi \ar[bend right=20,hook]{r} \ar[phantom]{r}[description]{\top} \& \Up B \ar[bend right=20,two heads]{l} \end{tikzar}\Bigg) \notag \\ && \hspace{2em}\mbox{where}\hspace{0.8em} \Emc{\Up B}\Phi = \{V \in \Up B\ |\ \rgt \Phi V = V\}\notag \end{eqnarray} ${\mathcal M}{\mathcal N}{\mathcal D}_0$ thus turns out to be a reflective subcategory of ${\mathcal A}{\mathcal D}{\mathcal J}_0$, and ${\mathcal C}{\mathcal M}{\mathcal N}_0$ a coreflective one. The category $\mathsf{Nuc}_0$ of concept lattices is their intersection, and is thus coreflective in ${\mathcal M}{\mathcal N}{\mathcal D}_0$ and reflective in ${\mathcal C}{\mathcal M}{\mathcal N}_0$.
In fact, these posetal resolutions turn out to be adjoint to the inclusions both on the left and on the right; but that is a peculiarity of the posetal case. Another posetal quirk is that the category $\mathsf{Nuc}_0$ boils down to the category $\mathsf{Pos}$ of posets, because an operator that is both a closure and an interior must be an identity. That will not happen in general. \subsection{Summary} Going from left to right through Fig.~\ref{Fig:Nuc} with the categories defined in \eqref{eq:matrp}, \eqref{eq:adjp}, \eqref{eq:mndp} and \eqref{eq:cmnp}, and reflecting everything back into ${\mathcal A}{\mathcal D}{\mathcal J}_0$, we make the following steps \begin{equation}\label{eq:steps} \prooftree \prooftree \prooftree \Phi : A^{o} \times B \to [1] \justifies \AdjL\Phi =\MA_0\Phi = \Bigg( \begin{tikzar}{} \Do A \ar[bend right=20]{r}[swap]{\ladj \Phi} \ar[phantom]{r}[description]{\top} \& \Up B \ar[bend right=20]{l}[swap]{\radj \Phi} \end{tikzar}\Bigg) \endprooftree \justifies {\lft{\sf EM}}_0\AdjL\Phi = \Bigg( \begin{tikzar}{} \Do A \ar[bend right=20,two heads]{r} \ar[phantom]{r}[description]{\top} \& \Emm{\Do A}\Phi \ar[bend right=20,hook]{l} \end{tikzar}\Bigg) \qquad \qquad \qquad \qquad {\rgt{\sf KC}}_0\AdjL\Phi = \Bigg( \begin{tikzar}{} \Emc{\Up B}\Phi \ar[bend right=20,hook]{r} \ar[phantom]{r}[description]{\top} \& \Up B \ar[bend right=20,two heads]{l} \end{tikzar}\Bigg) \endprooftree \justifies \NucL_0 \Phi= \Bigg( \begin{tikzar}{} \Up B^{\rgt \Phi} \ar[bend right=20,tail,two heads]{r}[swap]{\lnadj \Phi} \ar[phantom]{r}[description]{\cong} \& \Do A^{\lft \Phi} \ar[bend right=20,tail,two heads]{l}[swap]{\rnadj \Phi} \end{tikzar}\Bigg) \endprooftree \end{equation} where ${\lft{\sf EM}}_0 = \EM_0\circ \AM_0$, and ${\rgt{\sf KC}}_0 = \KC_0 \circ \AC_0$, and $\NucL_0$ defines the poset nucleus (which will be subsumed under the general definition in Sec.~\ref{Sec:Theorem}). For posets, the final step happens to be trivial, because of the order isomorphisms \begin{equation}\label{eq:ordis} \Do A^{\lft \Phi} \ \cong \ \Cut \Phi \ \cong \ \Up B^{\rgt \Phi} \end{equation} where \begin{eqnarray}\label{eq:nucone} \Cut \Phi & = & \{<L,U> \in \Do A\times \Up B\ |\ L = \radj \Phi U \ \wedge\ \ladj \Phi L = U\} \end{eqnarray} is the familiar lattice of \emph{Dedekind cuts}. The images of the context $\Phi$ in ${\mathcal M}{\mathcal N}{\mathcal D}_0$, ${\mathcal C}{\mathcal M}{\mathcal N}_0$ and $\mathsf{Nuc}_0$ thus give three isomorphic views of the concept lattice. But this is a degenerate case. \para{Comment.} The situation when the two resolutions of an adjunction (the one in ${\mathcal M}{\mathcal N}{\mathcal D}$ and the one in ${\mathcal C}{\mathcal M}{\mathcal N}$) are isomorphic is very special. E.g., when $A = B = {\mathbb Q}$ is the ordered field of rational numbers, and $\Phi = (\leq)$ is its partial order, then the extension $\overline \Phi$ of $\Phi$ relates the pairs $<L,U>$, where $L$ is an open or closed lower interval, $U$ is an open or closed upper interval, and $L\leq U$. The resolutions eliminate the rational points between $L$ and $U$, by requiring that $L$ contains all lower bounds of $U$ and $U$ all upper bounds of $L$. The nucleus then comprises the Dedekind cuts. But any Dedekind cut $<L,U>$ is also completely determined by $L$ alone, and by $U$ alone. Hence the isomorphisms \eqref{eq:ordis}.
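For a concrete feel of \eqref{eq:steps} and \eqref{eq:ordis}, the closed lower sets, the open upper sets, and the cuts of a small context can be enumerated directly. The following standalone Python sketch is ours, on the same arbitrary context as in the previous snippet; it confirms that each cut is determined by its lower component alone, and by its upper component alone.
\begin{verbatim}
# Closed lower sets, open upper sets, and cuts of a small context,
# illustrating (eq:ordis); the 3x3 context is ours, chosen arbitrarily.
from itertools import chain, combinations

A, B = [0, 1, 2], ["x", "y", "z"]
rel  = {(0, "x"), (0, "y"), (1, "y"), (2, "y"), (2, "z")}
left  = lambda L: {y for y in B if all((x, y) in rel for x in L)}
right = lambda U: {x for x in A if all((x, y) in rel for y in U)}
subsets = lambda S: map(set, chain.from_iterable(
    combinations(S, n) for n in range(len(S) + 1)))

closed = [L for L in subsets(A) if right(left(L)) == L]   # Do(A)^(lft Phi)
opens  = [U for U in subsets(B) if left(right(U)) == U]   # Up(B)^(rgt Phi)
cuts   = [(L, left(L)) for L in closed]                   # Cut(Phi): 4 cuts here

# each cut <L,U> satisfies L = radj(U) and U = ladj(L), so it is
# determined by either component alone: the isomorphisms (eq:ordis)
assert [right(U) for (_, U) in cuts] == closed
assert sorted(map(sorted, (U for _, U in cuts))) == sorted(map(sorted, opens))
\end{verbatim}
The three posets $\Do A^{\lft \Phi}$, $\Cut \Phi$ and $\Up B^{\rgt \Phi}$ are thus carried by the lists \texttt{closed}, \texttt{cuts} and \texttt{opens}, with the evident bijective projections between them.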
The same generalizes when $A = B$ is an arbitrary poset, and the nucleus yields its Dedekind-MacNeille completion: it adjoins all joins and meets that are missing while preserving those that already exist. When $A$ and $B$ are different posets, and $\Phi$ is a nontrivial context between them, we are in the business of concept analysis, and generate the concept lattice --- with similar generation and preservation requirements as for the Dedekind-MacNeille completion. In a sense, the posets $A$ and $B$ are ``glued together'' along the context $\widehat \Phi\subseteq A\times B$ into the joint completion $\Cut\Phi$, where the joins are generated from $A$, and the meets from $B$. On the other hand, any meets that may have existed in $A$ are preserved in $\Cut \Phi$; as are any joins that may have existed in $B$. It is a remarkable fact of category theory that no such tight bicompletion exists in general, when posets are generalized to categories \cite{LambekJ:completions,IsbellJ:no-lambek}. It is also well known that this phenomenon is closely related to the idempotent monads induced by adjunctions, and by profunctors in general \cite{Applegate-Tierney:models}. The phenomenon is, however, quite general, and in a sense, hides in plain sight. \subsection{Matrices and linear operators} The nucleus examples in this section take us back to undergraduate linear algebra. The first part is in fact even more basic. To begin, we consider matrices $\fin A\times \fin B\to R$, where $R$ is an arbitrary ring, and $\fin A, \fin B$ are \emph{finite}\/ sets. We denote the category of all sets by ${\sf Set}$, its full subcategory spanned by finite sets by $\fin{\mathsf S}\mathsf{et}$, and generally use the dot to mark finiteness, so that $\fin A, \fin B \in \fin{\mathsf S}\mathsf{et}\subset {\sf Set}$. Viewing both the finite sets $\fin A, \fin B$ and the ring $R$ together in the category of sets, we define \begin{eqnarray}\label{eq:Matone} |\mathsf{Mat}_1| & = & \coprod_{\fin A, \fin B \in \fin{\mathsf S}\mathsf{et}}{\sf Set}(\fin{A}\times \fin{B}, R)\\ \mathsf{Mat}_1(\Phi,\Psi) & = & \left\{<H,K>\in \vsp{\fin A \times \fin C} \times \vsp{\fin B \times \fin D}\ | \ K\Phi = \Psi H \right\} \notag \end{eqnarray} where $\vsp{\fin A\times \fin C}$ abbreviates ${\sf Set}(\fin A\times \fin C, R)$, and ditto $\vsp{\fin B\times \fin D}$. The matrix composition is written left to right \begin{eqnarray*} \vsp{\fin X\times \fin Y} \times \vsp{\fin Y\times \fin Z} & \tto{\ \ \ \ \ \ } & \vsp{\fin X\times \fin Z} \\ <F,G> & \mapsto & \left(GF\right)_{ik} \ =\ \sum_{j\in \fin Y} F_{ij}\cdot G_{jk} \end{eqnarray*} When $R$ is a field, $\mathsf{Mat}_1$ is the arrow category of finite-dimensional $R$-vector spaces with chosen bases. When $R$ is a general ring, $\mathsf{Mat}_1$ is the arrow category of finitely generated free $R$-modules. When $R$ is not even a ring, but say the \emph{rig} (``a ri\emph{n}\/g without the \emph{n}\/egatives'') ${\mathbb N}$ of natural numbers, then $\mathsf{Mat}_1$ is the arrow category of finitely generated free commutative monoids. Sec.~\ref{Sec:rank} applies to all these cases, and Sec.~\ref{Sec:diag} applies to real closed fields. Since the goal of this part of the paper is to recall familiar examples of the nucleus construction, we can just as well assume that $R$ is the field of real numbers. The full generality of the construction will emerge in the end.
\subsection{Nucleus as an automorphism of the rank space of a linear operator}\label{Sec:rank} Since finite-dimensional vector spaces always carry an inner product (e.g.\ the one that makes a chosen basis orthonormal), the category $\mathsf{Mat}_1$ over the field of real numbers $R$ is equivalent to the arrow category over finite-dimensional real Hilbert spaces \emph{with chosen bases}. This assumption yields a canonical matrix representation for each linear operator. Starting, on the other hand, from the category $\fin{\mathsf H}\mathsf{ilb}$ of finite-dimensional Hilbert spaces \emph{without}\/ chosen bases, we define the category ${\mathcal A}{\mathcal D}{\mathcal J}_1$ as the arrow category $\fin{\mathsf H}\mathsf{ilb}\diagup \fin{\mathsf H}\mathsf{ilb}$ of linear operators and their commutative squares, i.e. \begin{eqnarray}\label{eq:hlb} |{\mathcal A}{\mathcal D}{\mathcal J}_1| & = & \coprod_{{\mathbb A},{\mathbb B}\in \fin{\mathsf H}\mathsf{ilb}} \fin{\mathsf H}\mathsf{ilb}({\mathbb A},{\mathbb B}) \\ {\mathcal A}{\mathcal D}{\mathcal J}_1(\Phi,\Psi) & = & \left\{<H,K>\in \fin{\mathsf H}\mathsf{ilb}({\mathbb A},{\mathbb C})\times \fin{\mathsf H}\mathsf{ilb}({\mathbb B},{\mathbb D})\ |\ K \Phi = \Psi H\right\}\notag \end{eqnarray} The finite-dimensional Hilbert spaces ${\mathbb A}$ and ${\mathbb B}$ are still isomorphic to $R^{\fin A}$ and $R^{\fin B}$ for some finite sets $\fin A$ and $\fin B$ of basis vectors; but a particular choice of such isomorphisms would choose a standard basis for each of them, and now we are not given such isomorphisms. This means that linear operators like $H$ and $K$ in \eqref{eq:hlb} do not have standard matrix representations, but are given as linear functions between the entire spaces. The categories ${\mathcal M}{\mathcal N}{\mathcal D}_1$ and ${\mathcal C}{\mathcal M}{\mathcal N}_1$ will be the full subcategories of ${\mathcal A}{\mathcal D}{\mathcal J}_1$ spanned by \begin{eqnarray} {\mathcal M}{\mathcal N}{\mathcal D}_1 & = & \big\{\Phi\in{\mathcal A}{\mathcal D}{\mathcal J}_1 \ |\ \Phi \ \mbox{ is surjective } \big\} \label{eq:mndone} \\ {\mathcal C}{\mathcal M}{\mathcal N}_1 & = & \big\{\Phi\in{\mathcal A}{\mathcal D}{\mathcal J}_1 \ |\ \Phi^\ddag \mbox{ is surjective } \big\}\label{eq:cmnone} \end{eqnarray} where $\Phi^\ddag$ is the adjoint of $\Phi\in \fin{\mathsf H}\mathsf{ilb}({\mathbb A},{\mathbb B})$, i.e. the operator $\Phi^\ddag\in \fin{\mathsf H}\mathsf{ilb}({\mathbb B},{\mathbb A})$ satisfying \begin{eqnarray*} <b\ |\ \Phi a>_{\mathbb B} & = & <\Phi^\ddag b\ |\ a>_{\mathbb A} \end{eqnarray*} where $\,<-|->_{\mathbb H}\,$ denotes the inner product on the space ${\mathbb H}$. \subsubsection{Hilbert space adjoints: Notation and construction} In the presence of inner products\footnote{If $R$ were not a \emph{real}\/ closed field, the inner product would involve a conjugate in the first argument. Although this is for most people the more familiar situation, the adjunctions here do not depend on conjugations, so we omit them.} $<-|->:{\mathbb A} \times {\mathbb A} \to R$, it is often more convenient to use the bra-ket notation, where a vector $\vec a \in {\mathbb A}$ is written as a ``ket'' $|a>$, and the corresponding linear functional $\vec a^\ddag = \left<\vec a |-\right> \in {\mathbb A}^\ast$ is written as the ``bra'' $<a|$.
If ${\mathbb A}$ is the ${\fin A}$-dimensional space $R^{\fin A}$, then the basis vectors $\vec e_i$, $i = 1,2,\ldots, {\fin A}$ are written $|1>, |2>,\ldots,|{\fin A}>$, whereas the basis vectors of ${\mathbb A}^\ast$ are $<1|, <2|,\ldots, <{\fin A}|$, and the base decompositions become \begin{itemize} \item $|a> =\sum_{i=1}^{\fin A} |i><i|a>$ instead of $\vec a = \sum_{i=1}^{\fin A} a_i \vec e_i$, and \item $<a| =\sum_{i=1}^{\fin A} <a|i><i|$ instead of $\vec a^\ddag = \sum_{i=1}^{\fin A} a_i \vec e^\ddag_i$. \end{itemize} For convenience, here we assume that the finite sets $\fin A, \fin B,\ldots \in \fin{\mathsf S}\mathsf{et}$ are ordered, i.e.\ we reduce $\fin{\mathsf S}\mathsf{et}$ to ${\mathbb N}$. In practice, the difference between ${\mathbb A}$ and ${\mathbb A}^\ast$ is often ignored, because any basis induces a linear isomorphism ${\mathbb A}^\ast \cong {\mathbb A}$, and is uniquely determined by it \cite{PavlovicD:MSCS13}; but it creeps from under the carpet when vector spaces are combined or aligned with other structures, as we will see further on. Writing $<j|\Phi|i>$ for the entries $\Phi_{ji}$ of a matrix $\Phi = \left(\Phi_{ji}\right)_{\fin B\times {\fin A}}$ gives \begin{itemize} \item $<j|\Phi |a> = \sum_{i=1}^{\fin A}<j|\Phi |i><i|a>$ instead of $\left(\Phi \vec a\right)_j = \sum_{i=1}^{\fin A} \Phi_{ji} a_i$, \item $<b|\Phi|i> =\sum_{j=1}^{\fin B} <b|j><j|\Phi|i>$ instead of $\left(\vec b^\ddag \Phi\right)_i = \sum_{j=1}^{\fin B} b_j \Phi_{ji}$, and \item $<b|\Phi|a> =\sum_{i=1}^{\fin A} \sum_{j=1}^{\fin B} <b|j><j|\Phi|i><i|a>$ instead of $\vec b^\ddag \Phi \vec a = \sum_{i=1}^{\fin A} \sum_{j=1}^{\fin B} b_j \Phi_{ji}a_i$ \end{itemize} and hence the inner-product adjunction \begin{equation}\label{eq:adjop} <b|\Phi a>_{\mathbb B} \ \ =\ \ <b|\Phi|a>\ \ =\ \ <\Phi^\ddag b| a>_{\mathbb A} \end{equation} where we adhere to the usual abuse of notation, and denote both the matrix and the induced linear operator by $\Phi$. The dual matrix and the induced adjoint operator are $\Phi^\ddag$. If \eqref{eq:adjop} is the Hilbert space version of \eqref{eq:adjunctions-pos}, then \eqref{eq:galois-pos} becomes \begin{equation}\label{eq:galois-lin} \begin{tikzar}{} |a> \ar[mapsto]{dd} \& R^{{\fin A}} \ar[bend right=15]{dd}[swap]{\Phi} \& \displaystyle \sum_{j=1}^{{\fin B}}\ <b|j><j|\Phi_\bullet \\ \\ \displaystyle \sum_{i=1}^{{\fin A}}\ _\bullet \Phi|i><i|a> \& R^{{\fin B}} \ar[bend right=15]{uu}[swap]{\Phi^\ddag} \& <b| \ar[mapsto]{uu} \end{tikzar} \end{equation} Here $_\bullet \Phi|i> = \sum_{j=1}^{\fin B} <j|<j|\Phi|i>$ is the $i$-th column of $\Phi$, transposed into a row, whereas $<j|\Phi _\bullet = \sum_{i=1}^{\fin A} <j|\Phi|i>|i>$ is its $j$-th row vector, transposed into a column. \subsubsection{Factorizations} The maps in \eqref{eq:galois-lin} induce the functor $\MA_1: \mathsf{Mat}_1\to {\mathcal A}{\mathcal D}{\mathcal J}_1$, for ${\mathbb A}=R^{\fin A}$ and ${\mathbb B} = R^{\fin B}$. This functor is, of course, tacit in the practice of representing linear operators by matrices, and identifying them notationally.
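For real matrices, the inner-product adjunction \eqref{eq:adjop} is just the familiar transposition identity, and can be spot-checked numerically. The following three-line NumPy sketch is our illustration, on random data, with $\Phi^\ddag$ realized as the transposed matrix.
\begin{verbatim}
# Spot-check of (eq:adjop) over R, with Phi^ddag the transposed matrix.
import numpy as np
rng = np.random.default_rng(0)
Phi = rng.standard_normal((3, 4))                  # Phi : R^4 -> R^3
a, b = rng.standard_normal(4), rng.standard_normal(3)
assert np.isclose(b @ (Phi @ a), (Phi.T @ b) @ a)  # <b|Phi a> = <Phi^ddag b|a>
\end{verbatim}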
The functors $\AM_1: {\mathcal A}{\mathcal D}{\mathcal J}_1\to {\mathcal M}{\mathcal N}{\mathcal D}_1$ and $\AC_1: {\mathcal A}{\mathcal D}{\mathcal J}_1\to {\mathcal C}{\mathcal M}{\mathcal N}_1$, on the other hand, require factoring linear operators through their rank spaces: \begin{equation}\label{eq:facop} \begin{tikzar}[row sep = 2em,column sep = 5em] \& {\mathbb A} \ar[bend right=22]{dd}[swap]{\Phi}\ar[phantom]{dd}[description]\dashv\vdash \ \ar[two heads]{ddl}[swap]{\AM_1(\Phi)} \& \Emc{\mathbb B}\Phi \ar[hookrightarrow]{l}[swap]{U} \\ \\ \Emm{\mathbb A}\Phi \ar[hookrightarrow]{r}[swap]{V} \& {\mathbb B} \ar[bend right=22]{uu}[swap]{\Phi^\ddag} \ar[two heads]{uur}[swap]{\AC_1 \left(\Phi\right)^\ddag} \end{tikzar} \end{equation} where we define \begin{eqnarray*} \Emc{\mathbb B}\Phi & = & \{\Phi^\ddag|b>\ |\ |b>\in {\mathbb B}\}\ \ \mbox{ with } \ \ <x|y>_{\Emc{\mathbb B}\Phi} = <Ux|Uy>_{\mathbb A}\\ \Emm{\mathbb A}\Phi & = & \{\Phi |a>\ |\ |a>\in {\mathbb A}\} \ \ \ \mbox{ with } \ \ <x|y>_{\Emm{\mathbb A}\Phi} = <Vx|Vy>_{\mathbb B} \end{eqnarray*} It is easy to see that the adjoints $\EM_1: {\mathcal M}{\mathcal N}{\mathcal D}_1\to {\mathcal A}{\mathcal D}{\mathcal J}_1$ and $\KC_1: {\mathcal C}{\mathcal M}{\mathcal N}_1\to {\mathcal A}{\mathcal D}{\mathcal J}_1$ can be viewed as inclusions. To define $\MN_1: {\mathcal M}{\mathcal N}{\mathcal D}_1\to \mathsf{Nuc}_1$ and $\CN_1: {\mathcal C}{\mathcal M}{\mathcal N}_1\to \mathsf{Nuc}_1$, note that \[ <U^\ddag Ux\ |\ y>_{\Emc{\mathbb B}\Phi}\ \ =\ \ <Ux\ |\ Uy>_{\mathbb A}\ \ =\ \ <x\ |\ y>_{\Emc{\mathbb B}\Phi} \] Since the inner product of a finite-dimensional Hilbert space separates its vectors, this implies that $U^\ddag U = \mathrm{id}$, and $U^\ddag$ is thus a surjection. So we have two factorizations of $\Phi$ \begin{equation}\label{eq:facopthree} \begin{tikzar}[row sep = 2.5em,column sep = 6em] {\mathbb A} \ar[two heads]{dd}[swap]{\AC_1 (\Phi)} \ar[two heads]{r}{U^\ddag} \& \Emc{\mathbb B}\Phi \ar[dashed,two heads, tail]{ddl}[description]{\begin{minipage}{2cm}$\scriptstyle \CN_1\circ\AC_1(\Phi) = $\\[-.75ex] $\scriptstyle \MN_1\circ\AM_1(\Phi)$\end{minipage}} \ar[tail]{dd}{\AM_1(\Phi)} \\ \\ \Emm{\mathbb A}\Phi \ar[tail]{r}[swap]{V} \& {\mathbb B} \end{tikzar} \end{equation} The definitions of $\CN_1$ and $\MN_1$ for general objects of ${\mathcal C}{\mathcal M}{\mathcal N}_1$ and ${\mathcal M}{\mathcal N}{\mathcal D}_1$ proceed similarly, by factoring the adjoints. \subsection{Nucleus as matrix diagonalization}\label{Sec:diag} When the field $R$ supports spectral decomposition, the above factorizations can be performed directly on matrices. The nucleus of a matrix then arises as its diagonal form. In linear algebra, the process of nucleus extraction thus boils down to the Singular Value Decomposition (SVD) of a matrix \cite[Sec.~2.4]{Golub-vanLoan}, which is yet another tool of concept analysis \cite{Azar,LSA}. To set up this version of the nucleus setting, we take ${\mathcal A}{\mathcal D}{\mathcal J}_2 = \mathsf{Mat}_2 = \mathsf{Mat}_1$ and let \mbox{$\MA_2 : \mathsf{Mat}_2 \to {\mathcal A}{\mathcal D}{\mathcal J}_2$} be the identity.
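Before diagonalizing anything, the factorizations \eqref{eq:facop} and \eqref{eq:facopthree} can themselves be sketched numerically; the diagonalization below will then refine them. The following NumPy illustration is ours: the matrix is an arbitrary rank-2 example, and the orthonormal bases of the two rank spaces are extracted by QR decomposition (which suffices here because the leading columns happen to be linearly independent).
\begin{verbatim}
# Rank-space factorization (eq:facop) of a rank-2 operator Phi : R^4 -> R^3.
import numpy as np

Phi = np.array([[1., 2., 0., 1.],
                [0., 1., 1., 0.],
                [1., 3., 1., 1.]])          # third row = first + second
r = np.linalg.matrix_rank(Phi)              # r = 2
V = np.linalg.qr(Phi)[0][:, :r]             # basis of Emm_A(Phi) = im(Phi)
U = np.linalg.qr(Phi.T)[0][:, :r]           # basis of Emc_B(Phi) = im(Phi^ddag)

assert np.allclose(V.T @ V, np.eye(r))      # V and U are isometric embeddings,
assert np.allclose(U.T @ U, np.eye(r))      # i.e. U^ddag U = id, V^ddag V = id
assert np.allclose(V @ (V.T @ Phi), Phi)    # Phi = V o AM_1(Phi): mono o epi
assert np.allclose((Phi @ U) @ U.T, Phi)    # Phi = AC_1(Phi)^ddag o U^ddag
assert np.linalg.matrix_rank(V.T @ Phi @ U) == r   # middle map is invertible
\end{verbatim}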
The categories ${\mathcal M}{\mathcal N}{\mathcal D}_2$ and ${\mathcal C}{\mathcal M}{\mathcal N}_2$ will again be full subcategories of ${\mathcal A}{\mathcal D}{\mathcal J}_2$, this time spanned by \begin{eqnarray} {\mathcal M}{\mathcal N}{\mathcal D}_2 & = & \big\{\Phi\in{\sf Set}(\fin A \times \fin B, R) \ |\ < k |\rgt \Phi | \ell > = \lambda_k<k|\ell > \big\}\label{eq:mndtwo}\\ {\mathcal C}{\mathcal M}{\mathcal N}_2 & = & \big\{\Phi\in{\sf Set}(\fin A \times \fin B, R) \ |\ < i |\lft \Phi | j> = \lambda_j <i|j> \big\} \label{eq:cmntwo} \end{eqnarray} where \begin{itemize} \item $\rgt \Phi = \Phi\Phi^\ddag$ and $\lft \Phi = \Phi^\ddag \Phi$, with the entries $< k |\rgt \Phi | \ell> = \rgt \Phi_{k\ell}$ and $< i |\lft \Phi | j> = \lft \Phi_{ij}$, \item $<i|j> = \left.\begin{cases} 1 & \mbox{ if } i=j\\ 0 & \mbox{ otherwise}\end{cases}\right\}$, and \item $\lambda_k$ and $\lambda_j$ are scalars. \end{itemize} In the theory of Banach spaces, the operators that yield to this type of representation have been called nuclear since \cite{GrothendieckA:memAMS}. Hence our terminology. For finite-dimensional spaces, definitions (\ref{eq:mndtwo}-\ref{eq:cmntwo}) say that for a matrix $\Phi\in \mathsf{Mat}_2$ \begin{eqnarray*} \Phi \in {\mathcal M}{\mathcal N}{\mathcal D}_2& \iff & \rgt \Phi \mbox{ is diagonal} \\ \Phi \in {\mathcal C}{\mathcal M}{\mathcal N}_2 & \iff & \lft \Phi \mbox{ is diagonal} \end{eqnarray*} Since both $\lft \Phi$ and $\rgt \Phi$ are self-adjoint: \begin{alignat*}{7} <\Phi^\ddag\Phi a\ | \ a'>\ &= \ <\Phi a\ |\ \Phi a'> & = \ \ \ <\Phi^{\ddag\ddag} a\ |\ \Phi a'>\ \ \ & =\ <a\ |\ \Phi^\ddag\Phi a'> \\ < b\ | \ \Phi\Phi^\ddag b'>\ & = \ <\Phi^\ddag b\ |\ \Phi^\ddag b'>\ \ & =\ <\Phi^{\ddag} b\ |\ \Phi^{\ddag\ddag\ddag} b'>\ & =\ <\Phi^{\ddag\ddag}\Phi^{\ddag} b\ |\ b'> \ & =\ <\Phi\Phi^{\ddag} b\ |\ b'> \end{alignat*} their spectral decompositions yield real eigenvalues $\lambda$. Assuming for simplicity that each of their eigenvalues has a one-dimensional eigenspace, we define \begin{eqnarray} \Emm {\fin A}\Phi & = & \{|v >\in \vsp{\fin B}\ |\ <v |v > = 1\wedge \exists \lambda_v.\ \rgt \Phi|v > = \lambda_v |v >\}\\ \Emc {\fin B}\Phi & = & \{|u>\in \vsp{\fin A}\ |\ <u|u> = 1\wedge \exists \lambda_u.\ \lft \Phi|u> = \lambda_u|u>\} \end{eqnarray} Hence the matrices \begin{alignat*}{3} \Emc {\fin B}\Phi \times {\fin A} &\hspace{1em}\tto{\hspace{2em} \displaystyle U \hspace{1em}} & \hspace{2em} R \hspace{2em} & \oot{\hspace{2em} \displaystyle V \hspace{1.5em}} &\hspace{1em} \Emm {\fin A}\Phi \times \fin B\\[2ex] \Big<|u>, i \Big> & \hspace{2em}\longmapsto &\hspace{1em} u_i \hspace{3em} v_\ell & \hspace{2em}\mathrel{\reflectbox{\ensuremath{\longmapsto}}} & \Big<|v>, \ell \Big> \end{alignat*} which isometrically embed $\Emc {\fin B}\Phi$ into ${\mathbb A}=R^{\fin A}$ and $\Emm {\fin A}\Phi$ into ${\mathbb B}=R^{\fin B}$.
It is now straightforward to show that \mbox{$\AM_2: {\mathcal A}{\mathcal D}{\mathcal J}_2\to {\mathcal M}{\mathcal N}{\mathcal D}_2$} and \mbox{$\AC_2: {\mathcal A}{\mathcal D}{\mathcal J}_2\to {\mathcal C}{\mathcal M}{\mathcal N}_2$} are still given according to the schema in \eqref{eq:facop}, i.e.{} by \begin{eqnarray} \check\Phi\ =\ \AM_2(\Phi) & = & V^\ddag \Phi \\ \hat\Phi\ =\ \AC_2(\Phi) & = & \Phi U \end{eqnarray} They satisfy not only the requirements that $\check\Phi \check\Phi^\ddag$ and $\hat \Phi^\ddag \hat \Phi$ be diagonal, as required by \eqref{eq:mndtwo} and \eqref{eq:cmntwo}, but also \[ \check\Phi^\ddag \check \Phi = \Phi^\ddag \Phi = \lft \Phi \qquad \qquad\qquad \hat\Phi \hat\Phi^\ddag = \Phi \Phi^\ddag = \rgt \Phi\] Repeating the diagonalization process on each of them leads to the following refinement of \eqref{eq:facop}: \begin{equation}\label{eq:faccop} \begin{tikzar}[row sep = 3em,column sep = 5em] {\fin A} \ar{dd}[swap,pos=.61]{\Phi} \ar[two heads]{ddr}[swap,pos=.8]{\check \Phi} \& \Emc {\fin B}\Phi \ar[two heads,tail]{ddr}[swap,pos=.8]{\hat{\check \Phi}} \ar[dashed,two heads,tail,""{name=DI,right}]{dd}[description]{\begin{minipage}{1.5cm}\centering $\scriptstyle \MN_2(\hat\Phi)$ \\[-1ex] $=$ \\[-1ex] $\scriptstyle \CN_2(\check \Phi)$\end{minipage}} \ar[hookrightarrow]{l}[swap]{U} \ar[tail]{ddl}[swap,pos=.25]{\hat \Phi} \ar[leftrightarrow]{r}[description]{\sim} \& \left(\Emm {\fin A}\Phi\right)^{\rgt \Phi} \ar[two heads,tail,""{name =DIAG,left}]{dd} \ar[two heads,tail]{ddl}[swap,pos=.25]{\check{\hat \Phi}} \\ \\ \fin B \ar[two heads]{r}[swap]{V^\ddag} \& \Emm {\fin A}\Phi \ar[leftrightarrow]{r}[description]{\sim} \& \left(\Emc {\fin B}\Phi\right)^{\lft \Phi} \end{tikzar} \end{equation} This diagram displays a bijection between the eigenvectors in $\Emc {\fin B}\Phi$ and $\Emm {\fin A}\Phi$. The diagonal matrix between them is the nucleus of $\Phi$. The singular values along its diagonal measure, in a certain sense, how much the operators $\lft \Phi$ and $\rgt \Phi$, induced by composing $\Phi$ and $\Phi^\ddag$, deviate from being projectors onto the respective rank spaces. \subsection{Summary} The path from a matrix to its nucleus can now be summarized by \[ \prooftree \prooftree \prooftree \Phi: \fin A\times \fin B\to R \justifies \begin{tikzar}[column sep = 3em] \vsp{\fin A} \ar[bend right=20]{r}[swap]{\Phi} \& \vsp{\fin B} \ar[bend right=20]{l}[swap]{\Phi^{\ddag}} \end{tikzar} \endprooftree \justifies \begin{tikzar}[column sep = 3em] \vsp{\fin A} \ar[bend right=20,two heads,thin]{r} \& \Emm{\fin A}\Phi \ar[bend right=20,tail]{l}[swap]{U = \MndL_2\Phi} \end{tikzar} \qquad \qquad \qquad \qquad \begin{tikzar}{} \Emc{\fin B}\Phi \ar[bend left=20,tail]{r}{V = \CmnL_2\Phi} \& \vsp{\fin B} \ar[bend left=20,two heads,thin]{l} \end{tikzar}\endprooftree \justifies \begin{tikzar}[column sep = 4em,row sep = 2em] \fin B^{\rgt \Phi} \ar[tail,two heads]{r}{\NucL_2\Phi} \& \fin A^{\lft \Phi} \ar[tail,thin]{d}{V} \\ \vsp{\fin A} \ar[thin]{r}[swap]{\Phi} \ar[thin,two heads]{u}{U^\ddag} \&\vsp{\fin B} \end{tikzar} \endprooftree \] Note that the isomorphisms from \eqref{eq:ordis} are now replaced by the diagonal matrix $\NucL_2\Phi: \begin{tikzar}[column sep = 1.5em] \fin B^{\rgt \Phi} \ar[thin,tail,two heads]{r} \& \fin A^{\lft \Phi}\end{tikzar}$, which is still invertible as a linear operator, and provides a bijection between the bases $\fin B^{\rgt \Phi}$ and $\fin A^{\lft \Phi}$ of the rank spaces of $\Phi$ and of $\Phi^{\ddag}$, respectively.
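In NumPy, the whole of \eqref{eq:faccop} is delivered by a single call to the SVD. The following sketch is ours, with the same arbitrary rank-2 matrix as in the previous snippet; it exhibits the diagonalized $\rgt \Phi$ and $\lft \Phi$, and the diagonal nucleus between the two eigenbases.
\begin{verbatim}
# SVD as nucleus extraction: Phi = W diag(s) Zt, cf. (eq:faccop).
import numpy as np

Phi = np.array([[1., 2., 0., 1.],
                [0., 1., 1., 0.],
                [1., 3., 1., 1.]])
W, s, Zt = np.linalg.svd(Phi)
r = int(np.sum(s > 1e-12))                  # rank = nonzero singular values
V, U = W[:, :r], Zt[:r].T                   # eigenvectors of rgt/lft Phi

assert np.allclose(V.T @ (Phi @ Phi.T) @ V, np.diag(s[:r] ** 2))  # rgt Phi
assert np.allclose(U.T @ (Phi.T @ Phi) @ U, np.diag(s[:r] ** 2))  # lft Phi
assert np.allclose(V.T @ Phi @ U, np.diag(s[:r]))  # the diagonal nucleus
\end{verbatim}
The diagonal entries are precisely the singular values whose conceptual role is discussed next.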
But the singular values along the diagonal of $\NucL_2\Phi$ quantify the relationships between the corresponding elements of $\fin B^{\rgt \Phi}$ and $\fin A^{\lft \Phi}$. This is, on the one hand, the essence of concept analysis by singular value decomposition \cite{landauer1997solution}. Even richer conceptual correspondences will, on the other hand, emerge in further examples. \subsection{Abstract matrices}\label{Sec:chu-mat} So far we have considered matrices in specific frameworks, first of posets, then of Hilbert spaces. In this section, we broaden the view, and study an abstract framework of matrices. Suppose that ${\mathcal S}$ is a category with finite products, $R\in {\mathcal S}$ is an object, and $\fin {\mathcal S}\subseteq {\mathcal S}$ is a full subcategory. The objects of $\fin {\mathcal S}$ are also marked by a dot, and are thus written $\fin A, \fin B,\ldots, \fin X\in \fin {\mathcal S}$. Now consider the following variation on the theme of \eqref{eq:matrp} and \eqref{eq:Matone}: \begin{eqnarray}\label{eq:Matthree} |\mathsf{Mat}_3| & = & \coprod_{\fin A, \fin B \in \fin {\mathcal S}}{\mathcal S}(\fin{A}\times \fin{B}, R)\\ \mathsf{Mat}_3(\Phi,\Psi) & = & \left\{<\ladj f, \radj f>\in \fin{\mathcal S}(\fin A, \fin C) \times \fin{\mathcal S}(\fin D, \fin B)\ | \ \Phi(a, \radj f d) = \Psi(\ladj f a,d) \right\} \notag \end{eqnarray} \begin{figure}[!ht] \begin{center} \begin{tikzar}[column sep = 1em, row sep = 1.8em] \& \fin A \times \fin D \ar[thin]{dl}[swap]{\fin A\times \radj f} \ar[thin]{dr}{\ladj f \times \fin D}\\[1ex] \fin A\times \fin B\ar{ddr}[description]{\Phi} \&\tto{ \ f \ } \& \fin C\times \fin D\ar{ddl}[description]{\Psi}\\ \\ \& R \end{tikzar} \caption{A Chu-morphism $f = <\ladj f, \radj f>:\Phi\to \Psi$ in $\mathsf{Mat}_3$} \label{Fig:chumorph} \end{center} \end{figure} where $\Psi \in {\mathcal S}(\fin{C}\times \fin{D}, R)$, as illustrated in Fig.~\ref{Fig:chumorph}. We consider a couple of examples. \subsubsection{Posets}\label{Sec:chu-mat-pos} Let the category ${\mathcal S}= \fin {\mathcal S}$ be the category $\mathsf{Pos}$ of posets, and let $R$ be the poset $[1] = \{0 < 1\}$. The poset matrices in $\mathsf{Mat}_3^\mathsf{Pos}$ then differ from those in $\mathsf{Mat}_0$ by the fact that they are covariant in both arguments, i.e. they satisfy $a' \widehat \Phi b'\ \wedge\ a' \leq a \ \wedge\ b' \leq b \ \implies \ a\widehat \Phi b $ instead of \eqref{eq:monotone}. Any poset $A$ is represented both in $\mathsf{Mat}_0$ and in $\mathsf{Mat}_3^\mathsf{Pos}$ by the matrix $\big({\stackrel A\leq}\big): A^{o} \times A \to [1]$. But they are quite different objects in the different categories. If $\big( {\stackrel B\leq}\big): B^{o} \times B \to [1]$ is another such matrix, then \begin{itemize} \item in $\mathsf{Mat}_0$, a morphism in the form $<h,k>$ is required to satisfy $x \stackrel A\leq x' \iff hx \stackrel B\leq kx'$ for all $x,x'\in A$, whereas \item in $\mathsf{Mat}_3^\mathsf{Pos}$, a morphism in the form $<\ladj f, \radj f>$ is required to satisfy $x\leq \radj f y \iff \ladj f x\leq y$ for all $x\in A$ and $y\in B$. \end{itemize} The $\mathsf{Mat}_3^\mathsf{Pos}$-morphisms between such matrices are thus the poset adjunctions (a.k.a.{} Galois connections), whereas the $\mathsf{Mat}_0$-morphisms in the form $<h,h>$ are the order embeddings.
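The Chu-morphism condition of Fig.~\ref{Fig:chumorph} is easy to evaluate mechanically. Here is a minimal Python check on two ad hoc $\{0,1\}$-matrices; all names and data below are ours, chosen only so that the condition holds.
\begin{verbatim}
# Chu-morphism condition: Phi(a, f_lower(d)) = Psi(f_upper(a), d).
Phi = {("a0","b0"): 1, ("a0","b1"): 1, ("a1","b0"): 0, ("a1","b1"): 1}
Psi = {("c0","d0"): 1, ("c1","d0"): 1}
f_upper = {"a0": "c0", "a1": "c1"}      # covariant leg      A -> C
f_lower = {"d0": "b1"}                  # contravariant leg  D -> B

assert all(Phi[a, f_lower[d]] == Psi[f_upper[a], d]
           for a in f_upper for d in f_lower)
\end{verbatim}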
\subsubsection{Linear spaces}\label{Sec:chu-mat-lin} Let ${\mathcal S}$ be the category ${\sf Set}$ of sets, $\fin {\mathcal S}$ the category $\fin{\mathsf S}\mathsf{et}$ of finite sets, and let $R$ be the set of real numbers. Then the objects of $\mathsf{Mat}_3^\mathsf{Lin}$ are the real matrices, just like in $\mathsf{Mat}_1$, but the morphisms in $\mathsf{Mat}_3^\mathsf{Lin}$ are a very special case of those in $\mathsf{Mat}_1$. A $\mathsf{Mat}_1$-morphism $<H,K>$ from \eqref{eq:Matone} boils down to a pair of functions $<\ladj f, \radj f>$ from \eqref{eq:Matthree} precisely when the matrices $H$ and $K$ consist of 0s, except that $H$ has exactly one 1 in every row, and $K$ has exactly one 1 in every column. With such constrained morphisms, $\mathsf{Mat}_3^\mathsf{Lin}$ does not support the factorizations on which the constructions in $\mathsf{Mat}_1$ were based. The completions will afford it more flexible morphisms. $\mathsf{Mat}_1$'s morphisms are already complete matrices, which is why we were able to take ${\mathcal A}{\mathcal D}{\mathcal J}_2 = \mathsf{Mat}_2 = \mathsf{Mat}_1$. \subsubsection{Categories}\label{Sec:chu-mat-cat} Let ${\mathcal S}$ be the category $\mathsf{CAT}$ of categories, small or large; let $R$ be the category ${\sf Set}$ of sets; and let $\fin {\mathcal S}$ be the category $\mathsf{Cat}$ of small categories. The matrices in $\mathsf{Mat}_3^\mathsf{CAT}$ are then distributors \cite[Vol. I, Sec.~7.8]{BorceuxF:handbook}, also called profunctors, or bimodules. The $\mathsf{Mat}_3^\mathsf{CAT}$-morphisms are generalized adjunctions, as discussed in \cite{PavlovicD:CALCO15}. Any small category $\fin{\mathbb A}$ occurs as the matrix $\hom_{\fin{\mathbb A}} \in \mathsf{CAT}(\fin{\mathbb A}^{o}\times \fin{\mathbb A}, {\sf Set})$ in $\mathsf{Mat}_3^\mathsf{CAT}$. The $\mathsf{Mat}_3^\mathsf{CAT}$-morphisms between the matrices in the form $\hom_{\fin {\mathbb A}}$ and $\hom_{\fin{\mathbb B}}$ are precisely the adjunctions between the categories $\fin{\mathbb A}$ and $\fin{\mathbb B}$. \subsection{Representability and completions}\label{Sec:representability} A matrix $\Phi:\fin A\times \fin B \to R$ is said to be \emph{representable}\/ when there are matrices ${\mathbb A}:\fin A\times \fin A \to R$ and ${\mathbb B}:\fin B\times \fin B \to R$ and a morphism $f = <\ladj f, \radj f> \in \mathsf{Mat}_3({\mathbb A}, {\mathbb B})$ such that $\Phi = {\mathbb A}\circ(\fin A\times \radj f) = {\mathbb B}\circ(\ladj f\times \fin B)$. \begin{figure}[!ht] \begin{center} \begin{tikzar}[column sep = .5em, row sep = 2.5em] \&\& \fin A \times \fin B \ar[thin]{dll}[swap]{\mathrm{id}\times \radj f} \ar[thin]{d}[description]{\mathrm{id} \times \mathrm{id}} \ar[thin]{drr}{\ladj f \times \mathrm{id}}\\ \fin A\times \fin A\ar{ddrr}[description]{{\mathbb A}} \&\eepi{<\mathrm{id},\radj f>} \&\fin A \times \fin B\ar{dd}[description]{\Phi}\&\mmono{<\ladj f,\mathrm{id}>}\& \fin B \times \fin B\ar{ddll}[description]{{\mathbb B}}\\ \\ \&\& R \end{tikzar} \caption{A matrix $\Phi$ representable in $\mathsf{Mat}_3$ by factoring $<\ladj f, \radj f> = \left({\mathbb A} \tto{<\mathrm{id}, \radj f>} \Phi \tto{<\ladj f, \mathrm{id}>}{\mathbb B}\right)$} \label{Fig:representable} \end{center} \end{figure} Inside the category $\mathsf{Mat}_3$, this means that the morphism $f$ can be factorized through $\Phi$, as displayed in Fig.~\ref{Fig:representable}. 
Inside $\mathsf{Mat}_3^\mathsf{CAT}$, a distributor $\Phi:\fin {\mathbb A}^{o} \times {\mathbb B}\to {\sf Set}$ is representable if and only if there is an adjunction $\adj F:{\mathbb B}\to {\mathbb A}$ such that ${\mathbb A}(x, \radj F y) = \Phi(x,y) = {\mathbb B}(\ladj Fx, y)$. \subsection{Abstract adjunctions} In the category of adjunctions ${\mathcal A}{\mathcal D}{\mathcal J}_3$, all matrices from $\mathsf{Mat}_3$ become representable. This is achieved by dropping the "finiteness" requirement $\fin A, \fin B, \fin C, \fin D\in \fin {\mathcal S}$ from $\mathsf{Mat}_3$, and defining \begin{eqnarray}\label{eq:Adjthree} |{\mathcal A}{\mathcal D}{\mathcal J}_3 | & = & \coprod_{A,B\in {\mathcal S}} {\mathcal S}(A\times B, R)\\ {\mathcal A}{\mathcal D}{\mathcal J}_3(\Phi, \Psi) & = & \{<\ladj f,\radj f>\in {\mathcal S}(A,C)\times {\mathcal S}(D,B) \ |\ \Psi(\ladj f a, d) = \Phi (a, \radj f d) \} \notag \end{eqnarray} \subsubsection{The Chu-construction}\label{Sec:SE} The readers familiar with the Chu-construction will recognize ${\mathcal A}{\mathcal D}{\mathcal J}_3$ as $\mathsf{Chu}({\mathcal S},R)$. The Chu-construction is a universal embedding of monoidal categories with a chosen dualizing object into $\ast$-autonomous categories. It was spelled out by Barr and his student Chu \cite{BarrM:staac}, and extensively studied in topological duality theory and in semantics of linear logic \cite{BarrM:star-linlog,BarrM:chu96,BarrM:separated,BarrM:chu-history,HughesD:chu,MackeyG:duality,PavlovicD:chuI,PrattV:chu}. Its conceptual roots go back to the early studies of infinite-dimensional vector spaces \cite{MackeyG:duality}. Our category $\mathsf{Mat}_3$ can be viewed as a "finitary" part of a Chu-category, where an abstract notion of "finiteness" is imposed by requiring that the matrices are sized by a "finite" category $\fin {\mathcal S} \subset {\mathcal S}$. \subsubsection{Representing matrices as adjunctions} The functor $\MA_3 :\mathsf{Mat}_3 \to {\mathcal A}{\mathcal D}{\mathcal J}_3$ will be the obvious embedding. When $\fin {\mathcal S} = {\mathcal S}$, it boils down to the identity. The difference between \eqref{eq:Matthree} and \eqref{eq:Adjthree} is technically, of course, a minor wrinkle. But when the object $R$ is exponentiable, in the sense that there is a functor $\du{(-)} : \fin {\mathcal S}^{o} \to {\mathcal S}$ such that \begin{eqnarray}\label{eq:dual} {\mathcal S}(\fin A\times \fin B, R) & \cong & {\mathcal S}(\fin A, \du{\fin B}) \end{eqnarray} holds naturally in $\fin A$ and $\fin B$, then the $\mathsf{Mat}_3$-matrices can be represented as ${\mathcal A}{\mathcal D}{\mathcal J}_3$-morphisms. Each matrix appears in four avatars \begin{alignat}{5} {\mathcal S}({\fin A}, \du {\fin B}) &\ \cong\ \ & {\mathcal S}({\fin A}\times {\fin B}, R) &\ \cong\ \ & {\mathcal S}({\fin B}\times {\fin A}, R) &\ \cong\ & {\mathcal S}({\fin B}, \du {\fin A}) \notag \\[-.5ex] \mathbin{\rotatebox[origin=c]{90}{$\in$}}\hspace{1.5em} && \mathbin{\rotatebox[origin=c]{90}{$\in$}} \hspace{1.5em}&& \mathbin{\rotatebox[origin=c]{90}{$\in$}} \hspace{1.8em}&& \mathbin{\rotatebox[origin=c]{90}{$\in$}} \hspace{2em} \label{eq:avatars}\\[-.5ex] \ladj \Phi \hspace{1em}&& \Phi\hspace{1.5em} && \Phi^\ddag \hspace{1.2em} && \radj \Phi \hspace{1.5em}\notag \end{alignat} and the leftmost and the rightmost represent it as the abstract adjunction in Fig.~\ref{Fig:ni}. 
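\para{Example.} In the linear case of Sec.~\ref{Sec:chu-mat-lin}, the four avatars of \eqref{eq:avatars} are just a real matrix, its transpose, and the two curried forms, sending a basis element to its row or to its column. A minimal Python sketch (ours; the array is arbitrary): \begin{verbatim}
import numpy as np

Phi = np.array([[0., 1., 2.],
                [3., 4., 5.]])   # Phi : A x B -> R,  |A| = 2, |B| = 3

ladj = lambda a: Phi[a, :]       # A -> R^B : a  |->  the a-th row
radj = lambda b: Phi[:, b]       # B -> R^A : b  |->  the b-th column
Phi_dag = Phi.T                  # the transposed avatar  B x A -> R

assert np.array_equal(ladj(1), Phi_dag[:, 1])
assert np.array_equal(radj(2), Phi_dag[2, :])
\end{verbatim}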
\begin{figure}[!ht] \begin{center} \begin{tikzar}[column sep = .2em, row sep = 1.8em] \& \fin A \times \fin B \ar{ddd}[description,pos=0.7]{\Phi} \ar[thin]{dl}[swap]{\fin A\times \radj \Phi} \ar[thin]{dr}{\ladj \Phi \times \fin B}\\ \fin A\times \du{\fin A}\ar{ddr}[description]{\in} \&\& \du {\fin B}\times \fin B\ar{ddl}[description]{\ni}\\ \\ \& R \end{tikzar} \caption{The adjunction $\left(\adj \Phi \right) \in {\mathcal A}{\mathcal D}{\mathcal J}_3(\in_{\fin A}, \ni_{\fin B})$ representing the matrix $\Phi: \fin A\times \fin B\to R$ from $\mathsf{Mat}_3$ } \label{Fig:ni} \end{center} \end{figure} The objects $\du{\fin A}$ and $\du{\fin B}$, which live in ${\mathcal S}$ but not in $\fin {\mathcal S}$, will play a similar role to $\Do A$ and $\Up B$ in Sec.~\ref{Sec:FCA}, and to the eponymous Hilbert spaces in Sec.~\ref{Sec:lin}. They are the abstract "completions". We come back to this in Sec.~\ref{Sec:diagchu}. \subsubsection{Separated and extensional adjunctions} The correspondences in \eqref{eq:avatars} assert that any matrix \mbox{$\Phi:A\times B\to R$} can be viewed as \begin{itemize} \item a map $A\tto{\ladj \Phi} \du B$, assigning a "matrix row" $\ladj \Phi(a)$ to each basis element $a\in A$; \item a map $B\tto{\radj \Phi} \du A$, assigning a "matrix column" $\radj \Phi(b)$ to each basis element $b\in B$. \end{itemize} The elements $a$ and $a'$ are indistinguishable for $\Phi$ if $\ladj\Phi(a) = \ladj\Phi(a')$; and the elements $b$ and $b'$ are indistinguishable for $\Phi$ if $\radj\Phi(b) = \radj\Phi(b')$. The idea of Barr's \emph{separated-extensional}\ Chu construction \cite{BarrM:star-linlog,BarrM:separated} is to quotient out any indistinguishable elements. A Chu space is called \begin{itemize} \item \emph{separated}\/ if $\ladj\Phi(a) = \ladj\Phi(a')\ \Rightarrow\ a=a'$, and \item \emph{extensional}\/ if $\radj\Phi(b) = \radj\Phi(b')\ \Rightarrow\ b=b'$. \end{itemize} To formalize this idea, we assume the category ${\mathcal S}$ is given with a family ${\mathcal M}$ of abstract monics, so that $\Phi$ is separated if $\ladj \Phi\in {\mathcal M}$ and extensional if $\radj \Phi \in {\mathcal M}$. To extract such an ${\mathcal M}$-separated-extensional nucleus from any given $\Phi$, the family ${\mathcal M}$ is given as a part of a \emph{factorization system}\/ ${\mathcal E}\wr {\mathcal M}$, such that $\du{\mathcal E} \subseteq {\mathcal M}$. For convenience, an overview of factorization systems is given in Appendix~\ref{appendix:factorizations}. The construction yields an instance of Fig.~\ref{Fig:Nuc} for the full subcategories of ${\mathcal A}{\mathcal D}{\mathcal J}_3$ defined by \begin{alignat}{3} {\mathcal M}{\mathcal N}{\mathcal D}_3 = & \big\{\Phi\in{\mathcal A}{\mathcal D}{\mathcal J}_3 \ |\ \ladj \Phi \in {\mathcal M} \big\} &=\ \mathsf{Chu}_{s}({\mathcal S},R)\label{eq:mndthree}\\ {\mathcal C}{\mathcal M}{\mathcal N}_3 = & \big\{\Phi\in{\mathcal A}{\mathcal D}{\mathcal J}_3 \ |\ \radj \Phi \in {\mathcal M} \big\} &=\ \mathsf{Chu}_{e}({\mathcal S},R) \label{eq:cmnthree}\\ \mathsf{Nuc}_3 = & \big\{\Phi\in{\mathcal A}{\mathcal D}{\mathcal J}_3 \ |\ \ladj \Phi, \radj \Phi \in {\mathcal M} \big\}\ \ &=\ \mathsf{Chu}_{se}({\mathcal S},R) \label{eq:nucthree} \end{alignat} where $\mathsf{Chu}_{s}({\mathcal S},R)$ and $\mathsf{Chu}_{e}({\mathcal S},R)$ are the full subcategories of $\mathsf{Chu}({\mathcal S},R)$ spanned, respectively, by the separated and the extensional Chu spaces, as constructed in \cite{BarrM:star-linlog,BarrM:separated}. 
The reflections and coreflections, induced by the factorization, have been analyzed in detail there. The separated-extensional nucleus of a matrix is constructed through the factorizations displayed in Fig.~\ref{Fig:sefac}, where we use Barr's notation. The functor $\AM_3$ corresponds to Barr's $\mathsf{Chu}_s$, the functor $\AC_3$ to $\mathsf{Chu}_e$. \begin{figure}[htbp] \begin{center} \[ \prooftree \prooftree \prooftree \begin{tikzar}{}A\times B \ar{r}{\Phi}\& R\end{tikzar} \justifies \begin{tikzar}[column sep = 4em] A \ar{r}{{\ladj \Phi}}\& \du B\end{tikzar} \qquad \quad\begin{tikzar}[column sep = 4em]B \ar{r}{{\radj \Phi}}\& \du A\end{tikzar} \endprooftree \justifies \begin{tikzar}[column sep = 4em] A\ar[two heads]{r}{{\mathcal E}({\ladj \Phi})}\&A' \ar[tail]{r}{\mathsf{Chu}_s(\Phi)} \& \du B\end{tikzar} \qquad \quad \begin{tikzar}[column sep = 4em] B\ar[two heads]{r}{{\mathcal E}({\radj \Phi})}\&B' \ar[tail]{r}{\mathsf{Chu}_e(\Phi)} \& \du A\end{tikzar} \endprooftree \justifies \begin{tikzar}[column sep = 4em] B\ar[two heads]{r}{{\mathcal E}(\mathsf{Chu}_s(\Phi))}\& B'' \ar[tail]{r}{\mathsf{Chu}_{s\!e}(\Phi)} \& \du{A'} \end{tikzar} \qquad \quad \begin{tikzar}[column sep = 4em] A\ar[two heads]{r}{{\mathcal E}(\mathsf{Chu}_e(\Phi))}\& A'' \ar[tail]{r}{\mathsf{Chu}_{e\!s}(\Phi)} \&\du{B'} \end{tikzar} \endprooftree \] \caption{Overview of the separated-extensional Chu construction} \label{Fig:sefac} \end{center} \end{figure} Proving that $A'\cong A''$ and $B'\cong B''$ gives the nucleus $\mathsf{Chu}_{se}(\Phi) = \mathsf{Chu}_{es}(\Phi)$ in $\mathsf{Nuc}_3$. \subsection{What does the separated-extensional nucleus capture in the examples of Sec.~\ref{Sec:chu-mat}?} \subsubsection{Posets} Restricted to the poset matrices in the form $A^{o} \times B\tto\Phi [1]$, as explained in Sec.~\ref{Sec:chu-mat-pos}, the separated-extensional nucleus construction gives the same output as the concept lattice construction in Sec.~\ref{Sec:FCA}. The factorizations $\mathsf{Chu}_s$ and $\mathsf{Chu}_e$ in Fig.~\ref{Fig:sefac} correspond to the extensions $\ladj \Phi$ and $\radj \Phi$ in \eqref{eq:galois-pos}. \subsubsection{Linear spaces} Extended from finite bases to the entire spaces generated by them, the Chu view of the linear algebra example in \ref{Sec:chu-mat-lin} captures the rank space factorization and $\mathsf{Nuc}_1$, but the spectral decomposition into $\mathsf{Nuc}_2$ requires a suitable completeness assumption on $R$. \subsubsection{Categories} The separated-extensional nucleus construction does not seem applicable to the categorical example in \ref{Sec:chu-mat-cat} directly, as none of the familiar functor factorization systems satisfies the requirement $\du{\mathcal E}\subseteq {\mathcal M}$. This provides an opportunity to explore the role of factorizations in extracting the nuclei. In Sec.~\ref{Sec:diagchu} we explore a variation on the theme of the factorization-based nucleus. In Sec.~\ref{Sec:churan} we spell out a modified version of the separated-extensional nucleus construction that does apply to the categorical example in \ref{Sec:chu-mat-cat}. \subsection{Discussion: Combining factorization-based approaches}\label{Sec:diagchu} Some factorization-based nuclei, in the situations when the requirement $\du{\mathcal E}\subseteq {\mathcal M}$ is not satisfied, arise from a combination of the separated-extensional construction from Sec.~\ref{Sec:SE} and the diagonalization factoring from Sec.~\ref{Sec:lin}. 
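\para{Example.} Before analyzing how the nuclei depend on the chosen factorizations, it may help to fix intuitions with the simplest instance of the separated-extensional reduction: a finite Chu space over $R=\{0,1\}$ in ${\mathcal S}={\sf Set}$, i.e. a 0/1-matrix, with the surjection-injection factorization of functions. The following Python sketch is our own illustration, and the helper names are ad hoc: \begin{verbatim}
import numpy as np

Phi = np.array([[1, 0, 1, 0],
                [1, 0, 1, 0],     # row 1 is indistinguishable from row 0
                [0, 1, 0, 1]])

def separate(M):                  # quotient out duplicate rows
    _, idx = np.unique(M, axis=0, return_index=True)
    return M[np.sort(idx)]

def extensionalize(M):            # quotient out duplicate columns
    return separate(M.T).T

nucleus = extensionalize(separate(Phi))
print(nucleus)                    # a 2 x 2 matrix; both collapses performed
\end{verbatim}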
\subsubsection{How do nuclei depend on factorizations?} As explained in the Appendix, every factorization system ${\mathcal E}\wr {\mathcal M}$ in any category ${\mathcal S}$ can be viewed as an algebra for the $\Arr$-monad, where $\Arr({\mathcal S})= {\mathcal S}\diagup {\mathcal S}$ is the category consisting of the ${\mathcal S}$-arrows as objects, and the pairs of arrows forming commutative squares as the morphisms. An arbitrary factorization system ${\mathcal E}\wr {\mathcal M}$ on ${\mathcal S}$ thus corresponds to an algebra $\mbox{\Large$\wr$}:{\mathcal S}\diagup {\mathcal S} \to {\mathcal S}$; and a factorization system that satisfies the requirements for the separated-extensional Chu construction lifts to an algebra $\mbox{\Large$\wr$}:{\mathcal A}{\mathcal D}{\mathcal J}_3\diagup {\mathcal A}{\mathcal D}{\mathcal J}_3 \to {\mathcal A}{\mathcal D}{\mathcal J}_3$. To see this, note that the natural bijection ${\mathcal S}(A\times B, R) \cong {\mathcal S}(A, \du B)$ induces an isomorphism of ${\mathcal A}{\mathcal D}{\mathcal J}_3 = \mathsf{Chu}({\mathcal S},R)$ with the comma category $\SSSR={\mathcal S}\diagup \du{(-)}$, whose arrows are in the form \begin{equation}\label{eq:squares} \begin{tikzar}{} A \ar{r}{\ladj f} \ar{d}[swap]{\varphi} \& C \ar{d}{\psi}\\ \du B \ar{r}{\du{\radj f}} \& \du D\\[-3ex] B \& \ar{l}{f} D \end{tikzar} \end{equation} Such squares permit ${\mathcal E}\wr{\mathcal M}$-factorization whenever $\du{\mathcal E}\subseteq {\mathcal M}$. If we now set \begin{eqnarray} \mathsf{Mat}_4 & = & {\mathcal A}{\mathcal D}{\mathcal J}_3\\ {\mathcal A}{\mathcal D}{\mathcal J}_4 & = & {\mathcal A}{\mathcal D}{\mathcal J}_3 \diagup {\mathcal A}{\mathcal D}{\mathcal J}_3 \label{eq:Adjfour} \end{eqnarray} then the isomorphism ${\mathcal A}{\mathcal D}{\mathcal J}_3 \cong \SSSR$ lifts to an isomorphism ${\mathcal A}{\mathcal D}{\mathcal J}_4 \cong \SSSR \diagup \SSSR$. The objects of ${\mathcal A}{\mathcal D}{\mathcal J}_4$ can thus be viewed as the squares in the form \eqref{eq:squares}, and the object part of the abstract completion functor $\MA_4:\mathsf{Mat}_4 \to {\mathcal A}{\mathcal D}{\mathcal J}_4$ can be defined as in Fig.~\ref{Fig:lifting}. 
\begin{figure}[!ht] \begin{center} \begin{eqnarray*} \mathsf{Mat}_4 \hspace{4em} & \begin{tikzar}[column sep = 3em] \hspace{.2em} \ar{r}{\MA_4} \& \hspace{.2em} \end{tikzar} & \hspace{3em} {\mathcal A}{\mathcal D}{\mathcal J}_4 \\[1ex] \begin{tikzar}[column sep = .2em, row sep = 1em] \& A \times D \ar[thin]{dl}[swap]{A\times \radj f} \ar[thin]{dr}{\ladj f \times D}\\ A\times B\ar{dddr}[description]{\Phi} \&\tto{ \ f \ } \& C\times D\ar{dddl}[description]{\Psi}\\ \\ \\ \& R \end{tikzar}& \begin{tikzar}[column sep = 1em] \hspace{.1em} \ar[mapsto]{r} \& \hspace{.1em} \end{tikzar} & \begin{tikzar}[row sep=2ex,column sep = 2em] \du{\du{ A}} \ar[thin]{rrrr}{\du{\du{\ladj f}}} \ar{dddd}[swap]{\du{\radj \Phi}}\&\&\& \&\du{\du{C}} \ar{dddd}{\du{\radj \Psi}} \\ \& A \ar{ul}[swap]{\eta} \ar[thin]{rr}{\ladj f} \ar{dd}{\ladj \Phi} \&\& C \ar{ur}{\eta} \ar{dd}[swap]{\ladj \Psi} \\ \ar[phantom]{r}[description]{\MA_4 (\Phi)}\&\hspace{.1ex}\& \tto{ \ \MA_4 f \ } \& \ar[phantom]{r}[description]{\MA_4 (\Psi)}\& \hspace{.1ex} \\ \& \du B \ar{dl}{\mathrm{id}} \ar[thin]{rr}[swap]{\du{\radj f}} \&\& \du D \ar{dr}[swap]{\mathrm{id}} \\ \du B \ar[thin]{rrrr}[swap]{\du{\radj f}}\&\&\&\& \du D\end{tikzar} \end{eqnarray*} \caption{The abstract completion functor $\MA_4: \mathsf{Mat}_4\to {\mathcal A}{\mathcal D}{\mathcal J}_4$ } \label{Fig:lifting} \end{center} \end{figure} One immediate consequence is that the two factorization steps of the two-step separated-and-extensional construction $\NucL_{3} = \mathsf{Chu}_{se}$, summarized in Fig.~\ref{Fig:sefac}, can now be obtained in a single sweep, by directly composing the completion with the factorization \begin{eqnarray}\label{eq:nuc-sweep} \NucL_{3} & = & \Big({\mathcal A}{\mathcal D}{\mathcal J}_3 \tto{\MA_4} {\mathcal A}{\mathcal D}{\mathcal J}_3 \diagup {\mathcal A}{\mathcal D}{\mathcal J}_3 \tto{\mbox{\Large$\wr$}} {\mathcal A}{\mathcal D}{\mathcal J}_3\Big) \end{eqnarray} The fixed points of this functor are just the separated-extensional nuclei. This is, of course, just another presentation of the same thing; and perhaps a wrongheaded one, as it folds the two steps of the nucleus construction into one. These two steps are displayed as the two paths from left to right through Fig.~\ref{Fig:Nuc}, corresponding to the two orders in which the steps can be taken; and of course as the separated part and the extensional part of the separated-extensional Chu-construction. The commutativity of the two steps is, in a sense, the heart of the matter. However, packaging a nucleus construction into one step allows packaging two such constructions into one. What might that be useful for? When ${\mathcal S}$ is, say, a category of topological spaces, and ${\mathcal E}\wr {\mathcal M}$ the dense-closed factorization, then it may happen that the separated-extensional nucleus of a space is much bigger than the original space. 
If the nucleus ${\NucL_3 \Phi}:A'\times B'\to R$ of a matrix $\Phi:A\times B\to R$ is constructed by factoring $A\tto{\ladj \Phi}\du B$ and $B\tto{\radj\Phi}\du A$ into \[\begin{tikzar}[column sep = 1.5em] A\ar[two heads]{r}\& A' \ar[tail]{rr}{\ladj{\NucL_3\Phi}} \&\&\du{B'} \ar[tail]{r}\& \du B \end{tikzar}\qquad \qquad \begin{tikzar}[column sep = 1.5em] B\ar[two heads]{r}\& B' \ar[tail]{rr}{\radj{\NucL_3\Phi} }\& \& \du{A'} \ar[tail]{r} \&\du A \end{tikzar} \] as in Fig.~\ref{Fig:sefac}, then $A$ and $B$ can be dense spaces of rational numbers, and $A'$ and $B'$ can be their closures in the space of real numbers, representable within both $\du A$ and $\du B$ for a cogenerator $R$. The same effect occurs if we take ${\mathcal S}$ to be posets, and in many other situations where the ${\mathcal E}$-maps are not quotients. One way to sidestep the problem might be to strengthen the requirements. \subsubsection{Exercise} Given a matrix $A\times B\tto\Phi R$, find a nucleus $A'\times B'\tto{L\Phi}R$ such that \begin{enumerate}[(a)] \item $A\twoheadrightarrow A'$ and $B\twoheadrightarrow B'$ are quotients, whereas \item $A'\mmono{\ladj \Phi} \du{B'}$ and $B'\mmono{\radj \Phi} \du{A'}$ are closed embeddings. \end{enumerate} Requirement (b) is from the separated-extensional construction in Sec.~\ref{Sec:SE}, whereas requirement (a) is from the diagonalization factoring in Sec.~\ref{Sec:lin}. \subsubsection{Workout}\label{Sec:se-double} Suppose that the category ${\mathcal S}$ supports two factorization systems: \begin{itemize} \item ${\mathcal E}\wr {\mathcal M}^{\bullet}$, where ${\mathcal M}^{\bullet}\subseteq {\mathcal M}$ are the regular monics (embeddings, equalizers), and \item ${\mathcal E}^{\bullet}\wr {\mathcal M}$, where ${\mathcal E}^{\bullet}\subseteq {\mathcal E}$ are the regular epis (quotients, coequalizers). \end{itemize} In balanced categories, these factorizations would coincide, because ${\mathcal M}^{\bullet}={\mathcal M}$ and ${\mathcal E}^{\bullet}= {\mathcal E}$, and we would be back to the situation where the separated-extensional construction applies. In general, the two factorizations can be quite different, as in the category of topological spaces. Nevertheless, since homming into the exponentiable object $R$ is a contravariant right adjoint functor, it maps coequalizers to equalizers. Assuming that $R$ is an injective cogenerator, it also maps general epis to monics, and vice versa. So we have \begin{equation} \du{{\mathcal E}^{\bullet}} \subseteq {\mathcal M}^{\bullet} \qquad\qquad \qquad \du{{\mathcal E}}\subseteq {\mathcal M} \qquad\qquad \qquad \du{{\mathcal M}}\subseteq {\mathcal E} \end{equation} However, ${\mathcal E}^{\bullet}$ and ${\mathcal M}^{\bullet}$ generally do not form a factorization system, because there are maps that do not have a quotient-embedding decomposition; and ${\mathcal E}$ and ${\mathcal M}$ do not form a factorization system because there are maps whose epi-mono decomposition is not unique. The factorization ${\mathcal E}^{\bullet}\wr {\mathcal M}$ does satisfy $\du{{\mathcal E}^{\bullet}}\subseteq {\mathcal M}$, but does not lift from ${\mathcal S}\diagup {\mathcal S}\to {\mathcal S}$ to $\mathsf{Chu}\diagup \mathsf{Chu}\to \mathsf{Chu}$. 
Our next nucleus setting will be full subcategories again: \begin{eqnarray} {\mathcal M}{\mathcal N}{\mathcal D}_4 & = & \big\{<\ladj f, \radj f> \in{\mathcal A}{\mathcal D}{\mathcal J}_4 \ |\ \ladj f \in {\mathcal M}, \radj f \in {\mathcal E} \big\}\label{eq:mndfour}\\ {\mathcal C}{\mathcal M}{\mathcal N}_4 & = & \big\{<\ladj f, \radj f> \in{\mathcal A}{\mathcal D}{\mathcal J}_4 \ |\ \ladj f \in {\mathcal E}, \radj f \in {\mathcal M} \big\} \label{eq:cmnfour} \end{eqnarray} These two categories are dual, just like ${\mathcal M}{\mathcal N}{\mathcal D}_1$ and ${\mathcal C}{\mathcal M}{\mathcal N}_1$ were dual. In both cases, they are in fact the same category, since switching between $\Phi$ and $\Phi^{\ddag}$ in (\ref{eq:mndone}-\ref{eq:cmnone}) and between $\ladj f$ and $\radj f$ in (\ref{eq:mndfour}-\ref{eq:cmnfour}) is a matter of notation. But distinguishing the two copies of the category on the two ends of the duality makes it easier to define one as a reflective and the other one as a coreflective subcategory of the category of adjunctions. The functors $\EM_4: {\mathcal M}{\mathcal N}{\mathcal D}_4\hookrightarrow {\mathcal A}{\mathcal D}{\mathcal J}_4$ and $\KC_4:{\mathcal C}{\mathcal M}{\mathcal N}_4\hookrightarrow {\mathcal A}{\mathcal D}{\mathcal J}_4$ are again the obvious inclusions. The reflection $\AM_4:{\mathcal A}{\mathcal D}{\mathcal J}_4\twoheadrightarrow {\mathcal M}{\mathcal N}{\mathcal D}_4 $ and the coreflection $\AC_4:{\mathcal A}{\mathcal D}{\mathcal J}_4 \twoheadrightarrow {\mathcal C}{\mathcal M}{\mathcal N}_4$ are constructed in Fig.~\ref{Fig:mndcmnfour}. \begin{figure}[!ht] \begin{center} \begin{tikzar}[row sep = 5.5ex,column sep = 2ex] A \ar{dddd}[swap]{\Phi} \ar{rrrrrrr}{\ladj f} \ar[two heads]{rrd}[description]{{\mathcal E}^{\bullet}(\ladj f)} \&\&\&\&\&\&\& C \ar{dddd}{\Psi} \\ \&\& A' \ar[tail]{rrrrru}[description]{{\mathcal M}(\ladj f)} \ar[dashed]{dd} \\ \&\&\&\&\& \hspace{-.25em}{\AM_4(f)} \hspace{-.25em} \\ \&\& \du{B'} \ar[tail]{drrrrr}[description]{\du{{\mathcal E}(\radj f)}} \\ \du B \ar{rru}[description]{\du{{\mathcal M}^{\bullet}(\radj f)}} \ar{rrrrrrr}[swap]{\du{\radj f}} \&\& \&\&\&\&\& \du D \\ \&\& B'\ar[tail]{lld}[description]{{\mathcal M}^{\bullet}(\radj f)} \\ B \&\& \&\&\&\&\& D\ar[two heads]{lllllu}[description]{{\mathcal E}(\radj f)} \ar{lllllll}{\radj f} \end{tikzar} \hspace{2cm} \begin{tikzar}[row sep = 5.5ex,column sep = 2ex] A \ar{dddd}[swap]{\Phi} \ar{rrrrrrr}{\ladj f} \ar[two heads]{rrrrrd}[description]{{\mathcal E}(\ladj f)} \&\&\&\&\&\&\& C \ar{dddd}{\Psi} \\ \&\&\&\&\& C' \ar[tail]{rru}[description]{{\mathcal M}^{\bullet}(\ladj f)} \ar[tail,dashed]{dd} \\ \&\& \hspace{-.25em}{\AC_4(f)}\hspace{-.25em} \\ \&\&\&\&\& \du{D'} \ar[tail]{drr}[description]{\du{{\mathcal E}^{\bullet}(\radj f)}} \\ \du B \ar[two heads]{rrrrru}[description]{\du{{\mathcal M}(\radj f)}} \ar{rrrrrrr}[swap]{\du{\radj f}} \& \&\&\&\&\&\& \du D \\ \&\&\&\&\& D'\ar[tail]{llllld}[description]{{\mathcal M}(\radj f)} \\ B \& \&\&\&\&\&\& D\ar[two heads]{llu}[description]{{\mathcal E}^{\bullet}(\radj f)} \ar{lllllll}{\radj f} \end{tikzar} \caption{The object parts of the functors $\AM_4: {\mathcal A}{\mathcal D}{\mathcal J}_4\twoheadrightarrow {\mathcal M}{\mathcal N}{\mathcal D}_4$ and $\AC_4:{\mathcal A}{\mathcal D}{\mathcal J}_4\twoheadrightarrow {\mathcal C}{\mathcal M}{\mathcal N}_4$ } \label{Fig:mndcmnfour} \end{center} \end{figure} The factoring triangles are related in a similar way to the two factoring triangles in \eqref{eq:facop}. 
The nucleus is obtained by composing them, in either order. More precisely, the coreflection $\NM_4:{\mathcal M}{\mathcal N}{\mathcal D}_4 \twoheadrightarrow \mathsf{Nuc}_4$ is obtained by restricting the coreflection $\AC_4:{\mathcal A}{\mathcal D}{\mathcal J}_4 \twoheadrightarrow {\mathcal C}{\mathcal M}{\mathcal N}_4$ along the inclusion $\EM_4:{\mathcal M}{\mathcal N}{\mathcal D}_4\hookrightarrow {\mathcal A}{\mathcal D}{\mathcal J}_4$; the reflection $\NC_4:{\mathcal C}{\mathcal M}{\mathcal N}_4 \twoheadrightarrow \mathsf{Nuc}_4$ is obtained by restricting $\AM_4:{\mathcal A}{\mathcal D}{\mathcal J}_4 \twoheadrightarrow {\mathcal M}{\mathcal N}{\mathcal D}_4$ along the inclusion $\KC_4:{\mathcal C}{\mathcal M}{\mathcal N}_4\hookrightarrow {\mathcal A}{\mathcal D}{\mathcal J}_4$. The outcome is in Fig.~\ref{Fig:chunuc}. \begin{figure}[!ht] \begin{center} \begin{tikzar}[row sep = 5.5ex,column sep = 2.5ex] A \ar{dddd}[swap]{\eta} \ar{rrrrrr}{\ladj \Phi} \ar[two heads]{rd}[description]{{\mathcal E}^{\bullet}(\ladj \Phi)} \&\&\&\&\&\& \du B \ar{dddd}{\mathrm{id}} \\ \& A' \ar[tail,two heads]{rrrr}{{\mathcal E}{\mathcal M}(\ladj\Phi)} \ar[tail,dashed]{dd}[swap]{\Phi'} \&\&\&\& A'' \ar[tail]{ru}[description]{{\mathcal M}^{\bullet}(\ladj \Phi)} \ar[tail,dashed]{dd}{\Phi''} \\ \&\&\& \NucL_{4}\Phi\\ \& \du{B'} \ar[tail,two heads]{rrrr}{\du{{\mathcal E}{\mathcal M}(\radj\Phi)}} \&\&\&\& \du{B''} \ar[tail]{dr}[description]{\du{{\mathcal E}(\radj \Phi)}} \\ \du {\du A} \ar{ru}[description]{\du{{\mathcal M}(\radj \Phi)}} \ar{rrrrrr}[swap]{\du{\radj \Phi}} \& \&\&\&\&\& \du B \\ \& B'\ar[tail]{ld}[description]{{\mathcal M}^{\bullet}(\radj \Phi)} \&\&\&\& B''\ar[tail,two heads]{llll}[swap]{{\mathcal E}{\mathcal M}(\radj\Phi)} \\ \du A \& \&\&\&\&\& B\ar[two heads]{lu}[description]{{\mathcal E}^{\bullet}(\radj \Phi)} \ar{llllll}{\radj \Phi} \end{tikzar} \caption{The Chu-nucleus of the matrix $\Phi:A\times B\to R$} \label{Fig:chunuc} \end{center} \end{figure} The category of nuclear Chu spaces is thus the full subcategory spanned by \begin{eqnarray} \mathsf{Nuc}_4 & = & \big\{<\ladj f, \radj f> \in{\mathcal A}{\mathcal D}{\mathcal J}_4 \ |\ \ladj f, \radj f \in {\mathcal E}\cap{\mathcal M} \big\} \label{eq:nucfour} \end{eqnarray} If a factorization does not support the separated-extensional Chu-construction because it is not stable under dualizing, but is dual to another factorization, like e.g. the isometric-diagonal factorization in the category of finite-dimensional Hilbert spaces in Sec.~\ref{Sec:lin}, then the nucleus can still be constructed, albeit not as a subcategory of the original category, but of its arrow category. While the original separated-extensional Chu-construction yields a full subcategory $\mathsf{Chu}_{{se}} \subseteq \mathsf{Chu}$, here we get the Chu-nucleus as a full subcategory $\NucL_{4}\subseteq \mathsf{Chu}\diagup \mathsf{Chu}$. 
A Chu-nucleus is thus an arrow $\left<{\mathcal E}{\mathcal M}(\ladj\Phi), {\mathcal E}{\mathcal M}(\radj\Phi)\right>\in \mathsf{Chu}(\Phi', \Phi'')$, as seen in Fig.~\ref{Fig:chunuc}, such that \begin{enumerate}[(a)] \item $A\twoheadrightarrow A'$ and $B\twoheadrightarrow B''$ are in ${\mathcal E}^{\bullet}$, \item $B' \iinclusion{\ \widetilde\Phi'}\du{A'}$ and $A''\iinclusion{\ \, \Phi''} \du{B''}$ are in ${\mathcal M}^{\bullet}$, \item $A' \mmono{\Phi'}\du{B'}$ and $B''\mmono{\ \, \widetilde\Phi''} \du{A''}$ are in ${\mathcal M}$, \item ${\mathcal E}{\mathcal M}(\ladj\Phi)$ and ${\mathcal E}{\mathcal M}(\radj\Phi)$ are in ${\mathcal E}\cap{\mathcal M}$. \end{enumerate} where $B' \tto{\widetilde\Phi'}\du{A'}$ is the transpose of $A' \tto{\Phi'}\du{B'}$, and $B''\tto{\widetilde\Phi''} \du{A''}$ is the transpose of $A'' \tto{\Phi''} \du{B''}$. According to (d), the morphisms ${\mathcal E}{\mathcal M}(\ladj\Phi)$ and ${\mathcal E}{\mathcal M}(\radj\Phi)$ are thus monics in one factorization system and epis in another one, like the diagonalizations were in diagram \eqref{eq:facopthree} in Sec.~\ref{Sec:lin}. According to (a) and (b), ${\mathcal E}{\mathcal M}(\ladj\Phi)$ and ${\mathcal E}{\mathcal M}(\radj\Phi)$ are moreover the best such approximations of $\ladj\Phi$ and $\radj\Phi$, as their largest quotients and embeddings, like the diagonalizations were, according to \eqref{eq:facop} and \eqref{eq:faccop}. The difference between the current situation and the one in Sec.~\ref{Sec:lin} is that the diagonal nucleus there was self-dual, whereas ${\mathcal E}{\mathcal M}(\ladj\Phi)$ and ${\mathcal E}{\mathcal M}(\radj\Phi)$ are not, but are rather dual to one another. The duality also transposes $\Phi'$ and $\Phi''$; the transposition does not preserve regularity, but in this case it switches the ${\mathcal M}^{\bullet}$-map with the ${\mathcal M}$-map. Intuitively, the nucleus $\NucL_{4}\Phi$ can thus be thought of as the best approximation of a diagonalization, in situations when the spectra of the two self-adjoints induced by a matrix are not the same; or the best approximation of a separated-extensional core when $\mathsf{Chu}_{{se}}$ and $\mathsf{Chu}_{{es}}$ do not coincide. \subsection{Towards the categorical nucleus} \label{Sec:churan} Although the categorical example \ref{Sec:chu-mat-cat} does not yield to the separated-extensional nucleus construction, a suitable modification of the example suggests a corresponding modification of the construction. Consider a distributor $\Phi:{\mathbb A}^{o}\times{\mathbb B}\to {\sf Set}$, representable in the form ${\mathbb A}(x, \radj F y) = \Phi(x,y) = {\mathbb B}(\ladj Fx, y)$ for some adjunction $\adj F:{\mathbb B}\to {\mathbb A}$. The factorization of representable matrices displayed in Fig.~\ref{Fig:representable} induces in ${\mathcal A}{\mathcal D}{\mathcal J}_3$ the diagrams in Fig.~\ref{Fig:repres-chu}. 
\begin{figure}[!ht] \begin{center} \begin{tikzar}[column sep = 1.5em, row sep = 2em] {\mathbb A} \ar{dd}[description]{\blacktriangledown} \ar[equals]{rr} \&\& {\mathbb A} \ar[two heads]{dr} \ar{dd}[description]{\left(\ladj \Phi\right)^{o}} \ar{rr}{\ladj F} \&\& {\mathbb B} \ar{dd}[description]{\blacktriangledown} \ar[equals]{rr} \&\& {\mathbb B} \ar[two heads]{dl} \ar{dd}[description]{\radj \Phi} \ar{rr}{\radj F} \&\& {\mathbb A}\ar{dd}[description]{\blacktriangle} \\ \&\&\&\Klm {\mathbb A} F\ar[tail]{dl} \ar[bend left =25,dashed, crossing over]{rr}[pos = 0.66]{\rkadj F}\&\& \Klc {\mathbb B} F \ar[tail]{dr} \\ {\sf Set}^{{\mathbb A}^{o}} \ar{rr}[swap]{{\Lan}_\blacktriangledown\left(\ladj\Phi\right)^{o}} \&\& \left({\sf Set}^{\mathbb B}\right)^{o} \ar[equals]{rr} \& \& \left({\sf Set}^{\mathbb B}\right)^{o} \ar{rr}[swap]{{\Ran}_\blacktriangledown\radj\Phi} \&\& {\sf Set}^{{\mathbb A}^{o}} \ar[equals]{rr} \&\& {\sf Set}^{{\mathbb A}^{o}} \\ \\ {\mathbb B} \ar{dd}[description]{\blacktriangle} \&\& {\mathbb A} \ar{ll}[swap]{\ladj F} \ar[two heads]{dr} \ar{dd}[description]{\left(\ladj \Phi\right)^{o}} \&\& {\mathbb A} \ar[equals]{ll} \ar{dd}[description]{\blacktriangledown} \&\& {\mathbb B} \ar{ll}[swap]{\radj F} \ar[two heads]{dl} \ar{dd}[description]{\radj \Phi} \&\& {\mathbb B} \ar[equals]{ll}\ar{dd}[description]{\blacktriangle} \\ \&\&\&\Klm {\mathbb A} F\ar[tail]{dl} \&\& \Klc {\mathbb B} F \ar[tail]{dr} \ar[bend left =25,dashed, crossing over]{ll}[pos = 0.66]{\lkadj F}\\ \left({\sf Set}^{\mathbb B}\right)^{o} \ar[equals]{rr} \&\& \left({\sf Set}^{{\mathbb B}}\right)^{o} \& \& {\sf Set}^{{\mathbb A}^{o}} \ar{ll}{{\Lan}_\blacktriangledown\left(\ladj\Phi\right)^{o}} \&\& {\sf Set}^{{\mathbb A}^{o}} \ar[equals]{ll} \&\& \left({\sf Set}^{\mathbb B}\right)^{o} \ar{ll}{{\Ran}_\blacktriangledown\radj\Phi} \end{tikzar} \caption{Separated-extensional nucleus + Kan extensions = Kleisli resolutions} \label{Fig:repres-chu} \end{center} \end{figure} Here the representation ${\mathbb A}(x, \radj F y) = \Phi(x,y) = {\mathbb B}(\ladj Fx, y)$ induces \begin{align*} \radj \Phi : {\mathbb B} & \to {\sf Set}^{{\mathbb A}^{o}} & \ladj \Phi: {\mathbb A}^{o} & \to {\sf Set}^{{\mathbb B}}\\ b & \mapsto\ \ \lambda x.\ {\mathbb A}(x, \radj F b) & a & \mapsto\ \ \lambda y. \ {\mathbb B}(\ladj F a, y) \end{align*} i.e. $\radj \Phi = \big({\mathbb B}\tto{\radj F} {\mathbb A} \tto\blacktriangledown {\sf Set}^{{\mathbb A}^{o}}\big)$ and $\ladj \Phi = \big({\mathbb A}\tto{\ladj F} {\mathbb B} \tto\blacktriangle \left({\sf Set}^{{\mathbb B}}\right)^{o}\big)$. So the Chu view of a distributor $\Phi$ representable by an adjunction $\adj F$ is based on the Kan extensions of the adjunction. 
The point of this packaging is that the separated-extensional nucleus of the distributor $\Phi$ for the factorization system $({\sf Ess}\wr{\sf Ffa})$ in $\mathsf{CAT}$ where\footnote{This basic factorization takes the stage in the final moments of the paper, in Sec.~\ref{Sec:What-cat}.} \begin{itemize} \item ${\mathcal E} = {\sf Ess} = $ essentially surjective functors, \item ${\mathcal M} = {\sf Ffa} = $ full-and-faithful functors \end{itemize} gives rise to the Kleisli categories $\Klm {\mathbb A} F$ and $\Klc {\mathbb B} F$ for the monad $\lft F = \radj F\ladj F$ and the comonad $\rgt F = \ladj F\radj F$, since \begin{align}\label{eq:Kleisli} \left| \Klm {\mathbb A} F\right| & =\ |{\mathbb A}| & \left| \Klc {\mathbb B} F\right| & =\ |{\mathbb B}|\\ \Klm {\mathbb A} F(x,x') & =\ {\mathbb B}(\ladj F x, \ladj F x') & \Klc {\mathbb B} F(y,y') & = \ {\mathbb A}(\radj F y, \radj F y')\notag \end{align} It is easy to see that this is equivalent to the usual Kleisli definitions, since ${\mathbb B}(\ladj F x, \ladj F x') \cong {\mathbb A}(x, \radj F\ladj Fx')$ and ${\mathbb A}(\radj F y, \radj F y')\cong {\mathbb B}(\ladj F\radj F y, y')$. The functors $\rkadj F$ and $\lkadj F$ induced in Fig.~\ref{Fig:repres-chu} by the factorization form the adjunction displayed in Fig.~\ref{Fig:Kleisli-nuc}, because \[ \Klm {\mathbb A} F(\radj F y, x) \ = \ {\mathbb B}(\ladj F\radj F y, \ladj F x) \ \cong\ {\mathbb A}(\radj F y ,\radj F\ladj F x)\ =\ \Klc {\mathbb B} F(y, \ladj Fx)\] \begin{figure}[!ht] \begin{center} \[\begin{tikzar}[row sep=2.5cm,column sep=3cm] {\mathbb A} \arrow[phantom]{d}[description]{\dashv} \arrow[loop, out = 135, in = 45, looseness = 4,thin]{}[swap]{\lft F} \arrow[bend right = 13]{d}[swap]{\ladj F} \arrow[bend right = 13,two heads,thin]{dr}[pos=.75]{{\mathcal E}(\ladj F)} \& \arrow[tail,thin]{l}[swap]{{\mathcal M}(\radj F)} \Klc{\mathbb B} F \arrow[phantom]{d}[description]{{\dashv}} \arrow[bend right = 13]{d}[swap]{\lkadj{F}} \\ {\mathbb B} \arrow[loop, out = -45, in=-135, looseness = 6,thin]{}[swap]{\rgt F} \arrow[bend right = 13,pos=0.8,two heads,crossing over,thin]{ur}{{\mathcal E}(\radj F)} \arrow[bend right = 13]{u}[swap]{\radj F} \& \Klm {\mathbb A} F \arrow[tail,thin]{l}{{\mathcal M}(\ladj F)} \arrow[bend right = 13]{u}[swap]{\rkadj{F}} \end{tikzar}\] \caption{A nucleus $\kadj F$ spanned by the initial resolutions of the adjunction $\adj F$} \label{Fig:Kleisli-nuc} \end{center} \end{figure} While this construction is universal, it is not idempotent, as the adjunctions between the categories of free algebras over cofree coalgebras and of cofree coalgebras over free algebras often form transfinite embedding chains. The idempotent nucleus construction is just a step further. Remarkably, categorical localizations turn out to arise beyond factorizations. 
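\para{Example.} The Kleisli hom-sets \eqref{eq:Kleisli} come with the familiar Kleisli composition. As a toy computational illustration (ours, not part of the formal development), here is the Kleisli composition for the "Maybe" monad on sets, with \texttt{None} standing for failure: \begin{verbatim}
def unit(x):                 # eta : X -> T X  ("Just x" represented by x)
    return x

def kleisli(g, f):           # compose  f : X -> T Y  with  g : Y -> T Z
    return lambda x: None if f(x) is None else g(f(x))

half = lambda n: n // 2 if n % 2 == 0 else None   # partial: halving
dec  = lambda n: n - 1 if n > 0 else None         # partial: decrement

assert kleisli(dec, half)(8) == 3      # 8 |-> 4 |-> 3
assert kleisli(dec, half)(7) is None   # fails already at `half`
assert kleisli(half, unit)(6) == half(6)   # unit law, on a sample
\end{verbatim}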
\subsection{The categories} The general case of Fig.~\ref{Fig:Nuc} involves the following categories: \begin{itemize} \item matrices between categories, or distributors (also called profunctors, or bimodules): \begin{eqnarray}\label{eq:Mat} |\mathsf{Mat}| & = & \coprod_{{\mathbb A}, {\mathbb B}\in \mathsf{CAT}} \mathsf{CAT}({\mathbb A}^{o}\times {\mathbb B}, {\sf Set})\\ \mathsf{Mat} (\Phi, \Psi) & = & \left\{<H,K>\in \mathsf{CAT}({\mathbb A},{\mathbb C})\times \mathsf{CAT}({\mathbb B},{\mathbb D})\ |\ \Phi(a,b) \cong \Psi(Ha, Kb) \right\}\notag \end{eqnarray} \item adjoint functors: \begin{eqnarray}\label{eq:Adj} |{\mathcal A}{\mathcal D}{\mathcal J}| & = & \coprod_{{\mathbb A}, {\mathbb B}\in \mathsf{CAT}} \coprod_{\substack{\ladj F\in \mathsf{CAT}({\mathbb A},{\mathbb B})\\ \radj F\in \mathsf{CAT}({\mathbb B},{\mathbb A})}}\big\{ <\eta,\varepsilon>\in {\rm Nat}(\mathrm{id}, \radj F\ladj F)\times {\rm Nat}(\ladj F\radj F, \mathrm{id})\ | \\[-1ex] && \hspace{9em} \varepsilon \ladj F \circ \ladj F \eta = \ladj F \ \wedge\ \radj F\varepsilon \circ \eta \radj F = \radj F\big\}\notag\\[2ex] {\mathcal A}{\mathcal D}{\mathcal J} (F, G) & = & \big\{<H,K>\in \mathsf{CAT}({\mathbb A},{\mathbb C})\times \mathsf{CAT}({\mathbb B},{\mathbb D})\ |\ K\ladj F \stackrel{\ladj\upsilon}\cong \ladj G H\ \wedge\ H\radj F \stackrel{\radj\upsilon}\cong \radj G K \ \wedge\notag\\ && \hspace{9em} H\eta^F \ \stackrel{\radj \upsilon\ladj\upsilon}\cong\ \eta^GH \ \wedge\ K\varepsilon^F\ \stackrel{\ladj \upsilon\radj\upsilon}\cong\ \varepsilon^GK\big\}\notag \end{eqnarray} \item monads (also called triples): \begin{eqnarray}\label{eq:Mnd} |{\mathcal M}{\mathcal N}{\mathcal D}| & = & \coprod_{{\mathbb A}\in \mathsf{CAT}} \coprod_{\lft T\in \mathsf{CAT}({\mathbb A},{\mathbb A})}\big\{ <\eta,\mu>\in {\rm Nat}(\mathrm{id}, \lft T)\times {\rm Nat}(\lft T\lft T, \lft T)\ | \\[-1ex] && \hspace{9em} \mu \circ \lft T \mu = \mu\circ \mu \lft T\wedge \mu\circ \lft T\eta = \lft T = \mu \circ \eta\lft T\big\}\notag\\[2ex] {\mathcal M}{\mathcal N}{\mathcal D} \left(\lft T,\lft S\right) & = & \big\{H\in \mathsf{CAT}({\mathbb A},{\mathbb C})\ |\ H\lft T\stackrel\chi \cong \lft S H \ \wedge \notag\\ && \hspace{9em} H\eta^{\lft T} \stackrel\chi\cong \eta^{\lft S} H\ \wedge\ H\mu^{\lft T} \stackrel\chi\cong \mu^{\lft S} H\big\}\notag \end{eqnarray} \item comonads (or cotriples): \begin{eqnarray}\label{Cmn} |{\mathcal C}{\mathcal M}{\mathcal N}| & = & \coprod_{{\mathbb B}\in \mathsf{CAT}} \coprod_{\rgt T\in \mathsf{CAT}({\mathbb B},{\mathbb B})}\big\{ <\varepsilon,\nu>\in {\rm Nat}(\rgt T,\mathrm{id} )\times {\rm Nat}(\rgt T, \rgt T \rgt T)\ |\\[-1ex] && \hspace{9em} \rgt T \nu \circ \nu = \nu \rgt T\circ \nu \wedge \rgt T\varepsilon \circ \nu = \rgt T = \varepsilon \rgt T\circ \nu \big\}\notag\\[2ex] {\mathcal C}{\mathcal M}{\mathcal N} \left(\rgt S,\rgt T\right) & = & \big\{K\in \mathsf{CAT}({\mathbb B},{\mathbb D})\ |\ K\rgt S\stackrel\kappa\cong \rgt T K \ \wedge \notag\\ && \hspace{9em} K\varepsilon^{\rgt S} \stackrel\kappa\cong \varepsilon^{\rgt T} K\ \wedge\ K\nu^{\rgt S} \stackrel\kappa\cong \nu^{\rgt T} K\big\}\notag \end{eqnarray} \item The category $\mathsf{Nuc}$ can be equivalently viewed as a full subcategory of ${\mathcal A}{\mathcal D}{\mathcal J}$, ${\mathcal M}{\mathcal N}{\mathcal D}$ or ${\mathcal C}{\mathcal M}{\mathcal N}$, and the three versions will be discussed later. \end{itemize} \para{Remark.} The above definitions follow the pattern from the preceding sections. 
The difference is that the morphisms, which are still structure-preserving pairs, this time of functors, now satisfy the preservation requirements up to isomorphism. In each case, there may be many different isomorphisms witnessing the structure preservation. We leave them out of the picture, under the pretext that they are preserved under the compositions. This simplification does not change the nucleus construction itself, but it does project away information about the morphisms. Moreover, the construction also applies to a richer family of morphisms, with non-trivial 2-cells. The chosen presentation framework thus incurs a loss of information and generality. We believe that this is the unavoidable price of not losing sight of the forest for the trees, at least in this presentation. Some aspects of the more general framework of the results are sketched in Appendix~\ref{appendix:adj}. We leave further explanations for the final section of the paper. \subsection{\textbf{Assumption:} Idempotents can be split.}\label{assumption} An endomorphism $\varphi:X\to X$ is \emph{idempotent}\/ if it satisfies $\varphi\circ\varphi = \varphi$. A \emph{retraction}\/ is a pair of morphisms $e :X \to R$ and $m :R\to X$ such that $e\circ m = \mathrm{id}_R$. We often write retractions in the form $R\begin{tikzar}[column sep = 1.3em] \hspace{.1ex} \ar[thin,tail,bend left=15]{r}{m} \& \hspace{.1ex} \ar[thin,two heads,bend left=15]{l}{e}\end{tikzar}X$, or $m: R\retr X : e$. Note that $\varphi = m\circ e$ is an idempotent. Given an idempotent $\varphi$, any retraction with $\varphi = m\circ e$ is called the \emph{splitting}\/ of $\varphi$. It is easy to see that the component $m:R\rightarrowtail X$ of a retraction is an equalizer of $\varphi$ and the identity on $X$; and that $e:X\twoheadrightarrow R$ is a coequalizer of $\varphi$ and the identity. It follows that all splittings of an idempotent are isomorphic. An idempotent on $X$ is thus resolved by a splitting into a projection and an injection of an object $R$, which is called its \emph{retract}. When $\varphi$ is a function on sets, then its idempotency means that $\varphi$ picks in $X$ a representative of each equivalence class modulo the equivalence relation $\big(x\sim y\big) \iff \big(\varphi(x) = \varphi(y)\big)$, and thus represents the quotient $X/\!\!\sim$ as a subset $R\subseteq X$. The assumption that all idempotents split is the weakest categorical \emph{completeness}\/ requirement. A categorical limit or colimit is said to be \emph{absolute}\/ if it is preserved by all functors. Since all functors preserve equations, they map idempotents to idempotents, and preserve their splittings. Since a splitting of an idempotent consists of its equalizer and a coequalizer with the identity, the idempotent splittings are absolute limits and colimits. It was proved in \cite{PareR:absolute} that all absolute limits and colimits must be in this form. \emph{The concepts of absolute limit, absolute colimit, and retraction coincide.} The absolute completion $\overline {\mathbb A}$ of a given category ${\mathbb A}$ consists of the idempotents in ${\mathbb A}$ as the objects. A morphism $f\in \overline {\mathbb A}(\varphi, \psi)$ between the idempotents $\varphi:X\to X$ and $\psi:Y\to Y$ in ${\mathbb A}$ is an arrow $f\in {\mathbb A}(X,Y)$ such that $\psi\circ f \circ \varphi = f$, or equivalently $\psi \circ f = f = f\circ \varphi$. A morphism from $\varphi$ to $\psi$ thus coequalizes $\varphi$ with the identity, and equalizes $\psi$ with the identity. 
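\para{Example.} For functions on finite sets, the splitting described above can be exhibited explicitly. A small Python sketch (ours; the idempotent is chosen arbitrarily): \begin{verbatim}
X = [0, 1, 2, 3, 4]
phi = {0: 0, 1: 0, 2: 2, 3: 2, 4: 2}          # an idempotent: phi.phi = phi
assert all(phi[phi[x]] == phi[x] for x in X)

R = sorted(set(phi.values()))                  # the retract: fixed points
e = {x: phi[x] for x in X}                     # e : X ->> R  (coequalizer)
m = {r: r for r in R}                          # m : R >-> X  (equalizer)

assert all(m[e[x]] == phi[x] for x in X)       # m . e = phi
assert all(e[m[r]] == r for r in R)            # e . m = id_R
\end{verbatim}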
If $\varphi$ and $\psi$ split in ${\mathbb A}$ into retracts $R$ and $S$, then the set $\overline {\mathbb A}(\varphi, \psi)$ is in a bijective correspondence with ${\mathbb A}(R,S)$. It follows that ${\mathbb A}$ embeds into $\overline {\mathbb A}$ fully and faithfully, and that they are equivalent if and only if ${\mathbb A}$ is absolutely complete. While the assumption that the idempotents split can presently be taken as a matter of convenience, we argue in Sec.~\ref{Sec:What-cat}, at the very end of the paper, that the absolute completeness is not a side condition, but a central feature of categories observed through the lens of adjunctions. \para{The assumption that the idempotents \emph{can}\/ split does not mean that they \emph{must}\/ split.} Like any assumption, the above assumption should not be taken as a constraint. Applying it blindly would eliminate, e.g., the categories of free algebras and coalgebras from consideration, since they are not absolutely complete. This could be repaired by completing them, which would leave us with the category of projective algebras on one hand, and the category of injective coalgebras on the other hand\footnote{An algebra is projective if it is a retract of a free algebra. Dually, a coalgebra is injective if it is a retract of a cofree coalgebra \cite[Sec.~II]{PavlovicD:LICS17}.}. This is, however, not only unnecessary, but also undesirable. Assuming that an equation has a solution does not mean that it can only be viewed in the solved form. Assuming that the idempotents split makes their retractions available, not mandatory. This expands our toolkit, but it should not be misunderstood as narrowing our perspective by banishing any subjects of interest. \subsection{Tools}\label{Sec:Tools} \subsubsection{Extending matrices to adjunctions} Any matrix $\Phi\colon{\mathbb A}^{o} \times {\mathbb B} \to {\sf Set}$ from small categories ${\mathbb A}$ and ${\mathbb B}$ can be extended along the Yoneda embeddings ${\mathbb A}\tto{\blacktriangledown} {\sf Set}^{{\mathbb A}^{o}}$ and ${\mathbb B}\tto{\blacktriangle} \left({\sf Set}^{{\mathbb B}}\right)^{o}$ into an adjunction $\adj \Phi: \left({\sf Set}^{{\mathbb B}}\right)^{o} \to {\sf Set}^{{\mathbb A}^{o}}$ as follows: \begin{equation}\label{eq:deriv} \prooftree \prooftree \Phi\colon{\mathbb A}^{o} \times {\mathbb B} \to {\sf Set} \justifies \Phi_\bullet \colon {\mathbb A}^{o} \to {\sf Set}^{\mathbb B}\qquad \qquad _\bullet \Phi \colon {\mathbb B} \to {\sf Set}^{{\mathbb A}^{o}} \endprooftree \justifies \ladj \Phi \colon {\sf Set}^{{\mathbb A}^{o}} \to \Bigl({\sf Set}^{\mathbb B}\Bigr)^{o} \qquad \qquad \radj \Phi \colon \Bigl({\sf Set}^{\mathbb B}\Bigr)^{o} \to {\sf Set}^{{\mathbb A}^{o}} \endprooftree \end{equation} The second step brings us to Kan extensions. In the current context, the path to extensions leads through comprehensions. 
\subsubsection{Comprehending presheaves as discrete fibrations}\label{Sec:Groth-constr} Following the step from \eqref{eq:matrp} to \eqref{eq:Mat}, the comprehension correspondence \eqref{eq:compreh-pos} now lifts to \begin{eqnarray}\label{eq:compreh-cat} \mathsf{Cat} ({\mathbb A}^{o} \times {\mathbb B} , {\sf Set}) & \begin{tikzar}[row sep = 4em]\hspace{.1ex} \ar[bend left]{r}{\eh{(-)}} \ar[phantom]{r}[description]{\cong} \& \hspace{.1ex} \ar[bend left]{l}{\Xi} \end{tikzar} & \Dfib\diagup {\mathbb A}\times {\mathbb B}^{o} \\ \left({\mathbb A}^{o} \times {\mathbb B}\tto{\Phi} {\sf Set} \right) & \mapsto & \left(\textstyle\int\Phi\tto{\eh \Phi} {\mathbb A} \times {\mathbb B}^{o} \right) \notag \\ \notag \left({\mathbb A}^{o} \times {\mathbb B}\tto{\Xi_E} {\sf Set} \right) &\mathrel{\reflectbox{\ensuremath{\mapsto}}} & \left({\mathbb E}\tto E {\mathbb A}\times {\mathbb B}^{o}\right) \end{eqnarray} Transposing the arrow part of $\Phi$, which maps every pair $f\in {\mathbb A}(a,a')$ and $g\in {\mathbb B}(b',b)$ into $\Phi(a',b')\tto{\Phi_{fg}} \Phi(a,b)$, the closure property expressed by the implication in \eqref{eq:monotone} becomes the mapping \begin{eqnarray} {\mathbb A}(a,a') \times \Phi(a', b') \times {\mathbb B}(b', b) & \to & \Phi(a,b) \end{eqnarray} The \emph{lower-upper}\/ closure property expressed by \eqref{eq:monotone} is now captured as the structure of the total category $\textstyle\int \Phi$, defined as follows: \begin{eqnarray}\label{eq:tint} \left|\textstyle\int \Phi \right| & = & \coprod_{\substack{a\in {\mathbb A}\\ b\in {\mathbb B}}} \Phi(a,b)\\ \textstyle\int\Phi\left(x_{ab},x'_{a'b'}\right) & = & \left\{<f,g>\in {\mathbb A}(a,a')\times {\mathbb B}(b',b)\ |\ x = \Phi_{fg}(x') \right\}\notag \end{eqnarray} It is easy to see that the obvious projection \begin{eqnarray}\label{eq:tintPhi} \textstyle\int \Phi & \tto{\eh\Phi} & {\mathbb A}\times {\mathbb B}^{o}\\ x_{ab} &\mapsto & <a,b>\notag \end{eqnarray} is a discrete fibration, i.e., an object of $\Dfib \diagup {\mathbb A}\times {\mathbb B}^{o}$. In general, a functor ${\mathbb F} \tto F {\mathbb C}$ is a discrete fibration over ${\mathbb C}$ when for all $x\in {\mathbb F}$ the obvious induced functors ${\mathbb F}/x \tto{F_x} {\mathbb C}/Fx$ are isomorphisms. In other words, for every $x\in {\mathbb F}$ and every morphism $c \tto{t} Fx$ in ${\mathbb C}$, there is a unique lifting $t^! x\tto{\vartheta^t} x$ of $t$ to ${\mathbb F}$, i.e., a unique ${\mathbb F}$-morphism into $x$ such that $F(\vartheta^t) = t$. For a discrete fibration ${\mathbb E}\tto E {\mathbb A}\times {\mathbb B}^{o}$, such liftings induce the arrow part of the corresponding presheaf \begin{eqnarray*} \Xi_E \colon {\mathbb A}^{o} \times {\mathbb B} & \to & {\sf Set}\\ <a,b> & \mapsto & \{x\in {\mathbb E}\ |\ Ex =<a,b>\} \end{eqnarray*} because any pair of morphisms $<f,g>\in {\mathbb A}(a,a')\times {\mathbb B}^{o}(b,b')$ lifts to a function $\Xi_E(f,g) = <f,g>^! :\Xi_E(a',b') \to \Xi_E(a,b)$. Fibrations go back to Grothendieck \cite{GrothendieckA:fibrations59,GrothendieckA:SGA1}. Overviews can be found in \cite{JacobsB:book,PavlovicD:thesis}. 
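\para{Example.} For finite, discrete index categories (only identity arrows), the total category $\textstyle\int\Phi$ has no non-identity morphisms either, and the comprehension \eqref{eq:compreh-cat} reduces on objects to a tagged disjoint union. A small Python sketch (ours; the data are arbitrary): \begin{verbatim}
# Phi(a, b) given as a dictionary of sets
Phi = {("a0", "b0"): {"x", "y"},
       ("a1", "b0"): {"z"}}

# |Int(Phi)|: the disjoint union of the sets Phi(a, b), tagged by (a, b)
elements = [(x, a, b) for (a, b), s in Phi.items() for x in sorted(s)]

# the projection of the discrete fibration Int(Phi) -> A x B^op, on objects
proj = {(x, a, b): (a, b) for (x, a, b) in elements}

assert proj[("z", "a1", "b0")] == ("a1", "b0")
\end{verbatim}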
With \eqref{eq:matrp} generalized to \eqref{eq:Mat}, and \eqref{eq:compreh-pos} to \eqref{eq:compreh-cat}, (\ref{eq:DoA-pos}--\ref{eq:UpB-pos}) become \begin{alignat}{3} \Do {\mathbb A}\ & =\hspace{1em} \Dfib\diagup {\mathbb A} && \simeq\ \ {\sf Set}^{{\mathbb A}^{o}} \label{eq:DoA-cat}\\ \Up {\mathbb B}\ & = \ \ \left(\Dfib\diagup {\mathbb B}^{o}\right)^{o} && \simeq\ \left({\sf Set}^{{\mathbb B}}\right)^{o} \label{eq:UpB-cat} \end{alignat} Just like the poset embeddings $A\tto \blacktriangledown \Do A$ and $B\tto\blacktriangle \Up B$ were the join and the meet completions, the Yoneda embeddings ${\mathbb A} \tto\blacktriangledown \Do {\mathbb A}$ and ${\mathbb B}\tto\blacktriangle \Up{\mathbb B}$, where $\blacktriangledown a = \left({\mathbb A}/a\tto{\mathrm{Dom}} {\mathbb A}\right)$ and $\blacktriangle b = \left(b/{\mathbb B}\tto{\mathrm{Cod}} {\mathbb B}\right)$, are the colimit and the limit completions, respectively. \subsection{The functors} \subsubsection{The functor ${\sf MA}: \mathsf{Mat}\to {\mathcal A}{\mathcal D}{\mathcal J}$} The adjunction ${\sf MA}(\Phi) = \left(\adj \Phi\right)$ induced by a matrix $\Phi:{\mathbb A}^{o} \times {\mathbb B}\to {\sf Set}$ is defined by lifting \eqref{eq:galois-pos} from posets to categories: \begin{equation}\label{eq:galois-cat} \begin{tikzar}{} {\mathbb L}\tto L{\mathbb A} \ar[mapsto]{dd} \& \Do {\mathbb A} \ar[bend right=15]{dd}[swap]{\ladj \Phi}\ar[phantom]{dd}[description]\dashv \& \displaystyle \mathop{\protect\underleftarrow{\mathrm{lim}}}\ \Big({\mathbb U}\tto U{\mathbb B} \tto{_\bullet\Phi} \Do{\mathbb A}\Big) \\ \\ \displaystyle \mathop{\protect\underleftarrow{\mathrm{lim}}}\ \Big({\mathbb L}^{o}\tto{L^{o}}{\mathbb A}^{o} \tto{\Phi_\bullet} \left(\Up{\mathbb B}\right)^{o}\Big) \& \Up {\mathbb B} \ar[bend right=15]{uu}[swap]{\radj \Phi} \& {\mathbb U}\tto U {\mathbb B} \ar[mapsto]{uu} \end{tikzar} \end{equation} The fact that ${\mathbb A} \tto \blacktriangledown \Do {\mathbb A}$ is a colimit completion means that every $L\in \Do {\mathbb A}$ is generated by the representables, i.e. $L = \mathop{\protect\underrightarrow{\mathrm{lim}}}\left({\mathbb L}\tto L{\mathbb A}\tto\blacktriangledown \Do{\mathbb A}\right)$. Any $\mathop{\protect\underrightarrow{\mathrm{lim}}}$-preserving functor $\ladj \Phi:\Do {\mathbb A} \to \Up {\mathbb B}$ thus satisfies \[ \ladj \Phi(L)\ =\ \ladj\Phi\Bigg(\mathop{\protect\underrightarrow{\mathrm{lim}}}\left({\mathbb L}\tto L{\mathbb A}\tto\blacktriangledown \Do{\mathbb A}\right)\Bigg)\ =\ \mathop{\protect\underrightarrow{\mathrm{lim}}}\left({\mathbb L}\tto L{\mathbb A}\tto{\Phi^{o}_\bullet} \Up{\mathbb B}\right)\ =\ \mathop{\protect\underleftarrow{\mathrm{lim}}}\left({\mathbb L}^{o}\tto{L^{o}}{\mathbb A}^{o}\tto{\Phi_\bullet} \left(\Up {\mathbb B}\right)^{o}\right)\] Analogous reasoning goes through for $\radj \Phi$. This completes the definition of the object part of ${\sf MA}: \mathsf{Mat}\to {\mathcal A}{\mathcal D}{\mathcal J}$. The arrow part is completely determined by the object part. 
\para{Remark.} The limits in $\Do {\mathbb A} \simeq {\sf Set}^{{\mathbb A}^{o}}$ and in $\left(\Up{\mathbb B}\right)^{o} \simeq {\sf Set}^{\mathbb B}$ are pointwise, which means that for any $b\in {\mathbb B}$ and diagram ${\mathbb D}\tto D{\sf Set}^{\mathbb B}$, the Yoneda lemma implies \[ \left(\mathop{\protect\underleftarrow{\mathrm{lim}}} D\right)b \ \ = \ \ {\sf Set}^{\mathbb B}\left(\blacktriangle b, \mathop{\protect\underleftarrow{\mathrm{lim}}} D\right)\ \ =\ \ \Con(b,\eh D) \] In words, the limit of $D$ at a point $b$ is the set of commutative cones in ${\mathbb B}$ from $b$ to a diagram $\eh D:\textstyle\int D\to {\mathbb B}$ constructed by a lifting like \eqref{eq:tint}. \subsubsection{From adjunctions to monads and comonads, and back} The projections of adjunctions onto monads and comonads, and the embeddings that arise as their left and right adjoints, all displayed in Fig.~\ref{Fig:AdjMndCmn}, are one of the centerpieces of the categorical toolkit. \begin{figure}[htbp] \begin{center} \begin{tikzar}[column sep = 8em] {\mathcal C}{\mathcal M}{\mathcal N} \ar[bend left = 12,phantom]{r}[description]{\scriptstyle \top} \ar[bend right = 12,phantom]{r}[description]{\scriptstyle \top} \ar[bend right=20,tail]{r}[swap]{{\sf KC}} \ar[bend left=20,tail]{r}{{\sf EC}} \ar[twoheadleftarrow]{r}[description]{{\sf AC}} \& {\mathcal A}{\mathcal D}{\mathcal J} \& {\mathcal M}{\mathcal N}{\mathcal D} \ar[bend left = 12,phantom]{l}[description]{\scriptstyle \top} \ar[bend right = 12,phantom]{l}[description]{\scriptstyle \top} \ar[bend right=20,tail]{l}[swap]{{\sf EM}} \ar[bend left=20,tail]{l}{{\sf KM}} \ar[twoheadleftarrow]{l}[description]{{\sf AM}} \end{tikzar} \caption{Relating adjunctions, monads and comonads} \label{Fig:AdjMndCmn} \end{center} \end{figure} The displayed functors are well known, but we list them for naming purposes: \begin{itemize} \item ${\sf EC}\Big(\rgt F:{\mathbb B}\to {\mathbb B}\Big)\ =\ \Big(\adj V:{\mathbb B} \to \Emc {\mathbb B} F \Big)$ {\small \hfill$\leftsquigarrow$ all coalgebras (Eilenberg-Moore)} \item ${\sf AC}\Big(\adj F:{\mathbb B}\to {\mathbb A}\Big)\ =\ \left(\rgt F = \ladj F\radj F:{\mathbb B}\to {\mathbb B} \right)$ {\small \hfill$\leftsquigarrow$ adjunction-induced comonad} \item ${\sf KC}\Big(\rgt F:{\mathbb B}\to {\mathbb B}\Big)\ =\ \Big(\adj U:{\mathbb B} \to \Klc {\mathbb B} F\Big)$ {\small \hfill$\leftsquigarrow$ cofree coalgebras (Kleisli)} \item ${\sf EM}\Big(\lft F:{\mathbb A}\to {\mathbb A}\Big)\ =\ \Big(\adj V:\Emm {\mathbb A} F \to {\mathbb A} \Big)$ {\small \hfill$\leftsquigarrow$ all algebras (Eilenberg-Moore)} \item ${\sf AM}\Big(\adj F:{\mathbb B}\to {\mathbb A}\Big)\ =\ \Big(\lft F = \radj F\ladj F:{\mathbb A}\to {\mathbb A} \Big)$ {\small \hfill$\leftsquigarrow$ adjunction-induced monad} \item ${\sf KM}\Big(\lft F:{\mathbb A}\to {\mathbb A}\Big)\ =\ \Big(\adj U:\Klm {\mathbb A} F \to {\mathbb A} \Big)$ {\small \hfill$\leftsquigarrow$ free algebras (Kleisli)} \end{itemize} Here $\Emm {\mathbb A} F$ is the category of all algebras and $\Klm {\mathbb A} F$ is the category of free algebras for the monad $\lft F$ on ${\mathbb A}$; and dually $\Emc {\mathbb B} F$ is the category of all coalgebras for the comonad $\rgt F$ on ${\mathbb B}$, whereas $\Klc {\mathbb B} F$ is the category of cofree coalgebras. 
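\para{Example.} Concretely, ${\sf AM}$ applied to the free-monoid adjunction between ${\sf Set}$ and monoids returns the list monad $\lft F X = X^{\ast}$. Its unit and multiplication fit in a few lines of Python (ours; the monad laws are only spot-checked on samples): \begin{verbatim}
def eta(x):                   # unit  eta : X -> T X
    return [x]

def mu(xss):                  # multiplication  mu : T T X -> T X
    return [x for xs in xss for x in xs]

xs = [1, 2, 3]
assert mu([eta(x) for x in xs]) == xs         # mu . (T eta) = id, sampled
assert mu(eta(xs)) == xs                      # mu . (eta T) = id, sampled
xsss = [[[1], [2]], [[3]]]
assert mu(mu(xsss)) == mu([mu(xss) for xss in xsss])  # associativity, sampled
\end{verbatim}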
As the right adjoints, the Eilenberg-Moore constructions of all algebras and all coalgebras thus provide the final resolutions for their respective monads and comonads, whereas the Kleisli constructions of free algebras and cofree coalgebras, as the left adjoints, provide the initial resolutions. Note that the nucleus setting in Fig.~\ref{Fig:Nuc} only uses parts of the above reflections: the final resolution ${\sf AM}\dashv {\sf EM}$ of monads, and the initial resolution ${\sf KC}\dashv {\sf AC}$ of comonads. Dually, we could use ${\sf KM}\dashv {\sf AM}$ and ${\sf AC}\dashv {\sf EC}$. Either choice induces a composite adjunction, with an induced monad on one side, and a comonad on the other side, as displayed in Fig.~\ref{Fig:street}. \subsection{Little nucleus theorem} \begin{theorem}\label{Thm:little} {\it The comonads $\rgt \MndL:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal C}{\mathcal M}{\mathcal N}$ and $\rgt \CmnL:{\mathcal M}{\mathcal N}{\mathcal D}\to {\mathcal M}{\mathcal N}{\mathcal D}$, defined by \begin{eqnarray} \rgt \CmnL & = & {\sf AM}\circ{\sf KC}\circ{\sf AC}\circ{\sf EM} \\ \rgt \MndL &=& {\sf AC}\circ{\sf EM}\circ{\sf AM}\circ{\sf EC} \end{eqnarray} are idempotent. Iterating them leads to the natural equivalences \[ \rgt \MndL\circ \rgt\MndL \stackrel \varepsilon \simeq \rgt\MndL \qquad\qquad\qquad\qquad \rgt\CmnL\circ \rgt\CmnL \stackrel\varepsilon\simeq\rgt\CmnL\] Moreover, their categories of coalgebras are equivalent: \begin{gather} {\mathcal C}{\mathcal M}{\mathcal N}^{\rgt\MndL}\ \ \simeq\ \ \mathsf{Luc}\ \ \simeq\ \ {\mathcal M}{\mathcal N}{\mathcal D}^{\rgt\CmnL}\label{eq:luceq} \end{gather} with $\mathsf{Luc}$ as defined in \eqref{eq:Lucl}, and \begin{eqnarray} {\mathcal C}{\mathcal M}{\mathcal N}^{\rgt\MndL} & = & \left\{\rgt F\in {\mathcal C}{\mathcal M}{\mathcal N}\ |\ \rgt\MndL \left( \rgt F\right) \stackrel \varepsilon \cong \rgt F \right\} \\ {\mathcal M}{\mathcal N}{\mathcal D}^{\rgt\CmnL} & = & \left\{\lft F\in {\mathcal M}{\mathcal N}{\mathcal D}\ |\ \rgt\CmnL \left(\lft F\right) \stackrel\varepsilon \cong \lft F\right\} \end{eqnarray} \begin{figure}[!ht] \begin{center} $\begin{tikzar}[row sep = 2em,column sep = 3em] \& {\mathcal M}{\mathcal N}{\mathcal D} \arrow[loop, out = 135, in = 45, looseness = 4]{}{\displaystyle \rgt \CmnL} \ar[phantom]{r}[description,rotate = -90]{\dashv} \ar[bend right=8,two heads]{r} \& {\mathcal M}{\mathcal N}{\mathcal D}^{\rgt \CmnL}\ar[bend right=8,hook]{l} \ar[leftrightarrow]{d}[description]{\mbox{\LARGE$\sim$}} \\ \& \& \mathsf{Luc} \ar[leftrightarrow]{dd}[description]{\mbox{\LARGE$\sim$}} \ar[phantom]{lld}[description,rotate = -75]{\dashv} \ar[bend left=8,twoheadleftarrow]{lld} \ar[bend right=8,hook]{lld} \\ {\mathcal A}{\mathcal D}{\mathcal J} \ar[phantom]{uur}[rotate = 45]{\scriptstyle \bot} \ar[phantom]{ddr}[rotate = -45]{\scriptstyle \bot} \ar[bend left=10,two heads]{uur}{{\sf AM}} \ar[bend right=10,two heads]{ddr}[swap]{{\sf AC}} \ar[bend right=10,leftarrowtail]{uur}[swap]{{\sf EM}} \ar[bend left=10,leftarrowtail]{ddr}{{\sf KC}} \\ \&\& \mathsf{Nuc} \ar[phantom]{llu}[description,rotate = -105]{\dashv} \ar[bend right=8,twoheadleftarrow]{llu} \ar[bend left=8,hook']{llu} \ar[leftrightarrow]{d}[description]{\mbox{\LARGE$\sim$}} \\ \& {\mathcal C}{\mathcal M}{\mathcal N} \arrow[loop, out = -45, in=-135, looseness = 6]{}{\displaystyle \lft \CmnL} \& {\mathcal C}{\mathcal M}{\mathcal N}^{\lft \CmnL} \ar[phantom]{l}[description,rotate = -90]{\dashv} \ar[bend left=8,hook']{l} \ar[bend right=8,twoheadleftarrow]{l} \end{tikzar}$ \hspace{1.5cm} $\begin{tikzar}[row sep = 2em,column sep = 3em] \& {\mathcal M}{\mathcal N}{\mathcal D} \arrow[loop, out = 135, in = 45, looseness = 4]{}{\displaystyle \lft \MndL} \ar[phantom]{r}[description,rotate = -90]{\dashv} \ar[bend left=8,two heads]{r} \& {\mathcal M}{\mathcal N}{\mathcal D}^{\lft \MndL}\ar[bend left=8,hook']{l} \ar[leftrightarrow]{d}[description]{\mbox{\LARGE$\sim$}} \\ \& \& \mathsf{Nuc} \ar[leftrightarrow]{dd}[description]{\mbox{\LARGE$\sim$}} \ar[phantom]{lld}[description,rotate = -75]{\dashv} \ar[bend right=8,twoheadleftarrow]{lld} \ar[bend left=8,hook']{lld} \\ {\mathcal A}{\mathcal D}{\mathcal J} \ar[phantom]{uur}[rotate = 45]{\scriptstyle \bot} \ar[phantom]{ddr}[rotate = -45]{\scriptstyle \bot} \ar[bend right=10,two heads]{uur}[swap]{{\sf AM}} \ar[bend left=10,two heads]{ddr}{{\sf AC}} \ar[bend left=10,leftarrowtail]{uur}{{\sf KM}} \ar[bend right=10,leftarrowtail]{ddr}[swap]{{\sf EC}} \\ \&\& \mathsf{Luc} \ar[phantom]{llu}[description,rotate = -105]{\dashv} \ar[bend left=8,twoheadleftarrow]{llu} \ar[bend right=8,hook]{llu} \ar[leftrightarrow]{d}[description]{\mbox{\LARGE$\sim$}} \\ \& {\mathcal C}{\mathcal M}{\mathcal N} \arrow[loop, out = -45, in=-135, looseness = 6]{}{\displaystyle \rgt \MndL} \& {\mathcal C}{\mathcal M}{\mathcal N}^{\rgt\MndL} \ar[phantom]{l}[description,rotate = -90]{\dashv} \ar[bend right=8,hook]{l} \ar[bend left=8,twoheadleftarrow]{l} \end{tikzar}$ \caption{Relating little and big nuclei} \label{Fig:thms} \end{center} \end{figure}} \end{theorem} \noindent The {\bf proof} boils down to straightforward verifications with the simple nucleus formats. Fig.~\ref{Fig:thms} summarizes and aligns the claims of Theorems~\ref{Sec:Theorem} and~\ref{Thm:little}. \subsection{Simplices and the simplex category} One of the seminal ideas of algebraic topology arose from Eilenberg's computations of homology groups of topological spaces by decomposing them into simplices \cite{EilenbergS:singular}. An $m$-simplex is the set \begin{eqnarray}\label{eq:simpl} \Delta_{[m]} & = & \left\{\vec x \in [0,1]^{m+1}\ \big|\ \sum_{i=0}^m x_i = 1\right\} \end{eqnarray} with the subspace topology inherited from $[0,1]^{m+1}$. The relevant structure of a topological space $X$ is captured by families of continuous maps $\Delta_{[m]} \to X$, for all $m\in {\mathbb N}$. Some such maps do not \emph{embed}\/ simplices into a space, like triangulations do, but contain degeneracies, or singularities. Nevertheless, considering the entire family of such maps to $X$ makes sure that any simplices that can be embedded into $X$ will be embedded by some of them. Since the simplicial structure is captured by each $\Delta_{[m]}$'s projections onto all $\Delta_{[\ell]}$s for $\ell \lt m$, and by $\Delta_{[m]}$'s embeddings into all $\Delta_{[n]}$s for $n\gt m$, a coherent simplicial structure corresponds to a functor of the form \mbox{$\Delta_{[-]}: \Delta \to\mathsf{Esp}$}, where $\mathsf{Esp}$ is the category of topological spaces and continuous maps\footnote{We denote the category of topological spaces by the abbreviation $\mathsf{Esp}$ of the French word \emph{espace}, not just because there are other things called ${\sf Top}$ in the same contexts, but also as authors' reminder-to-self of the tacit sources of the approach \cite{GrothendieckA:SGA1,GrothendieckA:SGA4}.}, and $\Delta$ is the simplex category.
Its objects are finite ordinals \begin{eqnarray*} [m] & = & \{0\lt1\lt 2\lt\cdots \lt m\} \end{eqnarray*} while its morphisms are the order-preserving functions \cite{Eilenberg-Zilber}. All information about the simplicial structure of topological spaces is thus captured in the matrix \begin{eqnarray}\label{eq:simp-matrix} \Upsilon \colon \Delta^o\times \mathsf{Esp} & \to & {\sf Set}\\ \protect \left[m\right]\times X & \mapsto & \mathsf{Esp}\left(\Delta_{[m]}, X\right)\notag \end{eqnarray} This is, in a sense, the \emph{``context matrix''}\/ of homotopy theory, if it were to be translated to the language of Sec.~\ref{Sec:FCA}, and construed as a geometric \emph{``concept analysis''}. \subsection{Kan adjunctions and extensions} Daniel Kan's work was mainly concerned with computing homotopy groups in combinatorial terms \cite{KanD:combinatorial}. That led to the discovery of categorical adjunctions as a tool for Kan's extensions of the simplicial approach \cite{KanD:adj}. Applying the toolkit from Sec.~\ref{Sec:Tools}, the matrix $\Upsilon$ from \eqref{eq:simp-matrix} gives rise to the following functors \begin{equation}\label{eq:deriv-kan} \prooftree \prooftree \Upsilon \colon \Delta^o\times \mathsf{Esp} \to {\sf Set} \justifies \Upsilon_\bullet \colon \Delta \to \Up \mathsf{Esp}\qquad \qquad _\bullet \Upsilon \colon \mathsf{Esp} \to \Do\Delta \endprooftree \justifies \ladj \Upsilon \colon \Do\Delta \to \Up\mathsf{Esp} \qquad \qquad \radj \Upsilon \colon \Up\mathsf{Esp} \to \Do\Delta \endprooftree \end{equation} where \begin{itemize} \item $\Do \Delta = \Dfib/\Delta \simeq {\sf Set}^{\Delta^{o}}$ is the category of simplicial sets $K:\Delta^{o} \to {\sf Set}$, or equivalently of complexes $\textstyle\int K : \eh K \to \Delta$, comprehended along the lines of Sec.~\ref{Sec:Groth-constr}; \item $\Up \mathsf{Esp} = \left(\Ofib / \mathsf{Esp}\right)^{o}$ is the opposite category of discrete opfibrations over $\mathsf{Esp}$, i.e. of functors ${\mathcal D}\tto D \mathsf{Esp}$ which establish isomorphisms between the coslices $x/{\mathcal D}\stackrel{D_x}\cong Dx/\mathsf{Esp}$. \end{itemize} The Yoneda embedding $\Delta \tto{\blacktriangledown} \Do \Delta$ makes $\Do\Delta$ into a colimit-completion of $\Delta$, and induces the extension $\ladj \Upsilon \colon \Do\Delta \to \Up\mathsf{Esp}$ of $\Upsilon_\bullet \colon \Delta \to \Up \mathsf{Esp}$. The Yoneda embedding $\mathsf{Esp} \tto{\blacktriangle} \Up\mathsf{Esp}$ makes $\Up\mathsf{Esp}$ into a limit-completion of $\mathsf{Esp}$, and induces the extension $\radj \Upsilon \colon \Up\mathsf{Esp} \to \Do\Delta$ of $_\bullet \Upsilon \colon \mathsf{Esp} \to \Do\Delta$. However, $\mathsf{Esp}$ is a large category, and the category $\Up\mathsf{Esp}$ lives in another universe. Moreover, $\mathsf{Esp}$ already has limits, and completing it to $\Up\mathsf{Esp}$ obliterates them, and adjoins the formal ones. Kan's original extension was defined using the original limits in $\mathsf{Esp}$, and there was no need to form $\Up\mathsf{Esp}$.
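\para{Remark.} Although not needed in the sequel, it may help to recall the standard presentation of $\Delta$ (a well-known fact, consistent with the definition above): every order-preserving function decomposes into the coface maps $\delta_i\colon [m-1]\to [m]$, which skip the value $i$, and the codegeneracy maps $\sigma_i\colon [m+1]\to [m]$, which repeat it. A simplicial set $K\colon \Delta^{o}\to {\sf Set}$ is thus determined by its sets $K_m$ of $m$-simplices together with the face and degeneracy actions
\[ \partial_i\ =\ K(\delta_i)\colon K_m \to K_{m-1} \qquad\qquad s_i\ =\ K(\sigma_i)\colon K_m \to K_{m+1} \]
subject to the simplicial identities.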
Using the standard notation $\mathsf{sSet}$ for simplicial sets ${\sf Set}^{\Delta^{o}}$, or equivalently for complexes $\Do\Delta$, Kan's original adjunction boils down to \begin{equation}\label{eq:galois-Kan} \begin{tikzar}{} {\mathbb K}\tto K\Delta \ar[mapsto]{dd} \& \mathsf{sSet} \ar[bend right=15]{dd}[swap]{\ladj \Upsilon}\ar[phantom]{dd}[description]\dashv \& \displaystyle \left(\Delta_{[-]}/ X\tto{\mathrm{Dom}} \Delta\right) \\ \\ \displaystyle \mathop{\protect\underrightarrow{\mathrm{lim}}}\ \Big({\mathbb K}\tto{K}\Delta \tto{\Delta_{[-]}} \mathsf{Esp} \Big) \& \mathsf{Esp} \ar[bend right=15]{uu}[swap]{\radj \Upsilon} \& X \ar[mapsto]{uu} \end{tikzar} \end{equation} where \begin{itemize} \item $\Upsilon_\bullet = \left(\Delta \tto{\Delta_{[-]}}\mathsf{Esp}\tto{\blacktriangle} \Up\mathsf{Esp}\right)$ is truncated to $\Delta \tto{\Delta_{[-]}}\mathsf{Esp}$; \item $\radj \Upsilon\colon \Up\mathsf{Esp}\to \Do\Delta$ from \eqref{eq:galois-cat}, restricted along $\mathsf{Esp}\tto{\blacktriangle}\Up\mathsf{Esp}$, leads to \begin{eqnarray*} \mathop{\protect\underleftarrow{\mathrm{lim}}}\left(1\tto X \mathsf{Esp} \tto{_\bullet \Upsilon} \Dfib\diagup \Delta\right) & = & \left(\Delta_{[-]}/X\tto{\mathrm{Dom}} \Delta\right) \end{eqnarray*} \end{itemize} The adjunction ${\sf MA}(\Upsilon) = \left(\adj \Upsilon : \mathsf{Esp} \to \mathsf{sSet}\right)$, displayed in \eqref{eq:galois-Kan}, has been studied for many years. The functor $\ladj \Upsilon: \mathsf{sSet}\to \mathsf{Esp}$ is usually called the geometric realization \cite{MilnorJ:geometric-realization}, whereas $\radj \Upsilon:\mathsf{Esp} \to \mathsf{sSet}$ is the singular decomposition on which Eilenberg's singular homology was based \cite{EilenbergS:singular}. Kan spelled out the concept of adjunction from the relationship between these two functors \cite{KanD:adj,KanD:functors}. The overall idea of the approach to homotopies through adjunctions was that recognizing this abstract relationship between $\ladj \Upsilon$ and $\radj \Upsilon$ should provide a general method for transferring the invariants of interest between a geometric and an algebraic or combinatorial category. For a geometric realization $\ladj \Upsilon K\in \mathsf{Esp}$ of a complex $K\in\mathsf{sSet}$, the homotopy groups can be computed in purely combinatorial terms, from the structure of $K$ alone \cite{KanD:combinatorial}. Indeed, the spaces in the form $\ladj \Upsilon K$ boil down to Whitehead's CW-complexes \cite{MilnorJ:geometric-realization,whitehead1949combinatorial}. What about the spaces that do not happen to be in this form? \subsection{Troubles with localizations} The upshot of Kan's adjunction $\adj \Upsilon\colon\mathsf{Esp}\to \mathsf{sSet}$ is that for any space $X$, we can construct a CW-complex $\rgt \Upsilon X = \ladj \Upsilon\radj\Upsilon X$, with a continuous map $\rgt \Upsilon X \tto\varepsilon X$, that arises as the counit of Kan's adjunction. In a formal sense, this counit is the best approximation of $X$ by a CW-complex. When do such approximations preserve the geometric invariants of interest? By the late 1950s, it was already known that such combinatorial approximations work in many special cases, certainly whenever $\varepsilon$ is invertible. But in general, even $\rgt \Upsilon \rgt \Upsilon X \tto \varepsilon \rgt \Upsilon X$ is not always invertible. The idea of approximating topological spaces by combinatorial complexes thus grew into a quest for making the units or the counits of adjunctions invertible.
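For the record, the two functors at stake admit the familiar explicit descriptions (standard, and consistent with \eqref{eq:galois-Kan}): the singular complex is just the matrix \eqref{eq:simp-matrix} with one argument fixed, and the geometric realization is the coend
\[ \left(\radj\Upsilon X\right)_{m}\ =\ \mathsf{Esp}\left(\Delta_{[m]}, X\right) \qquad\qquad \ladj\Upsilon K\ =\ \int^{[m]\in\Delta} K_{m}\times \Delta_{[m]} \]
i.e. the quotient of $\coprod_{m} K_{m}\times \Delta_{[m]}$ which glues the topological simplices along the faces and degeneracies of $K$.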
Which spaces have the same invariants as the geometric realizations of their singular\footnote{The word ``singular'' here means that the simplices, into which a space may be decomposed, do not have to be embedded into it, which would make the decomposition \emph{regular}, but that the continuous maps from their geometric realizations may have \emph{singularities}.} decompositions? For particular invariants, there are direct answers \cite{Eilenberg-MacLane:relations,Eilenberg-MacLane:relationsTwo}. In general, though, localizing at suitable spaces along suitable reflections or coreflections aligns \eqref{eq:deriv-kan} with \eqref{eq:steps}, so that algebraic topology can be construed as a geometric extension of concept analysis from Sec.~\ref{Sec:FCA}, extracting concept nuclei from context matrices as the invariants of adjunctions that they induce. Some of the most influential methods of algebraic topology can be interpreted in this way. Grossly oversimplifying, we mention three approaches. The direct approach \cite[Vol. I, Ch.~5]{Gabriel-Zisman,BorceuxF:handbook} was to enlarge the given category by formal inverses of a family of arrows, usually called weak equivalences, and denoted by $\Sigma$. They are thus made invertible in a calculus of fractions, generalizing the calculus by which the integers, or the elements of any integral domain, are made invertible in the field of fractions. When applied to a large category, like $\mathsf{Esp}$, this calculus of fractions generally involves manipulating proper classes of arrows, and the resulting category may even have large hom-sets. Another approach \cite{dwyer2005homotopy,QuillenD:book} is to factor out the $\Sigma$-arrows using two factorization systems. This approach is similar to the constructions outlined in Sections \ref{Sec:lin} and \ref{Sec:se-double}, but the factorizations of continuous maps that arise in this framework are not unique: they comprise families of fibrations and cofibrations, which are orthogonal only weakly, by lifting without uniqueness. Abstract homotopy models in categories thus lead to pairs of \emph{weak}\/ factorization systems. Sticking with the notation ${\mathcal E}^{\bullet}\wr {\mathcal M}$ and ${\mathcal E}\wr {\mathcal M}^{\bullet}$ for such weak factorization systems, the idea is that the family $\Sigma$ is now generated by composing the elements of ${\mathcal E}^{\bullet}$ and ${\mathcal M}^{\bullet}$. Localizing at the arrows from ${\mathcal E}\cap {\mathcal M}$, which are orthogonal to both ${\mathcal M}^{\bullet}$ and ${\mathcal E}^{\bullet}$, makes $\Sigma$ invertible. It turns out that suitable factorizations can be found both in $\mathsf{Esp}$ and in $\mathsf{sSet}$, to make the adjunction between spaces and complexes into an equivalence. This was Dan Quillen's approach \cite{QuillenD:rational,QuillenD:book}. The third approach \cite{Applegate-Tierney:models,Applegate-Tierney:iterated} tackles the task of making the arrows $\rgt \Upsilon X \tto\varepsilon X$ invertible by modifying the comonad $\rgt \Upsilon$ until it becomes idempotent, and then localizing at the coalgebras of this idempotent comonad. Note that this approach does not tamper with the continuous maps in $\mathsf{Esp}$, be it to make some of them formally invertible, or to factor them out. The idea is that an idempotent comonad, call it $\rgt \Upsilon_\infty:\mathsf{Esp} \to \mathsf{Esp}$, should localize any space $X$ at a space $\rgt \Upsilon_\infty X$ such that $\rgt \Upsilon_\infty\rgt \Upsilon_\infty X \stackrel \varepsilon \cong \rgt \Upsilon_\infty X$.
That means that $\rgt\Upsilon_\infty$ is an idempotent comonad. The quest for such a comonad is illustrated in Fig.~\ref{Fig:idempot}. \begin{figure}[!ht] \[\begin{tikzar}[row sep=2.5cm,column sep=3cm] \mathsf{sSet} \arrow[bend right = 12]{d} \arrow[bend left = 12,leftarrow]{d} \arrow[phantom]{d}[description]{\scriptstyle \adj\Upsilon} \arrow[bend right = 7,thin]{dr} \arrow[bend left = 7,leftarrow,thin]{dr} \arrow[bend right = 4,thin]{drr} \arrow[bend left = 4,leftarrow,thin]{drr} \arrow[phantom]{dr}[description]{\scriptstyle\Upsilon^0\dashv\Upsilon_0} \arrow[phantom]{drr}[description]{\scriptstyle\Upsilon^1\dashv\Upsilon_1} \arrow[bend right = 2,thin,no head]{drrr} \arrow[bend left = 2,thin,no head]{drrr} \arrow[phantom]{drrr}[description]{\scriptstyle\Upsilon^\alpha\dashv\Upsilon_\alpha} \\ \mathsf{Esp} \arrow[loop, out = -50, in=-130, looseness = 6]{}[swap]{\rgt \Upsilon} \arrow[bend right = 9]{r} \arrow[bend left = 9,leftarrow]{r} \arrow[phantom]{r}[description]{\upadj V} \& \Emc \mathsf{Esp} \Upsilon \arrow[phantom]{r}[description,rotate=-90]{\dashv} \arrow[loop, out = -50, in=-130, looseness = 6]{}[swap]{\rgt \Upsilon_0} \arrow[bend right = 9]{r} \arrow[bend left = 9,leftarrow]{r} \arrow[phantom]{r}[description]{\upadj V} \& \arrow[phantom]{r}[description,rotate=-90]{\dashv} \left(\Emc\mathsf{Esp} \Upsilon\right)^{\rgt \Upsilon_0} \arrow[loop, out = -60, in=-120, looseness = 5.5]{}[swap]{\rgt \Upsilon_1} \arrow[bend right = 9,dotted]{r} \& \cdots \arrow[bend right = 9,dotted]{l} \arrow[loop, out = -55, in=-125, looseness = 12]{}[swap]{\rgt \Upsilon_\alpha} \end{tikzar}\] \caption{Iterating the comonad resolutions for $\rgt \Upsilon$} \label{Fig:idempot} \end{figure} $\Emc \mathsf{Esp} \Upsilon$ denotes the category of coalgebras for the comonad $\rgt \Upsilon = \ladj \Upsilon\radj \Upsilon$, the adjunction $\adj V:\mathsf{Esp}\to \Emc \mathsf{Esp} \Upsilon$ is the final resolution of this comonad, and $\Upsilon^0$ is the couniversal comparison functor into this resolution, mapping a complex $K$ to the coalgebra $\ladj \Upsilon K \tto{\ladj\Upsilon\eta} \ladj\Upsilon\radj\Upsilon\ladj \Upsilon K$. Since $\mathsf{sSet}$ is a complete category, $\Upsilon^0$ has a right adjoint $\Upsilon_0$, and they induce the comonad $\rgt\Upsilon_0$ on $\Emc \mathsf{Esp}\Upsilon$. If $\rgt \Upsilon$ were idempotent, then the final resolution $\adj V$ would be a coreflection, and the comonad $\rgt \Upsilon_0$ would be (isomorphic to) the identity. But $\rgt \Upsilon$ is not idempotent, and the construction can be applied to $\rgt \Upsilon_0$ again, leading to $\left(\Emc\mathsf{Esp} \Upsilon\right)^{\rgt \Upsilon_0}$, with the final resolution generically denoted $\adj V:\Emc \mathsf{Esp} \Upsilon \to \left(\Emc\mathsf{Esp} \Upsilon\right)^{\rgt \Upsilon_0}$, and the comonad $\rgt\Upsilon_1$ on $\left(\Emc\mathsf{Esp} \Upsilon\right)^{\rgt \Upsilon_0}$. Remarkably, Applegate and Tierney \cite{Applegate-Tierney:models} found that the process needs to be repeated \emph{transfinitely}\/ before the idempotent comonad $\rgt \Upsilon_\infty$ is reached. At each step, some parts of a space that are not combinatorially approximable are eliminated, but that causes some other parts, which were previously approximable, to cease being so. And this may still be the case after infinitely many steps. A transfinite induction becomes necessary. The situation is similar to Cantor's quest for accumulation points of the convergence domains of Fourier series, which led him to discover transfinite induction in the first place.
\begin{figure}[!ht] \[\begin{tikzar}[row sep=2.5cm,column sep=3cm] \mathsf{sSet} \arrow[loop, out = 135, in = 45, looseness = 4]{}[swap]{\lft \Upsilon} \arrow[bend right = 9]{d} \arrow[bend left = 9,leftarrow]{d} \arrow[phantom]{d}[description]{\adj \Upsilon} \arrow[bend right = 5]{r} \arrow[bend left = 5,leftarrow]{r} \arrow[phantom]{r}[description]{\scriptstyle\Upsilon^0\dashv \Upsilon_0} \& \Emc\mathsf{Esp} \Upsilon \arrow[loop, out = 135, in = 45, looseness = 2.5]{}[swap]{\Lft \Upsilon} \arrow[bend right = 9]{d} \arrow[bend left = 9,leftarrow]{d} \arrow[phantom]{d}[description]{\nadj \Upsilon} \arrow[leftrightarrow]{r}[description]{\mbox{\Huge$\simeq$}} \& \Ec \mathsf{sSet} \Upsilon \arrow[bend right = 9]{d} \arrow[bend left = 9,leftarrow]{d} \arrow[phantom]{d}[description]{\nadj \Upsilon} \\ \mathsf{Esp} \arrow[loop, out = -45, in=-135, looseness = 6]{}[swap]{\rgt \Upsilon} \arrow[bend right = 5]{ur} \arrow[bend left = 5,leftarrow]{ur} \arrow[phantom]{ur}[description]{\scriptstyle\adj V} \arrow[bend right = 5]{r} \arrow[bend left = 5,leftarrow]{r} \arrow[phantom]{r}[description]{\scriptstyle H^1\dashv H_1} \& \Emm \mathsf{sSet} \Upsilon \arrow[loop, out = -45, in=-135, looseness = 6]{}[swap]{\Rgt \Upsilon} \arrow[leftrightarrow]{r}[description]{\mbox{\Huge$\simeq$}} \& \Em \mathsf{Esp} \Upsilon \end{tikzar}\] \caption{The nucleus of the Kan adjunction} \label{Fig:kan-nuc} \end{figure} The nucleus of the same adjunction is displayed in Fig.~\ref{Fig:kan-nuc}. The category $\Emc \mathsf{Esp} \Upsilon$ comprises spaces that may not be homeomorphic with a geometric realization of a complex, but are retracts of such realizations, projected along the counit $\rgt \Upsilon X \eepi \varepsilon X$, and included along the structure coalgebra $X\rightarrowtail \rgt \Upsilon X$. But the projection does not preserve simplicial decompositions; i.e., it is not an $\rgt \Upsilon$-coalgebra homomorphism. The transfinite construction of the idempotent monad $\rgt \Upsilon_\infty$ was thus needed to extract just those spaces where the projection boils down to a homeomorphism. But Prop.~\ref{Prop:three} implies that simplicial decompositions of spaces in $\Emc \mathsf{Esp} \Upsilon$ can be equivalently viewed as objects of the simple nucleus category $\Ec \mathsf{sSet} \Upsilon$. Any space $X$ decomposed along a coalgebra $X\rightarrowtail \rgt \Upsilon X$ in $\Emc \mathsf{Esp} \Upsilon$ can be equivalently viewed in $\Ec \mathsf{sSet} \Upsilon$ as a complex $K$ with an idempotent $\ladj \Upsilon K \tto\varphi \ladj \Upsilon K$. This idempotent secretly splits on $X$, but the category $\Ec \mathsf{sSet} \Upsilon$ does not know that. It does know Corollary~\ref{corollary:retr}, though, which says that the object $\varphi_K = \left<K, \ladj \Upsilon K \tto\varphi \ladj \Upsilon K\right>$ is a retract of $\Rgt \Upsilon\varphi_K$; and $\Rgt \Upsilon\varphi_K$ secretly splits on $\rgt \Upsilon X$. The space $X$ is thus represented in the category $\Ec \mathsf{sSet} \Upsilon$ by the idempotent $\varphi_K$, which is a retract of $\Rgt \Upsilon\varphi_K$, representing $\rgt \Upsilon X$. Simplicial decompositions of spaces along coalgebras in $\Emc \mathsf{Esp} \Upsilon$ can thus be equivalently captured as idempotents over simplicial sets within the simple nucleus category $\Ec \mathsf{sSet} \Upsilon$. The idempotency of the nucleus construction can be interpreted as a suitable completeness claim for such representations.
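Concretely, in the simplest terms (a sketch of this correspondence, with the coherence conditions suppressed): a coalgebra structure $X\tto{\alpha}\rgt\Upsilon X$ is split by the counit, $\varepsilon\circ\alpha = \mathrm{id}_X$, so it induces the idempotent
\[ \varphi\ =\ \Big(\ladj\Upsilon K = \rgt\Upsilon X \tto{\ \varepsilon\ } X \tto{\ \alpha\ } \rgt\Upsilon X = \ladj\Upsilon K\Big) \qquad \mbox{ for } K\ =\ \radj\Upsilon X \]
which splits on $X$ in $\mathsf{Esp}$; the pair $\left<K,\varphi\right>$ is the object $\varphi_K$ of $\Ec \mathsf{sSet} \Upsilon$ representing $X$.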
\para{Remark.} How is it possible that $X$ is not a retract of $\rgt \Upsilon X$ in $\Emc \mathsf{Esp} \Upsilon$, but the object $\varphi_K$, representing $X$ in the equivalent category $\Ec \mathsf{sSet} \Upsilon$, is recognized as a retract of the object $\Rgt \Upsilon\varphi_K$, representing $\rgt \Upsilon X$? The answer is that the retractions occur at different levels of the representation. Recall, first of all, that $\Ec\mathsf{sSet} \Upsilon$ is a simplified form of $\EMC \mathsf{sSet} \Upsilon$. The reader familiar with Beck's Theorem, this time applied to comonadicity, will remember that $X$ can be extracted from $\rgt\Upsilon X$ using an equalizer that splits in $\mathsf{Esp}$, when projected along a forgetful functor $\ladj V: \Emc \mathsf{Esp} \Upsilon \to \mathsf{Esp}$. This split equalizer in $\mathsf{Esp}$ lifts back along the comonadic $\ladj V$ to an equalizer in $\Emc \mathsf{Esp} \Upsilon$, which is generally not split. On the other hand, the splitting of this equalizer occurs in $\EMC \mathsf{sSet} \Upsilon$ as the algebra carrying the corresponding coalgebra. In $\Ec \mathsf{sSet} \Upsilon$, this splitting is captured as the idempotent that it induces. We have shown, of course, that all three categories are equivalent. But $\Ec \mathsf{sSet} \Upsilon$ internalizes the absolute limits that get reflected along the forgetful functor $\ladj V$. It makes them explicit, and available for computations. A detailed account is left for the sequel. \subsection{What we did} We studied nuclear adjunctions. To garner intuition, we considered some examples. Since every adjunction has a nucleus, the reader's favorite adjunctions provide additional examples and applications. Our favorite example is in \cite{PavlovicD:bicompletions}. In any case, the abstract concept arose from concrete applications, so there are many \cite{PavlovicD:CALCO15,PavlovicD:ICFCA12,PavlovicD:Samson13,PavlovicD:HoTSoS15,PavlovicD:LICS17,PavlovicD:AMAI17,WillertonS:tight}. Last but not least, the nucleus construction itself is an example of itself, as it provides the nuclei of the adjunctions between monads and comonads. \subsection{What we did not do} We studied adjunctions, monads, and comonads in terms of adjunctions, monads, and comonads. We took category theory as a language and analyzed it in that same language. We preached what we practice. There is, of course, nothing unusual about that. There are many papers about the English language that are written in English. However, self-applications of category theory get complicated. They sometimes cause chain reactions. Categories and functors form a category, but natural transformations make them into a 2-category. 2-categories form a 3-category, 3-categories a 4-category, and so on. Unexpected things already happen at level 3 \cite{Gordon-Power-Street,GurskiN:threecats}. Strictly speaking, the theory of categories is not a part of category theory, but of \emph{higher}\/ category theory \cite{BaezJ-May:higher,LeinsterT:operads,LurieJ:higher,SimpsonC:higher}. Grothendieck's \emph{homotopy hypothesis}\/ \cite{GrothendieckA:pursuing,pursuing-asterisque} made higher category theory into an expansive geometric pursuit, subsuming homotopy theory. While most theories grow to be simpler as they solve their problems, and dimensionality reduction is, in fact, the main tenet of statistics, machine learning, and concept analysis, higher category theory makes the dimensionality increase into a principle of the method.
This opens up the realm of applications in modern physics but also presents a significant new challenge for the language of mathematical constructions. Category theory reintroduced diagrams and geometric interactions as first-class citizens of the mathematical discourse, after several centuries of the prevalence of algebraic prose, driven by the facility of printing. Categories were invented to dam the flood of structure in algebraic topology, but they also geometrized algebra. In some areas, though, they produced their own flood of structure. Since the diagrams in higher categories are of higher dimensions, and the compositions are not mere sequences of arrows, diagram chasing became a problem. While it is naturally extended into cell pasting by filling 2-cells into commutative polygons, diagram pasting does not boil down to a directed form of diagram chasing, as one would hope. The reason is that 1-cell composition does not extend into 2-cell composition freely, but modulo the \emph{middle-two interchange}\/ law (a.k.a. \emph{Godement's naturality}\/ law). A 2-cell can thus have many geometrically different representatives. This factoring is easier to visualize using string diagrams, which are the Poincar\'e duals of the pasting diagrams. The duality maps 2-cells into vertices, and 0-cells into faces of string diagrams. Chasing 2-categorical string diagrams is thus a map-coloring activity. In the earlier versions of this paper, the nucleus was presented as a 2-categorical construction. We spent several years validating some of the results at that level of generality, and drawing colored maps to make them communicable. Introducing a new idea in a new language is a bootstrapping endeavor. It may be possible when the boots are built and strapped, but not before that. At least in our early presentations, the concept of nucleus and the diagrams of its 2-categorical context evolved into two narratives. This paper became possible when we gave up on one of the narratives, and factored out the 2-categorical aspects. \subsection{What needs to be done} In view of Sec.~\ref{Sec:HT}, a higher categorical analysis of the nucleus construction seems to be of interest. The standard reference for the 2-categories of monads and comonads is \cite{StreetR:monads}, extended in \cite{Street-Lack:monads}. The adjunction morphisms were introduced in \cite{Auderset}. Their 1-cells, which we sketch in the Appendix, are the lax versions of the morphisms we use in Sec.~\ref{Sec:cat}. The 2-cells are easy to derive from the structure preservation requirement, though less easy to draw, and often even more laborious to read. Understanding is a process that unfolds at many levels. The language of categories facilitates understanding by its flexibility, but it can also obscure its subject when imposed rigidly. The quest for categorical methods of geometry has grown into a quest for geometric methods of category theory. There is a burgeoning new scene of diagrammatic tools \cite{CoeckeB-Kissinger:book,Hinze-Marsden}. If pictures help us understand categories, then categories will help us to speak in pictures, and the nuclear methods will help us mine concepts as invariants. \subsection{What are categories and what are their model structures?} \label{Sec:What-cat} The spirit of category theory is that the objects should be studied as black boxes, in terms of the morphisms coming in and out of them.
If categories themselves are studied in the spirit of category theory, then they should be studied in terms of the functors coming in and out. A functor is defined by specifying an object part and an arrow part, which confirms that a category consists of objects and arrows. Any functor $G:{\mathbb A}\to {\mathbb B}$ can be decomposed\footnote{An overview of the basic structure of factorization systems is in Appendix~\ref{appendix:factorizations}.}, as displayed in Fig.~\ref{Fig:EssFfa}, into a surjection on the objects followed by an injection on the arrows, through the category ${\mathbb A}_G$, with the objects of ${\mathbb A}$ and the arrows of ${\mathbb B}$. \begin{figure}[!ht] \begin{center} \begin{minipage}{.45\linewidth} \footnotesize \begin{eqnarray*} |{\mathbb A}_G| & = & |{\mathbb A}|\\ {\mathbb A}_G(u,v) & = & {\mathbb B}(Gu, Gv) \end{eqnarray*} \end{minipage} \hspace{2em} \begin{tikzar}[row sep = 3.5em,column sep = 4em] {\mathbb A}_G \ar[tail]{d}[swap]{{\sf Ffa}(G)} \& {\mathbb A} \ar{dl}[description]{G}\ar[two heads]{l}[pos=.4,swap]{{\sf Ess}(G)}\\ {\mathbb B} \end{tikzar} \caption{Factoring of an arbitrary functor $G$ through $({\sf Ess}\wr{\sf Ffa})$} \label{Fig:EssFfa} \end{center} \end{figure} The orthogonality of the essentially surjective functors $E\in {\sf Ess}$ and full-and-faithful functors $M\in {\sf Ffa}$ is displayed in Fig.~\ref{Fig:EssFfa-fact}. \begin{figure}[!ht] \begin{center} \begin{minipage}{.45\linewidth} \footnotesize \begin{eqnarray*} HE \cong U & \rightsquigarrow & Hy = UE^{-1} (y)\\ M H \cong V & \rightsquigarrow & Hf = M^{-1} V(f) \end{eqnarray*} \end{minipage} \hspace{2em} \begin{tikzar}[row sep=3em,column sep=3em] {\mathbb A}\ar[two heads]{d}[swap]{E} \ar{r}{U} \& {\mathbb C}\ar[tail]{d}{M} \\ {\mathbb B}\ar{r}[swap]{V} \ar[dotted]{ur}[description]{H} \& {\mathbb D} \end{tikzar} \caption{The orthogonality of an essential surjection $E\in {\sf Ess}$ and a full-and-faithful $M\in {\sf Ffa}$} \label{Fig:EssFfa-fact} \end{center} \end{figure} Since $E$ is essentially surjective, for any object $y$ in ${\mathbb B}$ there is some $x$ in ${\mathbb A}$ such that $Ex \cong y$, so we take $Hy = Ux$. If $Ex' \cong y$ also holds for some other $x'$ in ${\mathbb A}$ then $MUx \cong VE x \cong Vy \cong VEx' \cong MUx'$ implies $Ux \cong Ux'$, because $M$ is full-and-faithful. The arrow part is defined using the bijections between the hom-sets provided by $M$. The factorization system $({\sf Ess} \wr {\sf Ffa})$ can be used as a stepping stone into category theory. It confirms that functors see categories as comprised of objects and arrows. \para{Functors are not the only available morphisms between categories.} Many mathematical theories study objects that are instances of categories, but require morphisms for which the functoriality is not enough. E.g., a topology is a lattice of open sets, and a lattice is, of course, a special kind of category. A continuous map between two topological spaces is an adjunction between the lattices of opens: the requirement that the inverse image of a continuous map preserves the unions of the opens means that it has a right adjoint. The general functors between topologies, i.e. merely monotone maps between the lattices of opens, are seldom studied because they do not capture continuity, which is the subject of topology. For an even more general example, consider basic set theory. Functions are defined as total and single-valued relations.
A total and single-valued relation between two sets is an adjunction between the two lattices of subsets: the totality is the unit of the adjunction, and the single-valuedness is the counit \cite{PavlovicD:mapsII}. A general relation induces a monotone map, i.e. a functor between the lattices of subsets. But studying functions means studying adjunctions. There are many mathematical theories where the objects of study are categories of some sort, and the morphisms between them are adjunctions. \para{What are categories in terms of adjunctions?} We saw in Sec.~\ref{Sec:churan} that applying the factorization system $({\sf Ess} \wr {\sf Ffa})$ to a pair of adjoint functors gives rise to the two initial resolutions of the adjunction: the (Kleisli) categories of free algebras and coalgebras. Completing them to the final resolutions lifts Fig.~\ref{Fig:EssFfa} to Fig.~\ref{Fig:Comparison-Comonadic}. \begin{figure}[!ht] \begin{center} \begin{minipage}{.3\linewidth} \scriptsize \begin{eqnarray*} \big|\Ec {\mathbb A} F\big| & = & \coprod_{x\in |{\mathbb A}|}\left\{\ladj F x\tto{\alpha_x} \ladj Fx\ |\ \eqref{eq:monCoalg}\right\}\\ \Ec {\mathbb A} F (\alpha_x, \gamma_z) & = & \Emc {\mathbb B} F(\ladj R\alpha_x, \ladj R\gamma_z) \end{eqnarray*} \end{minipage} \qquad\quad\begin{tikzar}[row sep = 2em,column sep = 3.5em] \Ec {\mathbb A} F \ar[tail]{dd}[description]{{\mathcal F}(F)} \& {\mathbb A} \ar[bend right=12]{ddl}[swap]{\ladj F}\ar[phantom]{ddl}[description]\dashv \ \ar[two heads]{l}[swap]{{\mathcal C}^\bullet(F)} \\ \\ {\mathbb B} \ar[bend right=12]{uur}[swap]{\radj F} \ar[two heads]{r}[swap]{{\mathcal F}^\bullet(F)} \& \Em {\mathbb B} F \ar[tail]{uu}[description]{{\mathcal C}(F)} \end{tikzar} \qquad\quad \begin{minipage}{.3\linewidth} \scriptsize \begin{eqnarray*} \big|\Em {\mathbb B} F\big| & = & \coprod_{u\in |{\mathbb B}|}\left\{\radj F u\tto{\beta^u} \radj Fu\ |\ \eqref{eq:algComon}\right\}\\ \Em {\mathbb B} F (\beta^u, \delta^w) & = & \Emm {\mathbb A} F(\radj L \beta^u, \radj L\delta^w) \end{eqnarray*} \end{minipage} \caption{Factoring the adjunction $F = (\adj F)$ through $({\mathcal C}^\bullet\wr {\mathcal F})$ and $({\mathcal C}\wr {\mathcal F}^\bullet)$} \label{Fig:Comparison-Comonadic} \end{center} \end{figure} This lifting is yet another perspective on the equivalences $\ladj R: \Ec {\mathbb A} F \to \Emc {\mathbb B} F$ and $\radj L :\Em{\mathbb B} F\to \Emm {\mathbb A} F$ from Sec.~\ref{Sec:simple} and \cite[Theorems~III.2 and III.3]{PavlovicD:LICS17}. Note that the adjunctions are taken here as morphisms in the direction of their left-hand component (like functions, and unlike the continuous maps), so that the functors ${\mathcal C}(F)$ and ${\mathcal F}^\bullet(F)$ in Fig.~\ref{Fig:Comparison-Comonadic}, as components of a right adjoint, are displayed in the opposite direction. That is why the ${\mathcal C}$-component is drawn with a tail, although in the context of left-handed adjunctions it plays the role of an abstract epi. The weak factorization systems $({\mathcal C}^\bullet\wr {\mathcal F})$ and $({\mathcal C}\wr {\mathcal F}^\bullet)$ are comprised of the families \begin{itemize} \item[$\sim$] ${\mathcal F} = \{ (\adj F)\ |\ \ladj F \mbox{ is comonadic}\}$, \item[$\sim$] ${\mathcal C}^\bullet = \{ (\adj F)\ |\ \ladj F \mbox{ is a comparison functor for a comonad}\}$, \item[$\sim$] ${\mathcal C} = \{ (\adj F)\ |\ \radj F \mbox{ is monadic}\}$, \item[$\sim$] ${\mathcal F}^\bullet = \{ (\adj F)\ |\ \radj F \mbox{ is a comparison functor for a monad}\}$.
\end{itemize} To see how these factorizations are related with $({\sf Ess}\wr{\sf Ffa})$, and how Fig.~\ref{Fig:Comparison-Comonadic} arises from Fig.~\ref{Fig:EssFfa}, recall from Sec.~\ref{Sec:churan} that the $({\sf Ess}\wr {\sf Ffa})$-decomposition of $\ladj F$ gives the initial resolution $\Klm {\mathbb A} F$, whereas the $({\sf Ess}\wr {\sf Ffa})$-decomposition of $\radj F$ gives the initial resolution $\Klc {\mathbb B} F$. However, $\Klm {\mathbb A} F \hookrightarrow \Emm {\mathbb A} F \simeq \Em {\mathbb B} F$ factors through the $({\mathcal C}\wr {\mathcal F}^\bullet)$-decomposition of $\radj F$, whereas $\Klc {\mathbb B} F \hookrightarrow \Emc {\mathbb B} F \simeq \Ec {\mathbb A} F$ factors through the $({\mathcal C}^\bullet\wr {\mathcal F})$-decomposition of $\ladj F$. In particular, while \begin{enumerate}[a)] \item the ${\sf Ess}$-image $\Klm {\mathbb A} F$ of ${\mathbb A}$ in ${\mathbb B}$ along $\ladj F$ is spanned by the isomorphisms $y\cong \ladj F x$, \item the ${\mathcal C}^\bullet$-image $\Ec {\mathbb A} F$ of ${\mathbb A}$ in ${\mathbb B}$ along $\ladj F$ is spanned by the retractions $y \retr \ladj F x$. \end{enumerate} It is easy to check that such retractions in ${\mathbb B}$ correspond to $\rgt F$-coalgebras. Worked out in full detail, this correspondence is the equivalence ${\ladj R}\colon \Ec {\mathbb A} F \simeq \Emc {\mathbb B} F$. Looking at the $({\mathcal C}^\bullet\wr {\mathcal F})$-decompositions from the two sides of this equivalence aligns the orthogonality of ${\mathcal C}^\bullet$ and ${\mathcal F}$ with the orthogonality of ${\sf Ess}$ and ${\sf Ffa}$, as indicated in Fig.~\ref{Fig:Comparison-Comonadic-fact}. \begin{figure}[!ht] \begin{center} \begin{minipage}{.45\linewidth} \footnotesize \begin{eqnarray*} HE \cong U & \rightsquigarrow & H\alpha_x =\left(V\alpha_x\begin{tikzar}[row sep = 3em,column sep = 1.8em] \ar[thin,tail]{r}{V\alpha^x} \&\hspace{.1ex}\ar[thin,two heads,bend left=30,dashed]{l}{V\tilde \alpha_x}\end{tikzar}VEx\cong MUx\right) \\ M H \cong V & \rightsquigarrow & Hf = V(f) \end{eqnarray*} \end{minipage} \hspace{2em} \begin{tikzar}[row sep=3em,column sep=3em] {\mathbb A}\ar[two heads]{d}[swap]{E} \ar{r}{U} \& \Emc {\mathbb D} G\ar[tail]{d}{M} \\ \Ec {\mathbb A} F\ar{r}[swap]{V} \ar[dotted]{ur}[description]{H} \& {\mathbb D} \end{tikzar} \caption{The orthogonality of a comparison functor $E\in {\mathcal C}^\bullet$ and a comonadic $M\in {\mathcal F}$} \label{Fig:Comparison-Comonadic-fact} \end{center} \end{figure} Since any object $\alpha_x$ of $\Ec {\mathbb A} F$ induces a retraction $\lft Fx \eepi{\tilde \alpha_x} x \mmono{\alpha^x} \lft F x$, and the comparison functor $E$ maps $x$ to $Ex = \left<\lft Fx, \ladj F \lft Fx \eepi{\varepsilon \ladj F} \ladj F x \mmono{\ladj F \eta} \ladj F\lft F x\right>$, the image $V\alpha_x$ splits into $VEx \eepi{V\tilde \alpha_x} V\alpha_x \mmono{V\alpha^x} VE x$. But the isomorphism $VEx \cong MU x$ and the comonadicity of $M$ imply that the $M$-split equalizer $V\alpha_x \mmono{V\alpha^x} VE x \cong MU x$ lifts to $\Emc {\mathbb D} G$. This lifting determines $H\alpha_x$. The conservativity of $M$ assures that $H$ is well-defined, and that the $V$-images of the $\Ec {\mathbb A} F$-morphisms in ${\mathbb D}$ lift to coalgebra homomorphisms in $\Emc {\mathbb D} G$.
\para{Moral.} Lifting the canonical factorization $({\sf Ess} \wr {\sf Ffa})$ of functors to the canonical factorizations \mbox{$({\mathcal C}^\bullet\wr {\mathcal F})$} and \mbox{$({\mathcal C}\wr {\mathcal F}^\bullet)$} of adjunctions thus boils down to \emph{generalizing from isomorphisms to retractions}. If the $({\sf Ess}\wr{\sf Ffa})$-factorization confirmed that a category, from the standpoint of functors, consists of objects and arrows, then the factorizations $({\mathcal C}^\bullet\wr {\mathcal F})$ and $({\mathcal C}\wr {\mathcal F}^\bullet)$ suggest that from the standpoint of adjunctions, a category also comprises the absolute limits and colimits, a.k.a. retractions. In summary, \begin{eqnarray*} \frac{\mbox{functors}}{\mbox{category = objects + arrows}} & = & \frac{\mbox{adjunctions}}{\mbox{category = objects + arrows + retractions}} \end{eqnarray*} This justifies the assumption that all idempotents can be split, announced and explained in Sec.~\ref{assumption}. The readers familiar with Quillen's homotopy theory \cite{QuillenD:book} may notice a homotopy model structure lurking behind the weak factorizations $({\mathcal C}^\bullet\wr {\mathcal F})$ and $({\mathcal C}\wr {\mathcal F}^\bullet)$. Corollary~\ref{corollary:retr} suggests that the family of weak equivalences, split by the nucleus construction, consists of the functors which not only preserve, but also reflect the absolute limits and colimits. \subsection{Matrices (a.k.a. distributors, profunctors, bimodules)} \begin{eqnarray}\label{eq:Mat-appendix} |\mathsf{Mat}| & = & \coprod_{{\mathbb A}, {\mathbb B}\in \mathsf{CAT}} \Dfib\diagup {\mathbb A} \times {\mathbb B}^{o}\\[2ex] \mathsf{Mat} (\Phi, \Psi) & = & \coprod_{\substack{H\in \mathsf{CAT}({\mathbb A},{\mathbb C})\\ K\in \mathsf{CAT}({\mathbb B},{\mathbb D})}} \Bigg(\Dfib\diagup {\mathbb A} \times {\mathbb B}^{o}\Bigg)\Big(\Phi,(H\times K^{o})^\ast\Psi\Big)\notag \end{eqnarray} where $\Psi\in \Dfib\diagup {\mathbb C}\times {\mathbb D}^{o}$, and $(H\times K^{o})^\ast\Psi$ is its pullback along \mbox{$\left(H\times K^{o}\right)\colon {\mathbb A}\times {\mathbb B}^{o}\tto{\hspace{1em}} {\mathbb C}\times {\mathbb D}^{o}$}. Obviously, $\Phi\in \Dfib\diagup {\mathbb A}\times {\mathbb B}^{o}$.
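\para{Example.} For intuition (a degenerate case, not used elsewhere): when ${\mathbb A}$ and ${\mathbb B}$ are discrete, an object of $\Dfib\diagup {\mathbb A}\times {\mathbb B}^{o}$ boils down to a family of sets $\Phi(a,b)$, i.e. an $|{\mathbb A}|\times|{\mathbb B}|$ matrix of sets, whence the name. If moreover every entry is empty or a singleton, then $\Phi$ is just a relation between the sets $|{\mathbb A}|$ and $|{\mathbb B}|$, i.e. a context matrix in the sense of Sec.~\ref{Sec:FCA}.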
\subsection{Adjunctions} \begin{eqnarray}\label{eq:Adj-appendix} |{\mathcal A}{\mathcal D}{\mathcal J}| & = & \coprod_{{\mathbb A}, {\mathbb B}\in \mathsf{CAT}} \coprod_{\substack{\ladj F\in \mathsf{CAT}({\mathbb A},{\mathbb B})\\ \radj F\in \mathsf{CAT}({\mathbb B},{\mathbb A})}}\Big\{ <\eta,\varepsilon>\in {\rm Nat}(\mathrm{id}, \radj F\ladj F)\times {\rm Nat}(\ladj F\radj F, \mathrm{id})\ \big| \\[-1ex] && \hspace{9em} \varepsilon \ladj F \circ \ladj F \eta = \ladj F \ \wedge\ \radj F\varepsilon \circ \eta \radj F = \radj F\Big\} \notag \\[3ex] {\mathcal A}{\mathcal D}{\mathcal J} (F, G) & = & \coprod_{\substack{H\in \mathsf{CAT}({\mathbb A},{\mathbb C})\\ K\in \mathsf{CAT}({\mathbb B},{\mathbb D})}} \Big\{ \left<\ladj \upsilon,\radj \upsilon\right>\in {\rm Nat}(K\ladj F, \ladj G H)\times {\rm Nat}(H\radj F, \radj GK)\ \big| \notag\\[-1ex] && \hspace{4em} \varepsilon^G K \circ \ladj G\radj \upsilon \circ \ladj \upsilon\radj F = K\varepsilon^F \ \wedge\ \eta^G H = \radj G\ladj \upsilon\circ \radj \upsilon \ladj F \circ H \eta^F\Big\}\notag \end{eqnarray} \subsection{Monads} \begin{eqnarray}\label{eq:Mnd-appendix} |{\mathcal M}{\mathcal N}{\mathcal D}| & = & \coprod_{{\mathbb A}\in \mathsf{CAT}} \coprod_{\lft T\in \mathsf{CAT}({\mathbb A},{\mathbb A})}\big\{ <\eta,\mu>\in {\rm Nat}(\mathrm{id}, \lft T)\times {\rm Nat}(\lft T\lft T, \lft T)\ | \\[-1ex] && \hspace{9em} \mu \circ \lft T \mu = \mu\circ \mu \lft T\wedge \mu\circ \lft T\eta = \lft T = \mu \circ \eta\lft T\big\} \notag \\[3ex] {\mathcal M}{\mathcal N}{\mathcal D} \left(\lft T,\lft S\right) & = & \coprod_{H\in \mathsf{CAT}({\mathbb A},{\mathbb C})} \Big\{\chi \in {\rm Nat}(\lft T H, H \lft S)\ \big|\ \notag \\ && \hspace{4em} \chi\circ \eta^T H= H\eta^S \ \wedge\ H\mu^S\circ \chi S\circ T\chi = \chi \circ \mu^T H \big\} \notag \end{eqnarray} \subsection{Comonads} \begin{eqnarray}\label{eq:Cmn} |{\mathcal C}{\mathcal M}{\mathcal N}| & = & \coprod_{{\mathbb B}\in \mathsf{CAT}} \coprod_{\rgt T\in \mathsf{CAT}({\mathbb B},{\mathbb B})}\big\{ <\varepsilon,\nu>\in {\rm Nat}(\rgt T,\mathrm{id} )\times {\rm Nat}(\rgt T, \rgt T \rgt T)\ | \notag\\[-1ex] && \hspace{9em} \rgt T \nu \circ \nu = \nu \rgt T\circ \nu \wedge \rgt T\varepsilon \circ \nu = \rgt T = \varepsilon \rgt T\circ \nu \big\} \\[3ex] {\mathcal C}{\mathcal M}{\mathcal N} \left(\rgt S,\rgt T\right) & = & \coprod_{K\in \mathsf{CAT}({\mathbb B},{\mathbb D})} \notag \Big\{ \kappa \in {\rm Nat}(K\rgt S, \rgt T K)\ \big|\ \notag \\ && \hspace{4em} \varepsilon^T K\circ \kappa= K\varepsilon^S\ \wedge\ \rgt T \kappa \circ \kappa \rgt S \circ K \nu^S = \nu^T K \circ \kappa \Big\} \notag \end{eqnarray} \begin{figure}[p] \begin{center} \begin{alignat*}{5} \begin{tikzar}{} {\mathbb A} \arrow{d}{\ladj{F}} \arrow[bend right=75]{dd}[swap]{\mathrm{id}}[name=L]{} \\ {\mathbb B} \arrow{d}{\radj F} \arrow[bend left=75]{dd}{\mathrm{id}}[name=R,swap]{} \arrow[Leftarrow,to path = -- (L)\tikztonodes]{}[swap]{\eta} \\ {\mathbb A} \arrow{d}{\ladj{F}} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\varepsilon} \\ {\mathbb B} \end{tikzar} & =\ \ \begin{tikzar}[row sep=2.8em] {\mathbb A} \arrow{ddd}{\ladj{F}} \\ \\ \\ {\mathbb B} \end{tikzar} &\qquad \qquad & \begin{tikzar}{} {\mathbb B} \arrow{d}{\radj{F}} \arrow[bend left=75]{dd}{\mathrm{id}}[name=R,swap]{}\\ {\mathbb A} \arrow{d}{\ladj F} \arrow[bend right=75]{dd}[swap]{\mathrm{id}}[name=L]{} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\varepsilon} \\ {\mathbb B} \arrow{d}{\radj{F}} \arrow[Leftarrow,to path = -- (L)\tikztonodes]{}[swap]{\eta} \\
{\mathbb A} \end{tikzar} &\ \ =\ \ \begin{tikzar}[row sep=2.8em] {\mathbb B} \arrow{ddd}{\radj{F}} \\ \\ \\ {\mathbb A} \end{tikzar} \end{alignat*} \caption{Pasting equations for adjunction $\adj F$.} \label{Fig:eta-epsilon-appendix} \end{center} \end{figure} \begin{figure}[p] \begin{center} \begin{alignat*}{5} \begin{tikzar}[column sep=.8em] \& {\mathbb A} \arrow{d}{\ladj{F}} \arrow[bend right=30]{ddl}[swap]{\mathrm{id}}[name=L]{} \arrow{dr}{H} \\ \& {\mathbb B} \arrow{dl}{\radj F} \arrow[Leftarrow,to path = -- (L)\tikztonodes]{}[swap]{\eta} \arrow{dr}[swap]{K} \ar[Rightarrow]{r}{\ladj \upsilon} \& {\mathbb C} \ar{d}{\ladj G} \\ {\mathbb A} \arrow{dr}[swap]{H} \&\stackrel{\radj \upsilon}\Longrightarrow \& {\mathbb D} \ar{dl}{\radj G} \\ \& {\mathbb C} \end{tikzar} \ \ \ \ \ & =\ \ \ \ \begin{tikzar}[column sep=.001cm,row sep = .9cm] {\mathbb A} \arrow{d}{H} \\ {\mathbb C} \arrow{dr}{\ladj G} \arrow[bend right=30]{dd}[swap]{\mathrm{id}}[name=L]{} \\ \& {\mathbb D} \arrow{dl}{\radj G} \arrow[Leftarrow,to path = -- (L)\tikztonodes]{}[swap]{\eta} \\ {\mathbb C} \end{tikzar} &\qquad \qquad\qquad & \begin{tikzar}[column sep=.8em] \& {\mathbb B} \arrow{dl}[swap]{\radj F} \arrow{dr}{K} \\ {\mathbb A} \arrow{dr}{H} \ar{d}[swap]{\ladj F} \&\stackrel{\radj \upsilon}\Longrightarrow \& {\mathbb D} \ar{dl}[swap]{\radj G} \arrow[bend left=30]{ddl}{\mathrm{id}}[name=R,swap]{} \\ {\mathbb B} \ar[Rightarrow]{r}{\ladj \upsilon}\ar{dr}[swap]{K}\& {\mathbb C} \arrow{d}{\ladj{G}} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\varepsilon} \\ \& {\mathbb D} \end{tikzar}&\ \ \ \ \ =\ \ \ \ \ \begin{tikzar}[column sep=.001cm,row sep = .9cm] \& {\mathbb B} \arrow{dl}[swap]{\radj F} \arrow[bend left=30]{dd}{\mathrm{id}}[name=L,swap]{} \\ {\mathbb A} \arrow{dr}[swap]{\ladj F} \arrow[Rightarrow,to path = -- (L)\tikztonodes]{}{\varepsilon} \\ \& {\mathbb B} \ar{d}[swap]{K} \\ \&{\mathbb D} \end{tikzar} \end{alignat*} \caption{Pasting equations for adjunction 1-cell $<H,K,\ladj \upsilon, \radj \upsilon>:F\to G$.} \label{Fig:upsilon-appendix} \end{center} \end{figure} \begin{figure}[p] \begin{center} \begin{alignat*}{3} \begin{tikzar}[row sep=2.8em,column sep=3.2em] {\mathbb A} \arrow{d}[swap]{T} \arrow[phantom]{}[name=U,below=.25]{} \& {\mathbb A} \arrow{l}[swap]T \arrow{dd}{T}[name=R,swap]{} \arrow{dl}{T}[name=D,swap]{} \arrow[Rightarrow,to path =(U) -- (D)\tikztonodes]{}[swap]{\mu} \\ {\mathbb A} \arrow{dr}[swap]{T} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\mu} \\ \& {\mathbb A} \end{tikzar} &\ \ =\ \ \begin{tikzar}[row sep=2.8em,column sep=3.2em] \& {\mathbb A} \arrow{dd}{T}[name=R,swap]{} \arrow{dl}[swap]{T} \\ {\mathbb A} \arrow{dr}{T}[name=D,swap]{} \arrow{d}[swap]{T} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\mu} \\ {\mathbb A} \arrow{r}[swap]T \arrow[phantom]{}[name=U,above=0.25]{} \arrow[Rightarrow,to path =(U) -- (D)\tikztonodes]{}{\mu} \& {\mathbb A} \end{tikzar} \end{alignat*} \begin{alignat*}{3} \begin{tikzar}[row sep=2.8em,column sep=2.8em] \& {\mathbb A} \arrow[bend right = 75]{dl}[swap]{\mathrm{id}}[name=U]{} \arrow{dd}{T}[name=R,swap]{} \arrow{dl}{T}[name=D,swap]{} \arrow[Rightarrow,to path =(U) -- (D)\tikztonodes]{}[swap]{\eta} \\ {\mathbb A} \arrow{dr}[swap]{T} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\mu} \\ \& {\mathbb A} \end{tikzar} &\ \ \ \ =\ \ \begin{tikzar}[row sep=2.4em] {\mathbb A} \arrow{ddd}{T} \\ \\ \\ {\mathbb A} \end{tikzar} &\ \ \ \ =\ \ \begin{tikzar}[row sep=2.8em,column sep=2.8em] \& {\mathbb A} \arrow{dd}{T}[name=R,swap]{} \arrow{dl}[swap]{T}\\ {\mathbb A}
\arrow[bend right = 75]{dr}[swap]{\mathrm{id}}[name=U]{} \arrow{dr}{T}[name=D,swap]{} \arrow[Rightarrow,to path =(U) -- (D)\tikztonodes]{}{\eta} \arrow[Rightarrow,to path = -- (R)\tikztonodes]{}{\mu} \\ \& {\mathbb A} \end{tikzar} \end{alignat*} \caption{Pasting equations for monad $\lft T$ on ${\mathbb A}$.} \label{Fig:mu-eta} \end{center} \end{figure} \begin{figure}[p] \begin{center} \begin{tikzar}[row sep=2.8em,column sep=3.2em] \& \lft T H \ar{dd}[description]{\chi} \& \lft T\lft T H \ar{l}[swap]{\mu^T H} \ar{d}[description]{\lft T \chi} \\ H \ar{ur}{\eta^T H} \ar{dr}[swap]{H\eta^S} \&\& \lft T H\lft S \ar{d}[description]{\chi \lft S} \\ \& H\lft S \& H \lft S\lft S \ar{l}{H\mu^S} \end{tikzar} \caption{Commutative diagrams for monad 1-cell $<H,\chi>: \lft T \to \lft S$.} \label{Fig:monad-morphism} \end{center} \end{figure} \begin{figure}[p] \begin{center} \begin{alignat*}{3} \begin{tikzar}[row sep=2.8em,column sep=3.2em] {\mathbb A} \arrow{d}[swap]{S} \arrow[phantom]{}[name=U,below=0.25]{} \& {\mathbb A} \arrow{l}[swap]S \arrow{dd}{S}[name=R,swap]{} \arrow{dl}{S}[name=D,swap]{} \arrow[Leftarrow,to path =(U) -- (D)\tikztonodes]{}[swap]{\nu} \\ {\mathbb A} \arrow{dr}[swap]{S} \arrow[Leftarrow,to path = -- (R)\tikztonodes]{}{\nu} \\ \& {\mathbb A} \end{tikzar} &\ \ =\ \ \begin{tikzar}[row sep=2.8em,column sep=3.2em] \& {\mathbb A} \arrow{dd}{S}[name=R,swap]{} \arrow{dl}[swap]{S} \\ {\mathbb A} \arrow{dr}{S}[name=D,swap]{} \arrow{d}[swap]{S} \arrow[Leftarrow,to path = -- (R)\tikztonodes]{}{\nu} \\ {\mathbb A} \arrow{r}[swap]S \arrow[phantom]{}[name=U,above=0.25]{} \arrow[Leftarrow,to path =(U) -- (D)\tikztonodes]{}{\nu} \& {\mathbb A} \end{tikzar} \end{alignat*} \begin{alignat*}{3} \begin{tikzar}[row sep=2.8em,column sep=2.8em] \& {\mathbb A} \arrow[bend right = 75]{dl}[swap]{\mathrm{id}}[name=U]{} \arrow{dd}{S}[name=R,swap]{} \arrow{dl}{S}[name=D,swap]{} \arrow[Leftarrow,to path =(U) -- (D)\tikztonodes]{}[swap]{\varepsilon} \\ {\mathbb A} \arrow{dr}[swap]{S} \arrow[Leftarrow,to path = -- (R)\tikztonodes]{}{\nu} \\ \& {\mathbb A} \end{tikzar} &\ \ \ \ =\ \ \begin{tikzar}[row sep=2.4em] {\mathbb A} \arrow{ddd}{S} \\ \\ \\ {\mathbb A} \end{tikzar} &\ \ \ \ =\ \ \begin{tikzar}[row sep=2.8em,column sep=2.8em] \& {\mathbb A} \arrow{dd}{S}[name=R,swap]{} \arrow{dl}[swap]{S}\\ {\mathbb A} \arrow[bend right = 75]{dr}[swap]{\mathrm{id}}[name=U]{} \arrow{dr}{S}[name=D,swap]{} \arrow[Leftarrow,to path =(U) -- (D)\tikztonodes]{}{\varepsilon} \arrow[Leftarrow,to path = -- (R)\tikztonodes]{}{\nu} \\ \& {\mathbb A} \end{tikzar} \end{alignat*} \caption{Pasting equations for comonad $\rgt S$ on ${\mathbb B}$} \label{Fig:comonad} \end{center} \end{figure} \begin{figure}[p] \begin{center} \begin{tikzar}[row sep=2.8em,column sep=3.2em] \& \rgt T K \ar{dl}[swap]{\varepsilon^T K} \ar{r}{\nu^T K} \& \rgt T\rgt T K \\ K \&\& \rgt T K\rgt S \ar{u}[description]{\rgt T \kappa} \\ \& K\rgt S\ar{uu}[description]{\kappa} \ar{r}[swap]{K\nu^S} \ar{ul}{K\varepsilon^S} \& K \rgt S\rgt S \ar{u}[description]{\kappa \rgt S} \end{tikzar} \caption{Commutative diagrams for comonad 1-cell $<K,\kappa>: \rgt S \to \rgt T$.} \label{Fig:comonad-morphism} \end{center} \end{figure} \subsection{The initial (Kleisli) resolutions ${\sf KM}:{\mathcal M}{\mathcal N}{\mathcal D} \to {\mathcal A}{\mathcal D}{\mathcal J}$ and ${\sf KC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$} The Kleisli construction assigns to the monad $T:{\mathbb A}\to {\mathbb A}$ the resolution $\overleftarrow{\KKK} T = \left(\kadj{T}:
\Klm {\mathbb A} {T} \to {\mathbb A}\right)$ where the category $\Klm {\mathbb A} {T}$ consists of \begin{itemize} \item \emph{free algebras} as objects, which boil down to $|\Klm {\mathbb A} T|\ = \ |{\mathbb A}|$; \item \emph{algebra homomorphisms} as arrows, which boil down to $\Klm {\mathbb A} T(x,x') = {\mathbb A}(x,Tx')$; \end{itemize} with the composition \begin{eqnarray*} \Klm {\mathbb A} T (x,x') \times \Klm {\mathbb A} T(x',x'') & \stackrel \circ \longrightarrow & \Klm {\mathbb A} T(x,x'')\\ \big < x \tto f Tx'\, ,\ x'\tto g Tx'' \big> &\longmapsto & \big(x\tto f Tx' \tto {Tg} TTx'' \tto \mu Tx'' \big) \end{eqnarray*} and with the identity on $x$ induced by the monad unit $\eta : x \to Tx$. \subsection{The final (Eilenberg-Moore) resolutions ${\sf EM}:{\mathcal M}{\mathcal N}{\mathcal D} \to {\mathcal A}{\mathcal D}{\mathcal J}$ and ${\sf EC}:{\mathcal C}{\mathcal M}{\mathcal N}\to {\mathcal A}{\mathcal D}{\mathcal J}$} The Eilenberg-Moore construction assigns to the monad $T:{\mathbb A}\to {\mathbb A}$ the resolution $\overleftarrow{\EEE} T = \left(\nadj{T}: \Emm {\mathbb A} {T} \to {\mathbb A}\right)$ where the category $\Emm {\mathbb A} {T}$ consists of \begin{itemize} \item \emph{all algebras}\/ as objects: \begin{eqnarray*} |\Emm {\mathbb A} {T}| & = & \sum_{x\in |{\mathbb A}|} \big\{\alpha \in {\mathbb A}(Tx, x)\ |\ \alpha\circ \eta = \mathrm{id}\ \wedge \ \alpha\circ T\alpha = \alpha \circ \mu \big\}\end{eqnarray*} \item \emph{algebra homomorphisms}\/ as arrows: \begin{eqnarray*} \Emm {\mathbb A} T (Tx\tto \alpha x, Tx'\tto \gamma x') & = & \big\{f\in {\mathbb A}(x,x')\ |\ f\circ \alpha = \gamma\circ Tf \big\}\end{eqnarray*} \end{itemize}
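To make the Kleisli composition above concrete, here is a minimal executable sketch (ours, for illustration only), instantiating $T$ as the list monad in Python; \texttt{eta}, \texttt{mu} and \texttt{kleisli} correspond to the unit, the multiplication, and the displayed composite $\mu\circ Tg\circ f$:
\begin{verbatim}
# Minimal executable sketch: Kleisli composition for the list monad.
from typing import Callable, List, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def eta(x: A) -> List[A]:
    """Monad unit x |-> [x]; it provides the Kleisli identities."""
    return [x]

def mu(xss: List[List[A]]) -> List[A]:
    """Monad multiplication TTx -> Tx: flattening."""
    return [x for xs in xss for x in xs]

def kleisli(f: Callable[[A], List[B]],
            g: Callable[[B], List[C]]) -> Callable[[A], List[C]]:
    """The displayed composite  x --f--> Tx' --Tg--> TTx'' --mu--> Tx''."""
    return lambda x: mu([g(y) for y in f(x)])

# Example: kleisli(lambda n: [n, n + 1], lambda n: [n * n])(3) == [9, 16]
\end{verbatim}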
\section{Introduction} Amorphous optical media such as amorphous silicon are used extensively as waveguides in optoelectronic devices \cite{Street,Dood}. Because of the strong light confinement and the sub-wavelength core size of these waveguides, light propagates paraxially along their axis. Defects are normally present in the medium and affect its optical properties through the elasto-optic strain. The study of paraxial propagation in defected amorphous media is, thus, important. Recently, we have examined paraxial propagation in screw-dislocated amorphous media \cite{Mashhadi}. While dislocations are translational defects, rotational defects (disclinations) are also common in disordered media such as amorphous solids and liquid crystals. In the present work, we study paraxial beam propagation along the wedge axis of a disclinated amorphous medium. The presence of a wedge disclination in an initially homogeneous isotropic amorphous medium is shown to induce weak inhomogeneity as well as \textit{uniaxial} anisotropy due to the elasto-optic effect. The inhomogeneity causes an adiabatic variation in the direction of the wave propagation, resulting in Berry phase and curvature that are affected by the uniaxial anisotropy. The geometric Berry phase \cite{Berry} acquired by an optical beam (as the Pancharatnam phase \cite{Pancharatnam} or a spin redirection phase \cite{Tomita,Chiao}) has attracted extensive attention. In particular, observable effects such as the Rytov-Vladimirskii rotation \cite{Rytov,Vladimirskii} and the optical spin Hall (or Magnus) effect \cite{Zeldovich,Zeldovich2} have been derived as manifestations of Berry phase and curvature, respectively \cite{Bliokh,Bliokh1,Bliokh2,Onoda,Onoda2,Sawada,Mehrafarin}. Here, the Berry phase manifests itself as a precession of the polarization vector, which is characteristic of anisotropic media \cite{Bliokh5}. The Berry curvature enters the equations of motion of the beam and is responsible for the opposite deflections of the right/left circularly polarized beams. Because of the anisotropy, these deflections vary sinusoidally along the paraxial direction. This yields the optical spin Hall effect in the disclinated medium, whose application in determining the birefringence and the magnitude of the Frank vector is explained. \section{The effect of wedge disclination} We consider a monochromatic circularly polarized wave propagating paraxially in a homogeneous isotropic medium of refractive index $n$. The unit wave vector $\hat{\bm{k}}$ thus makes an angle $\theta$, which is always sufficiently small, with the paraxial direction $z$. Denoting the polarization vectors by $\bm{\epsilon}_\sigma$, where $\sigma=\pm1$ correspond to right/left circular polarization ($\bm{\epsilon}_\sigma^\dagger\bm{\epsilon}_{\sigma'}=\delta_{\sigma\sigma'}$), we have $$ \bm{\epsilon}_\sigma=\frac{1}{\surd{2}}(\hat{\bm{\theta}}-i\sigma \hat{\bm{\varphi}}) $$ $\hat{\bm{\theta}}, \hat{\bm{\varphi}}$ being the spherical unit vectors orthogonal to $\hat{\bm{k}}$. The beam's spin angular momentum along the direction of propagation (the helicity) is \cite{Berry3} \begin{equation} \bm{\epsilon}_\sigma^\dagger (-i \hat{\bm{k}}\times\bm{\epsilon}_\sigma)=\sigma. \label{hel} \end{equation} We consider the effect of introducing a wedge disclination, with Frank vector $\bm{\omega}$ oriented along the paraxial direction, in the initially homogeneous isotropic medium.
We consider the effect of introducing a wedge disclination, with Frank vector $\bm{\omega}$ oriented along the paraxial direction, in the initially homogeneous isotropic medium. From the standpoint of a Volterra process, the wedge disclination corresponds to cutting or inserting a material wedge of dihedral angle $\omega=|\bm{\omega}|$, which is generally small. The corresponding displacement vector field, $\bm{u}$, in cylindrical coordinates has the nonzero component $u_\varphi=(\alpha-1)\rho\varphi$, where $\alpha-1=\pm \omega/2\pi$ and the $+$($-$) sign pertains to insertion (cut). Since $\nabla\cdot\bm{u}=\alpha-1$ is small, the disclination produces slight expansion/compression and renders the medium weakly inhomogeneous. This will cause an adiabatic variation in the direction of the wave propagation resulting in Berry phase and curvature. Furthermore, the disclination strain tensor field has the following nonzero components: $$S_{\rho\varphi}=S_{\varphi\rho}=\frac{1}{2}(\alpha-1)\varphi,\ \ S_{\varphi\varphi}=\alpha-1.$$ The relative permittivity tensor $n^2 \delta_{ij}$, thus, acquires an anisotropic part $\Delta_{ij}$ due to the strain, where (see e.g. \cite{Liu}) $$ \Delta_{ij}=-n^4 p_{ijkl}S_{kl} $$ $p_{ijkl}$ being the elasto-optic coefficients of the medium. A wedge disclination, therefore, renders an otherwise homogeneous isotropic medium weakly inhomogeneous and anisotropic. For amorphous media, where only two independent elasto-optic coefficients (customarily denoted by $p_{11}$ and $p_{12}$) exist, we find, after detailed calculations, $$ \Delta_{\rho\rho}=\Delta_{zz}=-(\alpha-1)p_{12}n^4, \ \ \Delta_{\varphi\varphi}=-(\alpha-1)p_{11}n^4 $$ other components being zero. The principal refractive indices are, therefore, $$ n_\rho= n_z=n-\frac{1}{2}(\alpha-1)p_{12}n^3,\ \ n_\varphi=n-\frac{1}{2}(\alpha-1)p_{11}n^3 $$ to first order in the elasto-optic perturbation. (The adiabatic variation of the refractive indices with position is to be ignored, of course.) The weak \textit{uniaxial} anisotropy thus induced in the amorphous medium results in a phase difference for the two linearly polarized modes that constitute the paraxial beam. Therefore, the polarization vector becomes \begin{equation} \bm{\epsilon}_\sigma =\frac{1}{\surd 2}(\hat{\bm{\theta}}- i \sigma e^{ik_0\Delta n z} \hat{\bm{\varphi}}) \label{e} \end{equation} where $k_0$ is the wave number in vacuum and \begin{equation} \Delta n=n_\varphi-n_\rho=\frac{1}{2}(\alpha-1) (p_{12}-p_{11})n^3 \label{biref} \end{equation} is the induced birefringence. Note that $\bm{\epsilon}_\sigma$ still satisfy the orthonormality condition $\bm{\epsilon}_\sigma^\dagger\bm{\epsilon}_{\sigma'}=\delta_{\sigma\sigma'}$, of course. The beam's helicity is calculated from (\ref{hel}) to be $\sigma \cos(k_0\Delta nz)$. As expected \cite{Berry3}, the helicity varies along the paraxial direction due to the induced elasto-optic birefringence and reduces to the constant value $\sigma$ in the absence of the wedge disclination ($\alpha=1$). \section{Berry effects in the beam dynamics} The adiabatic variation of the refractive indices with position has a negligible dynamical effect and was, therefore, ignored. However, the resulting adiabatic variation of the beam direction $\hat{\bm{k}}$ plays a geometric role with nontrivial consequences for the beam dynamics. As usual, the variation gives rise to a parallel transport law in the momentum space, defined by the Berry connection (gauge potential) $$ \bm{A}_{\sigma \sigma'}(\bm{k})=\bm{\epsilon}_\sigma^\dagger(-i\nabla_{\bm{k}}) \bm{\epsilon}_{\sigma'}.
$$ Using (\ref{e}), we obtain $$ \bm{A}_{\sigma \sigma'}=\left(\cos(k_0\Delta nz)\delta_{\sigma \sigma'}+i \sin(k_0\Delta nz)(\delta_{\sigma \sigma'}-1)\right)\sigma\frac{\cot\theta}{k} \hat{\bm{\varphi}} $$ or in matrix notation, \begin{equation} \bm{A}= (\bm{\sigma} \cdot\bm{h})\frac{\cot\theta}{k} \hat{\bm{\varphi}} \label{m} \end{equation} where $\bm{\sigma}$ is the Pauli matrix vector and $$ \bm{h}(z)=(0,\sin(k_0\Delta nz),\cos(k_0\Delta nz)). $$ Equation (\ref{m}) describes the parallel transport of the polarization vector along the beam and generalizes a previous result for inhomogeneous isotropic media \cite{Bliokh,Bliokh1,Bliokh2}. The Berry curvature (gauge field strength) associated with this connection is ($\bm{A} \times \bm{A}=0$) $$ \bm{B}=\nabla_{\bm{k}} \times \bm{A}=-(\bm{\sigma} \cdot \bm{h})\frac{\bm{k}}{k^3}. $$ In the course of propagation, the polarization evolves according to $\bm{\epsilon}_\sigma \rightarrow e^{i\Theta}\bm{\epsilon}_\sigma$, where \begin{equation} \Theta=\int_C \bm{A} \cdot d\bm{k}= (\bm{\sigma} \cdot \bm{h})\Theta_0 \label{ph} \end{equation} is the geometric Berry phase. Here $C$ is the beam trajectory in momentum space and $\Theta_0=\int_C \cos \theta\, d\varphi$ is the Berry phase accumulated for $\sigma=1$ in the absence of anisotropy. (In the absence of anisotropy, (\ref{ph}) simply yields the phase factor $e^{i\sigma\Theta_0}$ that leads to the well-known Rytov rotation.) The evolution, thus, entails a precession of the polarization vector, which is characteristic of anisotropic media \cite{Bliokh5}, about the unit vector $\bm{h}$. In view of the polarization evolution, the Berry curvature for a given beam is, therefore, $$ \bm{B}_\sigma=(e^{i\Theta}\bm{\epsilon}_\sigma)^\dagger\bm{B} (e^{i\Theta}\bm{\epsilon}_\sigma)= \bm{\epsilon}_\sigma^\dagger\bm{B}\bm{\epsilon}_\sigma $$ where the last expression follows because $\Theta$ and $\bm{B}$ commute. Hence $$ \bm{B}_\sigma=-\sigma \cos(k_0\Delta nz)\frac{\bm{k}}{k^3} $$ which reduces to the well-known result in the absence of anisotropy, namely, the field of a magnetic monopole of charge $\sigma$ situated at the origin of the momentum space \cite{Bliokh,Bliokh1,Bliokh2}. The equations of motion of the beam in the presence of momentum space Berry curvature have been derived repeatedly for various particle beams (photons \cite{Bliokh,Bliokh1,Bliokh2,Zeldovich2,Onoda,Onoda2,Sawada,Mehrafarin}, phonons \cite{Bliokh6,Torabi,Mehrafarin2} and electrons \cite{Chang,Sundaram,Culcer,Berard}). The beam trajectory, $\bm{r}_\sigma$, satisfies $$ \dot{\bm{r}}_\sigma=\hat{\bm{k}}+\bm{B}_\sigma\times \dot{\bm{k}} $$ where the dot denotes the derivative with respect to the beam length. This differs from the standard ray equation of geometrical optics, which holds in the absence of disclination, by the term involving the Berry curvature. It yields the beam deflection $$ \delta \bm{r}_\sigma(z)=-\sigma \cos(k_0\Delta nz)\int_C \frac{\bm{k}}{k^3}\times d\bm{k} $$ which results in the splitting of beams of opposite polarizations and, being orthogonal to the beam direction, produces a spin current across the direction of propagation. This is the optical spin Hall effect in the disclinated medium and generalizes a previous result for inhomogeneous isotropic media \cite{Bliokh,Bliokh1,Bliokh2}. $\delta\bm{r}_\sigma$ varies sinusoidally along the paraxial direction with wavelength $\lambda_0/\Delta n$, where $\lambda_0$ is the beam's wavelength in vacuum (see figure \ref{fig1}).
In particular, it vanishes for successive beam points that are separated by $\lambda_0/2\Delta n$ along the $z$-axis. Measurement of this spacing determines the birefringence and provides an indirect method for determining the magnitude of the Frank vector, $\omega$, through (\ref{biref}). \begin{figure} \includegraphics{fig1.eps} \caption{ Propagation of oppositely polarized paraxial beams in a wedge disclinated amorphous medium. In the absence of disclination, the deflections vanish and the two trajectories collapse along the solid line.} \label{fig1} \end{figure}
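To give a feeling for the orders of magnitude involved in this measurement, the following Python sketch (ours) inverts Eq.~(\ref{biref}) for an assumed measured period of the deflection; the refractive index and elasto-optic coefficients are typical of fused silica and are assumptions, not values used in this work.
\begin{verbatim}
import numpy as np

n, p11, p12 = 1.46, 0.121, 0.270   # assumed elasto-optic data (fused silica)
lam0   = 632.8e-9                  # assumed vacuum wavelength [m]
period = 2.0e-3                    # assumed measured period lam0/dn [m]

dn = lam0 / period                           # induced birefringence
alpha_m1 = 2 * dn / abs(p11 - p12) / n**3    # |alpha - 1|, inverting the
omega = alpha_m1 * 2 * np.pi                 # birefringence formula; then
print(f"dn = {dn:.2e}, |omega| = {omega:.2e} rad")   # Frank angle |omega|
\end{verbatim}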
\section{Heuristic description of MHD instabilities} \label{sec:gen} This first section is intended to provide the reader with some qualitative and semi-quantitative ideas about the onset and characteristics of pressure-driven instabilities, leaving technical aspects of the stability analysis to the later sections. \subsection{Qualitative conditions of ideal MHD instabilities in static equilibria} \label{sec:base} The equilibrium configurations leading to an ideal MHD instability have been well investigated in the fusion literature. For current-driven instabilities, the first criterion was devised by Kruskal and Shafranov. Basically, it states that in a cylindrical column of length $L$, instability follows if the magnetic field line rotates more than a certain number of times around the cylinder, from end to end. The exact number of rotations required for instability depends on the considered equilibrium configuration; it is usually of order unity. Concerning pressure-driven instabilities, a more clear-cut necessary condition of instability can be stated: instability requires that the pressure force pushes the plasma outwards from the inside of the field line curvature. This condition can be derived from the Energy Principle, as will be shown later on. \begin{figure} \centering \includegraphics[scale=0.5]{MHD-inst.eps} \caption{Qualitative description of the conditions of onset of MHD instabilities (see text for details).} \label{MHD-Inst} \end{figure} These conditions of onset of instability are illustrated in Fig.~\ref{MHD-Inst}. In an actual plasma, the origin of an instability (current- or pressure-driven) is usually not easy to pinpoint except in special instances. For example, if the plasma is cold (no pressure force), the instability is necessarily current-driven. Also, the growth rates of current-driven modes are known to decrease with spatial order -- e.g., they decrease with increasing azimuthal wave-number $m$ -- while the most unstable pressure-driven modes have a growth rate which is nearly independent of the wavenumber. Consequently, large wavenumber unstable modes are always pressure-driven in a static, ideal MHD column. Besides these two limiting cases, an MHD instability almost always results from an inseparable mix of pressure and current driving. If the column is moving, the distinction between Kelvin-Helmholtz, current- and pressure-driven modes is even more blurred, except in some cases where branches of instability can be identified by taking appropriate limits. In terms of the outcome of the instability, it is essential to know whether unstable modes are internal or external, i.e., have vanishing or substantial displacement on the plasma surface (here, the jet surface). It is well-known in the fusion context that unstable external modes are prone to disrupt the plasma, as may be the case, e.g., with the $m=1$ (``kink'') current-driven mode. In the next sections, mostly high wavenumber modes will be examined, where the pressure-driving is most obvious, in order to best identify the characteristic features of this type of instability. \subsection{Magnetic shear, magnetic resonances, and Suydam's criterion}\label{suyd} The concept of magnetic shear plays an important role in the understanding of the stability of pressure-driven modes. The magnetic shear characterizes the change of orientation of field lines when moving perpendicularly to magnetic surfaces. In the case of cylindrical equilibria, this concept is illustrated in Fig.~\ref{shearfig}.
Magnetic surfaces are cylindrical. Field lines within magnetic surfaces have a helical shape; the change of the helix pitch, $rB_z/B_\theta$, characterizes the magnetic shear. A quantity related to the pitch and widely used in the fusion community is the safety factor $q$: \begin{equation}\label{safety} q=\frac{r B_z}{R_o B_\theta}, \end{equation} \noindent where $R_o$ is the column radius. For reasons soon to be discussed, a high enough safety factor is required for stability, hence its name. The magnetic shear $s$ is defined as \begin{equation}\label{shear} s\equiv \frac{r}{q}\frac{dq}{dr}. \end{equation} \begin{figure} \centering \includegraphics[scale=0.5]{pitch.eps} \caption{The change in the pitch of field lines between magnetic surfaces is the source of the magnetic shear (see text).} \label{shearfig} \end{figure} Magnetic resonances constitute another important key to the question of stability. A cylindrically symmetric equilibrium is invariant in the vertical and azimuthal directions, so that perturbations from equilibrium can without loss of generality be expanded in Fourier terms in these directions and assumed to be proportional to $\exp i (m\theta+kz)$. Magnetic resonances are the (cylindrical) surfaces where the wave vector ${\vec k} = (m/r)\, {\vec e}_\theta + k\, {\vec e}_z$ is perpendicular to the equilibrium magnetic field: \begin{equation}\label{magres} \frac{{\vec k}\cdot{\vec B}_o}{B_o}\equiv k_\parallel= \frac{1}{B_o}\left(\frac{m}{r}B_\theta+k B_z\right)=0 \end{equation} \noindent where $k_\parallel$ is the component of the wave vector parallel to the equilibrium magnetic field. The significance of these surfaces stems from the fact that, in general, dispersion relations incorporate a stabilizing piece of the form $V_A^2 k_\parallel^2$, where $V_A$ is the Alfv\'en speed. This term is responsible for the propagation of Alfv\'en waves, and arises from the restoring force due to the magnetic tension (see section \ref{disprel} for the precise meaning of these statements). As such, it is always stabilizing. Obviously, this stabilization is minimal in the vicinity of a magnetic resonance for a given ($m,k$) mode, so that pressure-driven instabilities are preferentially triggered at magnetic resonances for any given mode. Note however that a large magnetic shear limits the role of magnetic resonances in the destabilization of the plasma. Indeed, defining the perpendicular wave number \begin{equation}\label{kperp} k_\perp=-\frac{1}{B_o}\left(\frac{m}{r}B_z - k B_\theta\right), \end{equation} \noindent and designating by $r_c$ the radial position of the magnetic resonance of the ($m,k$) mode, one finds that \begin{equation}\label{kpar} k_\parallel\simeq \frac{B_\theta B_z}{B_o^2} k_\perp s \frac {r-r_c}{r_c}, \end{equation} \noindent to first order in $(r-r_c)/r_c$ in the vicinity of the magnetic resonance $r_c$. This implies that $V_A k_\parallel$ will remain small either if $s \ll 1$ (small shear) or if the magnetic field is mostly vertical ($|B_\theta| \ll |B_z|$) or azimuthal ($|B_z| \ll |B_\theta|$) so that $|B_\theta B_z|/B_o^2 \ll 1$. However, if the field is predominantly vertical, it is little curved, and pressure destabilization is expected to be weak or non-existent according to the condition of instability depicted in Fig.~\ref{MHD-Inst}; furthermore, $s$, being a logarithmic derivative, is usually of order unity. Therefore, in practice, stabilization by magnetic tension will be reduced essentially when the field is mostly azimuthal.
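As an illustration (ours, with an assumed constant-$B_\theta$ column), the following Python sketch locates the magnetic resonance of a given ($m,k$) mode from Eq.~(\ref{magres}) and evaluates $k_\parallel$ across the radius; near $r_c$ the stabilizing term $V_A^2 k_\parallel^2$ becomes arbitrarily small.
\begin{verbatim}
import numpy as np

R, Btheta, Bz = 1.0, 1.0, 0.2   # assumed column radius and fields
m, k = 2, -20.0                 # mode numbers, chosen so r_c lies inside

r = np.linspace(0.05, R, 400)
Bo = np.hypot(Btheta, Bz)                 # |B| for this profile
k_par = (m / r * Btheta + k * Bz) / Bo    # general definition of k_parallel

i_c = np.argmin(np.abs(k_par))            # numerical resonance position
r_c = -m * Btheta / (k * Bz)              # analytic: m B_theta/r + k B_z = 0
print(f"resonance near r = {r[i_c]:.3f} (analytic r_c = {r_c:.3f})")
\end{verbatim}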
The features discussed above are embodied in Suydam's criterion, which expresses a sufficient condition for instability: \begin{equation}\label{suydam} \frac{B_z^2}{8\mu_o r}s^2+\frac{dP}{dr} < 0. \end{equation} \noindent The converse of this statement is a necessary condition for stability. The origin of this criterion is briefly discussed in section \ref{EnPrin}. It turns out that this condition is both a necessary and sufficient condition of instability for large wavenumber modes \cite{DTYNM04}. The condition of instability requires $dP/dr < 0$, which agrees with our heuristic description of the onset of instability given above. It will also be discussed in Section \ref{EnPrin} that the growth rates $\gamma$ of pressure-driven instabilities are $\gamma\sim C_S/R_o$ ($C_S$ is the sound speed and $R_o$ the jet radius). Coming back to Eq.~(\ref{suydam}), the first term is stabilizing, but the stabilization will be minimal in the conditions just discussed, i.e., when the field is mostly azimuthal. Indeed, in this case, the equilibrium condition Eq.~(\ref{radequi}) implies that $|dP/dr|\sim B_\theta^2/\mu_o r \gg B_z^2 s^2/\mu_o r$. This situation is expected to hold in the outer regions of magnetically self-confined jets. Indeed, most such jet models (e.g., \cite{L96} and \cite{F97}) have $|B_\theta| \gg |B_z|$ in the asymptotic jet regime to ensure confinement. This feature, combined with the previous statement that MHD instabilities involving the boundary are most prone to disrupt static MHD columns, makes the assessment of the role of pressure-driven instabilities in MHD jets particularly critical for the viability of such models. This viability hinges on the hopefully stabilizing role of the jet bulk motion (see section \ref{movcol}). \section{Ideal MHD in static columns:}\label{idealMHD} The simplest framework in which the stability of jets can be investigated is ideal magnetohydrodynamics (MHD). Justifications and limitations of this approach are briefly discussed in Appendix \ref{app:idealmhd}. \subsection{Equations} The MHD equations used in these notes are the continuity equation, the momentum equation without the viscous term, the induction equation without the resistive term, and a polytropic equation of state. Incompressibility is not assumed, as pressure-driven modes are not incompressible except at the marginal stability limit. These equations read \begin{equation}\label{cont} \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\vec{v})=0, \end{equation} \begin{equation}\label{mouv} \frac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\nabla)\vec{v}=-\frac{\nabla P_T}{\rho}+\frac{(\vec{B}\cdot\nabla)\vec{B}}{\mu_o\rho}, \end{equation} \begin{equation}\label{ind} \frac{\partial \vec{B}}{\partial t}= \nabla\times(\vec{v}\times\vec{B}), \end{equation} \begin{equation}\label{pres} P= K \rho^\gamma, \end{equation} \noindent with standard notations, and where $P_T=P+B^2/2\mu_o$ is the total (gas and magnetic) pressure, $K$ a constant, and $\gamma$ the polytropic index. \subsection{Equilibrium} Using a cylindrical coordinate system ($r,\theta,z$), a static ($\mathbf{v}=0$) cylindrical column of axis $z$ is described by a helical magnetic field $B_\theta(r), B_z(r)$, and a gas pressure $P(r)$ depending only on the cylindrical radius $r$. The continuity and induction equations are then trivially satisfied, as well as the vertical and azimuthal components of the momentum equation, while the radial component reduces to \begin{equation}\label{radequi} - \frac{d P_T}{dr} - \frac{B^2_{\theta}} {\mu_o r} = 0.
\end{equation} This cylindrical equilibrium is best characterized by introducing a number of quantities homogeneous to an inverse length, both in vectorial ($\vec{\mathcal{K}_B}$, $\vec{\mathcal{K}_P}$ and $\vec{\mathcal{K}_C}$) and algebraic ($\mathcal{K}_B$, $\mathcal{K}_P$ and $\mathcal{K}_C$) form. They are defined by: \begin{equation}\label{kb} \vec{\mathcal{K}_B}\equiv\frac{\vec{\nabla} B_o}{B_o}= \frac{1}{B_o}\frac{dB_o}{dr}\mathbf{e}_r\equiv \mathcal{K}_B\mathbf{e}_r, \end{equation} \begin{equation}\label{kp} \vec{\mathcal{K}_P}\equiv\frac{\vec{\nabla} P_o}{P_o}= \frac{1}{P_o}\frac{d P_ o}{dr}\mathbf{e}_r\equiv \mathcal{K}_P\mathbf{e}_r, \end{equation} \begin{equation}\label{kc} \vec{\mathcal{K}_C}\equiv\mathbf{e}_\parallel \cdot\vec{\nabla}\mathbf{e}_\parallel =-\frac{B_\theta^2}{rB_o^2}\mathbf{e}_r\equiv \mathcal{K}_C \mathbf{e}_r, \end{equation} \noindent where $B_o$ and $P_o$ are the equilibrium distributions of magnetic field and gas pressure, and $\vec{e}_\parallel=\vec{B}_o/B_o$ is the unit vector parallel to the magnetic field; $\vec{\mathcal{K}_C}$ is the curvature vector of the magnetic field lines, $\vec{\mathcal{K}_B}$ characterizes the inverse of the spatial scale of variation of the magnetic field, and $\vec{\mathcal{K}_P}$ characterizes the inverse scale of variation of the fluid pressure. The first identity in these relations is general, whereas the second one pertains to cylindrical equilibria only. It is also convenient to introduce the plasma $\beta$ parameter: \begin{equation}\label{beta} \beta\equiv \frac{2\mu_o P_o}{B_o^2}. \end{equation} \noindent This parameter measures the relative importance of the gas and magnetic pressures. With these definitions, the jet force equilibrium relation reads \begin{equation}\label{kckb} \frac{\beta}{2}\mathcal{K}_P=\mathcal{K}_C -\mathcal{K}_B. \end{equation} Both forms of the equilibrium relation, Eqs.~(\ref{radequi}) and (\ref{kckb}), express the fact that the hoop stress due to the magnetic tension ($\mathcal{K}_C$) balances the gas ($\mathcal{K}_P$) and magnetic ($\mathcal{K}_B$) pressure gradients to achieve equilibrium and confine the plasma in the column. Self-confinement is achieved in this way when the external pressure is negligible at the column boundary. \subsection{Perturbations:} We want to investigate the stability with respect to deviations from equilibrium. As the background equilibrium is static, the problem is most easily formulated and analyzed in Lagrangian form: indeed, in this case, all equations but the momentum equation can be integrated with respect to time. To this effect, we introduce, for any fluid particle at position $\vec{r}_o$ in the absence of perturbation, the displacement $\vec{\xi}(\vec{r}_o,t)$ at time $t$ from its unperturbed position, so that its actual position is given by \begin{equation}\label{displace} \vec{r}(\vec{r}_o,t)= \vec{r}_o+\vec{\xi}(\vec{r}_o,t). \end{equation} \noindent The unperturbed position $\vec{r}_o$ is used to uniquely label all fluid elements. Denoting by $\delta X$ the (Lagrangian) variation during the displacement of any quantity $X$, the linearized (Eulerian) equation of continuity $\partial_t \delta\rho=-\nabla\cdot(\rho_o \vec{v})$ integrates into\footnote{In these expressions, the difference between the Eulerian and Lagrangian variations has been ignored as they disappear to first order in the displacement $\vec{\xi}$ in the final equations.
For the same reason, no distinction is made between the derivative with respect to $\vec{r}$ or $\vec{r}_o$.} \begin{equation}\label{drho} \delta\rho=-\nabla\cdot(\rho_o\vec{\xi}). \end{equation} Similarly, the linearized induction equation $\partial_t \delta\vec{B}=\nabla\times(\vec{v} \times \vec{B}_o)$ leads to \begin{equation}\label{dB} \delta\vec{B}=\nabla\times(\vec{\xi} \times \vec{B}_o). \end{equation} From these results and the polytropic equation of state, the total pressure variation reads \begin{equation}\label{dpress} \delta P_T=-\vec{\xi}\cdot\nabla P_o -\gamma P_o\nabla\cdot\vec{\xi} + \frac{\vec{B}_o\cdot\delta\vec{B}}{\mu_o}. \end{equation} For static equilibria, one can without loss of generality take a Fourier transform of the linearized momentum equation with respect to time. For a given Fourier mode, one can write $\vec{\xi}(\vec{r},t)=\vec{\xi}(\vec{r})\exp i\omega t$, so that the linearized momentum equation becomes \begin{equation}\label{fourier} -\rho_o\omega^2 \vec{\xi}=-\nabla \delta P_T+\delta\vec{T}\equiv\vec{F}(\vec{\xi}), \end{equation} \noindent where $\delta\vec{T}=(\vec{B}_o\cdot\nabla\,\delta\vec{B}+\delta\vec{B}\cdot\nabla\,\vec{B}_o)/\mu_o$ represents the variation of the magnetic tension force\footnote{Within a factor $\rho_o$.}. The last identity in Eq.~(\ref{fourier}) defines the linear operator $\vec{F}$, operating on $\vec{\xi}$ through Eqs.~(\ref{dB}) and (\ref{dpress}). \section{The Energy Principle and its consequences:}\label{EnPrin} The linear operator $\vec{F}$ of Eq.~(\ref{fourier}) is self-adjoint, i.e., taking into account that $\vec F$ is real: \begin{equation}\label{autoadj} \int{\vec\eta}\cdot {\bf F}({\vec\xi})d^3 r = \int{\vec\xi}\cdot {\bf F}({\vec\eta})d^3 r. \end{equation} \noindent A demonstration of this relation can be found, e.g., in Freidberg \cite{F87} (cf.\ p.~242 and Appendix A of the book). As a consequence of this property of $\vec{F}$, an Energy Principle can be formulated. Defining \begin{equation}\label{dW} \delta W({\vec\xi}^*,{\vec\xi})=-\frac{1}{2}\int{\vec\xi}^*\cdot {\bf F}({\vec\xi})d^3 r, \end{equation} \noindent and \begin{equation}\label{K} K({\vec\xi^*},{\vec\xi})=\frac{1}{2}\int\rho|{\vec\xi}|^2 d^3 r, \end{equation} \noindent and taking the scalar product of Eq.~(\ref{fourier}) with ${\vec\xi}^*$ leads to \begin{equation}\label{omtot} \omega^2=\frac{\delta W}{K}. \end{equation} \smallskip The self-adjointness of ${\bf F}$ has two important consequences (Energy Principle): \begin{itemize} \item $\omega^2$ is also an extremum with respect to variations of $\vec\xi$. \item Stability follows if and only if $\delta W \geq 0$ for all possible $\vec\xi$. \end{itemize} \noindent Ascertaining stability through the last statement is usually an impossible task. Instead, one usually makes use of the Energy Principle in a less ambitious manner: if one can find some displacement making $\delta W < 0$, then one has a sufficient condition of instability (or, taking the converse statement, a necessary condition of stability). This is actually how Suydam's criterion is demonstrated. First, the expression of $\delta W$ is simplified by taking advantage of the cylindrical geometry and focusing on marginal stability and incompressible displacements (as they make $\delta W$ more easily negative; see below). Next, one chooses a particular form of displacement in the vicinity of the magnetic resonance of an ($m,k$) mode, and looks under which conditions this displacement makes $\delta W$ negative; the condition turns out to be Suydam's criterion for a well-chosen displacement.
These computations are rather lengthy and the reader is referred to Freidberg's book \cite{F87} for details. A useful form\footnote{The boundary term is ignored, as it is not required in this discussion.} of $\delta W$ has been derived by Bernstein \textit{et al.\ } \cite{BFKK58}, which reads (see Freidberg \cite{F87}, p.~259) \begin{eqnarray}\label{dWstand} \delta W = \frac{1}{2}\int & d^3\vec{r}\ \left[\frac{|\vec{Q}_\perp|^2}{\mu_o}+ \frac{B_o^2}{\mu_o}|\nabla\cdot\vec{\xi}_\perp + 2 \vec{\mathcal{K}_C}\cdot\vec{\xi}_\perp|^2 + \gamma P_o|\nabla\cdot\vec{\xi}|^2\right. \nonumber\\ & - \left. 2P_o(\vec{\mathcal{K}_P}\cdot\vec{\xi}_\perp)(\vec{\mathcal{K}_C}\cdot\vec{\xi}^*_\perp) - J_\parallel(\vec{\xi}^*_\perp\times \vec{e}_\parallel) \cdot\vec{Q}_\perp\right], \end{eqnarray} \noindent where $\vec{\xi}_\perp$ is the component of the displacement perpendicular to the unperturbed field $\vec{B}_o$, $\vec{Q}_\perp = \nabla\times(\vec{\xi}_\perp\times\vec{B}_o)$ is the perturbation in the magnetic field, $\vec{\mathcal{K}_C}$ is the curvature vector of the magnetic field and $\vec{\mathcal{K}_P}$ the inverse pressure length-scale vector defined earlier; $J_\parallel$ and $\vec{e}_\parallel$ are the current and unit vector parallel to the magnetic field, respectively. The quantities $\vec{\mathcal{K}_P}$ and $\vec{\mathcal{K}_C}$ are defined in Eqs.~(\ref{kp}) and (\ref{kc}). The first term describes the field line bending energy; it is the term responsible for the propagation of Alfv\'en waves, through the restoring effect of the magnetic tension, which makes field lines act somewhat like rubber bands. The second term is the energy in the field compression, while the third is the energy in the plasma compression. The fourth term arises from the perpendicular current (as ${\bf\nabla}P={\bf J}_\perp\times{\bf B}$ in a static equilibrium), and the last one arises from the parallel current $J_\parallel$. Only these two terms can be negative, and they give rise to an instability if they are large enough to make $\omega^2 < 0$. Pressure-driven instabilities are driven by the first of these two terms, while current-driven instabilities are due to the second one. Pressure-driven instabilities are further subdivided into interchange and ballooning modes, depending on the shape of the perturbation, but the basic properties of these different modes are similar, and this distinction will not be discussed further in these notes\footnote{In particular, Suydam's criterion applies also to ballooning modes in cylindrical geometry; see Freidberg's book \cite{F87}, pp.~401-402 for details.}. For our purposes here, we are mostly interested in what can be learned from the form of the fourth term. First note that this term is destabilizing in cylindrical geometry when $\mathcal{K}_C \mathcal{K}_P > 0$; this justifies the necessary condition of instability given in section \ref{sec:base}. Furthermore, Eqs.~(\ref{omtot}) and (\ref{dWstand}) imply that the pressure-driving term produces a growth rate $\gamma$ of order of magnitude \begin{equation}\label{gamma} |\gamma|^2 \sim C_S^2 \mathcal{K}_C\mathcal{K}_P\sim C_S^2/R_o^2, \end{equation} \noindent where $C_S$ is the sound speed and $R_o$ the column radius. This is quite fast, comparable to the Kelvin-Helmholtz growth rate in YSO jets. This order of magnitude will be used in the next section to set up an ordering leading to analytically tractable dispersion relations for pressure-driven unstable modes.
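To make this concrete, the following Python sketch (ours; the screw-pinch profile and numbers are illustrative assumptions) evaluates the left-hand side of Suydam's criterion, Eq.~(\ref{suydam}), with $dP/dr$ obtained from the equilibrium condition Eq.~(\ref{radequi}).
\begin{verbatim}
import numpy as np

mu0, R, Bz, B1 = 1.0, 1.0, 0.3, 1.0     # assumed constants
r = np.linspace(1e-3, R, 2000)
x = r / R
Btheta = B1 * x / (1 + x**2)            # assumed azimuthal field profile

q = r * Bz / (R * Btheta)               # safety factor
s = r * np.gradient(q, r) / q           # magnetic shear

# dP/dr from radial equilibrium, with P = P_T - B^2/2mu0 and B_z constant:
dPdr = -Btheta**2 / (mu0 * r) - Btheta * np.gradient(Btheta, r) / mu0

lhs = Bz**2 * s**2 / (8 * mu0 * r) + dPdr    # < 0  =>  instability
unstable = r[lhs < 0]
print("unstable radii:",
      f"{unstable[0]:.2f} .. {unstable[-1]:.2f}" if unstable.size else "none")
\end{verbatim}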
\section{Dispersion relation in the large azimuthal field limit:}\label{disprel} Most of the general results on pressure-driven instabilities were obtained in the fusion literature either from the use of the Energy Principle, or from the so-called Hain-L\"ust equation (a reduced perturbation equation for the radial displacement \cite{HL58} \cite{G71}). These approaches are quite powerful, but not familiar to the astrophysics community, and involve a lot of prerequisites. It is more common in astrophysics to grasp the properties of an instability through the derivation of a dispersion relation. There are actually two papers doing this in the jet context for pressure-driven instabilities; however, the first one, by Begelman \cite{B98}, focuses on the relativistic regime which brings a lot of added complexity to the discussion, and the second one \cite{KLP00} is partially erroneous. Fortunately, in the limit of a near toroidal field of interest here, a dispersion relation can be derived \textit{ab initio} by elementary means, and this approach is adopted here. To this effect, it is first useful to reexamine the behavior of the three MHD modes in a homogeneous medium, in the limit of quasi-perpendicular propagation. It is known that this limit allows the use of a WKB type of approach in the study of interchange and ballooning pressure-driven modes (see, e.g., Dewar and Glasser \cite{DG83}), a feature we shall take advantage of in these notes. \subsection{MHD waves in quasi-perpendicular propagation in homogeneous media:} We consider a homogeneous medium pervaded by a constant magnetic field ${\vec B}_o$. The analysis of linear perturbations in such a setting leads to the well-known dispersion relations of the slow and fast magnetosonic modes and the Alfv\'en mode. Our purpose here is to point out useful features of these modes when the wavevector is nearly perpendicular to the unperturbed magnetic field. \begin{figure} \centerline{\includegraphics{triedre}} \caption{Definition of the reference frame $(\vec{e}_\parallel,\vec{e}_l,\vec{e}_A)$.} \label{fig:triedre} \end{figure} To this effect, let us consider plane wave solutions to Eq.~(\ref{fourier}), where $\vec\xi\propto \exp(-i\vec{k}\cdot\vec{r})$, and assume that the direction of propagation is nearly perpendicular to the magnetic field, i.e., $k_\parallel\ll k_\perp$ (defined in Eqs.~\ref{kpar} and \ref{kperp}). The focus on quasi-perpendicular propagation comes from the remarks of section \ref{suyd}, where it was noted that instability is easier to achieve in the vicinity of magnetic resonances, i.e., where $k_\parallel\ll k_\perp$. Let us also introduce the orthogonal reference frame ($\vec{e}_\parallel$, $\vec{e}_l$, $\vec{e}_A$) where $\vec{e}_\parallel\equiv \vec{B}_o/B_o$ is parallel to the unperturbed magnetic field, $\vec{e}_l\equiv\vec{k}_\perp/k_\perp$, and $\vec{e}_A\equiv\vec{e}_\parallel\times\vec{e}_l$ (see Fig.~\ref{fig:triedre}). With our definition of $k_\parallel$ and $k_\perp$ in Eqs.~(\ref{kpar}) and (\ref{kperp}), $\vec{e}_A=\vec{e}_r$. The subscripts $l$ and $A$ stand for longitudinal and Alfv\'enic, respectively ($\vec{e}_\parallel$, $\vec{e}_l$, and $\vec{e}_A$ are the directions of the displacement of purely slow, fast and Alfv\'enic modes in the limit of nearly transverse propagation adopted here, as shown below).
Denoting by $(\xi_\parallel,\xi_l,\xi_A)$ the components of the Lagrangian displacement $\vec\xi$ in this reference frame, the momentum equation Eq.~(\ref{fourier}) yields the following three component equations \begin{equation} \left( \omega^2 - C_S^2\, k_\parallel^2 \right) \, \xi_\parallel = C_S^2 \, k_\parallel \, k_\perp \, \xi_l, \label{equ:mgs1} \end{equation} \begin{equation} \left( \omega^2 - C_S^2\, k_\perp^2 - V_A^2\, k^2 \right) \,\xi_l = C_S^2 \, k_\parallel \,k_\perp \,\xi_\parallel, \label{equ:mgs2} \end{equation} \begin{equation} \left( \omega^2 - V_A^2\,k_\parallel^2 \right) \,\xi_A = 0, \label{equ:alf} \end{equation} \noindent while the total pressure perturbation becomes \begin{equation} \label{equ:dp} \delta P_T =-i\rho_o \left[ (C_S^2+V_A^2) \,k_\perp\xi_l + C_S^2\,k_\parallel\xi_\parallel \right]. \end{equation} Eq.~(\ref{equ:alf}) gives the dispersion relation of Alfv\'en waves, $\omega_A^2=V_A^2 k_\parallel^2$, which decouple from the two magnetosonic modes described by the remaining two equations. The solutions of the magnetosonic modes are easily derived and possess the following important properties. Characterizing quasi-perpendicular propagation with the small parameter $\epsilon\equiv |k_\parallel/k_\perp|\ll 1$, these two equations imply $\omega_S^2\simeq C_S^2 V_A^2/(C_S^2+V_A^2) k_\parallel^2$ and $\xi_l\sim O(\epsilon\xi_\parallel)$ for the slow magnetosonic wave, while $\omega_F^2\simeq(C_S^2+V_A^2) k_\perp^2$ and $\xi_\parallel\sim O(\epsilon\xi_l)$ for the fast magnetosonic one. Furthermore, the $\xi_l$ momentum component Eq.~(\ref{equ:mgs2}) combined with Eq.~(\ref{equ:dp}) and the ordering of the displacement components just pointed out implies that $\delta P_T=0$ to leading order in $\epsilon$ for the slow magnetosonic mode; note that the same property holds by construction for the Alfv\'en mode. The cancellation of the total pressure for these two modes is essential from a technical point of view, and will lead to substantial simplification in the derivation of a dispersion relation performed in the next subsection. \subsection{Dispersion relation and Kadomtsev criteria:} Let us now come back to cylindrical inhomogeneous equilibria. Remember from section \ref{EnPrin} that the pressure-driving term will contribute a destabilizing term $\omega^2\sim C_S^2/R_o^2$ to the dispersion relation. This term will be able to overcome the stabilizing effect of the restoring forces of the Alfv\'en and slow magnetosonic modes only if $V_A |k_\parallel|,\ C_S |k_\parallel| \lesssim C_S/R_o$. This constraint can be achieved in the vicinity of a magnetic resonance, as already noted. More precisely, a simplified dispersion relation can be found in the WKB limit with a displacement of the form $\vec{\xi}(\vec{r})=\vec{\xi}\,\exp[- i(k_r r + m\theta+ k_z z)]$, if the following ordering is satisfied: \begin{itemize} \item $|k_\parallel r|\ll 1$ or $\lesssim 1$, and $1\ll |k_r r|\ll |k_\perp r|$: the first inequality ensures that the stabilization by magnetic tension is ineffective (closeness to a resonance). The following inequalities ensure that a WKB limit can be taken. The implied ordering\footnote{For consistency with the previous sections, $k_\perp$ is the wavenumber in the longitudinal direction; it does not include the piece in the radial direction.} $|k_\parallel|\ll k_\perp$ ensures that $\delta P_T$ will vanish to leading order as in the homogeneous case discussed in the previous section.
The last inequality allows us to neglect the contribution of the radial gradient of total pressure (which \textit{does not} vanish), and greatly simplifies the analysis. \item $|B_z/B_\theta|^2 s^2 |k_\perp|\ll |k_\parallel|$: this limit, which applies when $|B_\theta| \gg |B_z|$, ensures that the magnetic shear is not stabilizing. \item $|\omega^2|\ll |\omega_F|^2$: this excludes the fast mode from the problem in the near perpendicular propagation regime considered here. As the fast mode is not expected to be destabilized in this regime (as $|\omega_F^2|\gg V_A^2/r^2$), this does not limit the generality of the results while simplifying the analysis. \end{itemize} It turns out that the resulting dispersion relation captures most of the physics of pressure-driven instabilities; this follows because the most unstable modes have growth rates nearly independent of the azimuthal wavenumber $m$ \cite{DTYNM04}, and because current-driven instabilities are efficient only at low $m$ and disappear from a WKB analysis. As previously, the projection of Eq.~(\ref{fourier}) on the longitudinal direction $\vec{e}_l$ shows that the total pressure perturbation vanishes and that $\xi_l\sim |k_\parallel/k_\perp| \xi_\parallel \ll \xi_\parallel$, while the components in the other two directions ($\vec{e}_\parallel, \vec{e}_r$) are now coupled and read (some details of the derivation of these equations can be found in Appendix \ref{details}) \begin{equation}\label{slow} \left(\omega^2 - V_{SM}^2 k_\parallel^2\right)\xi_\parallel=-i\frac{2\beta^*}{1+\beta^*} V_A^2\mathcal{K}_C k_\parallel \xi_r, \end{equation} \begin{equation}\label{alfven} \left[\omega^2 - V_A^2 (k_\parallel^2+k_o^2)\right]\xi_r=i\frac{2\beta^*}{1+\beta^*} V_A^2\mathcal{K}_C k_\parallel \xi_\parallel, \end{equation} \noindent where $\beta^*=C_S^2/V_A^2$, and $V_{SM}^2=C_S^2 V_A^2/(C_S^2+V_A^2)$ is the squared slow-mode speed in the near perpendicular propagation limit. The coupling of the modes blurs their character except in limiting cases. The quantity $k_o^2$ is defined as \begin{equation}\label{kzero} k_o^2=\frac{4\beta^*}{1+\beta^*}\mathcal{K}_C^2 - 2\beta^* \mathcal{K}_C\mathcal{K}_\rho, \end{equation} \noindent where $\mathcal{K}_\rho$ is the inverse length-scale of the equilibrium density gradient, defined from $\rho_o$ in analogy with $\mathcal{K}_P$. Note that if $\mathcal{K}_C=0$ (i.e., when reverting to a homogeneous medium), Eqs.~(\ref{slow}) and (\ref{alfven}) yield back the slow and Alfv\'en modes, respectively. The field curvature couples the two modes. The quantity $k_o^2$ can be either positive or negative; the first term in Eq.~(\ref{kzero}) comes from the plasma compression, and the second one is the contribution of the pressure destabilizing term identified in section \ref{EnPrin}. As usual, these equations possess a non-trivial solution only if their determinant vanishes, which yields the following dispersion relation for $\omega^2$: \begin{equation}\label{disp} \omega^4 - \left[(V_A^2+V_{SM}^2) k_\parallel^2 + V_A^2 k_o^2\right]\omega^2 + V_A^2V_{SM}^2 k_\parallel^2 (k_\parallel^2 -2\beta^*\mathcal{K}_C\mathcal{K}_\rho)=0. \end{equation} First note that if both $B_z=0$ (the so-called Z-pinch configuration) and $m=0$, this equation is degenerate: one of the roots is $\omega^2=0$ and the other root is $\omega^2=V_A^2 k_o^2$. Instability then requires that $k_o^2<0$, as $k_\parallel=0$ in this case.
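For intuition (ours, with assumed numbers), the following Python sketch solves Eq.~(\ref{disp}) for $\omega^2$ on either side of the threshold $k_\parallel^2 = 2\beta^*\mathcal{K}_C\mathcal{K}_\rho$ discussed next; the smaller root changes sign exactly at the threshold.
\begin{verbatim}
import numpy as np

VA, CS = 1.0, 0.8                         # assumed Alfven and sound speeds
beta_s = CS**2 / VA**2                    # beta^*
VSM2 = CS**2 * VA**2 / (CS**2 + VA**2)    # V_SM^2
KC = Krho = 1.0                           # assumed inverse length-scales
ko2 = 4 * beta_s / (1 + beta_s) * KC**2 - 2 * beta_s * KC * Krho

for k_par in (0.5, 2.0):                  # below / above the threshold
    b = (VA**2 + VSM2) * k_par**2 + VA**2 * ko2              # sum of roots
    c = VA**2 * VSM2 * k_par**2 * (k_par**2 - 2 * beta_s * KC * Krho)
    w2_min = (b - np.sqrt(b * b - 4 * c)) / 2                # smaller root
    print(f"k_par = {k_par}: min omega^2 = {w2_min:+.3f}",
          "(unstable)" if w2_min < 0 else "(stable)")
\end{verbatim}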
The constraint $k_o^2<0$ is identical to the criterion\footnote{Kadomtsev's criterion for the $m=0$ mode in a Z pinch is a necessary and sufficient condition of instability, whereas the analysis presented here shows only the sufficiency of this condition.} derived by Kadomtsev from the Energy Principle for the $m=0$ mode in Z pinches (see Freidberg \cite{F87} p.\ 286). When $m \neq 0$, Eq.~(\ref{disp}) can be solved exactly, but it is more instructive to analyze its properties. As the bracket multiplying $\omega^2$ equals the sum of the two roots, and the last term equals their product, one finds that if $k_\parallel^2> 2 \beta^*\mathcal{K}_C\mathcal{K}_\rho$, the two roots are stable, and if $k_\parallel^2 < 2 \beta^*\mathcal{K}_C\mathcal{K}_\rho$, one of the roots is unstable. If $B_z=0$ (Z pinch), this condition is identical to the criterion\footnote{Same comment as in the previous footnote.} derived by Kadomtsev for $m\neq 0$ modes (see Freidberg \cite{F87} pp.\ 284-285). Note that all these conditions of instability require $\mathcal{K}_C\mathcal{K}_P > 0$, in agreement with the discussion of sections \ref{EnPrin} and \ref{sec:gen}; this condition is unavoidable in magnetically self-confined jets. The analysis presented here also shows that once this condition is satisfied, instability necessarily follows in static columns where $|B_\theta|\gg |B_z|$ on some of the radial range, as the magnetic tension stabilizing term $V_A^2 k_\parallel^2$ is arbitrarily small in the vicinity of a magnetic resonance. Finally, the reader may ask how the local analysis presented here informs us on the global stability properties of the column. The answer lies in the oscillation theorem of Goedbloed and Sakanaka \cite{GS74}. The theorem states that for any ($m,k$) unstable mode, the growth rate decreases when increasing the number of radial nodes. This implies that if an unstable mode with a large number of radial nodes is found (such as the modes considered here), an unstable nodeless mode will also exist, and this mode will have the largest growth rate. Such a mode will have a very disruptive effect on the plasma if its displacement is not vanishing on the boundary, as will be the case if the azimuthal field is dominant on the boundary. \section{Moving columns:}\label{movcol} The previous section has shown that cylindrical columns with a predominantly azimuthal magnetic field, at least in some radial range, are subject to pressure-driven instabilities. This situation holds in the outer region of self-confined magnetic jets, leading to a potentially disruptive configuration. However, in these regions a gradient of axial velocity due to the interaction of the moving jet with the outside medium is also expected to be present, and it is legitimate to investigate the effect of such a velocity gradient on the stability properties of pressure-driven modes. This problem has not yet been addressed in the astrophysics literature, but some relevant results are available in the fusion literature. In all the investigations cited below, the adopted velocity profile contains no inflexion point, in order to avoid the triggering of the Kelvin-Helmholtz instability. It is first useful to consider what becomes of Suydam's criterion in the presence of background motions\footnote{This requires a generalization of Eq.~(\ref{fourier}); also, the Energy Principle no longer applies as the resulting operator is not self-adjoint.}. This investigation has been performed by Bondeson \textit{et al.\ } \cite{BIB87}.
Focusing on axial flows (${\bf U}=U_z(r)\, \vec{e}_z$), they conclude that the behavior of localized modes depends on the magnitude of \begin{equation}\label{shear-Mach} M\equiv \rho^{1/2}\frac{U'_z}{q' B_z/q}, \end{equation} \noindent where the prime denotes the radial derivative, and $q$ is the safety factor (see section \ref{suyd}). This quantity is a form of Alfv\'enic Mach number based on the velocity and magnetic shear, hence its name. When $M^2 < \beta$, the flow shear destabilizes resonant modes. Above this limit, these modes are stable, but in this case, unstable modes exist at the edge of the slow continuum, and may be global. The authors found however that in this case the growth rates are small (comparable to the resistive instabilities growth rates). Note also that, as $q'/q\sim 1/r$, $M\sim (B_\theta/B_z)(r/d)(U_z/V_A) \gg 1$ in MHD jets ($d$ is the width of the velocity layer). These results seem to suggest that the region of the jet boundary where the velocity shear layer sits is substantially stabilized in MHD jets. This seems to be confirmed by global linear stability analyses, both for interchange and ballooning modes, except possibly for the $m=0$ (``sausage'') mode \cite{C96} \cite{SH95} \cite{WC91}. In all cases, increasing the flow Mach number efficiently reduces the amplitude of the displacement of the unstable modes at the plasma boundary, an important feature to avoid the disruption of the plasma. An efficient stabilization mechanism has also been identified in the nonlinear regime by Hassam \cite{H92}. This author exploits an analogy between the $m=0$ pressure-driven interchange mode and the Rayleigh-Taylor instability in an appropriately chosen magnetized plasma configuration. From this analysis, he concludes that the $m=0$ pressure-driven mode is nonlinearly stabilized by a smooth velocity shear ($dU_z/dr\sim U/R_o$) if $M_s=U_z/C_S \gtrsim [\ln (\tau_d/\tau_g)]^{1/2}$, where $\tau_g$ is the instability growth time-scale ($\tau_g\sim [C_S(\mathcal{K}_\rho\mathcal{K}_C)^{1/2}]^{-1}$) and $\tau_d$ the diffusion time-scale ($\tau_d\sim(\nu\mathcal{K}_\rho\mathcal{K}_C)^{-1}$, where $\nu$ is the viscosity, assumed comparable to the resistivity). The nonlinear evolution of an unstable, slightly viscous and resistive Z-pinch (i.e., a configuration where the field is purely azimuthal) was simulated by Desouza-Machado \textit{et al.\ } \cite{DHS00}. They found that the plasma relaminarizes over almost all its volume for an applied velocity shear, in good agreement with this analytic estimate. The core of the plasma still has some residual unstable ``wobble'', which can apparently be stabilized by the magnetic shear if some longitudinal field $B_z$ is added to the configuration. Note that the large values of $\tau_d/\tau_g$ relevant to astrophysical jets lead to only weak constraints\footnote{For example $\tau_d/\tau_g=10^{30}$ translates into $M_s \gtrsim 8$ only; in YSO jets, this ratio is most probably significantly smaller, and the constraint even weaker.} on the Mach number $M_s$, so that this nonlinear stabilization mechanism is expected to be efficient in astrophysical jets.
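As a back-of-the-envelope check of how weak this constraint is, a few lines of Python (ours) evaluate Hassam's threshold for a range of assumed $\tau_d/\tau_g$ ratios, reproducing the estimate quoted in the footnote.
\begin{verbatim}
import numpy as np

for ratio in (1e10, 1e20, 1e30):   # assumed tau_d / tau_g values
    Ms = np.sqrt(np.log(ratio))    # M_s >~ [ln(tau_d/tau_g)]^(1/2)
    print(f"tau_d/tau_g = {ratio:.0e}  ->  M_s >~ {Ms:.1f}")
\end{verbatim}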
\section{Summary and open issues:} Pressure-driven instabilities occur in static columns when the pressure force pushes the plasma out from the inside of the magnetic field line curvature, as shown from direct inspection of the ``potential energy'' of the linearized displacement equation (section \ref{EnPrin}), and from the dispersion relation of local modes (section \ref{disprel}). When unstable modes exist, the growth rates are of the order of $C_S/R_o$, where $C_S$ is the sound speed and $R_o$ the jet radius. These are very large, comparable to the Kelvin-Helmholtz growth rate (the most studied instability in jets), especially since the ratio of the magnetic energy to the gas internal energy is expected to be of order unity (within an order of magnitude or so). Such instabilities are known to be disruptive in the fusion context when the eigenmodes exhibit a substantial displacement of the plasma outer boundary; such a situation is relevant to magnetically self-confined jets, as the magnetic field in their outer region is predominantly azimuthal, a configuration most favorable to the onset of the instability (sections \ref{sec:gen} and \ref{disprel}). However, the presence of a velocity gradient in the outer boundary due to the jet bulk motion is expected to have a substantial stabilizing influence, both in the linear and nonlinear regimes (section \ref{movcol}). In its present state, this picture possesses a number of loose ends: \begin{itemize} \item The stabilizing role of an axial velocity gradient needs to be better understood. Not all modes may be stabilized in the linear regime, depending on the details of the equilibrium jet configuration, and the nonlinear mechanism identified in the literature is highly idealized and may not be generic. The one and only simulation of nonlinear stabilization published to date exhibits a very violent relaxation transient, which may still lead to jet disruption. On the other hand, this transient is also an indication that the initial configuration of the simulation is far out of equilibrium, a situation which may not occur in real jets. \item The role of jet rotation has not yet been correctly investigated. Preliminary results seem to indicate that it is stabilizing \cite{KLP00}; however, jet rotation may not be an important dynamical factor in the asymptotic jet propagation regime. \item Most investigations of pressure-driven instabilities rely on a very simple prescription of the equation of state, which raises an issue of principle. Indeed, the very large growth rates usually found for the instability indicate that it develops on time-scales much shorter than the collisional time-scale, and the use of ideal MHD as well as a polytropic equation of state may be questioned in such a context, an issue briefly addressed in Appendix \ref{app:idealmhd}. A more complex description of the plasma is required to validate the results obtained so far. \end{itemize}
\section{Introduction} The usage of nonparametric density estimation techniques has grown quickly in recent years, both in High Energy Physics (HEP) and in other fields of science dealing with multivariate data samples. Indeed, the improvement in the computing resources available for data analysis today allows processing a much larger number of entries, requiring more accurate statistical models. Avoiding a parametrization of the distribution with respect to one or more variables enhances accuracy by removing unphysical constraints on the shape of the distribution. The improvement becomes more evident when considering the joint probability density function with respect to correlated variables, whose model would otherwise require too large a number of parameters. Kernel Density Estimation (KDE) is a nonparametric density estimation technique based on the estimator \begin{equation} \hat f_{\mathrm{KDE}}(\ensuremath{\mathbf{x}}\xspace) = \frac{1}{\ensuremath{N_{\mathrm{tot}}}\xspace}\sum_{i = 1}^{\ensuremath{N_{\mathrm{tot}}}\xspace} k(\ensuremath{\mathbf{x}}\xspace - \ensuremath{\mathbf{x}}\xspace_i), \end{equation} where $\ensuremath{\mathbf{x}}\xspace = (x^{(1)}, x^{(2)}, ..., x^{(d)})$ is the vector of coordinates of the $d$-variate space $\mathcal S$ describing the data sample of \ensuremath{N_{\mathrm{tot}}}\xspace entries, and $k$ is a normalized function referred to as the \emph{kernel}. KDE is widely used in HEP \cite{Cranmer:2000du,Poluektov:2014rxa}, including notable applications to the Higgs boson mass measurement by the ATLAS Collaboration \cite{Aad:2014aba}. The variables considered in the construction of the data-model are the mass of the Higgs boson candidate and the response of a \emph{Boosted Decision Tree} (BDT) algorithm used to \emph{classify} the data entries as \emph{Signal} or \emph{Background} candidates \cite{cart84}. This solution allows one to synthesize a set of variables, the inputs of the BDT, into a single variable which is modeled: the BDT response. In principle, a multivariate data-model of the BDT-input variables may simplify the analysis and result in a more powerful discrimination of signal and background. However, the computational cost of traditional nonparametric data-models (histograms, KDE, ...) is prohibitive for the sample used for the training of the BDT, which includes $\mathcal O(10^6)$ entries. Data modelling, or density estimation, techniques based on decision trees are discussed in the statistics and computer vision literature \cite{ram2011density,provost2000well}, and with some optimization they are suitable for HEP, as they can contribute to solving both classification and analysis-automation problems, in particular in the first, exploratory stages of data analysis. In this paper I briefly describe the Density Estimation Tree (DET) algorithm, including an innovative and fast cross-validation technique based on KDE, and consider a few examples of successful usage of DETs in HEP. \section{The algorithm} A decision tree is an algorithm or a flowchart composed of internal \emph{nodes} representing tests of a variable or of a property. Nodes are connected to form \emph{branches}, each of which terminates in a \emph{leaf}, associated with a \emph{decision}. Decision trees are extended to Density (or Probability) Estimation Trees when the \emph{decisions} are estimates of the underlying probability density function of the tested variables.
Formally, the estimator is written as \begin{equation}\label{eq:detestimator} \hat f (\ensuremath{\mathbf{x}}\xspace) = \sum_{i = 1}^{\ensuremath{N_{\mathrm{leaves}}}\xspace} \frac{1}{\ensuremath{N_{\mathrm{tot}}}\xspace} \frac{\Nbinfunc{i}}{\Vbinfunc{i}} \Ifunc{\ensuremath{\mathbf{x}}\xspace}{i}, \end{equation} where \ensuremath{N_{\mathrm{leaves}}}\xspace is the total number of leaves of the decision tree, \Nbinfunc{i} the number of entries associated with the $i$-th leaf, and \Vbinfunc{i} its volume. If a generic data entry, defined by the input variables $\ensuremath{\mathbf{x}}\xspace$, falls within the $i$-th leaf, then \ensuremath{\mathbf{x}}\xspace is said to be in the $i$-th leaf, and the characteristic function of the $i$-th leaf, \begin{equation} \Ifunc{\ensuremath{\mathbf{x}}\xspace}{i} = \left\{ \begin{array}{ll} 1 & \mbox{if $\ensuremath{\mathbf{x}}\xspace \in \binfunc{i}$} \\ 0 & \mbox{if $\ensuremath{\mathbf{x}}\xspace \not\in \binfunc{i}$} \\ \end{array} \right., \end{equation} equals unity. By construction, all the characteristic functions associated with the other leaves are null. Namely, \begin{equation} \ensuremath{\mathbf{x}}\xspace \in \binfunc{i} \quad\Rightarrow\quad \ensuremath{\mathbf{x}}\xspace \not\in \binfunc{j} \quad\forall j : j \neq i. \end{equation} The training of the Density Estimation Tree is divided into three steps: \emph{tree growth}, \emph{pruning}, and \emph{cross-validation}. Once the tree is trained it can be evaluated using the simple estimator of Equation \ref{eq:detestimator} or some evolution obtained through \emph{smearing} or \emph{interpolation}. These steps are briefly discussed below. \subsection{Tree growth} As for other decision trees, the tree growth is based on the minimization of an estimator of the error. For DETs, the error is the Integrated Squared Error (ISE), defined as \begin{equation} \mathcal R = \mathrm{ISE} (f, \hat f) = \int_{\mathcal S} (\hat f(\ensuremath{\mathbf{x}}\xspace) - f(\ensuremath{\mathbf{x}}\xspace))^2 \mathrm d\ensuremath{\mathbf{x}}\xspace. \end{equation} It can be shown (see for example \cite{anderlini:2015} for a pedagogical discussion) that, for large samples, the minimization of the ISE is equivalent to the minimization of \begin{equation} \mathcal R_{\mathrm{simple}} = -\sum_{i = 1}^{\ensuremath{N_{\mathrm{leaves}}}\xspace} \left( \frac{\Nbinfunc{i}}{\ensuremath{N_{\mathrm{tot}}}\xspace}\right)^2 \frac{1}{\Vbinfunc{i}}. \end{equation} The tree is therefore grown by defining the replacement error \begin{equation} R(\binfunc{i}) = -\frac{\big(N(\binfunc{i})\big)^2}{\ensuremath{N_{\mathrm{tot}}}\xspace^2 \Vbinfunc{i}}, \end{equation} and iteratively splitting each leaf $\ell$ into two sub-leaves $\ell_L$ and $\ell_R$, maximising the residual gain \begin{equation} G(\ell) = R(\ell) - R(\ell_L) - R(\ell_R). \end{equation} The growth is arrested, and the splitting avoided, when some stop condition is met. The most common stop condition is $N(\ell_L) < N_{\mathrm{min}}$ or $N(\ell_R) < N_\mathrm{min}$, but it can be OR-ed with some alternative requirement, for example on the widths of the leaves. A more complex stop condition is obtained by defining a minimal leaf-width $t^{(m)}$ with respect to each dimension $m$. Splitting by testing $x^{(m)}$ is forbidden if the width of one of the resulting leaves is smaller than $t^{(m)}$. When no splitting is allowed, the branch growth is stopped.
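The following Python sketch (ours; a minimal assumed interface, not the implementation used for this work) illustrates one greedy growth step: it scans the axis-aligned splits of a leaf and returns the one maximising the gain $G$, subject to the $N_{\mathrm{min}}$ stop condition.
\begin{verbatim}
import numpy as np

def repl_error(n, volume, n_tot):
    """Replacement error R = -(N/N_tot)^2 / V of a leaf."""
    return -(n / n_tot) ** 2 / volume

def best_split(X, lo, hi, n_tot, n_min=10):
    """X: (n, d) entries in the leaf; lo, hi: its boundaries per dimension."""
    n, d = X.shape
    vol = np.prod(hi - lo)
    r_leaf = repl_error(n, vol, n_tot)
    best = (0.0, None, None)                     # (gain, dimension, cut)
    for m in range(d):
        for cut in np.unique(X[:, m]):
            n_left = int(np.sum(X[:, m] < cut))
            if n_left < n_min or n - n_left < n_min:
                continue                         # stop condition
            frac = (cut - lo[m]) / (hi[m] - lo[m])
            gain = (r_leaf - repl_error(n_left, vol * frac, n_tot)
                           - repl_error(n - n_left, vol * (1 - frac), n_tot))
            if gain > best[0]:
                best = (gain, m, cut)
    return best                                  # gain == 0 means: stop

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))                   # toy two-dimensional sample
print(best_split(X, X.min(axis=0), X.max(axis=0), n_tot=1000))
\end{verbatim}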
This leaf-width stop condition requires providing the algorithm with a few more input parameters, the leaf-width thresholds, but it is very powerful against over-training. Besides, the determination of reasonable leaf-widths is an easy task for most problems, once the expected resolution on each variable is known. Figure \ref{fig:trainingexample} depicts a simple example of the training procedure on a two-dimensional real data-sample. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/det-simpleexample} % \caption{\label{fig:trainingexample} Simple example of training of a density estimation tree over a two dimensional sample. } \end{figure} \subsection{Tree pruning} DETs can be overtrained. Overtraining (or overfitting) occurs when the statistical model obtained through the DET describes random noise or fluctuations instead of the underlying distribution. The effect results in trees with isolated small-volume leaves, associated with large density estimates and surrounded by almost-empty leaves. Overtraining can be reduced through \emph{pruning}, an \emph{a posteriori} processing of the tree structure. The basic idea is to sort the nodes in terms of the actual improvement they introduce in the statistical description of the data model. Following a procedure common for classification and regression trees, the \emph{regularized error} is defined as \begin{equation} R_\alpha(\nodefunc{i}) = \sum_{j \in \mathrm{leaves\ of\ \nodefunc{i}}} \!\!\!\!\!\!\!\!\! R(\binfunc{j}) + \alpha C(\nodefunc{i}), \end{equation} where $\alpha$ is named the \emph{regularization parameter}, and the index $j$ runs over the sub-nodes of \nodefunc{i} with no further sub-nodes (its leaves). $C(\nodefunc{i})$ is the \emph{complexity function} of \nodefunc{i}. Several choices for the complexity function are possible. In the literature of classification and regression trees, a common definition is to set $C(\nodefunc{i})$ to the number of terminal nodes (or leaves) attached to \nodefunc{i}. Such a complexity function provides a top-down simplification technique which is complementary to the stop condition. Unfortunately, in practice, the optimization through the pruning obtained with a number-of-leaves complexity function is ineffective against overtraining if the stop condition is suboptimal. An alternative complexity function, based on the depth of the node in the tree development, provides a bottom-up pruning, which can be seen as an \emph{a posteriori} optimization of the stop condition. An example of the two complexity functions discussed is shown in Figure \ref{fig:complexityFunction}. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/det-costFunction} % \caption{\label{fig:complexityFunction} Two examples of complexity function based on the number of leaves or subtrees, or on the node depth. } \end{figure} If $R_\alpha(\nodefunc{i}) > R(\nodefunc{i})$, the splitting of the $i$-th node is pruned, and its sub-nodes are merged into a unique leaf. Each node is therefore associated with a threshold value of the regularization parameter, so that if $\alpha$ is larger than the threshold $\alpha_i$, then the $i$-th node is pruned. Namely, \begin{equation} \alpha_i = \frac{1}{C(\nodefunc{i})} \left( R(\nodefunc{i}) - \!\!\!\!\!\! \!\!\!\!\!\! \sum_{j \in \mathrm{leaves\ of\ \nodefunc{i}}} \!\!\!\!\!\! \!\!\!\!\!\! R(\binfunc{j}) \right) . \end{equation} The quality of the estimator, $Q(\alpha)$, defined and discussed below, can then be evaluated for each threshold value of the regularization parameter.
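A toy illustration of the threshold computation (ours; the node structure is an assumption, and $C$ is taken as the number of attached leaves, the first of the two choices discussed):
\begin{verbatim}
class Node:
    def __init__(self, R, left=None, right=None):
        self.R, self.left, self.right = R, left, right  # R: replacement error

def leaves(node):
    if node.left is None:                     # terminal node
        return [node]
    return leaves(node.left) + leaves(node.right)

def alpha_threshold(node):
    """Regularization value above which this node's split is pruned."""
    ls = leaves(node)
    return (node.R - sum(l.R for l in ls)) / len(ls)

# toy subtree: splitting improves R from -1.0 to -0.6 + -0.5 = -1.1
root = Node(R=-1.0, left=Node(R=-0.6), right=Node(R=-0.5))
print(alpha_threshold(root))   # 0.05: prune this split when alpha > 0.05
\end{verbatim}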
The optimal pruning is obtained for \begin{equation} \alpha = \alpha_{\mathrm{best}} \quad : \quad Q(\alpha_{\mathrm{best}}) = \max_{\alpha \in \{\alpha_i\}_i} Q (\alpha). \end{equation} \subsection{Cross-validation} The determination of the optimal regularization parameter is named \emph{cross-validation}, and many different techniques are possible, depending on the choice of the quality function. A common cross-validation technique for classification and regression trees is \emph{Leave-One-Out} (LOO) cross-validation, which consists in estimating the underlying probability distribution through a resampling of the original dataset. For each data entry $i$, a sample containing all the entries but $i$ is used to train a DET. The ISE is redefined as \begin{equation} R_{\mathrm{LOO}}(\alpha) = \int_{\mathcal S} \left(\hat f^{\alpha}(\ensuremath{\mathbf{x}}\xspace)\right)^2\mathrm d\ensuremath{\mathbf{x}}\xspace -\frac{2}{\ensuremath{N_{\mathrm{tot}}}\xspace} \sum_{i=1}^{\ensuremath{N_{\mathrm{tot}}}\xspace} \hat f_{\mathrm{not}\ i}^\alpha(\ensuremath{\mathbf{x}}\xspace_i) , \end{equation} where $\hat f^\alpha(\ensuremath{\mathbf{x}}\xspace)$ is the probability density estimation obtained with a tree pruned with regularization parameter $\alpha$, and $\hat f_{\mathrm{not}\ i}^\alpha(\ensuremath{\mathbf{x}}\xspace)$ is the analogous estimator obtained from the dataset with the $i$-th entry removed. The quality function is \begin{equation} Q(\alpha) = - R_{\mathrm{LOO}}(\alpha). \end{equation} LOO cross-validation is very slow, since it requires building one decision tree per entry. When considering the application of DETs to large samples, containing for example one million entries, the construction of a million decision trees and their evaluation for a million threshold regularization constants becomes unreasonable. A much faster cross-validation is obtained by comparing the estimation obtained with the DET with a triangular-kernel density estimation \begin{align}\nonumber f_k & (\ensuremath{\mathbf{x}}\xspace) = \frac{1}{\ensuremath{N_{\mathrm{tot}}}\xspace}\times \\ & \times \sum_{i=1}^{\ensuremath{N_{\mathrm{tot}}}\xspace} \prod_{k=1}^{d}\left(1-\left|\frac{x^{(k)} - x_i^{(k)}}{h_k}\right|\right) \,\theta\!\left(1-\left|\frac{x^{(k)} - x_i^{(k)}}{h_k}\right|\right), \end{align} where $\theta(x)$ is the Heaviside step function, $k$ runs over the $d$ dimensions of the coordinate space $\mathcal S$, and $h_k$ is the kernel bandwidth with respect to the variable $x^{(k)}$. The quality function is \begin{equation} Q^{ker}(\alpha) = - \int_{\mathcal S} \left(\hat f^\alpha(\ensuremath{\mathbf{x}}\xspace) - f_k(\ensuremath{\mathbf{x}}\xspace)\right)^2 \mathrm d\ensuremath{\mathbf{x}}\xspace.
\end{equation} The choice of a triangular kernel allows one to solve the integral analytically, writing \begin{equation} Q^{ker}(\alpha) = \frac{1}{\ensuremath{N_{\mathrm{tot}}}\xspace^2}\sum_{j = 1}^{\ensuremath{N_{\mathrm{leaves}}}\xspace} \frac{N(\binfunc{j}^\alpha)}{V(\binfunc{j}^\alpha)} (2 \mathcal N_j - N(\binfunc{j}^\alpha)) + \mathrm{const}, \end{equation} where $\binfunc{j}^\alpha$ represents the $j$-th leaf of the DET pruned with regularization constant $\alpha$, and \begin{align}\nonumber \mathcal N_j = & \sum_{i=1}^{\ensuremath{N_{\mathrm{tot}}}\xspace}\prod_{k=1}^{d}\mathcal I_{jk}(\ensuremath{\mathbf{x}}\xspace_i;h_k) = \sum_{i=1}^{\ensuremath{N_{\mathrm{tot}}}\xspace} \prod_{k=1}^{d} \Bigg[ u_{ij}^{(k)}-\ell_{ij}^{(k)} + \\ \nonumber - & \frac{(u_{ij}^{(k)} - x_i^{(k)})^2}{2h_k} \ensuremath{\mathrm{sign}}\xspace\left(u_{ij}^{(k)}-x_i^{(k)}\right) + \\ + & \frac{(\ell_{ij}^{(k)} - x_i^{(k)})^2}{2h_k} \ensuremath{\mathrm{sign}}\xspace\left(\ell_{ij}^{(k)}-x_i^{(k)}\right)\Bigg], \label{eq:nj} \end{align} with $\ensuremath{\mathrm{sign}}\xspace(x) = 2\theta(x) - 1$, and \begin{equation}\label{eq:uijlij} \left\{ \begin{array}{l} u_{ij}^{(k)}=\min\left(x_{\max}^{(k)}(\binfunc{j}), x_i^{(k)} + h_k\right)\\ \ell_{ij}^{(k)}=\max\left(x_{\min}^{(k)}(\binfunc{j}),x_i^{(k)}-h_k\right). \end{array} \right. \end{equation} In Equation \ref{eq:uijlij}, $x_{\max}^{(k)}(\binfunc{j})$ and $x_{\min}^{(k)}(\binfunc{j})$ represent the upper and lower boundaries of the $j$-th leaf, respectively. An interesting aspect of this technique is that a large part of the computational cost is hidden in the definition of $\mathcal N_j$, which does not depend on $\alpha$ and can therefore be calculated only once per node, \emph{de facto} reducing the computational complexity by a factor $\ensuremath{N_{\mathrm{tot}}}\xspace \times \ensuremath{N_{\mathrm{leaves}}}\xspace$. \subsection{DET Evaluation: smearing and interpolation} One of the major limitations of DETs is the existence of sharp boundaries, which are unphysical. Besides, a small variation of the position of a boundary can lead to a large variation of the final result when using DETs for data modelling. Two families of solutions are discussed here: smearing and linear interpolation. The former can be seen as a convolution of the density estimator with a resolution function. The effect is that sharp boundaries disappear and residual overtraining is cured; however, as long as the resolution function has a fixed width, the adaptability of the DET algorithms is partially lost: the resolution will never be finer than the width of the smearing function. An alternative technique is interpolation, assuming some behaviour (usually linear) of the density estimator between the centers of the leaves. The density estimation at the center of each leaf is assumed to be accurate; overtraining is therefore not cured, and may lead to catastrophic density estimates. Interpolation is treated here only marginally. It is not very robust, and it is hard to scale to more than two dimensions. Still, it may represent a useful smoothing tool for samples composed of contributions with resolutions spanning a large interval, for which adaptability is crucial.
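The one-dimensional integrals $\mathcal I_{jk}$ of Equation \ref{eq:nj} are the computational core of both the quality function above and of the smeared estimator introduced in the next subsection. A minimal Python sketch of both follows, with illustrative names, the unnormalised triangular kernel of the text, and leaves stored as boxes \texttt{(lo, hi, n)}.
\begin{verbatim}
import numpy as np

def triangle_leaf_integral(x, h, lo, hi):
    """1-D factor I_jk: integral over the leaf range [lo, hi] of the
    triangular kernel centred at x with half-width h."""
    u = min(hi, x + h)
    l = max(lo, x - h)
    if u <= l:
        return 0.0                 # kernel support misses the leaf
    return (u - l
            - (u - x) ** 2 / (2.0 * h) * np.sign(u - x)
            + (l - x) ** 2 / (2.0 * h) * np.sign(l - x))

def n_j(points, h, lo, hi):
    """N_j for one leaf: alpha-independent, cached once per node."""
    return sum(np.prod([triangle_leaf_integral(x[k], h[k], lo[k], hi[k])
                        for k in range(len(h))])
               for x in points)

def quality(leaves, cached_nj, n_tot):
    """Q^ker(alpha) up to its alpha-independent constant, for the
    leaves of the tree pruned with a given alpha."""
    q = 0.0
    for (lo, hi, n), nj in zip(leaves, cached_nj):
        volume = np.prod(np.asarray(hi) - np.asarray(lo))
        q += n / volume * (2.0 * nj - n)
    return q / n_tot ** 2
\end{verbatim}
In a realistic implementation, the $\mathcal N_j$ terms would indeed be cached per node, since they do not depend on $\alpha$.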
\subsubsection{Smearing} The smeared version of the density estimator can be written as \begin{equation} \hat f_s (\ensuremath{\mathbf{x}}\xspace) = \int_{\mathcal S} \hat f( \mathbf{z}) \prod_{k=1}^{d} w\left(\frac{x^{(k)} - z^{(k)}}{h_k}\right)\mathrm d\mathbf{z}, \end{equation} where $w(t)$ is the \emph{resolution function}. Using a triangular resolution function $w(t) = (1-|t|)\,\theta(1-|t|)$, \begin{equation} \hat f_s (\ensuremath{\mathbf{x}}\xspace) = \sum_{j = 1}^{\ensuremath{N_{\mathrm{leaves}}}\xspace} \frac{\Nbinfunc{j}}{\ensuremath{N_{\mathrm{tot}}}\xspace\,\Vbinfunc{j}} \prod_{k=1}^{d} \mathcal I_{jk}(\ensuremath{\mathbf{x}}\xspace; h_k), \end{equation} where $\mathcal I_{jk}(\ensuremath{\mathbf{x}}\xspace; h_k)$ was defined in Equation \ref{eq:nj}. Note that the evaluation of the estimator does not require a loop over the entries, which is factorized within $\mathcal I_{jk}$. \subsubsection{Interpolation} As mentioned above, the discussion of interpolation is restricted to two-dimensional problems. The basic idea of linear interpolation is to associate each $\ensuremath{\mathbf{x}}\xspace \in \mathcal S$ with the three leaf centers defining the smallest triangle containing $\ensuremath{\mathbf{x}}\xspace$ (a step named \emph{padding} or \emph{tessellation}). Using the positions of the leaf centers and the corresponding values of the density estimator as coordinates, it is possible to define a unique plane. The plane can then be ``read'', associating a different density estimation to each $\ensuremath{\mathbf{x}}\xspace \in \mathcal S$. The key aspect of the algorithm is \emph{padding}. Padding techniques are discussed for example in \cite{deBerg:2008}. The algorithm used in the examples below is based on Delaunay tessellation as implemented in the ROOT libraries \cite{ROOT:1971}. Extensions to more than two dimensions are possible, but non-trivial and computationally expensive. Instead of triangles, one should consider hyper-volumes defined by $(d+1)$ leaf centers, where $d$ is the number of dimensions. Moving to parabolic interpolation is also reasonable, but the tessellation problem for $(d+2)$ volumes is less well covered in the literature and would require further development. \section{Timing and computational cost} The discussion of the performance of the algorithm is based on a single-core C++ implementation. Many-core tree growth, with each core growing an independent branch, is an embarrassingly parallel problem. Parallelization of the cross-validation is also possible, if each core tests the quality function for a different value of the regularization parameter $\alpha$. ROOT libraries are used to handle the input-output, but the algorithm itself is independent of them, relying on STL containers for its data structures. The advantage of DET algorithms over kernel-based density estimators is the speed of training and evaluation. The complexity of the algorithm is $\ensuremath{N_{\mathrm{leaves}}}\xspace \times \ensuremath{N_{\mathrm{tot}}}\xspace$. In common use cases, the two quantities are not independent, because for larger samples it is reasonable to adopt a finer binning, in particular in the tails. Therefore, depending on the stop condition, the computational cost scales with the size of the data sample as $\ensuremath{N_{\mathrm{tot}}}\xspace$ to $\ensuremath{N_{\mathrm{tot}}}\xspace^2$. Kernel density estimation in the ROOT implementation is found to scale as $\ensuremath{N_{\mathrm{tot}}}\xspace^2$. The evaluation (reading) time scales roughly as \ensuremath{N_{\mathrm{leaves}}}\xspace.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/det-cluster-timing-reasonableCriterion} \includegraphics[width=0.4\textwidth]{img/det-cluster-timing-looseCriterion} \caption{\label{fig:timing} CPU time to train and evaluate a self-optimized decision tree as a function of the number of entries \ensuremath{N_{\mathrm{tot}}}\xspace. On the top, a stop criterion including a reasonable leaf-width threshold is used; on the bottom, it is replaced with a very loose threshold. The time needed to train a Kernel Density Estimation (KDE) is also reported for comparison. } \end{figure} Figure \ref{fig:timing} compares the CPU time needed to train, optimize, and sample a DET on a $200\times 200$ grid; the time needed to train a kernel density estimation on the same sample is also reported. The two plots show the results obtained with a reasonable and a loose stop condition based on the minimal leaf width. It is interesting to observe that, when using a loose leaf-width condition, $\ensuremath{N_{\mathrm{leaves}}}\xspace \propto \ensuremath{N_{\mathrm{tot}}}\xspace$ and the algorithm scales as $\ensuremath{N_{\mathrm{tot}}}\xspace^2$. As the size of the sample increases, the leaf-width condition becomes relevant: the computational cost of the DET departs from $\ensuremath{N_{\mathrm{tot}}}\xspace^2$ and starts being convenient with respect to KDE. \section{Applications in HEP} In this section I discuss a few possible use cases of density estimation trees in High Energy Physics. In general, the technique is applicable to all problems involving data modeling, including efficiency determination and background subtraction. However, for these applications KDE is usually preferable; only for very large samples, or in some development phase of the analysis code, may it be reasonable to adopt DETs instead. Here I consider applications where the nature of the estimator, providing fast training and fast integration, introduces multivariate density estimation into problems traditionally treated with other methods. The examples are based on a dataset of real data collected during the $pp$ collision programme of the Large Hadron Collider at CERN by the LHCb experiment. The dataset has been released by the LHCb Collaboration in the framework of the LHCb Masterclass programme. The details of the reconstruction and selection, not relevant to the discussion of the DET algorithm, are discussed in Ref. \cite{LHCbMasterClass}. The data sample contains combinations of a pion ($\pi$) and a kaon ($K$), two light mesons, loosely consistent with the decay of a $D^0$ meson. Figure \ref{fig:d0mass} shows the invariant mass of the $K\pi$ combination, \emph{i.e.} the mass of a hypothetical mother particle decaying to the reconstructed kaon and pion. Two contributions are evident: a peak due to real $D^0$ decays, with an invariant mass consistent with the mass of the $D^0$ meson, and a flat contribution due to random combinations of kaons and pions, whose invariant mass is essentially random. The peaked contribution is named ``Signal'', the flat one ``Background''. An important aspect of data analysis in HEP is the disentanglement of the different contributions, to allow statistical studies of the signal without pollution from the background. In the next two sections, I consider two different approaches to signal-background separation. First, an application of DETs to the optimization of a rectangular selection is discussed.
Then, a more powerful statistical approach based on likelihood analysis is described. \begin{figure} \centering \includegraphics[width=.5\textwidth]{img/det-002-massModel} \caption{\label{fig:d0mass} Invariant mass of the combinations of a kaon and a pion loosely consistent with a $D^0$ decay. Two contributions are described in the model: a peaking contribution due to signal, where the $D^0$ candidates are consistent with the mass of the $D^0$ meson (Signal), and a non-peaking contribution due to random combinations of a kaon and a pion not produced in a $D^0$ decay (Background). } \end{figure} \subsection{Selection optimization} When trying to select a relatively pure sample of signal candidates, rejecting background, it is important to define an optimal selection strategy based on the variables associated with each candidate. For example, a large transverse momentum of the $D^0$ candidate ($D^0\, p_T$) is more common for signal than for background candidates; therefore, $D^0$ candidates with a $p_T$ below a certain threshold can be safely rejected. The same strategy can be applied to the transverse momenta of the kaon and of the pion separately, which are obviously correlated with the momentum of their mother candidate, the $D^0$ meson. Another useful variable is some measure of the consistency of the reconstructed flight direction of the $D^0$ candidate with its expected origin (the $pp$ vertex). Random combinations of a pion and a kaon are likely to produce $D^0$ candidates poorly aligned with the point where $D^0$ mesons are expected to be produced. In the following I will use the Impact Parameter (IP), defined as the distance between the reconstructed flight direction of the $D^0$ meson and the $pp$ vertex. The choice of the thresholds used to reject background and enhance signal purity often relies on simulated samples of signal candidates, and on data regions which are expected to be dominated by background candidates. In the example discussed here, the background sample is obtained by selecting the $D^0$ candidates with a mass $1.815 < m(D^0) < 1.840$ GeV/$c^2$ or $1.890 < m(D^0) < 1.915$ GeV/$c^2$. The usual technique to optimize the selection is to count the number of simulated signal candidates $N_S$ and background candidates $N_B$ surviving a given combination of thresholds $\mathbf{t}$, and to pick the combination which maximizes some metric $M$, for example \begin{equation} M(\mathbf{t}) = \frac{S(\mathbf{t})}{S(\mathbf{t})+B(\mathbf{t})+1} = \frac{\epsilon_S N_S(\mathbf{t})} {\epsilon_S N_S(\mathbf{t})+\epsilon_B N_B(\mathbf{t})+1}, \end{equation} where $\epsilon_S$ ($\epsilon_B$) is the normalization factor between the number of entries $N_S^\infty$ ($N_B^\infty$) in the pure sample and the expected yield $S^\infty$ ($B^\infty$) in the mixed sample prior to the selection. When the number of thresholds to be optimized is large, the optimization may require many iterations. Only in the absence of correlations between the variables used in the selection can the optimization be factorized, reducing the number of iterations. For large samples, counting the surviving candidates at each iteration may become very expensive. Two DET estimators $\hat f_S(\ensuremath{\mathbf{x}}\xspace)$ and $\hat f_B(\ensuremath{\mathbf{x}}\xspace)$ for the pure samples can be used to reduce the computational cost of the optimization from \ensuremath{N_{\mathrm{tot}}}\xspace to \ensuremath{N_{\mathrm{leaves}}}\xspace, integrating the distribution leaf by leaf instead of counting the entries, as illustrated in the sketch below.
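A minimal Python sketch of this strategy follows, with leaves stored as axis-aligned boxes \texttt{(lo, hi, n)}; the integral is the one formalised in Equation \ref{eq:integral} just below, and the function names are illustrative.
\begin{verbatim}
def overlap_fraction(lo, hi, r_lo, r_hi):
    """V(leaf intersect R) / V(leaf) for axis-aligned boxes."""
    frac = 1.0
    for k in range(len(lo)):
        inter = max(0.0, min(hi[k], r_hi[k]) - max(lo[k], r_lo[k]))
        frac *= inter / (hi[k] - lo[k])
    return frac

def integral_over_selection(leaves, n_tot, r_lo, r_hi):
    """Leaf-by-leaf integral of the DET over the box R = [r_lo, r_hi]."""
    return sum(n * overlap_fraction(lo, hi, r_lo, r_hi)
               for lo, hi, n in leaves) / n_tot

def figure_of_merit(r_lo, r_hi, sig_leaves, bkg_leaves,
                    s_inf, b_inf, n_s, n_b):
    """M_I(R) built from the signal and background DET estimators."""
    s = s_inf * integral_over_selection(sig_leaves, n_s, r_lo, r_hi)
    b = b_inf * integral_over_selection(bkg_leaves, n_b, r_lo, r_hi)
    return s / (1.0 + s + b)
\end{verbatim}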
The integral of the density estimator over the rectangular selection $R$ can be formally written as \begin{equation}\label{eq:integral} \int_R \hat f(\ensuremath{\mathbf{x}}\xspace) \mathrm d\ensuremath{\mathbf{x}}\xspace = \frac{1}{\ensuremath{N_{\mathrm{tot}}}\xspace} \sum_{i = 1}^{\ensuremath{N_{\mathrm{leaves}}}\xspace} \frac{V(\binfunc{i} \cap R)}{\Vbinfunc{i}}\Nbinfunc{i}. \end{equation} The optimization requires finding \begin{equation} R = R_{\mathrm{opt}} \quad : \quad M_I(R_{\mathrm{opt}}) = \max_{R \subset \mathcal S} M_I(R), \end{equation} with \begin{equation} M_I(R) = \frac{S^{\infty}\int_R \hat f_S(\ensuremath{\mathbf{x}}\xspace)\mathrm d \ensuremath{\mathbf{x}}\xspace} {1 + S^{\infty}\int_R \hat f_S(\ensuremath{\mathbf{x}}\xspace)\mathrm d \ensuremath{\mathbf{x}}\xspace + B^{\infty}\int_R \hat f_B(\ensuremath{\mathbf{x}}\xspace)\mathrm d \ensuremath{\mathbf{x}}\xspace}. \end{equation} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/det-016-Signal2D} \includegraphics[width=0.4\textwidth]{img/det-017-Background2D} \caption{\label{fig:detsigbkg} Density estimation of the pure signal (top) and pure background (bottom) samples, projected onto the plane of the impact parameter and proper decay time. The entries of the data sample are shown as black dots superimposed on the color scale representing the density estimation. } \end{figure} Figure \ref{fig:detsigbkg} reports a projection of $\hat f_S$ and $\hat f_B$ onto the plane defined by the impact parameter (IP) and the proper decay time of the $D^0$ meson. The two variables are obviously correlated, because $D^0$ candidates poorly consistent with their expected origin are associated with a larger decay time in the reconstruction procedure, which is based on the measurements of the $D^0$ flight distance and of its momentum. The estimation correctly reproduces the correlation, allowing better background rejection by combining the discriminating power of the two variables when defining the selection criterion. \subsection{Likelihood analyses} Instead of optimizing a rectangular selection, it is reasonable to separate signal and background using multivariate techniques such as classification trees or neural networks. A multivariate statistic based on likelihood can be built using DETs: \begin{equation} \Delta \log\mathcal L (\ensuremath{\mathbf{x}}\xspace) = \log\frac{\hat f_S(\ensuremath{\mathbf{x}}\xspace)}{\hat f_B(\ensuremath{\mathbf{x}}\xspace)}. \end{equation} The advantage of using density estimators over classification trees is that likelihood functions from different samples can be easily combined. Consider the sample of $K\pi$ combinations described above. Among the variables defined to describe each candidate there are Particle Identification (PID) variables: the response of an Artificial Neural Network (ANN), trained on simulation, designed to discriminate, for example, kaons from pions. The distributions of the PID variables are very difficult to simulate properly, because the conditions of the detectors used for PID are not perfectly stable during data acquisition. It is therefore preferred to study the distributions on pure samples of real kaons and pions instead of simulating them. The distributions obtained depend on the particle momentum $p$ and on the angle $\theta$ between the particle momentum and the proton beams.
These variables are obviously correlated with the transverse momentum which, as discussed in the previous section, is a powerful discriminating variable, whose distribution has to be taken from simulation and is in general different in the calibration samples. To shorten the equations, below I apply the technique to the kaon only, but the same could be done for the identification of the pion. The multivariate statistic can therefore be rewritten as \begin{align}\nonumber \Delta \log & \mathcal L \big(p_T(D^0), \mathrm{IP}, p_T(K), p_T(\pi), p_K, \theta_K, \mathrm{PID}K_K\big) = \\\nonumber = \log & \Bigg[ \frac{\hat f_S(p_T(D^0), \mathrm{IP}, p_T(K), p_T(\pi))} {\hat f_B(p_T(D^0), \mathrm{IP}, p_T(K), p_T(\pi))} \times \\ & \times \frac{\hat f_K (\mathrm{PID}K_K, p_K, \theta_K)} {\int\mathrm d(\mathrm{PID}K_K)\, \hat f_K (\mathrm{PID}K_K, p_K, \theta_K)} \Bigg], \label{eq:mydll} \end{align} where PID$K_K$ is the response of the PID ANN for the kaon candidate under the kaon hypothesis, and $\hat f_K$ is the DET model built from a pure calibration sample of kaons. The possibility of operating this disentanglement is due to the properties of probability distribution functions, which are not trivially transferable to classification trees. Note that, as opposed to the previous use case, where integration discourages smearing because Equation \ref{eq:integral} is not applicable to the smeared version of the density estimator, likelihood analyses can benefit from smearing techniques for the evaluation of the first term of Equation \ref{eq:mydll}; for the second term, smearing can be avoided thanks to the large statistics usually available for calibration samples. \section{Conclusion} Density Estimation Trees are fast and robust algorithms providing probability density estimators based on decision trees. They can be grown cheaply beyond overtraining, and then pruned through a kernel-based cross-validation. The procedure is computationally cheaper than pure kernel density estimation because the evaluation of the latter is performed only once per leaf. Integration and projections of the density estimator are also fast, providing an efficient tool for many-variable problems involving large samples. The smoothing techniques discussed here include smearing and linear interpolation. The former is useful to fight overtraining, but challenges the adaptability of the DET algorithms. Linear interpolation requires tessellation algorithms, which are nowadays available only for problems with three or fewer variables. A few applications to high energy physics have been illustrated using the $D^0 \to K^- \pi^+$ decay mode, made public by the LHCb Collaboration in the framework of the Masterclass programme. Selection optimization and likelihood analyses can benefit from different features of the Density Estimation Tree algorithms. Optimization problems require fast integration of a many-variable density estimator, made possible by its simple structure with leaves associated with constant values. Likelihood analyses benefit from the speed of the method, which allows large calibration samples to be modelled in a much shorter time than KDE, offering an accuracy of the statistical model much better than that of histograms.
In conclusion, Density Estimation Trees are interesting algorithms which can play an important role in exploratory data analysis in the field of High Energy Physics, filling the gap between simple histograms and expensive Kernel Density Estimation, and becoming more and more relevant in the age of Big Data samples.
\section{Definitions and background} A finite zero-sum game with perfect information and simultaneous moves can be described by a tuple $(\mathcal{N}, \mathcal{H}, \mathcal{Z}, \mathcal{A}, \mathcal{T}, u_1, h_0)$, where $\mathcal{N} = \{ 1, 2\}$ contains player labels, $\mathcal{H}$ is a set of inner states and $\mathcal{Z}$ denotes the terminal states. $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2$ is the set of joint actions of individual players, and we denote by $\mathcal{A}_1(h)=\{1\dots m^h\}$ and $\mathcal{A}_2(h)=\{1\dots n^h\}$ the actions available to the individual players in state $h \in \mathcal{H}$. The transition function $\mathcal{T} : \mathcal{H} \times \mathcal{A}_1 \times \mathcal{A}_2 \mapsto \mathcal{H}\cup\mathcal{Z}$ defines the successor state given a current state and actions for both players. For brevity, we sometimes denote $\mathcal{T}(h,i,j) \equiv h_{ij}$. The utility function $u_1 : \mathcal{Z} \mapsto [v_{\min}, v_{\max}] \subseteq \mathbb{R}$ gives the utility of player 1, with $v_{\min}$ and $v_{\max}$ denoting the minimum and maximum possible utility respectively. Without loss of generality we assume $v_{\min}=0$, $v_{\max}=1$, and $\forall z \in \mathcal{Z}, u_2(z) = 1 - u_1(z)$. The game starts in an initial state $h_0$. A {\it matrix game} is a single-stage simultaneous move game with action sets $\mathcal{A}_1$ and $\mathcal{A}_2$. Each entry in the matrix $M = (a_{ij})$, where $(i,j) \in \mathcal{A}_1 \times \mathcal{A}_2$ and $a_{ij} \in [0,1]$, corresponds to a payoff (to player 1) if row $i$ is chosen by player 1 and column $j$ by player 2. A strategy $\sigma_q \in \Delta(\mathcal{A}_q)$ is a distribution over the actions in $\mathcal{A}_q$. If $\sigma_1$ is represented as a row vector and $\sigma_2$ as a column vector, then the expected value to player 1 when both players play with these strategies is $u_1(\sigma_1, \sigma_2) = \sigma_1 M \sigma_2$. Given a profile $\sigma = (\sigma_1, \sigma_2)$, define the utilities against best response strategies to be $u_1(br, \sigma_2) = \max_{\sigma_1' \in \Delta(\mathcal{A}_1)} \sigma_1' M \sigma_2$ and $u_1(\sigma_1, br) = \min_{\sigma_2' \in \Delta(\mathcal{A}_2)} \sigma_1 M \sigma_2'$. A strategy profile $(\sigma_1, \sigma_2)$ is an $\epsilon$-Nash equilibrium of the matrix game $M$ if and only if \begin{equation}\label{eq:nfgNE} u_1(br, \sigma_2) - u_1(\sigma_1, \sigma_2) \leq \epsilon \hspace{1cm} \mbox{and} \hspace{1cm} u_1(\sigma_1, \sigma_2) - u_1(\sigma_1, br) \leq \epsilon . \end{equation} Two-player perfect information games with simultaneous moves are sometimes appropriately called {\it stacked matrix games}, because at every state $h$ each joint action from the set $\mathcal{A}_1(h) \times \mathcal{A}_2(h)$ either leads to a terminal state or to a subgame which is itself another stacked matrix game (see Figure~\ref{fig:tree}). \begin{figure} \centering \includegraphics[width=0.6\textwidth]{fig/tree.pdf} \caption{A game tree of a game with perfect information and simultaneous moves. Only the leaves contain the actual rewards; the remaining numbers are the expected rewards for the optimal strategy.} \label{fig:tree} \end{figure} A {\it behavioral strategy} for player $q$ is a mapping from states $h \in \mathcal{H}$ to a probability distribution over the actions $\mathcal{A}_q(h)$, denoted $\sigma_q(h)$.
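To make the matrix-game definitions concrete, the following small Python sketch (a minimal illustration with hypothetical helper names) checks condition (\ref{eq:nfgNE}) for a given profile. Scanning pure-strategy deviations suffices, since a mixed best response can never outperform the best pure action.
\begin{verbatim}
import numpy as np

def best_response_values(M, sigma1, sigma2):
    """u1(br, sigma2) and u1(sigma1, br) for a payoff matrix M
    holding player 1's payoffs a_ij in [0, 1]."""
    u_br_vs_2 = np.max(M @ sigma2)   # best pure row against sigma2
    u_1_vs_br = np.min(sigma1 @ M)   # opponent's best pure column
    return u_br_vs_2, u_1_vs_br

def is_epsilon_ne(M, sigma1, sigma2, eps):
    u = sigma1 @ M @ sigma2
    u_br2, u_br1 = best_response_values(M, sigma1, sigma2)
    return u_br2 - u <= eps and u - u_br1 <= eps

# Matching pennies scaled to [0, 1]: the uniform profile is exact.
M = np.array([[1.0, 0.0], [0.0, 1.0]])
s = np.array([0.5, 0.5])
assert is_epsilon_ne(M, s, s, 1e-12)
\end{verbatim}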
Given a profile $\sigma = (\sigma_1, \sigma_2)$, define the probability of reaching a terminal state $z$ under $\sigma$ as $\pi^\sigma(z) = \pi_1(z) \pi_2(z)$, where each $\pi_q(z)$ is a product of the probabilities of the actions taken by player $q$ along the path to $z$. Define $\Sigma_q$ to be the set of behavioral strategies for player $q$. Then for any strategy profile $\sigma = (\sigma_1,\sigma_2) \in \Sigma_1 \times \Sigma_2$ we define the expected utility of the strategy profile (for player 1) as \begin{equation} u(\sigma) = u(\sigma_1,\sigma_2) = \sum_{z \in \mathcal{Z}} \pi^\sigma(z) u_1(z) . \end{equation} An $\epsilon$-Nash equilibrium profile ($\sigma_1,\sigma_2$) in this case is defined analogously to (\ref{eq:nfgNE}). In other words, none of the players can improve their utility by more than $\epsilon$ by deviating unilaterally. If the strategies are an $\epsilon$-NE in each subgame starting in an arbitrary game state, the equilibrium strategy is termed subgame perfect. If $\sigma = (\sigma_1, \sigma_2)$ is an exact Nash equilibrium ({\it i.e.,}~ an $\epsilon$-NE with $\epsilon=0$), then we denote the unique value of the game $v^{h_0} = u(\sigma_1, \sigma_2)$. For any $h \in \mathcal{H}$, we denote by $v^h$ the value of the subgame rooted in state $h$. \section{Conclusion} We present the first formal analysis of the convergence of MCTS algorithms in zero-sum extensive-form games with perfect information and simultaneous moves. We show that any $\epsilon$-Hannan consistent algorithm can be used to create an MCTS algorithm that provably converges to an approximate Nash equilibrium of the game. This justifies the use of MCTS as an approximation algorithm for this class of games from the perspective of algorithmic game theory. We complement the formal analysis with an experimental evaluation showing that other MCTS variants for this class of games, which are not covered by the proof, also converge to the approximate NE of the game. Hence, we believe that the presented proofs can be generalized to include these cases as well. Besides this, we will focus our future research on providing finite-time convergence bounds for these algorithms and on generalizing the results to more general classes of extensive-form games with imperfect information. \section{Empirical analysis \label{sec:experiments}} In this section, we first evaluate the influence on the speed of convergence to a Nash equilibrium of propagating the mean values instead of the current sample value in MCTS. Afterwards, we try to assess the convergence rate of the algorithms in the worst case. In most of the experiments, we use Regret matching as the selection strategy at the basis of the SM-MCTS algorithm, because a superior convergence rate bound is known for this algorithm and it has been reported to be very successful also empirically in \cite{Lanctot2013cgw}. We always use the empirical frequencies to create the evaluated strategy and measure the exploitability of the first player's strategy (i.e., $v^{h_0} - u(\hat{\sigma}_1,br)$). \subsection{Influence of propagation of the mean} The formal analysis presented in Section~\ref{sec:formal} requires the algorithms to return the mean of all the previous samples instead of the value of the current sample. The latter is generally the case in previous works on SM-MCTS \cite{Lanctot2013cgw,Teytaud11Upper}. We run both variants with the Regret matching algorithm on a set of randomly generated games parameterized by depth and branching factor. The branching factor was always the same for both players.
For the following experiments, the utility values are randomly selected uniformly from the interval $[0,1]$. Each experiment uses 100 random games and 100 runs of the algorithm. \begin{figure} \begin{minipage}{0.89\textwidth} \includegraphics[width=\textwidth]{fig/RMvsRMM_bf}\\ \includegraphics[width=\textwidth]{fig/RMvsRMM_depth}\\ \end{minipage} \hfill \includegraphics[width=0.08\textwidth]{fig/RMvsRMM_bf_legend}\\ \caption{Exploitability of the strategies given by the empirical frequencies of Regret matching with propagation of the current values (RM) and of the means (RMM), for various depths and branching factors.}\label{fig:trends} \end{figure} Figure~\ref{fig:trends} presents how the exploitability of the strategies produced by Regret matching with propagation of the mean (RMM) and of the current sample value (RM) develops with an increasing number of iterations. Note that both axes are in logarithmic scale. The top graph is for depth 2, different branching factors (BF), and $\gamma \in \{0.05, 0.1, 0.2\}$. The bottom one presents different depths for $BF=2$. The results show that both methods converge to the approximate Nash equilibrium of the game. RMM converges slightly more slowly in all cases. The difference is very small in small games, but becomes more apparent in games with larger depth. \subsection{Empirical convergence rate} Although the formal analysis guarantees the convergence to an $\epsilon$-NE of the game, the rate of the convergence is not given. Therefore, we give an empirical analysis of the convergence and specifically focus on the cases that exhibited the slowest convergence in a set of evaluated games. We have performed a brute-force search through all games of depth 2 with branching factor 2 and utilities from the set $\{0,0.5,1\}$. We made 100 runs of RM and RMM with exploration set to $\gamma=0.05$ for 1000 iterations and computed the mean exploitability of the strategy. The games with the highest exploitability for each method are presented in Figure~\ref{fig:worstGames}. These games are not guaranteed to be the exact worst case, because of the possible error caused by using only 100 runs of the algorithm, but they are representatives of particularly difficult cases for the algorithms. In general, the games that are most difficult for one method are difficult also for the other. Note that we also systematically searched for games in which RMM performs better than RM, but this was never the case with a sufficient number of runs of the algorithms in the selected games. \begin{figure} \hspace{1.6cm} \includegraphics[width=0.3\textwidth]{fig/WC_RM_386943} \hspace{1cm} \includegraphics[width=0.3\textwidth]{fig/WC_RMM_678921}\\ \includegraphics[width=\textwidth]{fig/RMvsExp3_2} \caption{The games with maximal exploitability after 1000 iterations with RM (left) and RMM (right), and the corresponding exploitability for all evaluated methods.}\label{fig:worstGames} \end{figure} Figure~\ref{fig:worstGames} shows the convergence of RM and Exp3 with propagation of the current sample values and of the mean values (RMM and Exp3M) on the empirically worst games for the RM variants. The RM variants converge to the minimal achievable values ($0.0119$ and $0.0367$) after a million iterations. These values correspond exactly to the exploitability of the optimal strategy combined with the uniform exploration with probability $0.05$. The Exp3 variants most likely converge to the same values; however, they did not fully reach them within the first million iterations on WC\_RM.
The convergence rate of all the variants is similar, and the variants propagating means always converge a little more slowly. \section{Formal analysis \label{sec:formal}} We focus on the eventual convergence to an approximate NE, which allows us to make an important simplification: we disregard the incremental building of the tree and assume we have built the complete tree. We show that this will eventually happen with probability 1 and that the statistics collected during the tree-building phase cannot prevent the eventual convergence. The main idea of the proof is to show that the algorithm will eventually converge close to the optimal strategy in the leaf nodes, and to inductively prove that it will converge also in the higher levels of the tree. In order to do that, after introducing the necessary notation, we start by analyzing the situation in simple matrix games, which corresponds mainly to the leaf nodes of the tree. In the inner nodes of the tree, the observed payoffs are imprecise because of the stochastic nature of the selection functions and the bias caused by exploration, but the error can be bounded. Hence, we continue with an analysis of repeated matrix games with bounded error. Finally, we compose the matrices with bounded errors in a multi-stage setting to prove convergence guarantees for SM-MCTS. Any proofs that are omitted in the paper are included in the appendix, submitted as supplementary material and available from the web pages of the authors. \subsection{Notation and definitions} Consider a repeatedly played matrix game where at time $s$ players $1$ and $2$ choose actions $i_s$ and $j_s$ respectively. We will use the convention $(|\mathcal{A}_1|,|\mathcal{A}_2|) = (m,n)$. Define \[G(t) = \sum_{s=1}^t a_{i_s j_s}, \mbox{\hspace{0.3cm}} g(t) = \frac{1}{t}G(t), \mbox{\hspace{0.3cm} and \hspace{0.3cm} } G_{max}(t) = \max_{i \in \mathcal{A}_1} \sum_{s=1}^t a_{i j_s}, \] where $G(t)$ is the \emph{cumulative payoff}, $g(t)$ is the \emph{average payoff}, and $G_{max}$ is the \emph{maximum cumulative payoff} over all actions, each for player 1 at time $t$. We also denote $g_{max}(t) = G_{max}(t)/t$, and by $R(t) = G_{max}(t)-G(t)$ and $r(t) = g_{max}(t)-g(t)$ the \emph{cumulative and average regrets}. For actions $i$ of player $1$ and $j$ of player $2$, we denote by $t_i$, $t_j$ the number of times these actions were chosen up to time $t$, and by $t_{ij}$ the number of times both of these actions have been chosen simultaneously. By \emph{empirical frequencies} we mean the strategy profile $\left(\hat{\sigma}_1(t),\hat{\sigma}_2(t)\right)\in [0,1]^{m}\times[0,1]^{n}$ given by the formulas $\hat{\sigma}_1(t, i)= t_i/t$, $\hat{\sigma}_2(t, j) = t_j/t$. By \emph{average strategies}, we mean the strategy profile $\left(\bar{\sigma}_1(t),\bar{\sigma}_2(t)\right)$ given by the formulas $\bar{\sigma}_1(t, i)= \sum_{s=1}^t \sigma_1^s(i) / t$, $\bar{\sigma}_2(t,j)= \sum_{s=1}^t \sigma^s_2(j) / t$, where $\sigma^s_1$, $\sigma^s_2$ are the strategies used at time $s$. \begin{defn} We say that a player is $\epsilon$\emph{-Hannan-consistent} if, for any payoff sequence (e.g., against any opponent strategy), $\limsup_{t \rightarrow \infty} r(t)\leq\epsilon$ holds almost surely. An algorithm $A$ is $\epsilon$-Hannan consistent if a player who chooses his actions based on $A$ is $\epsilon$-Hannan consistent. \end{defn} Hannan consistency (HC) is a commonly studied property in the context of online learning in repeated (single-stage) decisions.
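As a small illustration of these definitions, the following Python sketch (with illustrative names) computes the average regret $r(t)$ of player 1 from the realised play:
\begin{verbatim}
import numpy as np

def average_regret(counterfactual, chosen):
    """r(t) = g_max(t) - g(t) for player 1.

    counterfactual[s, i] = a_{i j_s}: the payoff action i would have
    earned against the opponent's realised action j_s at time s;
    chosen[s] is the row actually played at time s."""
    t = counterfactual.shape[0]
    g = counterfactual[np.arange(t), chosen].sum() / t   # g(t)
    g_max = counterfactual.sum(axis=0).max() / t         # g_max(t)
    return g_max - g
\end{verbatim}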
In particular, RM and variants of Exp3 have been shown to be Hannan consistent in matrix games~\cite{Hart00, Auer2003Exp3}. In order to ensure that the MCTS algorithm will eventually visit each node infinitely many times, we need the selection function to satisfy the following property. \begin{defn} We say that $A$ is an \emph{algorithm with guaranteed exploration} if, for players $1$ and $2$ both using $A$ for action selection, $ \lim_{t \rightarrow \infty} t_{ij}=\infty\,\mbox{\textrm{holds almost surely }} \forall (i,j) \in \mathcal{A}_1 \times \mathcal{A}_2. $ \end{defn} Note that most of the HC algorithms, namely RM and Exp3, guarantee exploration without any modification. An algorithm without this property can be adjusted in the following way. \begin{defn} Let $A$ be an algorithm used for choosing actions in a matrix game $M$. For a fixed exploration parameter $\gamma\in\left(0,1\right)$ we define a modified algorithm $A^{*}$ as follows: at each time step, with probability $(1-\gamma)$ run one iteration of $A$, and with probability $\gamma$ choose the action uniformly at random over the available actions, without updating any of the variables belonging to $A$. \end{defn} \subsection{Repeated matrix games} First we show that $\epsilon$-Hannan consistency is not lost due to the additional exploration. \begin{lem} \label{E*} Let $A$ be an $\epsilon$-Hannan consistent algorithm. Then $A^{*}$ is an $(\epsilon+\gamma)$-Hannan consistent algorithm with guaranteed exploration. \end{lem} In previous works on MCTS in our class of games, the RM variants generally used the average strategy and the Exp3 variants the empirical frequencies to obtain the strategy to be played. The following lemma says that there eventually is no difference between the two. \begin{lem} \label{emp a avg}As $t$ approaches infinity, the empirical frequencies and the average strategies will almost surely be equal. That is, $\limsup_{t \rightarrow \infty} \max_{i \in \mathcal{A}_1}\,|\hat{\sigma}_1(t, i)-\bar{\sigma}_1(t, i)| = 0$ holds with probability $1$. \end{lem} The proof is a consequence of the martingale version of the Strong Law of Large Numbers. It is well known that two Hannan consistent players will eventually converge to a NE (see \cite[p. 11]{waugh09d} and \cite{Blum07}). We prove a similar result for the approximate versions of these notions. \begin{lem} \label{g(t) a br}Let $\epsilon>0$ be a real number. If both players in a matrix game with value $v$ are $\epsilon$-Hannan consistent, then the following inequalities hold for the empirical frequencies almost surely: \begin{equation} \underset{t\rightarrow\infty}{\limsup}\, u\left(br, \hat{\sigma}_2(t)\right)\leq v+2\epsilon \mbox{\hspace{0.3cm} and \hspace{0.3cm} } \underset{t\rightarrow\infty}{\liminf}\, u\left(\hat{\sigma}_1(t),br\right)\geq v-2\epsilon . \end{equation} \end{lem} The proof shows that if the value obtained by the empirical frequencies were outside of this interval infinitely many times with positive probability, it would be in contradiction with the definition of $\epsilon$-HC. The following corollary is then a direct consequence of this lemma. \begin{cor} \label{e-HC a 4eE}If both players in a matrix game are $\epsilon$-Hannan consistent, then there almost surely exists $t_{0}\in\mathbb{N}$, such that for every $t\geq t_{0}$ the empirical frequencies and the average strategies form a $(4\epsilon+\delta)$-equilibrium for arbitrarily small $\delta > 0$.
\end{cor} The constant 4 comes from going from a pair of strategies with best responses within $2\epsilon$ of the game value, guaranteed by Lemma~\ref{g(t) a br}, to the approximate NE, which multiplies the distance by two. \subsection{Repeated matrix games with bounded error} After defining repeated games with error, we present a variant of Lemma~\ref{g(t) a br} for these games. \begin{defn} We define $M(t)=\left(a_{ij}(t)\right)$ to be a game in which, if the players choose actions $i$ and $j$, they receive the randomized payoffs $a_{ij}\left(t,(i_{1},...i_{t-1}),(j_{1},...j_{t-1})\right)$. We will denote these simply as $a_{ij}(t)$, but in fact they are random variables with values in $[0,1]$ whose distribution at time $t$ depends on the previous choices of actions. We say that $M(t)=\left(a_{ij}(t)\right)$ is a \emph{repeated game with error} $\eta$ if there is a matrix game $M=\left(a_{ij}\right)$ and there almost surely exists $t_{0}\in\mathbb{N}$, such that $\left|a_{ij}(t)-a_{ij}\right|<\eta$ holds for all $t\geq t_{0}$. \end{defn} In this context, we will denote $G(t)=\sum_{s \in \{1\dots t\}} a_{i_{s}j_{s}}(s)$ etc., and use a tilde for the corresponding variables without errors ($\tilde{G}(t)=\sum a_{i_{s}j_{s}}$ etc.). The symbols $v$ and $u\left(\cdot,\cdot\right)$ will still be used with respect to $M$ without errors. The following lemma states that, even with the errors, $\epsilon$-HC algorithms still converge to an approximate NE of the game. \begin{lem} \label{g a eta-chyba}Let $\epsilon>0$ and $c \ge 0$. If $M(t)$ is a repeated game with error $c\epsilon$ and both players are $\epsilon$-Hannan consistent, then the following inequalities hold almost surely: \begin{equation} \underset{t\rightarrow\infty}{\limsup}\, u\left(br,\hat{\sigma}_2\right)\leq v+2(c+1)\epsilon, \mbox{\hspace{0.3cm}} \underset{t\rightarrow\infty}{\liminf}\, u\left(\hat{\sigma}_1,br\right)\geq v-2(c+1)\epsilon \end{equation} \begin{equation} \mbox{and\hspace{0.3cm}} v-(c+1)\epsilon\leq\underset{t\rightarrow\infty}{\liminf}\, g(t)\leq\underset{t\rightarrow\infty}{\limsup}\, g(t)\leq v+(c+1)\epsilon . \end{equation} \end{lem} The proof is similar to the proof of Lemma~\ref{g(t) a br}. It needs the additional claim that, if the algorithm is $\epsilon$-HC with respect to the observed values with errors, it still has a bounded regret with respect to the exact values. In the same way as in the previous subsection, a direct consequence of the lemma is the convergence to an approximate Nash equilibrium. \begin{thm} \label{e-HC, eta-chyba a 4e-Eq}Let $\epsilon,c>0$ be real numbers. If $M(t)$ is a repeated game with error $c\epsilon$ and both players are $\epsilon$-Hannan consistent, then for any $\delta>0$ there almost surely exists $t_{0}\in\mathbb{N}$, such that for all $t\geq t_{0}$ the empirical frequencies form a $\left(4(c+1)\epsilon+\delta\right)$-equilibrium of the game $M$. \end{thm} \subsection{Perfect-information extensive-form games with simultaneous moves} Now we have all the necessary components to prove the main theorem. \begin{thm} \label{main theorem}Let $\left(M^{h}\right)_{h\in H}$ be a game with perfect information and simultaneous moves with maximal depth $D$.
Then for every $\epsilon$-Hannan consistent algorithm $A$ with guaranteed exploration and arbitrarily small $\delta>0$, there almost surely exists $t_{0}$, so that the average strategies $(\hat{\sigma}_1(t),\hat{\sigma}_2(t))$ form a subgame perfect \[\left(2D^{2}+\delta\right)\epsilon\mbox{-Nash equilibrium for all~} t\geq t_{0}.\] \end{thm} Once we have established the convergence of the $\epsilon$-HC algorithms in games with errors, we can proceed by induction. The games in the leaf nodes are simple matrix games, so they will eventually converge and return mean reward values within a bounded distance of the actual value of the game (Lemma~\ref{g a eta-chyba} with $c=0$). As a result, at the level just above the leaf nodes, the $\epsilon$-HC algorithms are playing a matrix game with a bounded error and, by Lemma~\ref{g a eta-chyba}, they will also eventually return mean values within a bounded interval. At level $d$ above the leaf nodes, the errors of the returned values will be of the order of $d\epsilon$, and the players can gain $2d\epsilon$ by deviating. Summing the possible gains from deviations at each level leads to the bound in the theorem. The subgame perfection of the equilibrium results from the fact that, in order to prove the bound on the approximation in the whole game (i.e., in the root of the game tree), a smaller bound on the approximation of the equilibrium is proven for all subgames in the induction. The formal proof is presented in the appendix. \section{Introduction} Non-cooperative game theory is a formal mathematical framework for describing the behavior of interacting self-interested agents. Recent interest has brought significant advancements from the algorithmic perspective, and new algorithms have led to many successful applications of game-theoretic models in security domains~\cite{tambe2011} and to near-optimal play of very large games~\cite{johanson12cfrbr}. We focus on an important class of two-player, zero-sum extensive-form games (EFGs) with perfect information and simultaneous moves. Games in this class capture sequential interactions that can be visualized as a game tree. The nodes correspond to the states of the game, in which both players act simultaneously. We can represent these situations using the normal form ({\it i.e.,}~ as matrix games), where the values are computed from the successor sub-games. Many well-known games are instances of this class, including card games such as Goofspiel \cite{Ross71Goofspiel,Rhoads12Computer}, variants of pursuit-evasion games \cite{Littman94markovgames}, and several games from the general game-playing competition \cite{ggp}. Simultaneous-move games can be solved exactly in polynomial time using the backward induction algorithm \cite{buro2003,Rhoads12Computer}, recently improved with alpha-beta pruning \cite{Saffidine12SMAB,Bosansky13Using}. However, the depth-limited search algorithms based on backward induction require domain knowledge (an evaluation function), and computing the cutoff conditions requires linear programming \cite{Saffidine12SMAB} or a double-oracle method \cite{Bosansky13Using}, both of which are computationally expensive. For practical applications and in situations with limited domain knowledge, variants of simulation-based algorithms such as Monte Carlo Tree Search (MCTS) are typically used \cite{Finnsson08,Teytaud11Upper,Perick12Comparison,Finnsson12}.
In spite of the practical success of MCTS, notably of its variant UCT~\cite{UCT}, there is a lack of theory analyzing MCTS outside two-player perfect-information sequential games. To the best of our knowledge, no convergence guarantees are known for MCTS in games with simultaneous moves or general EFGs. In this paper, we present a general template of MCTS algorithms for zero-sum perfect-information simultaneous move games. It can be instantiated using any regret minimizing procedure for matrix games as the function for selecting the next actions to be sampled. We formally prove that if the algorithm uses an $\epsilon$-Hannan consistent selection function, which assures attempting each action infinitely many times, the MCTS algorithm eventually converges to a subgame perfect approximate Nash equilibrium of the extensive-form game. We empirically evaluate this claim using two different $\epsilon$-Hannan consistent procedures: regret matching~\cite{Hart00} and Exp3~\cite{Auer2003Exp3}. In experiments on randomly generated and worst-case games, we show that the empirical speed of convergence of the algorithms based on our template is comparable to that of recently proposed MCTS algorithms for these games. We conjecture that many of these algorithms also converge to an $\epsilon$-Nash equilibrium and that our formal analysis could be extended to include them. \subsubsection*{Acknowledgments} This work is partially funded by the Czech Science Foundation (grant no. P202/12/2054), the Grant Agency of the Czech Technical University in Prague (grant no. OHK3-060/12), and the Netherlands Organisation for Scientific Research (NWO) in the framework of the project Go4Nature, grant number 612.000.938. The access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme ``Projects of Large Infrastructure for Research, Development, and Innovations'' (LM2010005), is appreciated. \subsubsection*{References} \renewcommand{\section}[2]{} \small{ \bibliographystyle{unsrt} \section{Simultaneous move Monte-Carlo Tree Search \label{sec:sm-mcts}} Monte Carlo Tree Search (MCTS) is a simulation-based state-space search algorithm often used in game trees. The nodes in the tree represent game states. The main idea is to iteratively run simulations to a terminal state, incrementally growing a tree rooted at the initial state of the game. In its simplest form, the tree is initially empty and a single leaf is added at each iteration. Each simulation starts by visiting nodes in the tree, selecting which actions to take based on a selection function and the information maintained in the node, and then transitioning to the successor state. When a node is visited whose immediate children are not all in the tree, the node is expanded by adding a new leaf to the tree. Then, a rollout policy (e.g., random action selection) is applied from the new leaf to a terminal state. The outcome of the simulation is then returned as a reward to the new leaf, and the information stored in the tree is updated. \begin{algorithm2e}[b!]
SM-MCTS$($node $h$)\\ \label{alg:function} \Indp \lIf{$h \in \mathcal{Z}$}{\Return{$u_1(h)$} \\} \lElse{\If{$h \in T$ {\bf and} $\exists (i,j) \in \mathcal{A}_1(h) \times \mathcal{A}_2(h)$ not previously selected}{ \label{alg:expand} Choose one of the previously unselected $(i,j)$ and $h' \gets \mathcal{T}(h,i,j)$ \\ Add $h'$ to $T$\\ $u_1 \gets$ Rollout($h'$)\\ $X_{h'} \gets X_{h'} + u_1;\; n_{h'} \gets n_{h'} + 1$\\ \underline{Update}($h, i, j, u_1$)\\ \label{alg:update1} {\bf return} RetVal$(u_1, X_{h'}, n_{h'})$\\\label{alg:return1} }} $(i, j) \gets $ \underline{Select}($h$)\\ \label{alg:select} $h' \gets \mathcal{T}(h, i, j)$\\ $u_1 \gets $ SM-MCTS($h'$)\\ \label{alg:reccall} $X_{h} \gets X_{h} + u_1;\; n_{h} \gets n_{h} + 1$\\ \underline{Update}($h, i, j, u_1$) \\ \label{alg:update2} {\bf return} RetVal$(u_1, X_{h}, n_{h})$\\\label{alg:return2} \Indm \vspace{0.2cm} \caption{Simultaneous Move Monte Carlo Tree Search \label{alg:sm-mcts}} \end{algorithm2e} In Simultaneous Move MCTS (SM-MCTS), the main difference is that a joint action of both players is selected. The algorithm has been previously applied, for example, in the game of Tron~\cite{Perick12Comparison}, Urban Rivals~\cite{Teytaud11Upper}, and in general game-playing~\cite{Finnsson08}. However, guarantees of convergence to an NE remain unknown. The convergence to an NE depends critically on the selection and update policies applied, which are even more non-trivial than in purely sequential games. The most popular selection policy in this context (UCB) performs very well in some games \cite{Perick12Comparison}, but Shafiei et al. \cite{Shafiei09} show that it does not converge to a Nash equilibrium, even in a simple one-stage simultaneous move game. In this paper, we focus on variants of MCTS which provably converge to (approximate) NE; hence we do not discuss UCB any further. Instead, we describe variants of two other selection algorithms after explaining the abstract SM-MCTS algorithm. Algorithm~\ref{alg:sm-mcts} describes a single simulation of SM-MCTS. $T$ represents the MCTS tree, in which each state is represented by one node. Every node $h$ maintains a cumulative reward sum over all simulations through it, $X_h$, and a visit count $n_h$, both initially set to 0. As depicted in Figure~\ref{fig:tree}, a matrix of references to the children is maintained at each inner node. The critical parts of the algorithm are the updates on lines \ref{alg:update1} and \ref{alg:update2} and the selection on line \ref{alg:select}. Each variant below will describe a different way to select an action and update a node. The standard way of defining the value to send back is RetVal$(u_1, X_{h}, n_{h}) = u_1$, but we also discuss RetVal$(u_1, X_{h}, n_{h}) = X_{h}/n_{h}$, which is required for the formal analysis in Section~\ref{sec:formal}. We denote this variant of the algorithms with an additional ``M'' for mean. Algorithm~\ref{alg:sm-mcts} and the variants below are expressed from player 1's perspective. Player 2 does the same, except using negated utilities. \subsection{Regret matching \label{sec:rm}} This variant applies regret-matching \cite{Hart00} to the current estimated matrix game at each stage. Suppose iterations are numbered $s \in \{ 1, 2, 3, \cdots \}$ and at each iteration and each inner node $h$ a mixed strategy $\sigma^s(h)$ is used by each player, initially set to uniform random: $\sigma^0(h,i) = 1 / |\mathcal{A}(h)|$.
Each player maintains a cumulative regret $r_h[i]$ for having played $\sigma^s(h)$ instead of $i \in \mathcal{A}_1(h)$. The values are initially set to 0. On iteration $s$, the \underline{Select} function (line \ref{alg:select} in Algorithm~\ref{alg:sm-mcts}) first builds the player's current strategy from the cumulative regrets. Define $x^+ = \max(x,0)$, \begin{equation} \label{eq:rm} \sigma^s(h,a) = \frac{r^+_h[a]}{R^+_{sum}} \mbox{ if } R^+_{sum} > 0 \mbox{ otherwise } \frac{1}{|\mathcal{A}_1(h)|}, \mbox{ where } R^+_{sum} = \sum_{i \in \mathcal{A}_1(h)}{r^+_h[i]}. \end{equation} The strategy is computed by assigning weight to actions proportionally to the regret of not having taken them in the long term. To ensure exploration, a $\gamma$-on-policy sampling procedure is used, choosing action $i$ with probability $\gamma/|\mathcal{A}(h)| + (1-\gamma) \sigma^s(h,i)$, for some $\gamma > 0$. The \underline{Update}s on lines \ref{alg:update1} and \ref{alg:update2} add the regret accumulated at the iteration to the regret tables $r_h$. Suppose joint action $(i_1,j_2)$ is sampled from the selection policy and utility $u_1$ is returned from the recursive call on line~\ref{alg:reccall}. Define $x(h, i, j) = X_{h_{ij}}$ if $(i,j) \not= (i_1,j_2)$, or $u_1$ otherwise. The updates to the regrets are: \begin{eqnarray*} \forall i' \in \mathcal{A}_1(h), r_h[i'] \leftarrow r_h[i'] + ( x(h, i', j_2) - u_1 ). \end{eqnarray*} \subsection{Exp3 \label{sec:exp3}} In Exp3~\cite{Auer2003Exp3}, a player maintains an estimate of the sum of rewards, denoted $x_{h,i}$, and visit counts $n_{h,i}$ for each of their actions $i \in \mathcal{A}_1$. The joint action selected on line \ref{alg:select} is composed of an action independently selected for each player. The probability of sampling action $a$ in \underline{Select} is \begin{equation} \label{eq:exp3select} \sigma^s(h,a) = \frac{(1-\gamma) \exp(\eta w_{h,a})}{\sum_{i \in \mathcal{A}_1(h)} \exp(\eta w_{h,i})} + \frac{\gamma}{|\mathcal{A}_1(h)|}, \mbox{ where } \eta = \frac{\gamma}{|\mathcal{A}_1(h)|} \mbox{ and } w_{h,i} = x_{h,i}\footnote{In practice, we set $w_{h,i} = x_{h,i} - \max_{i' \in \mathcal{A}_1(h)} x_{h,i'}$ since $\exp(x_{h,i})$ can easily cause numerical overflows. This reformulation computes the same values as the original algorithm but is more numerically stable.}. \end{equation} The \underline{Update} after selecting actions $(i,j)$ and obtaining the result $(u_1,u_2)$ increments the visit count ($n_{h,i} \leftarrow n_{h,i} + 1$) and adds to the corresponding reward sum estimate the reward divided by the probability that the action was played by the player ($x_{h,i} \leftarrow x_{h,i} + u_1/\sigma^s(h,i)$). Dividing the value by the probability of selecting the corresponding action makes $x_{h,i}$ an estimate of the sum of rewards over all iterations, not only the ones where action $i$ was selected.
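A minimal Python sketch of the two selection functions for a single node follows (an illustrative, per-player view; \texttt{regrets} and \texttt{x\_sums} stand for that player's tables $r_h$ and $x_{h,i}$, stored as float arrays, and \texttt{rng} is a \texttt{numpy.random.Generator}).
\begin{verbatim}
import numpy as np

def rm_strategy(regrets):
    """Regret matching: normalised positive regrets, else uniform."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(regrets), 1.0 / len(regrets))

def rm_select(regrets, gamma, rng):
    """gamma-on-policy sampling over the regret-matching strategy."""
    n = len(regrets)
    probs = gamma / n + (1.0 - gamma) * rm_strategy(regrets)
    return rng.choice(n, p=probs)

def rm_update(regrets, x_row, u1):
    """r_h[i'] += x(h, i', j_2) - u_1 for every action i';
    x_row holds the x(h, i', j_2) values."""
    regrets += x_row - u1

def exp3_probs(x_sums, gamma):
    """Sampling distribution of Exp3, with the stabilising shift
    of the weights described in the footnote."""
    n = len(x_sums)
    eta = gamma / n
    e = np.exp(eta * (x_sums - x_sums.max()))
    return (1.0 - gamma) * e / e.sum() + gamma / n
\end{verbatim}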
\section{Appendix A: Lower bounding the steering Tsirelson bound through a relaxation of the set of quantum assemblages}\label{ap:sdp} In this appendix we present the details of a relaxation of the set of quantum assemblages, which we call `almost quantum' and denote by $\mathcal{\widetilde{Q}}$ \cite{TorNavVer14}. The name comes from its close relation to the definition of the set of almost quantum correlations, which is also characterised by an SDP \cite{NPA}. It is this relaxation which allows us to compute lower bounds on the Tsirelson bound of any steering functional. Similarly to the NPA hierarchy of Ref.~\cite{NPA}, consider a moment matrix $\Gamma$ whose rows and columns are labelled by the `words' from the following set: \begin{widetext} \begin{equation}\label{theset} \mathcal{S} := \{ \emptyset \} \cup \left\{ (b|y) \right\}_{\substack{b=1:k_B-1 \\ y=1:m_b}} \cup \left\{ (c|z) \right\}_{\substack{c=1:k_C-1 \\ z=1:m_c}} \cup \left\{ (bc|yz) \right\}_{\substack{b=1:k_B-1, \, c=1:k_C-1\\ y=1:m_b, \, z=1:m_c}}, \end{equation} \end{widetext} where $m_b$ denotes the number of possible measurement choices for Bob, each with $k_B$ outcomes (and similarly for Charlie). In the NPA hierarchy, a matrix with such labels is the object of study for the 1+AB level, where some elements of the matrix are related to the values of a conditional probability distribution $p(bc|yz)$ and its marginals. In our case, however, each element of $\Gamma$ corresponds to a conditional state prepared on Alice's side, in a way that we make explicit below. The elements of the first row of $\Gamma$ are set as follows: \begin{align} \label{frr_i} \Gamma(\emptyset, \emptyset) &:= \rho_\mathrm{A}, \\ \Gamma(\emptyset, b|y) &:= \sigma_{b|y}, \\ \Gamma(\emptyset, c|z) &:= \sigma_{c|z}, \\ \label{frr_f}\Gamma(\emptyset, bc|yz) &:= \sigma_{bc|yz}, \end{align} where the reduced states are as in Eqs.~(4b) and (4c) of the main text. Once such an identification is made, further constraints are imposed between the elements of $\Gamma$ to enforce quantum-like properties on the assemblage. To make this clearer, we first present the relation between $\Gamma$ and quantum assemblages, from which the extra constraints on the moment matrix arise naturally. A quantum assemblage arises when Bob and Charlie perform measurements on their shares of a tripartite quantum system $\rho_{\mathrm{ABC}}$. Let $E_{b|y}$ and $E_{c|z}$ be the corresponding POVM elements. Note that we can assume them to be projectors, since in principle we do not impose any constraints on the dimensions of Bob's and Charlie's subsystems. The assemblage then arises as: \begin{align} \label{fr_i}\rho_\mathrm{A} &= \mathrm{tr}_{\mathrm{B}\mathrm{C}}\left( \rho_\mathrm{ABC} \right), \\ \sigma_{b|y} &= \mathrm{tr}_{\mathrm{B}\mathrm{C}} \left( \mathbbm{1}_\mathrm{A} \, E_{b|y} \, \mathbbm{1}_\mathrm{C} \, \rho_\mathrm{ABC} \right),\\ \sigma_{c|z} &= \mathrm{tr}_{\mathrm{B}\mathrm{C}} \left( \mathbbm{1}_\mathrm{A} \, \mathbbm{1}_\mathrm{B} \, E_{c|z} \, \rho_\mathrm{ABC} \right),\\ \label{fr_f}\sigma_{bc|yz} &= \mathrm{tr}_{\mathrm{B}\mathrm{C}} \left( \mathbbm{1}_\mathrm{A} \, E_{b|y} \, E_{c|z} \, \rho_\mathrm{ABC} \right). \end{align} Note that we are using the commutativity paradigm, where we do not require the measurements to be of the form $\mathbbm{1}_\mathrm{A} \otimes E_{b|y} \otimes E_{c|z}$, but rather demand that $\left[ E_{b|y}, E_{c|z} \right]= 0$ for all $b,c,y,z$.
Tensor-product measurements are just a particular case of this more general form. Now consider the moment matrix again. To each of its entries we associate the following element: \begin{align} \Gamma(v,w) &= \mathrm{tr}_{\mathrm{B}\mathrm{C}}\left( \mathbbm{O}_v^\dagger \, \mathbbm{O}_w \, \rho_\mathrm{ABC} \right), \\ \mathrm{where} \quad \mathbbm{O}_\emptyset &= \mathbbm{1} \\ \mathbbm{O}_{b|y} &= \mathbbm{1}_\mathrm{A} \, E_{b|y} \, \mathbbm{1}_\mathrm{C}, \\ \mathbbm{O}_{c|z} &= \mathbbm{1}_\mathrm{A} \, \mathbbm{1}_\mathrm{B} \, E_{c|z}, \\ \mathbbm{O}_{bc|yz} &= \mathbbm{1}_\mathrm{A} \, E_{b|y} \, E_{c|z}. \end{align} It is easy to see that the elements of the first row $\Gamma(\emptyset,v)$ satisfy Eqs.~(\ref{fr_i}) to (\ref{fr_f}) for all $v$. In addition, the commutation relations between the measurement operators of Bob and Charlie impose that: \begin{align} \label{p_i}\Gamma(v,v) &= \Gamma(\emptyset ,v),\\ \Gamma(v,w) &= \Gamma(w, v), \quad \mathrm{whenever} \, [\mathbbm{O}_v,\mathbbm{O}_w]=0, \end{align} and constraints of the type: \begin{align} \Gamma(b|y,bc|yz) &= \Gamma(\emptyset, bc|yz),\\ \Gamma(b|y,bc|yz) &= \Gamma(b|y,c|z),\\ \Gamma(bc|yz,bc^\prime|yz^\prime) &= \Gamma(bc|yz, c^\prime|z^\prime). \end{align} Note that these constraints are the ones imposed on the moment matrix of the 1+AB level of the NPA hierarchy. In our case, however, the elements of $\Gamma$ are matrices instead of numbers, and hence some specific properties also arise. These are of the type: \begin{align} \Gamma(b|y, b^\prime|y^\prime) &= \Gamma(b^\prime|y^\prime, b|y)^\dagger,\\ \Gamma(bc|yz, b^\prime c|y^\prime z) &= \Gamma(b^\prime c|y^\prime z, bc|yz)^\dagger,\\ \label{p_f}\Gamma(bc|yz, b^\prime|y^\prime) &= \Gamma(b^\prime c|y^\prime z, b|y)^\dagger. \end{align} Finally, note that such a $\Gamma$ is hermitian and positive semidefinite. The idea now is, given a general assemblage $\left\{ \sigma_{bc|yz} \right\}_{bcyz}$, to check whether there exists a PSD moment matrix $\Gamma$ whose first row relates to the assemblage via Eqs.~(\ref{fr_i}) to (\ref{fr_f}) and that satisfies properties (\ref{p_i}) to (\ref{p_f}). This is a well-defined semidefinite program (a schematic implementation is sketched below), and when it is feasible the assemblage belongs to $\mathcal{\widetilde{Q}}$. Since every quantum assemblage satisfies these properties, such an SDP is always feasible for quantum inputs; hence every quantum assemblage belongs to $\mathcal{\widetilde{Q}}$. Note that the converse is not necessarily true. We use this set $\mathcal{\widetilde{Q}}$ to find bounds on the Tsirelson bound of a steering functional. Since $\mathcal{\widetilde{Q}}$ may contain post-quantum assemblages, a lower bound on $\beta_\mathcal{Q}$ is obtained by finding the minimum value of the functional over $\mathcal{\widetilde{Q}}$, which is itself an SDP: \begin{align} \mathrm{minimise} \quad & \mathrm{tr} \left( \sum_{bcyz} F_{bcyz} \, \sigma_{bc|yz} \right) \\ \mathrm{such ~that} \quad & \left\{ \sigma_{bc|yz} \right\}_{bcyz} \in \mathcal{\widetilde{Q}}. \end{align} Within the scope of this work we only need a bound on $\beta_\mathcal{Q}$ whose violation ensures that the assemblage is post-quantum. We do not need to study different optimal bounds on $\beta_\mathcal{Q}$ or other relaxations of the quantum set of assemblages. For the reader interested in SDPs, however, this may be a question of independent interest, and we comment on it in what follows.
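Before turning to that, as a rough illustration of the feasibility test above, the following Python sketch uses the {\sc cvxpy} modelling language (our choice of tool; the text only states that the problem is an SDP) for the case $k_B = k_C = 2$, $m_b = m_c = 2$. It imposes only a representative subset of the constraints, namely positivity, hermiticity, the first-row identification and Eq.~(\ref{p_i}); a faithful implementation would also add the remaining linear relations up to Eq.~(\ref{p_f}). All names are ours and purely illustrative.
\begin{verbatim}
import numpy as np
import cvxpy as cp

d = 2  # dimension of Alice's conditional states
words = (['I'] + ['b|%d' % y for y in range(2)]
               + ['c|%d' % z for z in range(2)]
               + ['bc|%d%d' % (y, z) for y in range(2) for z in range(2)])
idx = {w: i for i, w in enumerate(words)}
N = d * len(words)

def block(G, v, w):
    # the d x d block Gamma(v, w) of the big moment matrix
    i, j = idx[v], idx[w]
    return G[d*i:d*(i+1), d*j:d*(j+1)]

def almost_quantum_feasible(assemblage):
    # assemblage: dict word -> d x d array; key 'I' holds rho_A
    G = cp.Variable((N, N), hermitian=True)
    cons = [G >> 0]
    for v, sigma in assemblage.items():   # first-row identification
        cons.append(block(G, 'I', v) == sigma)
    for v in words:                       # Gamma(v, v) = Gamma(I, v)
        cons.append(block(G, v, v) == block(G, 'I', v))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == 'optimal'
\end{verbatim}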
A natural step towards studying different relaxations of the quantum set follows the spirit of the NPA hierarchy, similar to the idea by Pusey for bipartite steering scenarios \cite{pusey13}. One could then consider a hierarchy of moment matrices $\Gamma_n$, where $n$ relates to the length of the words in the set (\ref{theset}), which is now allowed to contain elements of the form $(b_1 \ldots b_jc_1 \ldots c_k | y_1 \ldots y_jz_1 \ldots z_k)$. In the case of quantum assemblages, such indices would relate to the following: \begin{widetext} \begin{align} &\Gamma(b_1 \ldots b_{j_1}c_1 \ldots c_{k_1} | y_1 \ldots y_{j_1} z_1 \ldots z_{k_1}, b'_1 \ldots b'_{j_2}c'_1 \ldots c'_{k_2} | y'_1 \ldots y'_{j_2} z'_1 \ldots z'_{k_2}) = \\ & \mathrm{tr}_{\mathrm{B}\mathrm{C}} \left(\mathbbm{1}_\mathrm{A} \, E^\dagger_{b_{j_1} | y_{j_1}} \ldots E^\dagger_{b_{1} | y_{1}} \, E^\dagger_{c_{k_1} |z_{k_1}} \ldots E^\dagger_{c_{1} | z_{1}} \, E_{b'_{1} | y'_{1}} \ldots E_{b'_{j_2} | y'_{j_2}} \, E_{c'_{1} | z'_{1}} \ldots E_{c'_{k_2} | z'_{k_2}} \, \rho_{\mathrm{ABC}} \right).\nonumber \end{align} \end{widetext} From the commutation relations between Bob's and Charlie's measurements arise further constraints that $\Gamma_n$ is required to satisfy. Note that the longer the words in $\mathcal{S}_n$ are, the more properties the moment matrix should satisfy. For each $n$, testing whether those properties are satisfied when some elements of the first row are set to be the conditional states on Alice's side (Eqs.~(\ref{frr_i}) to (\ref{frr_f})) is an SDP, and feasibility of level $n$ implies feasibility of level $m<n$. This last statement follows from the fact that every word in $\mathcal{S}_m$ is a word in $\mathcal{S}_n$; hence the constraints imposed in level $m<n$ are just a subset of those imposed in level $n$. Note also that when the input is a quantum assemblage, the SDP is feasible for any level $n$ by definition. Denote by $\mathcal{Q}_n$ the set of assemblages which satisfy the conditions of the level-$n$ SDP. Then, the following SDPs define a sequence of lower bounds to the Tsirelson bound of a steering functional: \begin{align} \mathrm{minimise} \quad & \beta_{\mathcal{Q}_n} = \mathrm{tr} \left( \sum_{bcyz} F_{bcyz} \, \sigma_{bc|yz} \right) \\ \mathrm{such~that} \quad & \left\{ \sigma_{bc|yz} \right\}_{bcyz} \in \mathcal{Q}_n. \end{align} By definition, these lower bounds satisfy $\beta_{\mathcal{Q}_m} \leq \beta_{\mathcal{Q}_n}$ whenever $m<n$. \section{Appendix B: Details for constructing a local model for all projective measurements} The second ingredient in our construction is a method for constructing qubit assemblages $\sigma_{bc|yz}$ that always give rise to quantum-realizable behaviours, that is, such that $p(abc|xyz) = \mathrm{tr}_\mathrm{A}(\Pi_{a|x}\sigma_{bc|yz})$ admits a quantum realization for any possible projective measurement $\Pi_{a|x}$ performed by Alice. To simplify the problem, we restrict to assemblages of real-valued qubit states, i.e. such that all conditional states lie in the $x$-$z$ plane of the Bloch sphere. It follows that we need only consider projective measurements in the $x$-$z$ plane (since the $y$ component identically vanishes), parametrised by \begin{equation}\label{meas_par} \Pi_{a|\theta} = \frac{\left( \openone + (-1)^a (\cos(\theta) \, X + \sin(\theta) \, Z) \right) }{2}, \end{equation} where $\theta \in [0, \pi)$, $a = 0,1$, and $X$ and $Z$ denote the corresponding Pauli matrices (the range $\theta \in [\pi, 2\pi)$ is covered by Alice relabelling her outcome $0 \leftrightarrow 1$).
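For concreteness, the snippet below constructs the measurement operators of Eq.~(\ref{meas_par}) and, anticipating the argument that follows, verifies numerically via a small linear program that a noisy measurement with visibility $\mu = \cos(\pi/8)$ lies in the convex hull of the eight POVM elements of the set $\mathcal{E}$ introduced below. The helper names are ours and purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

def proj(a, theta):
    # the projective measurements in the x-z plane parametrised above
    return (I + (-1)**a * (np.cos(theta) * X + np.sin(theta) * Z)) / 2

# the 8 POVM elements of E = {Pi_{a|theta_x}}, theta_x = x * pi / 4
octagon = [proj(a, x * np.pi / 4) for x in range(4) for a in (0, 1)]

def in_hull(target):
    # feasibility LP: target = sum_k c_k octagon[k], c_k >= 0, sum_k c_k = 1
    A = np.array([[M[i, j] for M in octagon]
                  for i in range(2) for j in range(2)])
    A = np.vstack([A, np.ones(8)])
    b = np.append(target.reshape(-1), 1.0)
    res = linprog(np.zeros(8), A_eq=A, b_eq=b, bounds=[(0, None)] * 8)
    return res.status == 0

mu = np.cos(np.pi / 8)
noisy = mu * proj(0, 0.3) + (1 - mu) * I / 2   # arbitrary angle theta = 0.3
print(in_hull(noisy))                          # True whenever mu <= cos(pi/8)
\end{verbatim}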
Ensuring that the behaviour $p(abc| \theta yz) = \mathrm{tr}_\mathrm{A}(\Pi_{a|\theta}\sigma_{bc|yz})$ admits a quantum realization is a difficult problem in general. Instead, we demand that $p(abc| \theta yz)$ is Bell local, that is, admits a decomposition of the form \begin{equation} p(abc| \theta yz) = \int d\lambda \, \pi(\lambda) p(a|\theta \lambda) p(b|y \lambda) p(c|z \lambda) \end{equation} where $\lambda$ denotes a shared local variable, distributed according to the density $\pi(\lambda)$ \cite{review}. Indeed, any behaviour which is Bell local is also realizable in quantum theory. Next, we construct an assemblage admitting a local model by adapting the ideas of Ref.~\cite{Bowles2015}. Specifically, consider the set of four measurements $\mathcal{E} = \{\Pi_{a|\theta_x} \}_{ax}$, where $\theta_x = x\pi/4$ and $x = 0,\ldots, 3$. Next take an assemblage $\sigma_{bc|yz}$ such that the behaviour $p(abc| \theta_x yz)= \mathrm{tr}_\mathrm{A}(\Pi_{a|\theta_x} \sigma_{bc|yz})$ is local. For a given assemblage, this can be easily verified using linear programming \cite{review}. Following Ref.~\cite{Bowles2015}, this implies that $\sigma_{bc|yz}$ is local for all noisy two-outcome projective measurements in the $x$-$z$ plane: \begin{align} \Pi_{a|\theta}(\mu) &= \mu \, \Pi_{a|\theta} + (1-\mu) \, \openone/2 \end{align} whenever $\mu \leq \cos(\pi/8)$. This follows from the fact that any operator $\Pi_{a|\theta}(\mu)$ can be written as a convex combination of the four measurements in $\mathcal{E}$. That is, for all $a$ and $\theta$, we can find coefficients $c_{a'x} \geq 0$ with $\sum_{a'x} c_{a'x} = 1$ such that \begin{equation} \Pi_{a|\theta}(\mu) = \sum_{a'=0}^1 \sum_{x=0}^3 c_{a'x} \Pi_{a'|\theta_x}. \end{equation} This can be seen geometrically; the Bloch vectors of the 8 POVM elements in $\mathcal{E}$ form an octagon in the $x$-$z$ plane of the Bloch sphere, and $\cos(\pi/8)$ is the radius of the largest circle that fits inside this octagon. Next, we note the following equality \begin{equation} \mathrm{tr}\left( \Pi_{a|\theta}(\mu) \, \sigma_{bc|yz} \right) = \mathrm{tr}\left( \Pi_{a|\theta} \, \sigma_{bc|yz}(\mu) \right) \end{equation} where \begin{equation}\label{noisy ass} \sigma_{bc|yz} (\mu)= \mu \, \sigma_{bc|yz} + (1-\mu) \, \mathrm{tr}\left( \sigma_{bc|yz} \right) \openone/2. \end{equation} That is, the statistics of noisy measurements on the assemblage $\sigma_{bc|yz}$ perfectly match the statistics of projective measurements on the noisy assemblage $\sigma_{bc|yz}(\mu)$. Hence, if $\sigma_{bc|yz}$ is local for the four measurements in $\mathcal{E}$, then $\sigma_{bc|yz}(\mu)$ is local for all projective measurements when $\mu \leq \cos(\pi/8)$. We have thus constructed an assemblage, $\sigma_{bc|yz}(\mu)$, which gives rise to behaviours that are Bell local and hence admit a quantum realization. \section{Appendix C: Details of constructing a POVM qutrit model from a projective qubit model} In this appendix we outline how, given a qubit assemblage that is local for all projective measurements, we can find a qutrit assemblage that is local for all POVMs. The idea for this construction comes from Ref.~\cite{Hirsch13}, which we apply to our scenario. First, assume we are given an example $\sigma_{bc|yz}$, a collection of qubit states that produces local behaviour for all projective measurements. Let us now regard $\sigma_{bc|yz}$ as a collection of qutrit states with support only on the qubit ($\ket{0}$, $\ket{1}$) subspace.
It follows then that the assemblage also produces local behaviours for all dichotomic projective qutrit measurements. This follows immediately, since any dichotomic projective qutrit measurement, when restricted to the qubit subspace (the only subspace where Alice's states have support), is a noisy dichotomic qubit measurement. This is a convex combination of projective measurements, and hence can be covered by the local model; see \cite{Hirsch13} for more details. Now, we apply protocol 2 of \cite{Hirsch13}: Without loss of generality we can restrict to POVMs with each element $E_a = \alpha_a \Pi_a$, for $\Pi_a$ a projector, and $\sum_a \alpha_a = 3$. Alice chooses the projector $\Pi_a$ with probability $\alpha_a/3$. She simulates the dichotomic measurement $\{\Pi_a,\openone-\Pi_a\}$ on $\sigma_{bc|yz}$. If the outcome corresponds to $\Pi_a$, she gives outcome $a$. Otherwise, she gives outcome $a'$ with probability $\bra{2} E_{a'} \ket{2}$. The total probability for Alice to give outcome $a$ is \begin{align*} \frac{\alpha_a}{3}\mathrm{tr}[\Pi_a \sigma_{bc|yz}] + \sum_{a'}\frac{\alpha_{a'}}{3}\mathrm{tr}[(\openone-\Pi_{a'})\sigma_{bc|yz}]\bra{2}E_a \ket{2} \\ = \frac{1}{3} \mathrm{tr}[E_a\sigma_{bc|yz}] + \frac{2}{3}\mathrm{tr}[\sigma_{bc|yz}]\bra{2}E_a\ket{2} \end{align*} which is the same as would be obtained by measuring the POVM $E_a$ on the assemblage $\sigma'_{bc|yz}$, where \begin{equation} \sigma'_{bc|yz} = \frac{1}{3}\sigma_{bc|yz} + \frac{2}{3}\mathrm{tr}[\sigma_{bc|yz}]\ket{2}\bra{2}. \end{equation} Thus, if Alice has a local model for all dichotomic projective measurements on $\sigma_{bc|yz}$, then she has a local model for all POVMs on $\sigma'_{bc|yz}$. Finally, we note that if Alice instead performs the local filter $F=\ket{0}\bra{0}+\ket{1}\bra{1}$, then $F\sigma'_{bc|yz}F^\dagger = \sigma_{bc|yz}/3$; that is, the filter (which succeeds with overall probability $1/3$) recovers the original qubit assemblage. Local filtering cannot convert an assemblage with a quantum realization to one without a quantum realization (otherwise quantum theory would not be closed under filtering). As a corollary, if the assemblage after filtering has no quantum realization, then the assemblage beforehand also cannot have a quantum realization. Thus, every example of a post-quantum qubit assemblage which has a local model for all projective measurements immediately leads, via the above construction, to a post-quantum qutrit assemblage which has a local model for all POVMs. \section{Appendix D: Details of numerical search algorithm} In this appendix we give details about how all the ingredients can be put together to search for non-trivial examples of post-quantum steering. We recall that we can restrict to searching for examples which are local for all projective measurements, since the final step of constructing an example which is local for all POVMs then follows analytically.
Our task is to find a steering functional with elements $F_{bcyz}$ and almost quantum bound $\beta_\mathcal{\widetilde{Q}}$, and a real-valued assemblage $\sigma_{bc|yz}$, such that the following two constraints are satisfied: \begin{align} \label{e:final constraints} & \mathrm{tr}\sum_{bcyz} F_{bcyz} \, \sigma_{bc|yz}^* < \beta_\mathcal{\widetilde{Q}}, \\ \label{cond2} & p(abc|\theta_xyz) = \mathrm{tr}\big( \Pi_{a|\theta_x} \, \sigma_{bc|yz} \big) \text{ is local} \quad \forall \, \Pi_{a|\theta_x} \in \mathcal{E}, \end{align} where $ \sigma_{bc|yz}^* = \sigma_{bc|yz}(\mu = \cos(\pi/8))$ as defined in Eq.~\eqref{noisy ass}. The noisy assemblage $\sigma_{bc|yz}^*$ is then the example we are looking for. From condition \eqref{e:final constraints} it follows that $\sigma_{bc|yz}^*$ does not admit a quantum realization. From condition \eqref{cond2} it follows that the statistics of any projective measurements on $\sigma_{bc|yz}^*$ give rise to a Bell local distribution, hence one realizable in quantum theory. In practice, whether there exists an assemblage $\sigma_{bc|yz}$ satisfying conditions \eqref{e:final constraints} and \eqref{cond2} for a given set of operators $F_{bcyz}$ can be checked by solving a semidefinite program (SDP). More precisely, we treat the left-hand side of condition \eqref{e:final constraints} as the objective function, while keeping \eqref{cond2} as a constraint, i.e. we solve \begin{align} \beta = \min_{\sigma^*_{bc|yz}} &\quad\mathrm{tr}\sum_{bcyz} F_{bcyz}\, \sigma_{bc|yz}^* \nonumber\\ \text{s.t.} &\quad \mathrm{tr}\big( \Pi_{a|\theta_x} \, \sigma_{bc|yz} \big) \text{ is local} \quad \forall \, \Pi_{a|\theta_x} \in \mathcal{E}. \end{align} The last part of the problem is to judiciously choose the operators $F_{bcyz}$. Here we employed the following method: (i) we randomly generate real-valued operators $F_{bcyz}$ and calculate the lower bound $\beta_\mathcal{\widetilde{Q}}$ on the Tsirelson bound, itself an SDP. (ii) we solve the SDP described above with $\mu = 1$. If $\beta \geq \beta_\mathcal{\widetilde{Q}}$ we abort and restart. If $\beta < \beta_\mathcal{\widetilde{Q}}$ we calculate $\mu$ such that $\mathrm{tr}\sum_{bcyz} F_{bcyz}\sigma_{bc|yz}(\mu) = \beta_\mathcal{\widetilde{Q}}$. If $\mu \leq \cos(\pi/8)$ we have the desired example. Otherwise we return to the beginning, applying standard gradient-descent methods in order to generate a new set of operators $F_{bcyz}$. This method was implemented in {\sc matlab} using {\sc cvx} \cite{cvx}. \begin{table}[t!] \begin{tabular}{ll} $\rho_\mathrm{A}^* = \left( \begin{array}{rr}0.3666 &-0.0896 \\-0.0896 &0.6334 \end{array} \right)$ & $\sigma_{00|00}^* = \left( \begin{array}{rr}0.1360 &-0.1257 \\ -0.1257 &0.1360 \end{array} \right)$ \\ $\sigma_{0|0}^\mathrm{B \, *} = \left( \begin{array}{rr}0.1464 &-0.1114 \\-0.1114 &0.1600 \end{array} \right)$ & $\sigma_{00|10}^* = \left( \begin{array}{rr}0.0803 &-0.0523 \\ -0.0523 &0.0982 \end{array} \right)$ \\ $\sigma_{0|1}^\mathrm{B \, *} = \left( \begin{array}{rr}0.2851 &-0.0586 \\-0.0586 &0.2473 \end{array} \right)$ & $\sigma_{00|11}^* = \left( \begin{array}{rr}0.2555 &-0.1192 \\-0.1192 &0.0709 \end{array} \right)$ \end{tabular} \caption{Example of post-quantum assemblage that cannot lead to post-quantum nonlocality (for arbitrary projective measurements).
\label{tab:ass}} \begin{tabular}{ll} $ F_A = \left( \begin{array}{rr}1.4622 &0.1773 \\ 0.1773 &-0.4622 \end{array} \right)$ & $F_{00} = \left( \begin{array}{cc}-0.1948 &0.5653 \\ 0.5653 &-0.7229 \end{array} \right)$ \\ $F^\mathrm{B}_{0} = \left( \begin{array}{cc}-0.2894 &0.2468 \\ 0.2468 &0.9767 \end{array} \right)$ & $F_{10} = \left( \begin{array}{cc} 0.5482 &-0.4270 \\ -0.4270 &-0.8690 \end{array} \right)$ \\ $F^\mathrm{B}_{1} = \left( \begin{array}{cc}-1.0943 &-0.4673 \\ -0.4673 &0.0648 \end{array} \right)$ & $F_{11} = \left( \begin{array}{cc}0.2875 & 1.0320 \\ 1.0320 &0.9182 \end{array} \right)$ \end{tabular} \caption{Operators defining an inequality of the form of Eq. (5) of the main text, which witnesses the fact that the assemblage given in Table \ref{tab:ass} is post-quantum.} \label{tab:ineq} \end{table} \section{Appendix E: Details about example of post-quantum steering}\label{ap:2} We give here the details of the example of post-quantum qubit steering without post-quantum nonlocality for projective measurements. This example then leads to the qutrit example without post-quantum nonlocality for all POVMs. The qubit assemblage $\sigma_{bc|yz}^*$ is given explicitly in Table I; the assemblage is represented graphically in Fig.~1 of the main text. Note that we present $ \sigma_{bc|yz}^*$ in a minimal representation, using the no-signalling and normalization conditions (4) (see main text), and symmetry under permutation of Bob and Charlie, i.e. $ \sigma_{bc|yz}^* = \sigma_{cb|zy}^*$. Moreover, in Table II, we give the operators $F_{bcyz}$ for constructing the steering functional of Eq. (5) of the main text. These operators are also given in a minimal representation, where $F_A = \sum_{yz} F_{11yz}$, $F^\mathrm{B}_{y} = \sum_{bz} (-1)^b F_{b1yz}$, $F^\mathrm{C}_{z} = \sum_{cy} (-1)^c F_{1cyz}$, and $F_{yz} = \sum_{bc} (-1)^{b+c} F_{bcyz}$. The quantity in Eq. (5) is then calculated as follows: \begin{equation} \beta = \mathrm{tr} \left( F_A \rho_A + \sum_y F^\mathrm{B}_{y} \sigma_{0|y}^\mathrm{B } + \sum_z F^\mathrm{C}_{z} \sigma_{0|z}^\mathrm{C } + \sum_{yz} F_{yz} \sigma_{00|yz} \right). \end{equation} To obtain the final example of a non-trivial post-quantum assemblage we apply the procedure of Appendix C to convert the above qubit assemblage into a qutrit assemblage, which is guaranteed to produce local behaviours for all POVMs whilst still being post-quantum. \end{document}
\section{Introduction} \label{introduction} The ultrarelativistic heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) have created partonic matter at extreme temperatures and energy densities, the quark-gluon plasma (QGP), which is governed by the theory of quantum chromodynamics (QCD). The first-principles lattice QCD calculation shows that the transition from hadronic to partonic matter at zero baryon chemical potential $\mu_B$ is a smooth crossover \cite{Aoki:2006we,Bellwied:2015rza,Bazavov:2018mes}. However, calculations of the phase transition in the QCD phase diagram at finite baryon chemical potential still have large uncertainties~\cite{Fischer:2014ata,Gao:2015kea,Fu:2019hdw}, especially regarding the conjectured endpoint of the first-order phase-transition boundary, the so-called QCD critical endpoint (CEP) \cite{Gavai:2008zr,Stephanov:2004wx,Mohanty:2009vb}, due to the famous sign problem \cite{Gavai:2003mf,deForcrand:2002hgr,Fodor:2001au}. To explore the nature of the QCD phase diagram, the beam energy scan (BES) program at RHIC is searching for the QCD critical point with Au$+$Au collisions over a large range of collision energies~\cite{Aggarwal:2010wy,Adamczyk:2013dal,Adamczyk:2014fia,Adare:2015aqk,Adamczyk:2017wsl,Adam:2020unf}. The fireballs created in Au$+$Au collisions at different energies freeze out at different points of the QCD phase diagram. Because certain singularities will appear at the CEP in the thermodynamic limit~\cite{ibook02}, we expect to observe nonmonotonic behaviors if the evolution trajectory of the colliding system is close enough to the CEP. For example, event-by-event fluctuations of various conserved quantities have been proposed as possible signatures of the existence of the CEP \cite{Koch:2005vg,Asakawa:2000wh,Asakawa:2009aj}, because they are proportional to the corresponding susceptibilities and correlation lengths. Many recent experimental results on net-proton fluctuations hint that a critical point might have been reached during the evolution of Au$+$Au collisions at low collision energies~\cite{Adamczyk:2013dal, Adam:2020unf,Luo:2017faz}, which serves as a main motivation for upcoming research projects such as those at FAIR in Germany, NICA in Russia, and HIAF in China. On the other hand, it is difficult to connect the thermal properties of static QCD matter with experimental measurements, since relativistic heavy-ion collisions involve several distinct stages of dynamical evolution. Studying the full evolution history of the thermodynamic properties of the QCD matter with a dynamical transport model may help bridge this gap~\cite{Zhang:2008zzk, Lin:2014tya}. In this work, we investigate the space-time evolution of the parton matter created in Au$+$Au collisions at different energies, including the transverse flow, effective temperature, and conserved-charge chemical potentials, by using the string melting version of a multiphase transport (AMPT) model~\cite{Lin:2004en}. The paper is organized as follows. Section~\ref{AMPT} briefly introduces the string melting version of the AMPT model and the improvements that we make. Comparisons of the space-time evolution of transverse flow at different collision energies are presented in Sec.~\ref{flow}. We then discuss the space-time evolution of the effective temperature and chemical potentials in Sec.~\ref{SPACE-TIME EVOLUTION}.
We show the trajectories of Au$+$Au collisions at different energies in the QCD phase diagram in Sec.~\ref{diagram}. We present the space-time evolution of the pressure anisotropy in Sec.~\ref{Pressureanisotropy} to discuss whether the systems are in equilibrium or not. Finally, a summary is given in Sec.~\ref{summary}. \section{A multiphase transport model including the nuclear thickness} \label{AMPT} The string melting version of the AMPT model uses fluctuating initial conditions from the heavy-ion jet interaction generator (HIJING) model~\cite{Wang:1991hta}. In this model, minijet partons and strings are produced from hard processes and soft processes, respectively. With the string melting mechanism, all parent hadrons from the fragmentation of the excited strings are converted into partons. The interactions among these partons are described by Zhang's parton cascade (ZPC) model \cite{Zhang:1997ej}, which includes elastic two-body scatterings based on the leading-order pQCD $gg \rightarrow gg$ cross section: \begin{equation} \frac{d\sigma}{dt}=\frac{9\pi\alpha^{2}_{s}}{2}\left(1+\frac{\mu^{2}}{s}\right)\frac{1}{(t-\mu^{2})^{2}}. \label{q1} \end{equation} In the above, $\alpha_{s}$ is the strong-coupling constant (taken as 0.33), while $s$ and $t$ are the usual Mandelstam variables. The effective screening mass $\mu$ is taken as a parameter in ZPC for the parton scattering cross section, and we set $\mu$ to 2.265 fm$^{-1}$, leading to a total cross section of about 3 mb for elastic scatterings in the default setting. The AMPT model implements a spatial quark coalescence model, which combines nearby freeze-out partons into mesons or baryons, to describe the transition from the partonic matter to the hadronic matter. The final-stage hadronic evolution is modeled by an extension of a relativistic transport (ART) model, including both elastic and inelastic scatterings for baryon-baryon, baryon-meson, and meson-meson interactions~\cite{Li:1995pra}. Our other parameters are taken to be the same as those in Refs.~\cite{Lin:2014tya,Ma:2016fve}, which can reasonably reproduce many experimental observables, such as rapidity distributions, $p_T$ spectra, and anisotropic flows \cite{Lin:2001zk,Chen:2004dv,Ma:2016fve}, for both Au$+$Au collisions at RHIC and Pb$+$Pb collisions at LHC energies. To study heavy-ion collisions at low energies, we have improved the string melting AMPT model by modeling the finite nuclear thickness, which has been shown to be important for nuclear collisions at lower energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. In our convention, the $x$ axis is chosen along the direction of the impact parameter $b$ from the target center to the projectile center, the $z$ axis is along the projectile direction, and the $y$ axis is perpendicular to both the $x$ and $z$ directions. We consider the moment when the projectile and target nuclei first contact each other as the starting time $t=0$, while the proper time $\tau$ is defined as $(t^2-z^2)^{1/2}$. The spatial density of nucleons inside the projectile or target follows the Woods-Saxon distribution.
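As a brief aside on the parton cascade above, Eq.~(\ref{q1}) can be integrated in closed form over $t \in [-s, 0]$, giving a total elastic cross section $\sigma = 9\pi\alpha_s^2/(2\mu^2)$ independent of $s$; a few lines of Python (our own check, not part of the AMPT code) confirm the quoted 3 mb:
\begin{verbatim}
import numpy as np

alpha_s = 0.33
mu = 2.265                     # screening mass in fm^-1

# integrating Eq. (1) over t in [-s, 0] gives 9 pi alpha_s^2 / (2 mu^2),
# independent of s for this regularized form
sigma = 9 * np.pi * alpha_s**2 / (2 * mu**2)   # in fm^2
print(sigma * 10)              # ~3.0 mb, since 1 fm^2 = 10 mb
\end{verbatim}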
As shown in Fig.~\ref{schematic_diagram}(a), for a nucleon inside a hard-sphere projectile located at an initial position of ($x_i, y_i, z_i$), the thickness length $l$ of the target that the projectile nucleon punches through can be calculated as follows: \begin{eqnarray} l(x_i,y_i,b)=2\sqrt{R^2-(x_i\pm b/2)^2-y_i^2}, \label{thickness} \end{eqnarray} where $R$ is the hard-sphere radius of the colliding nuclei, and $\pm$ applies to projectile or target nucleons, respectively. As shown in Fig.~\ref{schematic_diagram}(b), the time $t_e$ at which the projectile nucleon enters the target in the center-of-mass frame of a Au$+$Au collision can be calculated as follows: \begin{eqnarray} t_e(x_i,y_i,z_i,b)&=&\frac{\sqrt{R^2-b^2/4}-[l(x_i,y_i,b)/2 \pm z_i]}{2\mathrm{sinh}~y_{CM}}, \label{teq} \end{eqnarray} \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{schematic_diagram.eps} \caption{(Color online) The schematic diagrams of a Au$+$Au collision with an impact parameter $b$ in the $x$-$z$ plane. (a) Consider a projectile nucleon $N$ (small open circle) at a location of ($x_i, y_i, z_i$) at the starting time $t=0$; (b) the projectile nucleon enters the target nucleus at $t=t_e(x_i, y_i, z_i, b)$; (c) the wounded nucleon from the projectile produces parent hadrons at a location of ($x_H, y_H, z_H$) at $t=t_H(x_i, y_i, z_i, b)$; (d) the projectile nucleon leaves the target nucleus at $t=t_e(x_i, y_i, z_i, b)+d_t$.} \label{schematic_diagram} \end{figure} where $y_{CM}$ is the projectile rapidity in the center-of-mass frame. Since parent hadrons are produced by interactions between projectile and target nucleons, as shown in Fig.~\ref{schematic_diagram}(c), the production time of parent hadrons, $t_H$, is obtained by sampling according to a time profile based on the probability function~\cite{Lin:2017lcj}: \begin{equation} \frac{d^2E_T}{dy dt_H}=a_n [(t_H-t_e)(t_e+d_t-t_H)]^n \frac{dE_T}{dy}, \quad t_H\in [t_e, t_e+d_t], \label{time_profile} \end{equation} where we take the power as $n = 4$, $a_n = 1/d_t^{2n+1}/\beta(n+1, n+1)$ is the normalization factor with $\beta(a, b)$ the beta function, and $d_t=l/(2\mathrm{sinh}~y_{CM})$ is the duration time during which the projectile nucleon completely crosses the target nucleus. The parent hadrons produced by the same projectile or target nucleon are assumed to be produced at the same time $t_H$. Then the longitudinal coordinate of a parent hadron can be obtained as follows: \begin{equation} z_H=z_i\pm t_H\mathrm{sinh}~y_{CM}, \label{zstring} \end{equation} while its transverse coordinates ($x_H, y_H$) are set to the transverse positions of the projectile or target nucleon. Subsequently, the partons are generated by string melting after a formation time: \begin{equation} t_f=E_{H}/m^2_{T,H}, \label{tf } \end{equation} where $E_{H}$ and $m_{T,H}$ represent the energy and transverse mass of the parent hadron. The initial positions of partons from melted strings are calculated from those of their parent hadrons using straight-line trajectories. As a result, the initial condition of the partonic matter after considering the finite-thickness effect is used for the parton cascade simulations in this study. To study the thermodynamic properties of the partonic matter, in this study we focus only on its space-time evolution during the parton cascade.
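Since the profile in Eq.~(\ref{time_profile}) is, up to normalization, a Beta$(n{+}1, n{+}1)$ density stretched onto the interval $[t_e, t_e+d_t]$, the production time can be sampled directly; the following short sketch (our own illustration, not the AMPT implementation) shows this:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_t_H(t_e, d_t, n=4):
    # [(t - t_e)(t_e + d_t - t)]^n on [t_e, t_e + d_t] is a scaled
    # Beta(n + 1, n + 1) density, so sample x ~ Beta and stretch it
    return t_e + d_t * rng.beta(n + 1, n + 1)
\end{verbatim}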
Using the string-melting version of the AMPT model with the finite-thickness effect, 10 000 events of central Au$+$Au collisions ($0 - 5\%$ centrality, modeled with $b \leq 3$ fm) are generated for each energy ($\sqrt {s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, 7.7, 4.9, and 2.7 GeV), which can be provided by the RHIC, FAIR, and NICA facilities. \section{Results and discussions} \label{results} \subsection{Space-time evolution of transverse flow} \label{flow} \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{denx3.eps} \caption{(Color online) Proper-time evolution of parton density averaged over the transverse area of the overlap volume within space-time rapidity $|\eta_s|<0.5$ in $b=0$ fm Au$+$Au collisions at different energies.} \label{denx3} \end{figure} First, the densities of formed partons averaged over the transverse area of the overlap volume within space-time rapidity $|\eta_s|<0.5$ as functions of proper time in $b=0$ fm Au$+$Au collisions at different energies are shown in Fig.~\ref{denx3}. The nuclear transverse area $A_T$ \cite{Lin:2017lcj} is defined as: \begin{eqnarray}\label{transverse} A_T= \begin{cases} \pi R^2_A & {t\ge d_t^{nuclei}/2}\\ \pi R^2_A \left [ 1-(1-2t/d_t^{nuclei})^2\right ] & {t<d_t^{nuclei}/2} \end{cases}, \end{eqnarray} with $R_A=1.12A^{1/3}$ fm, $A=197$, and $d_t^{nuclei} = 2R_A/\mathrm{sinh}~y_{CM}$ the duration time for two nuclei of the same mass number $A$ with $b=0$ fm to cross each other in the center-of-mass frame. The density increases with the proper time at first, because more and more partons are produced. A higher density is reached at a higher collision energy. With the expansion of the fireball, the density then decreases gradually. Both the increase and the decrease become slower at lower collision energies, since the nuclei have a larger thickness at lower collision energies, which slows down the evolution, especially in the longitudinal direction. \begin{figure*}[htbp] \centering \includegraphics[width=1\linewidth]{tr_flow.eps} \caption{(Color online) Transverse flow component $\beta_x$ along the $x$ axis ($|y| < 0.5$ fm) as functions of $x$ and $\eta_s$ at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV (first row) and 7.7 GeV (second row).} \label{Tr_flow} \end{figure*} Meanwhile, the radial flow is calculated as $\vec{\beta}=\sum_{i}\vec{p}_i/\sum_{i}E_i$, where the sum over the index $i$ runs over all partons in the cell for all events of a given collision system. The flow component along the $x$ direction, $\beta_x$, as a function of the coordinate $x$ and space-time rapidity $\eta_s$ at different times in cells within $|y| < 0.5$ fm in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV and 7.7 GeV is shown in Fig.~\ref{Tr_flow}. We can see that, after averaging over many central events, the transverse flow is antisymmetric along the $x$ axis at each space-time rapidity. The flow is very small at the early time $\tau = 0.2$ fm/$c$ and then develops rather quickly, especially at larger $x$~\cite{Lin:2014tya}.
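For reference, the per-cell quantities used here and in the next subsection can be accumulated as in the following minimal sketch (our own notation, not the AMPT code), where the momenta of all partons falling into a given cell, over all events, are stacked into arrays:
\begin{verbatim}
import numpy as np

def cell_beta(p3, E):
    # collective flow of a cell: beta = sum_i p_i / sum_i E_i,
    # accumulated over all partons in the cell for all events;
    # p3 has shape (N, 3) and E has shape (N,)
    return p3.sum(axis=0) / E.sum()

def energy_momentum_tensor(p4, V):
    # T^{mu nu} = (1/V) sum_i p_i^mu p_i^nu / E_i, used in the next
    # subsection; p4 has shape (N, 4) with columns (E, px, py, pz)
    return np.einsum('im,in,i->mn', p4, p4, 1.0 / p4[:, 0]) / V
\end{verbatim}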
\begin{figure}[htbp] \centering\includegraphics[scale=0.4]{flow_t9.eps} \caption{(Color online) Proper time evolution of transverse flow component $\beta_x$ of partons within space-time rapidity $|\eta_s|<0.5$ in the cells at ($x, y$)=(1 fm, 0 fm) (filled symbols) and ($x, y$)=(7 fm, 0 fm) (open symbols) in central Au$+$Au collisions at different energies.} \label{flow_t} \end{figure} Figure~\ref{flow_t} shows the transverse flows of partons in the two selected cells at ($x, y$)=(1 fm, 0 fm) and ($x, y$)=(7 fm, 0 fm) within space-time rapidity $|\eta_s|<0.5$ as functions of proper time in central Au$+$Au collisions at different energies. The transverse flow is larger farther away from the center of the overlap volume of central collisions \cite{Lin:2014tya}. We see that the transverse flow increases with time for both the inner cell and the outer cell in the beginning. Because the parton density increases faster at a higher collision energy, the transverse flow also grows faster at a higher collision energy for both the inner and outer cells. However, compared with the parton density in Fig.~\ref{denx3}, the development of the transverse flow generally shows a time delay and is slower. \subsection{Space-time evolution of temperature and chemical potentials} \label{SPACE-TIME EVOLUTION} In the AMPT model, the energy-momentum tensor $T^{\mu \nu }$ can be calculated by averaging over particles and events in a volume $V$~\cite{Zhang:2008zzk}, i.e., \begin{equation} T^{\mu \nu }=\frac{1}{V}\sum_{i}\frac{p_{i}^{\mu }p_{i}^{\nu }}{E_{i}}. \end{equation} In the rest frame of a small volume cell, the energy density is given by $\epsilon = T^{00}$, while the pressure components are related to the energy-momentum tensor by $P_{x} = T^{11}$, $P_{y} = T^{22}$, and $P_{z} = T^{33}$. The net conserved charge number densities $n_{B}$, $n_{Q}$, and $n_{S}$ can be calculated for the given volume as well. Therefore, the corresponding chemical potentials $\mu_{B}$, $\mu_{Q}$, and $\mu_{S}$ and the temperature $T$ can be obtained by numerically solving Eqs.~(\ref{mu_T}) after the net conserved charge densities $n_{B}$, $n_{Q}$, and $n_{S}$ and the energy density $\epsilon$ are obtained from the AMPT model. Note that in this study we only extract the $\mu$ and $T$ values for the central cell, whose rest frame is assumed to be the A$+$A collision center-of-mass frame. \begin{figure}[htbp] \includegraphics[scale=0.37]{n_epsilon.eps} \caption{(Color online) Proper-time evolution of net baryon number density $n_{B}$ (first row), net electric charge density $n_{Q}$ (second row), net strangeness number density $n_{S}$ (third row), and energy density $\epsilon$ (fourth row) for the central cell in central Au$+$Au collisions at 200 GeV (left column), 27 GeV (middle column), and 4.9 GeV (right column) with (solid) and without (dashed) including the finite nuclear thickness.}\label{nB_epsilon} \end{figure} Figure~\ref{nB_epsilon} shows the proper-time evolution of the net baryon number density $n_{B}$, net electric charge density $n_{Q}$, net strangeness number density $n_{S}$, and energy density $\epsilon$ for the central cell, defined as the cell within ($|x| < 0.5$ fm, $|y| < 0.5$ fm) and the space-time rapidity range of $|\eta_s| < 0.5$, in central Au$+$Au collisions at three selected beam energies from the AMPT-SM model. At the top RHIC energy of 200 GeV, the results with and without the finite nuclear thickness are almost the same \cite{Lin:2017lcj, Mendenhall:2020fil}.
With the decrease of the beam energy, the peak energy and charge densities are reached later due to the longer time that the two nuclei take to cross each other. Therefore, it is important to consider the finite nuclear thickness effect when simulating heavy-ion collisions at low beam energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. Note that we show the results with the finite nuclear thickness effect in the rest of this paper, unless stated otherwise. In addition, we see that the net strangeness number density can be negative at low energies in the central cell. This is because of the large baryon density, which leads to most $s$ quarks being in $\Lambda$ baryons but most $\bar{s}$ quarks in kaons. Since the quark formation time is inversely related to the parent hadron's transverse mass in AMPT's string melting, $s$ from $\Lambda$ has a smaller formation time than $\bar{s}$ from K, which produces a negative $n_S$ at early times. \begin{figure}[htbp] \includegraphics[width=1\linewidth]{contours.eps} \caption{(Color online) Contour plots of the effective temperature from Boltzmann statistics as a function of the $x$ coordinate and space-time rapidity $\eta_{s}$ at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV (top row) and 7.7 GeV (bottom row) for the parton matter within $|y|<$0.5 fm.}\label{contours} \end{figure} The two-dimensional (2D) distributions of the extracted local temperature from Boltzmann statistics as functions of the coordinate $x$ and space-time rapidity at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 and 7.7 GeV are shown in Fig.~\ref{contours}. We can see that the highest temperature is reached at the center of the overlap region after the two nuclei overlap completely ($\tau \approx $ 0.2 and 4 fm/$c$ for 200 and 7.7 GeV, respectively). After that moment, the temperature decreases with the evolution of the expanding system. \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{T_mub.eps} \caption{(Color online) Proper time evolution of (a) baryon chemical potential $\mu_{B}$ and (b) temperature $T$ for the central cell in central Au$+$Au collisions at different energies.} \label{T_mub} \end{figure} The proper-time evolutions of the baryon chemical potential $\mu_{B}$ and temperature $T$ for the central cell in central Au$+$Au collisions at different beam energies from the AMPT-SM model are shown in Figs.~\ref{T_mub}(a) and \ref{T_mub}(b), respectively. We can see that both the baryon chemical potential and the temperature increase with time at first and then decrease with time, which indicates that the collision system is first compressed and heated and then becomes dilute and cools down due to the expansion. However, the energy dependences of the baryon chemical potential and temperature are different. Figure~\ref{T_mub}(b) shows that a higher temperature is reached at a higher collision energy; in contrast, the highest baryon chemical potential is achieved at an intermediate energy of $\sqrt{s_{NN}}$ = 7.7 GeV, as shown in Fig.~\ref{T_mub}(a). In general, the time evolution at lower energies is slower than that at higher energies due to the influence of the finite nuclear thickness.
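Although Eqs.~(\ref{mu_T}) are given in the appendices and not reproduced here, the structure of the numerical inversion can be illustrated with a simplified sketch that assumes massless $u$, $d$, $s$ quarks (degeneracy 6 each), gluons (degeneracy 16), and Boltzmann statistics in natural units (GeV); all function names are ours and purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def species(d, mu, T):
    # number and energy density of one massless Boltzmann species
    n = d * np.exp(mu / T) * T**3 / np.pi**2
    return n, 3.0 * n * T

def densities(T, muB, muQ, muS):
    # quark chemical potentials from the (B, Q, S) charges
    mus = {'u': muB/3 + 2*muQ/3, 'd': muB/3 - muQ/3,
           's': muB/3 - muQ/3 - muS}
    eps = species(16, 0.0, T)[1]                 # gluons
    net = {}
    for f, mu in mus.items():
        nq, eq = species(6, mu, T)               # quarks
        nqb, eqb = species(6, -mu, T)            # antiquarks
        net[f] = nq - nqb
        eps += eq + eqb
    nB = (net['u'] + net['d'] + net['s']) / 3
    nQ = 2*net['u']/3 - net['d']/3 - net['s']/3
    nS = -net['s']
    return np.array([eps, nB, nQ, nS])

def solve_T_mu(eps, nB, nQ, nS, guess=(0.3, 0.1, 0.0, 0.0)):
    # invert (eps, nB, nQ, nS) -> (T, muB, muQ, muS); GeV units,
    # so densities in fm^-3 must first be converted via (hbar c)^3
    target = np.array([eps, nB, nQ, nS])
    return fsolve(lambda x: densities(*x) - target, guess)
\end{verbatim}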
\begin{figure}[htbp] \centering\includegraphics[scale=0.4]{muq_mus.eps} \caption{(Color online) Proper-time evolution of (a) the strangeness chemical potential $\mu_{S}$ and (b) the electric charge chemical potential $\mu_{Q}$ for the central cell in central Au$+$Au collisions at different energies.} \label{muq_nus} \end{figure} The proper-time evolutions of the chemical potentials of strangeness $\mu_{S}$ and electric charge $\mu_{Q}$ for the central cell in central Au$+$Au collisions at different beam energies from the AMPT-SM model are shown in Figs.~\ref{muq_nus}(a) and \ref{muq_nus}(b), respectively. We obtain a positive $\mu_{S}$ but a negative $\mu_{Q}$. The $\mu_{S}$ is seen to be roughly proportional to $\mu_{B}$, i.e., $\mu_S \approx \mu_B/3$, while the magnitude of $\mu_Q$ is very small. We observe that the magnitudes of the two chemical potentials increase with time at first and then decrease with time, following a trend similar to that of $\mu_B$. \subsection{Trajectories in the QCD phase diagram} \label{diagram} \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{phase_diagram1.eps} \caption{(Color online) AMPT results on the average trajectories of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram. Three cases are compared: (I) $\mu_Q = 0$, $\mu_S = 0$, $m_q = 0$ (open symbols); (II) $\mu_Q \ne 0$, $\mu_S \ne 0$, $m_q = 0$ (half open symbols); (III) $\mu_Q \ne 0$, $\mu_S \ne 0$, $m_q \ne 0$ (filled symbols). The black curve shows the crossover phase boundary with the critical endpoint obtained from the functional renormalization group approach with $N_f = 2+1$ \cite{Fu:2019hdw}. The corresponding lifetime during which each trajectory stays in the QGP phase is also shown. } \label{phase_diagram} \end{figure} In Fig.~\ref{phase_diagram}, we present the event-averaged evolution trajectory of the central cell of the partonic matter produced in central Au$+$Au collisions at different beam energies, from the moment when the baryon chemical potential reaches its maximum value to the moment when the trajectory reaches the crossover curve in the QCD phase diagram of temperature and baryon chemical potential. Note that the crossover phase boundary is obtained from the functional renormalization group (FRG) method with $N_f = 2+1$, which agrees well with the phase boundary from lattice QCD \cite{Fu:2019hdw}. From the filled symbols, which represent the full calculation in which all chemical potentials and quark masses are included, we find that the partonic stage can last 3.4--4.8 fm/$c$ if one counts the time during which the system stays above the phase boundary, which is consistent with previous AMPT results for mid-central Au$+$Au collisions~\cite{Chen:2009cju} but longer than the lifetime for the matter averaged over the transverse area from a semi-analytical calculation \cite{Mendenhall:2021maf}. If we take the location of the critical endpoint at ($T_{CEP}, \mu_{B_{CEP}}$) = (107, 635) MeV from the FRG calculation, beam energies lower than 4.9 GeV~\cite{Fu:2019hdw, Andronic:2017pug} seem to be the most promising for reaching the CEP, which could be accessed in fixed-target experiments at RHIC. Note that it has been found that the chemical and kinetic freeze-out parameters extracted from the AMPT model agree with the RHIC experimental measurements~\cite{Wang:2020wvu}.
We further study the influences of $\mu_Q$, $\mu_S$, and the quark current mass $m_q$ on the event-averaged evolution trajectories (see Appendix~\ref{Boltzmann statistics}), as shown by the half-open and open symbols in Fig.~\ref{phase_diagram}. We can see that the influence of the quark mass is so small that the filled and half-open symbols almost overlap, because the current quark masses we use here are very small compared with the temperature and baryon chemical potential. However, we observe a large difference between the filled (or half-open) and open symbols, which indicates that $\mu_Q$ and $\mu_S$ are important in driving the evolution of the system. \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{phase_diagram_statistics.eps} \caption{(Color online) The average trajectory of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram of temperature versus baryon chemical potential from Boltzmann statistics (filled symbols) and the quantum statistics (open symbols). } \label{phase_diagram_statistics} \end{figure} Furthermore, we check whether different statistics (see Appendices~\ref{Boltzmann statistics} and \ref{Quantum statistics}) can result in different trajectories in the QCD phase diagram. We compare the results from Boltzmann statistics (filled symbols) and quantum statistics (open symbols) in Fig.~\ref{phase_diagram_statistics}. We can see that, with the decrease of collision energy, the difference between the two trajectories from the two statistics becomes larger. In general, a higher $\mu_B$ is obtained with quantum statistics than with Boltzmann statistics, since Pauli exclusion begins to play a role as $\mu_B$ increases, an effect that is absent in Boltzmann statistics. Because the AMPT model assumes Boltzmann statistics, the results in the rest of this paper are presented using Boltzmann statistics. \begin{figure}[h] \centering\includegraphics[scale=0.35]{phase_diagram_thickness.eps} \caption{(Color online) The average trajectory of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram from Boltzmann statistics with (filled symbols) and without (open symbols) including the finite nuclear thickness.} \label{phase_diagram_thickness} \end{figure} In addition, the finite thickness of the nuclei is expected to affect the evolution trajectories in the QCD phase diagram, especially at low energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. In Fig.~\ref{phase_diagram_thickness} we compare the average trajectories with and without including the finite thickness for the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram, based on the full calculation with Boltzmann statistics. We do not see any obvious change of the evolution trajectory at the top RHIC energy, but the difference becomes more and more significant with the decrease of collision energy. At lower energies, the results without the finite-thickness effect start at a much higher temperature and a larger baryon chemical potential. For example, when the finite-thickness effect is considered, the trajectory for 2.7 GeV falls below the phase-transition boundary and thus does not appear. Therefore, it is clearly necessary to properly include the finite nuclear thickness effect, especially for simulating heavy-ion collisions at low beam energies.
\begin{figure}[htbp] \centering\includegraphics[scale=0.4]{byevent.eps} \caption{(Color online) AMPT results on event-by-event trajectories of the central cell in central Au$+$Au collisions at different beam energies in the QCD phase diagram.} \label{byevent} \end{figure} We note that the above results come from the average of 10 000 central Au$+$Au events. However, event-by-event fluctuations cannot be neglected, and these fluctuations could affect the search for the CEP in the QCD phase diagram. Figure~\ref{byevent} shows the event-by-event trajectories of central Au$+$Au collisions at different beam energies from the AMPT-SM model. To suppress the effect of volume fluctuations, a multiplicity cut is further applied: we divide all the central events into 100 multiplicity bins and only use the events in the middle bin around the average. Even so, we can see that the fluctuation of the evolution trajectories is still large, especially at high energies, which could be due to larger volume fluctuations at higher energies. It should be noted that the QGP created in high-energy heavy-ion collisions, which may consist of gluons and quarks in or near chemical and thermal equilibrium, should be governed by nonperturbative QCD interactions, which are missing in our model. Furthermore, the method that we use to extract the temperature and baryon chemical potential only works, in principle, for a noninteracting parton system. Our extraction method assumes that all partons in the cell are in full thermal and chemical equilibrium~\cite{Lin:2014tya}; therefore, the extracted temperature and chemical potentials are effective values if the system is only in partial thermal and/or chemical equilibrium. In addition, we focus on the central space-time rapidity and only study the partonic matter, without the subsequent phase transition and hadronic evolution. \subsection{Equilibrium or nonequilibrium} \label{Pressureanisotropy} In the central cell of central Au$+$Au collisions, due to the cylindrical symmetry around the beam axis, the two transverse pressure components $P_{x}$ and $P_{y}$ are equal. Therefore, the transverse pressure can be defined as $P_{T} = (P_{x}+P_{y})/2$~\cite{Zhang:2008zzk}, while the longitudinal pressure $P_{L}$ is just $P_{z}$. For a system in thermal equilibrium, the pressure must be isotropic, satisfying the relation $P_{T} = P_{L}=P$; otherwise, we define the total pressure as $P = (P_{x}+P_{y}+P_{z})/3$. Therefore, a pressure anisotropy parameter, $P_{L}/P_{T}$, is defined to describe the degree of pressure anisotropy of the system. The closer the value of $P_{L}/P_{T}$ is to unity, the closer the system is to thermal equilibrium. \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{PLPT.eps} \caption{(Color online) AMPT results for the time evolution of the pressure anisotropy parameter when its temperature and baryon chemical potential are above (filled symbols) and below (dotted curves) the phase boundary in the QCD phase diagram in the central cell in central Au$+$Au collisions at different beam energies.} \label{PLPT} \end{figure} Figure~\ref{PLPT} shows how the pressure anisotropy parameter in the central cell evolves with proper time in central Au$+$Au collisions at different beam energies. For Au$+$Au collisions at 200 GeV, we can see that $P_{L}/P_{T}$ keeps increasing but still cannot reach unity up to 5 fm/$c$.
This indicates that, even at the top RHIC energy, the central cell of the system actually does not reach thermal equilibrium when it arrives at the phase boundary in the AMPT model, which is consistent with previous results~\cite{Zhang:2008zzk, Lin:2014tya}. At lower energies, $P_{L}/P_{T}$ first increases to a peak, then decreases into a valley, and finally increases gradually due to the finite nuclear thickness. However, none of the systems reaches thermalization during the partonic stage. This is indeed different from the equilibrium evolution assumed in hydrodynamical models. \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{Pressure.eps} \caption{(Color online) AMPT results on the time evolution of (a) the pressure from the three diagonal components of the energy-momentum tensor ($P_{DC}$; open symbols) and the pressure from the Boltzmann statistical model ($P_{Boltzmann}$; filled symbols) in the central cell in central Au$+$Au collisions at different energies, and (b) the ratio of $P_{DC}$ to $P_{Boltzmann}$.} \label{Pressure} \end{figure} The total pressure can also be extracted from the Boltzmann statistical model via: \begin{eqnarray} P(T)=\sum_i d_i \int \frac{d^3p}{(2\pi)^3}\frac{p^2}{3E_i(p, T)}f_B(p, T), \label{p_Boltzmann} \end{eqnarray} where $d_i$ is the degeneracy of the parton species, $f_B(p, T)$ is the Boltzmann distribution function, and $T$ is the temperature extracted from the Boltzmann statistical model. Figure~\ref{Pressure} compares the pressure from the three diagonal components of the energy-momentum tensor ($P_{DC}$) with that from the Boltzmann statistical model ($P_{Boltzmann}$) in the central cell in central Au$+$Au collisions at different energies. One finds that they differ, especially at earlier times and lower energies, which indicates that the system is farther from equilibrium there. \begin{figure}[htbp] \centering\includegraphics[scale=0.4]{T_comparison.eps} \caption{(Color online) AMPT results on the time evolution of the effective temperature extracted from the transverse (dotted curves) and the three (dashed curves) diagonal components of the energy-momentum tensor, and from the Boltzmann statistical model (solid curves), in the central cell in central Au$+$Au collisions at (a) 200 GeV, (b) 27 GeV, (c) 11.5 GeV, and (d) 4.9 GeV.} \label{T_comparison} \end{figure} The effective temperature can also be defined locally by the ratio between the average of the diagonal components of the energy-momentum tensor and the density of all particles~\cite{Sorge:1995pw}. The effective temperatures extracted from the diagonal components of the energy-momentum tensor and from the Boltzmann statistical model in the central cell are shown in Fig.~\ref{T_comparison}. One can see that the effective temperatures extracted from the diagonal components of the energy-momentum tensor differ from our temperature, especially at lower energies, although they show consistent trends. This is not only due to the nonequilibrium of the system but also because our temperature extraction considers the chemical potentials of conserved charges, especially the baryon chemical potential. In this sense, we should emphasize again that, since the parton systems in Au$+$Au collisions at different energies from the AMPT model are not in complete equilibrium, the thermodynamic properties that we extracted above can only be approximate.
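For completeness, Eq.~(\ref{p_Boltzmann}) and the effective temperature defined above can be evaluated numerically as in the short sketch below (our own helper names; chemical potentials are omitted from the pressure integral, as in Eq.~(\ref{p_Boltzmann})):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pressure_boltzmann(T, plist):
    # the pressure formula above: P = sum_i d_i * integral of
    # d^3p/(2 pi)^3 * p^2/(3E) * exp(-E/T); plist holds
    # (degeneracy, mass) pairs, natural units
    P = 0.0
    for d, m in plist:
        f = lambda p: p**4 / (6 * np.pi**2 * np.sqrt(p**2 + m**2)) \
                      * np.exp(-np.sqrt(p**2 + m**2) / T)
        P += d * quad(f, 0.0, 50.0 * T)[0]
    return P

def effective_temperature(Px, Py, Pz, n):
    # ratio of the averaged diagonal components of T^{mu nu}
    # to the particle density, following the definition above
    return (Px + Py + Pz) / (3.0 * n)
\end{verbatim}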
\section{Summary} \label{summary} We have studied the space-time evolution of the parton matter produced in central Au$+$Au collisions at different collision energies using the AMPT model with string melting and the finite nuclear thickness effect. The space-time evolutions of the parton density and transverse flow are first presented for different collision energies. Then we extract the effective temperature and chemical potentials of the partons in the central cell based on Boltzmann statistics and quantum statistics. The temperature and baryon chemical potential first increase and then decrease with time, but their dependences on the collision energy are opposite. By investigating the evolution of the partonic matter created in Au$+$Au collisions from 2.7 to 200 GeV, we obtain the evolution trajectories in the QCD phase diagram. The results indicate that the partonic state in the central cell exists for 3.4--4.8 fm/$c$ over this wide range of energies, and that the trajectory depends on the statistics used and on whether the finite nuclear thickness is considered. We observe that the event-by-event trajectories fluctuate widely in the phase diagram. Moreover, the evolution of the pressure anisotropy indicates that only partial thermalization can be achieved by the time the partonic systems reach the predicted QCD phase boundary. Further studies of the evolution and the thermodynamic properties of the matter in heavy-ion collisions are indispensable for studying the QCD phase structure and for the search for the critical point in experiments. \begin{acknowledgments} We thank Todd Mendenhall for checking the results in the Appendices. This work is supported in part by the National Natural Science Foundation of China under Grants No. 12147101, No. 11961131011, No. 11890710, No. 11890714, and No. 11835002, the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34030000, and the Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2020B0301030008 (H.-S. W. and G.-L.M.), the National Science Foundation under Grant No. PHY-2012947 (Z.-W.L.), and the National Natural Science Foundation of China under Contract No. 11775041 (W.-j. F.).  \end{acknowledgments}
\section{Introduction} Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version. \subsection{Language} All manuscripts must be in English. \subsection{Dual submission} Please refer to the author guidelines on the ICCV 2017 web page for a discussion of the policy on dual submissions. \subsection{Paper length} For ICCV 2017, the rules about paper length have changed, so please read this section carefully. Papers, excluding the references section, must be no longer than eight pages in length. One additional page containing {\em only} cited references is allowed, for a total maximal length of nine pages. Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven. \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the \verb'\iccvfinalcopy' command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g.\ this line is $095.5$), although in most cases one would expect that the approximate location will be adequate. \subsection{Mathematics} Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description of how to write mathematics: \url{http://www.pamitc.org/documents/mermin.pdf}. \subsection{Blind review} Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for techreports.) 
Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith; it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper just asking to be rejected: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an acceptable paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \etal [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors14} as additional material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324, Supplied as additional material {\tt fg324.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors14b}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the ICCV70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \etal. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... 
\end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \etal, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. FAQ: Are acknowledgements OK? No. Leave them for the final copy. \begin{figure}[t] \begin{center} \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} \end{center} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \eg, meaning ``for example'', should not be a sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided \verb'\eg' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.) However, use it only when there are three or more authors. Thus, the following is correct: `` Frobnication has been trendy lately. It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. If you use the \verb'\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \etal. For this citation style, keep multiple citations in numerical (not chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to \cite{Alpher02,Alpher03,Authors14}. \begin{figure*} \begin{center} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \end{center} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \section{Formatting your paper} All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page. \subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high. Page numbers should appear in the footer, centered and .75 inches from the bottom of the page, and numbering should start at your assigned page number rather than the 4321 in the example. To do this, find the line (around line 23) \begin{verbatim} \setcounter{page}{4321} \end{verbatim} where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the first page being empty on line 46 \begin{verbatim} %\thispagestyle{empty} \end{verbatim} \subsection{Type-style and fonts} Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centered. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. \subsection{Footnotes} Please use footnotes\footnote {This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include the name(s) of editors of referenced books. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Frobnability \\ \hline\hline Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \hline \end{tabular} \end{center} \caption{Results. Ours is better.} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.
Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \subsection{Color} Please refer to the author guidelines on the ICCV 2017 web page for a discussion of the use of color in your document. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. {\small \bibliographystyle{ieee} \section{Introduction} While category-level classification and detection from images have recently experienced a tremendous leap forward thanks to deep learning, the same has not yet happened for what concerns 3D model localization and 6D object pose estimation. In contrast to large-scale classification challenges such as PASCAL VOC \cite{Everingham2014} or ILSVRC \cite{Russakovsky2015}, the domain of 6D pose estimation requires instance detection of known 3D CAD models with high precision and accurate poses, as demanded by applications in the context of augmented reality and robotic manipulation. Most of the best performing 3D detectors follow a view-based paradigm, in which a discrete set of object views is generated and used for subsequent feature computation \cite{Ulrich2012, Hinterstoisser2012}. During testing, the scene is sampled at discrete positions, features computed and then matched against the object database to establish correspondences among training views and scene locations. Features can either be an encoding of image properties (color gradients, depth values, normal orientations) \cite{Hinterstoisser2012a, Hodan2015, Kehl2015} or, more recently, the result of learning \cite{Brachmann2014, Tejani2014, Brachmann2016, Doumanoglou2016, Kehl2016a}. In either case, the accuracy of both detection and pose estimation hinges on three aspects: (1) the coverage of the 6D pose space in terms of viewpoint and scale, (2) the discriminative power of the features to tell objects and views apart, and (3) the robustness of matching towards clutter, illumination and occlusion. CNN-based category detectors such as YOLO \cite{Redmon2016} or SSD \cite{Liu2016} have shown terrific results on large-scale 2D datasets. Their idea is to invert the sampling strategy such that scene sampling is no longer a set of discrete input points leading to continuous output. Instead, the input space is dense on the whole image and the output space is discretized into many overlapping bounding boxes of varying shapes and sizes. This inversion allows for smooth scale search over many differently-sized feature maps and simultaneous classification of all boxes in a single pass. In order to compensate for the discretization of the output domain, each bounding box regresses a refinement of its corners. The goal of this work is to develop a deep network for object detection that can accurately deal with 3D models and 6D pose estimation, using an RGB image as the only input at test time.
To this end, we bring the concept of SSD over to this domain with the following contributions: (1) a training stage that makes use of synthetic 3D model information only, (2) a decomposition of the model pose space that allows for easy training and handling of symmetries, and (3) an extension of SSD that produces 2D detections and infers proper 6D poses. We argue that in most cases, color information alone can already provide close to perfect detection rates with good poses. Although our method does not need depth data, it is readily available with RGB-D sensors and almost all recent state-of-the-art 3D detectors make use of it for both feature computation and final pose refinement. We will thus treat depth as an optional modality for hypothesis verification and pose refinement and will assess the performance of our method with both 2D and 3D error metrics on multiple challenging datasets for the case of RGB and RGB-D data. Through experimental results on multiple benchmark datasets, we demonstrate that our color-based approach is competitive with respect to state-of-the-art detectors that leverage RGB-D data or can even outperform them, while being many times faster. Indeed, we show that the prevalent trend of overly relying on depth for 3D instance detection is not justified when using color correctly. \begin{figure*} \includegraphics[width=17.5cm]{pipeline2.png} \caption{Schematic overview of the SSD-style network prediction. We feed our network with a $299\times299$ RGB image and produce six feature maps at different scales from the input image using branches from InceptionV4. Each map is then convolved with trained prediction kernels of shape (4 + C + V + R) to determine object class, 2D bounding box as well as scores for possible viewpoints and in-plane rotations that are parsed to build 6D pose hypotheses. Here, C denotes the number of object classes, V the number of viewpoints and R the number of in-plane rotation classes. The other 4 values are utilized to refine the corners of the discrete bounding boxes to tightly fit the detected object.} \label{fig:network} \end{figure*} \section{Related work} We will first focus on recent work in the domain of 3D detection and 6D pose estimation before taking a closer look at SSD-style methods for category-level problems. To cover the upper hemisphere of one object with a small degree of in-plane rotation at multiple distances, the authors in \cite{Hinterstoisser2012} need 3115 template views over contour gradients and interior normals. Hashing of such views has been used to achieve sub-linear matching complexity \cite{Kehl2015, Hodan2015}, but this usually trades speed for accuracy. Related scale-invariant approaches \cite{Hodan2015, Brachmann2014, Tejani2014, Doumanoglou2016, Kehl2016a} employ depth information as an integral part for either feature learning or extraction, thus avoiding scale-space search and cutting down the number of views by around an order of magnitude. Since they require depth to work, they can fail when depth is missing or erroneous. While scale can be inferred with RGB-D data, there has not yet been any convincing work to eradicate the requirement of in-plane rotated views. Rotation-invariant methods are based on local keypoints in either 2D \cite{Yi2016} or 3D \cite{Drost2010,Birdal2015,Tombari2010} by explicitly computing or voting for an orientation or a local reference frame, but they fail for objects of poor geometry or texture.
Although rarely mentioned, all of the view-based methods cover only a very small, predefined 6D pose space. Placing the object differently, \eg on its head, would lead to failure if this view had not been specifically included during training. Unfortunately, additional views increase computation and add to overall ambiguity in the matching stage. Even worse, for all discussed methods, scene sampling is crucial. If too coarse, objects of smaller scale can be missed, whereas a fine-grained sampling increases computation and often leads to more false positive detections. Therefore, we explore a path similar to works on large-scale classification where dense feature maps on multiple scales have produced state-of-the-art results. Instead of relying on classifying proposed bounding boxes \cite{Girshick, He2015, Lin2016}, whose performance hinges on the proposals' quality, recent single-shot detectors \cite{Redmon2016, Liu2016} classify a (large) discrete set of fixed bounding boxes. This streamlines the network architecture and gives freedom to the a-priori placement of boxes. As for works regressing the pose from RGB images, the related works of \cite{Poirson2016,Mousavian2016} recently extended SSD to include pose estimates for categories. \cite{Mousavian2016} infers 3D bounding boxes of objects in urban traffic and regresses 3D box corners and an azimuth angle, whereas \cite{Poirson2016} introduces an additional binning of poses to express not only the category but also a notion of local orientation such as 'bike from the side' or 'plane from below'. The difference from our work is that they train on real images to predict poses in a very constrained subspace. Instead, our domain demands training on synthetic model-based data and requires encompassing the full 6D pose space to accomplish tasks such as grasping or AR. \section{Methodology} The input to our method is an RGB image that is processed by the network to output localized 2D detections with bounding boxes. Additionally, each 2D box is provided with a pool of the most likely 6D poses for that instance. To represent a 6D pose, we parse the scores for viewpoint and in-plane rotation that have been inferred from the network and use projective properties to instantiate 6D hypotheses. In a final step, we refine each pose in every pool and select the best after verification. This last step can either be conducted in 2D or optionally in 3D if depth data is available. We now present each part in more detail. \subsection{Network architecture} Our base network is derived from a pre-trained InceptionV4 instance \cite{Szegedy2016} and is fed with a color image (resized to $299 \times 299$) to compute feature maps at multiple scales. In order to get our first feature map of dimensionality $71 \times 71 \times 384$, we branch off before the last pooling layer within the stem and append one 'Inception-A' block. Thereafter, we successively branch off after the 'Inception-A' blocks for a $35 \times 35 \times 384$ feature map, after the 'Inception-B' blocks for a $17 \times 17 \times 1024$ feature map and after the 'Inception-C' blocks for a $9 \times 9 \times 1536$ map.\footnote{We changed the padding of Inception-B s.t. the next block contains a map with odd dimensionality to always contain a central position.} To cover objects at larger scale, we extend the network with two more parts. First, a 'Reduction-B' followed by two 'Inception-C' blocks to output a $5 \times 5 \times 1024$ map.
Second, one 'Reduction-B' and one 'Inception-C' to produce a $3 \times 3 \times 1024$ map. From here we follow the paradigm of SSD. Specifically, each of these six feature maps is convolved with prediction kernels that are supposed to regress localized detections from feature map positions. Let $(w_s, h_s, c_s)$ be the width, height and channel depth at scale $s$. For each scale, we train a $3 \times 3 \times c_s$ kernel that provides for each feature map location the scores for object ID, discrete viewpoint and in-plane rotation. Since this grid introduces a discretization error, we create $B_s$ bounding boxes at each location with different aspect ratios. Additionally, we regress a refinement of their four corners. If $C, V, R$ are the numbers of object classes, sampled viewpoints and in-plane rotations, respectively, we produce a $(w_s, h_s, B_s \times (C + V + R + 4))$ detection map for the scale $s$. The network has a total number of 21222 possible bounding boxes in different shapes and sizes. While this might seem high, the actual runtime of our method is remarkably low thanks to the fully-convolutional design and the good true negative behavior, which tends to yield a very confident and small set of detections. We refer to Figure \ref{fig:network} for a schematic overview. \paragraph{Viewpoint scoring versus pose regression} The choice of viewpoint classification over pose regression is deliberate. Although works that do direct rotation regression exist \cite{Kendall2015, Tan2015}, early experimentation showed clearly that the classification approach is more reliable for the task of detecting poses. In particular, it seems that the layers do a better job at scoring discrete viewpoints than at outputting numerically accurate translations and rotations. The decomposition of a 6D pose into viewpoint and in-plane rotation is elegant and allows us to tackle the problem more naturally. While a new viewpoint exhibits a new visual structure, an in-plane rotated view is a non-linear transformation of the same view. Furthermore, simultaneous scoring of all views allows us to parse multiple detections at a given image location, \eg by accepting all viewpoints above a certain threshold. Equally important, this approach allows us to deal with symmetries or views of similar appearance in a straightforward fashion. \subsection{Training stage} \begin{figure} \subfloat{\includegraphics[width=4cm]{linemod_training.png}} \subfloat{\includegraphics[width=4cm]{linemod_training_2.png}} \\ \vspace*{-0.4cm} \subfloat{\includegraphics[width=4cm]{tejani_training.png}} \subfloat{\includegraphics[width=4cm]{tejani_training_2.png}} \caption{Exemplary training images for the datasets used. Using MS COCO images as background, we render object instances with random poses into the scene. The green boxes visualize the network's bounding boxes that have been assigned as positive samples for training.} \label{fig:training} \end{figure} We take random images from MS COCO \cite{Lin2014} as background and render our objects with random transformations into the scene using OpenGL commands. For each rendered instance, we compute the IoU (intersection over union) of each box with the rendered mask, and every box $b$ with IoU $> 0.5$ is taken as a positive sample for this object class. Additionally, we determine for the used transformation the closest sampled discrete viewpoint and in-plane rotation, and set the four corner values of each positive box to the tightest fit around the mask as the regression target.
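This assignment rule is simple enough to state in a few lines; the following NumPy sketch illustrates it (the function name and the un-normalized corner-offset parameterization are our own assumptions, not taken from the paper, which may normalize offsets differently): {\small\begin{verbatim}
import numpy as np

def assign_positives(prior_boxes, mask_box, iou_thresh=0.5):
    """Assign network prior boxes to one rendered instance.

    prior_boxes: (N, 4) array of [x1, y1, x2, y2] boxes in pixels.
    mask_box:    (4,)   tightest box around the rendered object mask.
    Returns indices of positive boxes and their corner regression
    targets (offsets from each positive box to the tight mask box).
    """
    mask_box = np.asarray(mask_box, dtype=float)
    x1 = np.maximum(prior_boxes[:, 0], mask_box[0])
    y1 = np.maximum(prior_boxes[:, 1], mask_box[1])
    x2 = np.minimum(prior_boxes[:, 2], mask_box[2])
    y2 = np.minimum(prior_boxes[:, 3], mask_box[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (prior_boxes[:, 2] - prior_boxes[:, 0]) \
           * (prior_boxes[:, 3] - prior_boxes[:, 1])
    area_m = (mask_box[2] - mask_box[0]) * (mask_box[3] - mask_box[1])
    iou = inter / (area_p + area_m - inter)
    pos = np.where(iou > iou_thresh)[0]
    targets = mask_box[None, :] - prior_boxes[pos]  # corner offsets
    return pos, targets
\end{verbatim}}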
We show some training images in Figure \ref{fig:training}. Similar to SSD \cite{Liu2016}, we employ many different kinds of augmentation, such as changing the brightness and contrast of the image. Unlike them, though, we do not flip the images, since flipping would lead to confusion between views and to wrong pose detections later on. We also make sure that each training image contains a 1:2 ratio of positives to negatives by selecting hard negatives (unassigned boxes with high object probability) during back-propagation. Our loss is similar to the MultiBox loss of SSD or YOLO, but we extend the formulation to take discrete views and in-plane rotations into account. Given a set of positive boxes $Pos$ and hard-mined negative boxes $Neg$ for a training image, we minimize the following energy: \begin{eqnarray} \label{eq:loss} L(Pos, Neg) := \sum_{b \in Neg} L_{class} + \hspace{1.8cm} \nonumber \\ \sum_{b \in Pos} \left( L_{class} + \alpha L_{fit} + \beta L_{view} + \gamma L_{inplane} \right) \end{eqnarray} \begin{figure} \includegraphics[width=8cm]{hemi.png} \caption{Discrete 6D pose space with each point representing a classifiable viewpoint. If symmetric, we use only the green points for view ID assignment during training, whereas semi-symmetric objects use the red points as well.} \label{fig:sphere} \end{figure} As can be seen from (\ref{eq:loss}), we sum over positive and negative boxes for class probabilities ($L_{class}$). Additionally, each positive box contributes weighted terms for viewpoint ($L_{view}$) and in-plane classification ($L_{inplane}$), as well as a fitting error of the boxes' corners ($L_{fit}$). For the classification terms, \ie $L_{class}$, $L_{view}$, $L_{inplane}$, we employ a standard softmax cross-entropy loss, whereas a more robust smooth L1-norm is used for corner regression ($L_{fit}$). \paragraph{Dealing with symmetry and view ambiguity} Our approach demands the elimination of viewpoint confusion for proper convergence. We thus have to treat symmetrical or semi-symmetrical (constructible with plane reflection) objects with special care. Given an equidistantly-sampled sphere from which we take our viewpoints, we discard positions that lead to ambiguity. For symmetric objects, we solely sample views along an arc, whereas for semi-symmetric objects we omit one hemisphere entirely. This approach easily generalizes to cope with views which are mutually indistinguishable, although this might require manual annotation for specific objects in practice. In essence, we simply ignore certain views from the output of the convolutional classifiers during testing and take special care of viewpoint assignment in training. We refer to Figure \ref{fig:sphere} for a visualization of the pose space. \subsection{Detection stage} We run a forward-pass on the input image to collect all detections above a certain threshold, followed by non-maximum suppression. This yields refined and tight 2D bounding boxes with an associated object ID and scores for all views and in-plane rotations. For each detected 2D box, we thus parse the most confident views as well as in-plane rotations to build a pool of 6D hypotheses from which we select the best after refinement. See Figure \ref{fig:pool} for the pooled hypotheses and Figure \ref{fig:refinement} for the final output.
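A minimal sketch of this pooling step is given below; it already uses the projective-ratio depth recovery detailed in the next subsection, and all function and variable names are our own illustrative choices rather than the paper's: {\small\begin{verbatim}
import numpy as np

def build_6d_pool(view_scores, inplane_scores, bbox, diag_lut,
                  z_r=0.5, top_v=3, top_r=3):
    """Build a pool of (view, in-plane, depth) hypotheses for one box.

    view_scores:    (V,) viewpoint confidences for this detection.
    inplane_scores: (R,) in-plane rotation confidences.
    bbox:           [x1, y1, x2, y2] predicted 2D bounding box.
    diag_lut:       (V, R) diagonal lengths of boxes rendered offline
                    at the canonical distance z_r (here 0.5 m).
    """
    views = np.argsort(view_scores)[::-1][:top_v]   # most confident views
    rots = np.argsort(inplane_scores)[::-1][:top_r] # most confident rotations
    l_s = np.hypot(bbox[2] - bbox[0], bbox[3] - bbox[1])
    pool = []
    for v in views:
        for r in rots:
            z_s = diag_lut[v, r] / l_s * z_r        # projective-ratio depth
            pool.append((v, r, z_s))
    return pool
\end{verbatim}}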
\subsubsection{From 2D bounding box to 6D hypothesis} \begin{figure} \includegraphics[width=8cm]{translation.png} \caption{For each object we precomputed the perfect bounding box and the 2D object centroid with respect to each possible discrete rotation in a prior offline stage. To this end, we rendered the object at a canonical centroid distance $z_r=0.5m$. Subsequently, the object distance $z_s$ can be inferred from the projective ratio according to $z_s = \frac{l_r}{l_s} z_r$, where $l_r$ denotes the diagonal length of the precomputed bounding box and $l_s$ denotes the diagonal length of the predicted bounding box on the image plane.} \label{fig:translation} \end{figure} So far, all computation has been conducted on the image plane and we need to find a way to hypothesize 6D poses from our network output. We can easily construct a 3D rotation, given view ID and in-plane rotation ID, and can use the bounding box to infer 3D translation. To this end, we render all possible combinations of discrete views and in-plane rotations at a canonical centroid distance $z_r=0.5m$ in an offline stage and compute their bounding boxes. Given the diagonal length $l_r$ of the bounding box from this offline stage and the diagonal length $l_s$ of the box predicted by the network, we can infer the object distance $z_s = \frac{l_r}{l_s} z_r$ from their projective ratio, as illustrated in Figure \ref{fig:translation}. In a similar fashion, we can derive the projected centroid position and back-project to a 3D point with known camera intrinsics. \begin{figure} \subfloat{\includegraphics[width=4cm]{in.png}} \subfloat{\includegraphics[width=4cm]{pool.png}} \\ \vspace*{-0.4cm} \subfloat{\includegraphics[width=4cm]{linemod_pred.png}} \subfloat{\includegraphics[width=4cm]{linemod_pool.png}} \caption{Prediction output and 6D pose pooling of our network on the Tejani dataset and the multi-object dataset. Each 2D prediction builds a pool of 6D poses by parsing the most confident views and in-plane rotations. Since our networks are trained with various augmentations, they can adapt to different global illumination settings.} \label{fig:pool} \end{figure} \subsubsection{Pose refinement and verification} The obtained poses are already quite accurate, yet can in general benefit from further refinement. Since we consider the problem for both RGB and RGB-D data, the pose refinement is done with either an edge-based or a cloud-based ICP approach. If using RGB only, we render each hypothesis into the scene and extract a sparse set of 3D contour points. Each 3D point $X_i$, projected to $\pi(X_i)=x_i$, then shoots a ray perpendicular to its orientation to find the closest scene edge $y_i$. We seek the best alignment of the 3D model such that the average projected error is minimal: \begin{equation} \argmin _{R,t} \sum_i \bigg( || \pi(R \cdot X_i + t) - y_i || ^2 \bigg). \end{equation} We minimize this energy with an IRLS approach (similar to \cite{Drummond2002}) and robustify it using Geman-McLure weighting. In the case of RGB-D, we render the current pose and solve with standard projective ICP with a point-to-plane formulation in closed form \cite{Besl1992}. In both cases, we run multiple rounds of correspondence search to improve refinement, and we use multi-threading to accelerate the process. The above procedure provides multiple refined poses for each 2D box, and we need to choose the best one. To this end, we employ a verification procedure.
Using only RGB, we do a final rendering and compute the average deviation of orientation between contour gradients and overlapping scene gradients via absolute dot products. If RGB-D data is available, we render the hypotheses and estimate camera-space normals to measure the similarity, again with absolute dot products. \begin{figure*} \subfloat[2D Detections]{\includegraphics[width=4.3cm]{dets.png}} \hspace{0.001cm} \subfloat[Unrefined]{\includegraphics[width=4.3cm]{pose_orig.png}} \hspace{0.001cm} \subfloat[RGB refinement]{\includegraphics[width=4.3cm]{pose2d.png}} \hspace{0.001cm} \subfloat[RGB-D refinement]{\includegraphics[width=4.3cm]{pose3d.png}} \caption{After predicting 2D detections (a), we build 6D hypotheses and run pose refinement and a final verification. While the unrefined poses (b) are rather approximate, contour-based refinement (c) already produces visually acceptable results. Occlusion-aware projective ICP with cloud data (d) leads to a very accurate alignment.} \label{fig:refinement} \end{figure*} \section{Evaluation} We implemented our method in C++ using TensorFlow 1.0 \cite{Abadi2016} and cuDNN 5 and ran it on an Intel Core i7 CPU with an NVIDIA GTX 1080. Our evaluation has been conducted on three datasets. The first, presented in Tejani et al. \cite{Tejani2014}, consists of six sequences where each sequence requires the detection and pose estimation of multiple instances of the same object in clutter and with different levels of mild occlusion. The second dataset, presented in \cite{Hinterstoisser2012}, consists of 15 sequences where each frame presents one instance to detect, and the main challenge is the high amount of clutter in the scene. Like others, we skip two sequences since they lack a meshed model. The third dataset, presented in \cite{Brachmann2014}, is an extension of the second where one sequence has been annotated with instances of multiple objects undergoing heavy occlusions at times. \paragraph{Network configuration and training} To get the best results, it is necessary to find an appropriate sampling of the model view space. If the sampling is too coarse, we either miss an object in certain poses or build suboptimal 6D hypotheses, whereas a very fine sampling can make training more difficult. We found an equidistant sampling of the unit sphere into 642 views to work well in practice. Since the datasets only exhibit the upper hemisphere of the objects, we ended up with 337 possible view IDs. Additionally, we sampled the in-plane rotations from -45 to 45 degrees in steps of 5 to have a total of 19 bins. Given the above configuration, we trained the last layers of the network and the predictor kernels using Adam and a constant learning rate of $0.0003$ until we saw convergence on a synthetic validation set. The balancing of the loss term weights proved to be vital to provide both good detections and poses. After multiple trials we determined $\alpha=1.5$, $\beta=2.5$ and $\gamma=1.5$ to work well for us. We refer the reader to the supplementary material to see the error development for different configurations.
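As an aside, 642 is exactly the vertex count of an icosahedron subdivided three times (12, 42, 162, 642 vertices), so one plausible construction of such an equidistant sampling, which the paper does not specify and we supply purely for illustration, is the classic icosphere recipe: {\small\begin{verbatim}
import numpy as np

def icosphere(subdivisions=3):
    """Approximately equidistant unit-sphere sampling by icosahedron
    subdivision: 12 -> 42 -> 162 -> 642 vertices."""
    t = (1 + 5 ** 0.5) / 2
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in
             [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
              (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
              (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    cache = {}

    def mid(i, j):  # cached midpoint vertex index, pushed onto the sphere
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(subdivisions):  # split each triangle into four
        faces = [f for (a, b, c) in faces
                 for f in ((a, mid(a, b), mid(a, c)),
                           (b, mid(b, c), mid(a, b)),
                           (c, mid(a, c), mid(b, c)),
                           (mid(a, b), mid(b, c), mid(a, c)))]
    return np.array(verts)

views = icosphere(3)             # 642 viewpoints on the unit sphere
inplane = np.arange(-45, 46, 5)  # 19 in-plane rotation bins
\end{verbatim}}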
\subsection{Single object scenario} \begin{table} \scalebox{0.9}{ \begin{tabular}{c|c|c|c|c} Sequence & LineMOD \cite{Hinterstoisser2012a} & LC-HF \cite{Tejani2014} & Kehl \cite{Kehl2016a} & Us \\ \hline Camera & 0.589 & 0.394 & 0.383 & \textbf{0.741} \\ Coffee & 0.942 & 0.891 & 0.972 & \textbf{0.983} \\ Joystick & 0.846 & 0.549 & 0.892 & \textbf{0.997} \\ Juice & 0.595 & 0.883 & 0.866 & \textbf{0.919} \\ Milk & 0.558 & 0.397 & 0.463 & \textbf{0.780} \\ Shampoo & \textbf{0.922} & 0.792 & 0.910 & 0.892 \\ \hline Total & 0.740 & 0.651 & 0.747 & \textbf{0.885} \\ \end{tabular} \caption{F1-scores on the re-annotated version of \cite{Tejani2014}. Although our method is the only one to solely use RGB data, our results are considerably higher than all related works.} \label{table:tejani_f1} } \end{table} \begin{table*} \begin{center} \begin{tabular}{@{}c|c|c|c|c|c|c|c|c|c|c|c|c|c@{}} & ape & bvise & cam & can & cat & driller & duck & box & glue & holep & iron & lamp & phone \\ \hline Us & 76.3 & \textbf{97.1} & 92.2 & \textbf{93.1} & 89.3 & \textbf{97.8} & 80.0 & 93.6 & \textbf{76.3} & 71.6 & \textbf{98.2} & 93.0 & \textbf{92.4} \\ LineMOD \cite{Hinterstoisser2012} & 53.3 & 84.6 & 64.0 & 51.2 & 65.6 & 69.1 & 58.0 & 86.0 & 43.8 & 51.6 & 68.3 & 67.5 & 56.3 \\ LC-HF \cite{Tejani2014} & 85.5 & 96.1 & 71.8 & 70.9 & 88.8 & 90.5 & 90.7 & 74.0 & 67.8 & 87.5 & 73.5 & 92.1 & 72.8 \\ Kehl \cite{Kehl2016a} & \textbf{98.1} & 94.8 & \textbf{93.4} & 82.6 & \textbf{98.1} & 96.5 & \textbf{97.9} & \textbf{100} & 74.1 & \textbf{97.9} & 91.0 & \textbf{98.2} & 84.9 \\ \end{tabular} \end{center} \caption{F1-scores for each sequence of \cite{Hinterstoisser2012}. Note that the LineMOD scores are supplied from \cite{Tejani2014} with their evaluation since \cite{Hinterstoisser2012} does not provide them. Using color only, we can easily compete with the other RGB-D based methods.} \label{table:linemod_f1} \end{table*} Since 3D detection is a multi-stage pipeline for us, we first evaluate purely the 2D detection performance between our predicted boxes and the tight bounding boxes of the rendered groundtruth instances on the first two datasets. Note that we always conduct proper detection and not localization, \ie we do not constrain the maximum number of allowed detections but instead accept all predictions above a chosen threshold. We count a detection to be correct when the IoU score of a predicted bounding box with the groundtruth box is higher than 0.5. We present our F1-scores in Tables \ref{table:tejani_f1} and \ref{table:linemod_f1} for different detection thresholds. It is important to mention that the compared methods, which all use RGB-D data, allow a detection to survive after rigorous color- and depth-based checks, whereas we use simple thresholding for each prediction. Therefore, it is easier for them to suppress false positives to increase their precision, whereas our confidence comes from color cues only. On the Tejani dataset we outperform all related RGB-D methods by a huge margin of $13.8\%$ while using color only. We analyzed the detection quality on the two most difficult sequences. The 'camera' sequence has instances of smaller scale, which are partially occluded and therefore simply missed, whereas the 'milk' sequence exhibits stronger occlusions in virtually every frame. Although we were able to detect the 'milk' instances, our predictors could not overcome the occlusions and regressed wrongly-sized boxes which were not tight enough to satisfy the IoU threshold.
These were counted as false positives and thus lowered our recall\footnote{We refer to the supplement for more detailed graphs.}. On the second dataset we have mixed results, where we can outperform state-of-the-art RGB-D methods on some sequences while being worse on others. For larger feature-rich objects like 'benchvise', 'iron' or 'driller', our method performs better than the related work since our network can draw from color and textural information. For some objects, such as 'lamp' or 'cam', the performance is worse than the related work. Our method relies on color information only and thus requires a certain color similarity between synthetic renderings of the CAD model and their appearance in the scene. Some objects exhibit specular effects (\ie changing colors for different camera positions), or the frames can undergo sensor-side changes of exposure or white balancing, causing a color shift. Brachmann et al. \cite{Brachmann2016} avoid this problem by training on a well-distributed subset of real sequence images. Our problem is much harder since we train on synthetic data only and must generalize to real, unseen imagery. Our performance for objects of smaller scale such as 'ape', 'duck' and 'cat' is worse, and we observed a drop in both recall and precision. We attribute the lower recall to our bounding box placement, which can have 'blind spots' at some locations, consequently leading to situations where a small-scale instance cannot be covered sufficiently by any box to fire. The lower precision, on the other hand, stems from the fact that these objects are textureless and of uniform color, which increases confusion with the heavy scene clutter. \subsubsection{Pose estimation} We chose for each object the threshold that yielded the highest F1-score and ran all following pose estimation experiments with this setting. We are interested in the pose accuracy for all correctly detected instances. \begin{table} \scalebox{1.0}{ \begin{tabular}{c|c|c|c|c} Sequence & IoU-2D & IoU-3D & VSS-2D & VSS-3D \\ \hline Camera & 0.973 & 0.904 & 0.693 & 0.778 \\ Coffee & 0.998 & 0.996 & 0.765 & 0.931 \\ Joystick & 1 & 0.953 & 0.655 & 0.866 \\ Juice & 0.994 & 0.962 & 0.742 & 0.865 \\ Milk & 0.970 & 0.990 & 0.722 & 0.810 \\ Shampoo & 0.993 & 0.974 & 0.767 & 0.874 \\ \hline Total & 0.988 & 0.963 & 0.724 & 0.854 \\ \end{tabular} \caption{Average pose errors for the Tejani dataset.} \label{table:tejani_pose} } \end{table} \begin{table} \scalebox{0.85}{ \begin{tabular}{c|c|c|c} & \multicolumn{3}{c}{RGB} \\ & Us & LineMOD \cite{Hinterstoisser2011} & Brachmann \cite{Brachmann2016} \\ \hline IoU & 99.4\% & 86.5\% & 97.5\% \\ ADD & 76.3\% & 24.2\% & 50.2\% \\ \end{tabular} } \end{table} \begin{table} \scalebox{0.85}{ \vspace*{5mm} \begin{tabular}{c|c|c|c} & \multicolumn{3}{c}{RGB-D} \\ & Ours & Brachmann 2016 \cite{Brachmann2016} & Brachmann 2014 \cite{Brachmann2014} \\ \hline IoU & 96.5\% & 99.6\% & 99.1\% \\ ADD \cite{Hinterstoisser2012a} & 90.9\% & 99.0\% & 97.4\% \\ \end{tabular} \caption{Average pose errors for the LineMOD dataset.} \label{table:linemod_pose} } \end{table} \paragraph{Error metrics} To measure 2D pose errors we will compute both an IoU score and a Visual Surface Similarity (VSS) \cite{Hodan2016}. The former is different from the detection IoU check since it measures the overlap of the rendered masks' bounding boxes between groundtruth and final pose estimate, and accepts a pose if the overlap is larger than $0.5$.
VSS is a tighter measure since it counts the average pixel-wise overlap of the mask. This measure assesses well the suitability for AR applications and has the advantage of being agnostic towards the symmetry of objects. To measure the 3D pose error we use the ADD score from \cite{Hinterstoisser2012}. This assesses the accuracy for manipulation tasks by measuring the average deviation between transformed model point clouds of groundtruth and hypothesis. If it is smaller than one tenth of the model diameter, it is counted as a correct pose. \paragraph{Refinement with different parsing values} \begin{figure} \includegraphics[width=8cm]{refinement.png} \caption{Average VSS scores for the 'coffee' object for different numbers of parsed views and in-plane rotations as well as different pose refinement options.} \label{fig:parsing} \end{figure} As mentioned, we parse the most confident views and in-plane rotations to build a pool of 6D hypotheses for each 2D detection. Here, we want to assess the final pose accuracy when changing the number of parsed views $V$ and rotations $R$ for different refinement strategies. We present in Figure \ref{fig:parsing} the results on Tejani's 'coffee' sequence for the cases of no refinement, edge-based and cloud-based refinement (see Figure \ref{fig:refinement} for an example). To decide on the best pose, we employ verification over contours for the first two cases and normals for the latter. As can be seen, the final poses without any refinement are imperfect but usually provide very good initializations for further processing. Additional 2D refinement yields better poses but cannot cope well with occluders, whereas depth-based refinement leads to perfect poses in practice. The figure also gives insight into varying $V$ and $R$ for hypothesis pool creation. Naturally, with higher numbers the chances of finding a more accurate pose improve since we evaluate a larger portion of the 6D space. It is evident, however, that every additional parsed view $V$ gives a larger benefit than taking more in-plane rotations $R$ into the pool. We explain this by the fact that our viewpoint sampling is coarser than our in-plane sampling and thus reveals more uncovered pose space when parsed, which in turn especially helps depth-based refinement. Since we create a pool of $V \cdot R$ poses for each 2D detection, we fixed $V=3, R=3$ for all experiments as a compromise between accuracy and refinement runtime. \paragraph{Performance on the two datasets} We present our pose errors in Tables \ref{table:tejani_pose} and \ref{table:linemod_pose} after 2D and 3D refinement. Note that we do not compute the ADD scores for Tejani since each object is of (semi-)symmetric nature, always leading to near-perfect ADD scores of 1. The poses are visually accurate after 2D refinement and are furthermore boosted by an additional depth-based refinement stage. On the second dataset we are actually able to come very close to Brachmann et al., which is surprising since they have the huge advantage of training on real data. For the case of pure RGB-based poses, we can even surpass their results. We provide more detailed error tables in the supplement. \begin{figure} \includegraphics[width=4cm, height=4cm]{multi.png} \includegraphics[width=4cm, height=4cm]{pred_time_2.png} \caption{Left: Detection scores on the multi-object dataset for varying global detection thresholds.
Right: Runtime increase for the network prediction with an increased number of objects.} \label{fig:multi} \end{figure} \subsection{Multiple object detection} The last dataset has annotations for 9 out of the 15 objects and is quite difficult since many instances undergo heavy occlusion. In contrast to the single-object scenario, we now have a network with one global detection threshold for all objects, and we present our scores in Figure \ref{fig:multi} when varying this threshold. Brachmann et al. \cite{Brachmann2016} report an impressive Average Precision (AP) of 0.51, whereas we obtain an AP of 0.38. It can be observed that our method degrades gracefully, as the recall does not drop suddenly from one threshold step to the next. Note again that Brachmann et al. have the advantage of training on real images of the sequence, whereas we must detect heavily-occluded objects from synthetic training only. \subsection{Runtime and scalability} For a single object in the database, Kehl et al. \cite{Kehl2016a} report a runtime of around 650ms per frame, whereas Brachmann et al. \cite{Brachmann2014, Brachmann2016} report around 450ms. The above methods are scalable and thus have a sublinear runtime growth with an increasing database size. Our method is considerably faster than the related work while being scalable as well. In particular, we can report a runtime of approximately 85ms for a single object. We show our prediction times in Figure \ref{fig:multi}, which reveals that we scale very well with an increasing number of objects in the network. While the prediction is fast, our pose refinement takes more time since we need to refine every pose of each pool. On average, given that we have about 3 to 5 positive detections per frame, we need an additional 24ms in total for refinement, resulting in an overall processing rate of around 10 Hz. \begin{figure} \includegraphics[width=4cm]{det_milk.png} \includegraphics[width=4cm]{pool_milk.png} \caption{One failure case where incorrect bounding box regression, induced by occlusion, led to wrong 6D hypothesis creation. In such cases a subsequent refinement cannot always recover the correct pose.} \label{fig:failure} \end{figure} \subsection{Failure cases} The most prominent issue is the difference in colors between synthetic model and scene appearance, including local illumination changes such as specular reflections. In these cases, the object confidence might fall below the detection threshold since the difference between the synthetic and the real domain is too large. A more advanced augmentation would be needed to successfully tackle this problem. Another possible problem can stem from the bounding box regression. If the regressed corners do not provide a tight fit, the resulting translations can be too offset during 6D pose construction. An example of this problem can be seen in Figure \ref{fig:failure}, where the occluded milk produces wrong offsets. We also observed that small objects are sometimes difficult to detect, which is even more true after resizing the input to $299 \times 299$. Again, designing a more robust training as well as a larger network input could be of benefit here. \section*{Conclusion} To our knowledge, we are the first to present an SSD-style detector for 3D instance detection and full 6D pose estimation that is trained on synthetic model information.
We have shown that color-based detectors are indeed able to match and surpass current state-of-the-art methods that leverage RGB-D data while being around one order of magnitude faster. Future work should improve robustness to color deviations between the CAD model and scene appearance. Avoiding the problem of proper loss term balancing is also an interesting direction for future research. {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} \vspace{-.1cm} Humans are able to focus on a source of interest within a complex acoustic scene, a task referred to as the cocktail party problem~\cite{cherry1953some,mcdermott2009cocktail}. % Research in audio source separation has been dedicated to enabling machines to solve this task, with many studies taking a stab at various slices of the problem, such as the separation of speech from non-speech in speech enhancement~\cite{WDL2018, reddy2020interspeech}, speech from other speech in speech separation~\cite{Hershey2016,Drude2019, wichern2019wham}, or separation of individual musical instruments~\cite{rafii2017musdb, stoter19, manilow2019slakh} or non-speech sound events (or sound effects)~\cite{kavalerov2019universal,Tzinis_ICASSP2020, pishdadian2020finding,ochiai2020listen}. % However, separation of sound mixtures involving speech, music, and sound effects/events has been left largely unexplored, despite its relevance to most produced audio content, such as podcasts, radio broadcasts, and video soundtracks. % We here intend to bite into this smaller chunk of the cocktail party problem by proposing to separate such soundtracks into these three broad categories. We refer to this task as the \emph{cocktail fork problem}, as illustrated in Fig.~\ref{fig:cocktail_fork}. While there has been much work on labeling recordings based on these three categories~\cite{theodorou2014overview, melendez2019open, venkatesh2021artificially}, the ability to separate audio signals into these streams has the potential to support a wide range of novel applications. For example, an end-user could take over the final mixing process by applying independent gains to the separated speech, music, and sound effects signals to support their specific listening environment and preferences. Furthermore, this three-stream separation could be a front-end for total transcription~\cite{moritz2020all} or audio-visual video description~\cite{hori2017attention} where we want to not only transcribe speech but also semantically describe in great detail the non-speech sounds present in an auditory scene. A recent concurrent work~\cite{zhang2021multitask} also explores the task of speech, music, and sound effects (therein referred to as noise) separation, but only considers the unrealistic case of fully-overlapped mixtures of the three streams, and a low sampling rate of 16 kHz. This sampling rate is not conducive to applications where humans may listen to the separated signals, and it is often difficult or impractical to transition systems trained only on fully-overlapped mixtures to real-world scenarios~\cite{chen2020continuous}. \begin{figure}[t] \centering \includegraphics[width=.97\linewidth]{figs/cocktail_fork_new3_slim-crop.pdf}\vspace{-.1cm} \caption{Illustration of the cocktail fork problem: given a soundtrack consisting of an audio mixture of speech, music, and sound effects, the goal is to separate it into the three corresponding stems.}\vspace{-.6cm} \label{fig:cocktail_fork} \end{figure} To provide a realistic high-quality dataset for the cocktail fork problem, we introduce the Divide and Remaster (DnR) dataset, which is built upon LibriSpeech \cite{librispeech_dataset} for speech, Free Music Archive (FMA) \cite{fma_dataset} for music, and Freesound Dataset 50k (FSD50K) \cite{fsd50k_dataset} for sound effects. 
% DnR pays particular attention to the mixing process, specifically the relative level of each of the sources and the amount of inter-class overlap, both of which we hope will ease the transition of models trained with DnR to real-world applications. Furthermore, DnR includes comprehensive speech, music genre, and sound event annotations, making it potentially useful for research in speech transcription, music classification, sound event detection, and audio segmentation in addition to source separation. In this paper, we provide a detailed description of the DnR dataset, and benchmark various source separation models. We find the CrossNet unmix (XUMX) architecture~\cite{sawata2021all}, originally proposed for music source separation, also works well for DnR. We further propose a multi-resolution extension of XUMX, to better handle the wide variety of audio characteristics in the sound sources we are trying to separate. We also address several important practical questions often ignored in the source separation literature, such as the impact of sampling rate on model performance, predicted energy in regions where a source should be silent~\cite{schulze2019weakly}, and performance in various overlapping conditions. While we only show here objective evaluations based on synthetic data due to the lack of realistic data with stems, we confirmed via informal listening tests that the trained models perform well on real-world soundtracks from YouTube. Our dataset and real-world examples % are available online.\footnote{\url{cocktail-fork.github.io}} \section{The Cocktail Fork Problem} \label{sec:cocktail} \vspace{-.1cm} We consider an audio soundtrack $y$ such that \begin{equation} y=\sum_{j=1}^3 x_j, \end{equation} where $x_1$ is the submix containing all music signals, $x_2$ that of all speech signals, and $x_3$ that of all sound effects. We use the term sound effects (SFX) to broadly cover all sources not categorized as speech or music, and choose it over alternatives such as sound events or noise, as the term is especially relevant to our target application where $y$ is a soundtrack. We here define the cocktail fork problem as that of recovering, from the audio soundtrack $y$, its music, speech, and sound effect submixes, as opposed to extracting individual musical instruments, speakers, or sound effects. Our goal is to train a machine learning model to obtain estimates $\hat{x}_1$, $\hat{x}_2$, and $\hat{x}_3$ of these submixes. We explore two general classes of models for estimating $\hat{x}_j$. The first one, exemplified by Conv-TasNet~\cite{luo2019convTasNet}, takes as input the time-domain mixture $y$, and outputs time-domain estimates $\hat{x}_j$. The second one operates on the time-frequency (TF) domain mixture, i.e., $Y=\text{STFT}(y)$, and estimates a real-valued mask $\hat{M}_j$ for each source, obtaining time-domain estimates via inverse STFT as $\hat{x}_j=\mathrm{iSTFT}(\hat{M}_j \odot Y)$. % \vspace{-.2cm} \section{Multi-resolution CrossNet (MRX)} \vspace{-.1cm} In our benchmark of various network architectures in Section~\ref{sec:results}, we find consistently strong performance from CrossNet unmix (XUMX)~\cite{sawata2021all}, which uses multiple parameter-less averaging operations when simultaneously extracting multiple stems (musical instruments in ~\cite{sawata2021all}). XUMX is an STFT masking-based architecture, and choosing appropriate transform parameters is a key design choice. 
Longer STFT windows provide better frequency resolution at the cost of poorer time resolution, and vice versa for shorter windows. Mixtures of signals with diverse acoustic characteristics could thus benefit from multiple STFT resolutions in their TF encoding. Previous research has demonstrated the efficacy of multi-resolution systems for audio-related tasks, such as speech enhancement~\cite{koizumi2019trainable}, music separation~\cite{grais2018multi}, speech recognition~\cite{toledano2018multi}, and sound event detection~\cite{benito2021multi}. We thus introduce a multi-resolution extension of XUMX which addresses the typical limitations of a single-resolution architecture. In \cite{sawata2021all}, the authors show that using multiple parallel branches to process the input can help in the separation task. We here extend this reasoning to multiple STFT resolutions. \begin{figure}[t] \centering \includegraphics[width=.9\linewidth]{figs/xumx_mixed_fig_final-crop.pdf}\vspace{-.1cm} \caption{Multi-resolution CrossNet (MRX) architecture.}\vspace{-.5cm} \label{fig:xumx_mixed} \end{figure} Our proposed architecture % takes a time-domain input mixture and encodes it into $I$ complex spectrograms $Y_{L_i}$ with different STFT resolutions, where $L_i$ denotes the $i$-th window length in milliseconds. Figure~\ref{fig:xumx_mixed} shows an example with $I=3$ and $\{L_i\}_i=\{32,64,256\}$. % We use the same hop size (e.g., 8 ms in the example of Fig.~\ref{fig:xumx_mixed}) for all resolutions, so they remain synchronized in time, and $N$ denotes the number of STFT frames for all resolutions. In practice, we set the window size in samples to the nearest power of $2$, and denote by $F_{L_i}$ the number of unique frequency bins. Each resolution is then passed to a fully connected block to convert the magnitude spectrograms of dimension ${N \times F_{L_i}}$ into a consistent dimension of $512$ across the resolution branches. This allows us to average them together prior to the bidirectional long short-term memory (BLSTM) stacks, whose outputs are averaged once again. While the averaging operators in XUMX were originally intended to efficiently bridge independent architectures for multiple sources, in our case, the input averaging allows the network to efficiently combine inputs with multiple resolutions. The averaged inputs and outputs of the BLSTM stacks are concatenated and decoded back into magnitude soft masks $\hat{M}_{j,i}$, one for each of the three sources $j$ and each of the $I$ original input resolutions $i$. The decoder consists of two stacks of fully-connected layers, each followed by batch normalization (BN) and rectified linear units (ReLU). For a given source $j$, each magnitude mask $\hat{M}_{j,i}$ is multiplied element-wise with the original complex mixture spectrogram $Y_{L_i}$ for the corresponding resolution, a corresponding time-domain signal $\hat{x}_{j,i}$ is obtained via inverse STFT, and the estimated time-domain signal $\hat{x}_{j}$ is obtained by summing the time-domain signals: \begin{equation} \hat{x}_{j} = \sum_{i=1}^{I} \hat{x}_{j,i}= \sum_{i=1}^{I}\text{iSTFT}( \hat{M}_{j,i} \odot Y_{L_i} ). \end{equation} For the cocktail fork problem, the network has to estimate a total of $3I$ masks (9 in the example of Fig.~\ref{fig:xumx_mixed}). % Since ReLU is used as the final mask decoder nonlinearity, the network can freely learn weights for each resolution that best reconstruct the time-domain signal.
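To make the mask-and-sum reconstruction above concrete, the following is a minimal NumPy/SciPy sketch of the inference-time masking step for one mixture; the mask arrays stand in for the MRX decoder outputs, and all function and variable names are illustrative rather than taken from our actual implementation.

\begin{verbatim}
import numpy as np
from scipy.signal import stft, istft

def apply_multires_masks(y, masks, win_ms=(32, 64, 256), hop_ms=8, sr=44100):
    # masks[j][i]: magnitude mask of source j at resolution i, assumed to
    # have the same shape as the mixture spectrogram Y_{L_i}.
    hop = int(sr * hop_ms / 1000)
    x_hat = []
    for source_masks in masks:            # loop over the three sources
        x_j = np.zeros_like(y, dtype=float)
        for mask, L in zip(source_masks, win_ms):
            # window size in samples, rounded to the nearest power of 2
            n = 2 ** int(round(np.log2(sr * L / 1000)))
            _, _, Y = stft(y, nperseg=n, noverlap=n - hop)
            _, x_ji = istft(mask * Y, nperseg=n, noverlap=n - hop)
            x_ji = x_ji[:len(y)]          # trim STFT padding
            x_j[:len(x_ji)] += x_ji       # sum over resolutions
        x_hat.append(x_j)
    return x_hat
\end{verbatim}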
\vspace{-.2cm} \section{DnR Dataset} \label{sec:dnr} \vspace{-.1cm} \subsection{Dataset Building Blocks} In selecting existing speech, music, and sound effects audio datasets for the cocktail fork problem, we had three primary objectives: (1) the data should be freely available under a Creative Commons license; (2) the sampling rate of the audio should be high enough to cover the full range of human hearing (e.g., 44.1 kHz) to support listening applications (one can always downsample as needed); % and (3) the audio should contain metadata labels such that it can also be used to explore the impact of separation on downstream tasks, such as transcribing speech and/or providing time-stamped labels for sound effects and music. We selected the following three datasets. \noindent {\bf FSD50K - Sound effects:} The Freesound Dataset 50k (FSD50K) \cite{fsd50k_dataset} contains 44.1 kHz mono audio, and clips are tagged using a vocabulary of 200 class labels from the AudioSet ontology~\cite{gemmeke2017audioset}. For mixing purposes, we manually classify each of the 200 class labels in FSD50K into one of three groups: foreground sounds (e.g., dog bark), background sounds (e.g., traffic noise), and speech/musical instruments (e.g., guitar, speech). Speech and musical instrument clips are filtered out to avoid confusion with our speech and music datasets, and we use different mixing rules for foreground and background events, as described in Section~\ref{sec:creation_proc}. We also remove any leading or trailing silence from each sound event prior to mixing. \noindent {\bf Free Music Archive - Music:} The Free Music Archive (FMA)~\cite{fma_dataset} is a music dataset including over 100,000 stereo songs across 161 musical genres at 44.1 kHz sampling rate. FMA was originally proposed to address music information retrieval (MIR) tasks and thus includes a wide variety of % musical metadata. In the context of DnR, we only use track genre as the music metadata. We use the medium subset of FMA, which contains 30 second clips from 25,000 songs in 16 unbalanced genres, and is of a comparable size to FSD50K. \noindent {\bf LibriSpeech - Speech:} DnR's speech class is drawn from the LibriSpeech dataset~\cite{librispeech_dataset}, an automatic speech recognition corpus based on public-domain audio books. % We use the 100 h \textsc{train-clean-100} subset for training, chosen over \textsc{train-clean-360} because it is closer in size to FSD50K and FMA-medium. For validation and test, we use the clean subsets \textsc{dev-clean} and \textsc{test-clean} to avoid noisy speech being confused with music or sound effects. % We incorporate the provided speech transcription for each utterance as part of the DnR metadata. LibriSpeech provides its data as clips containing a single speech utterance at 16 kHz. Fortunately, the original 44.1 kHz mp3 audio files containing the unsegmented audiobook recordings harvested from the LibriVox project are also available, along with metadata mapping each LibriSpeech utterance to its original LibriVox filename and corresponding time-stamp; we use these to create a high sampling rate version of LibriSpeech. \vspace{-.2cm} \subsection{Mixing procedure} \label{sec:creation_proc} \vspace{-.1cm} In order to create realistic synthetic soundtracks, we focused our efforts on two main areas: class overlap, and the relative levels of the different sources in the mixture.
Multi-channel spatialization is another important aspect of the mixing process; however, we were unable to find widely agreed-upon rules for this process, and we therefore focus exclusively on the single-channel case. We also note that trained single-channel models can be applied independently to each channel of a multi-channel recording, and the outputs combined with a multi-channel Wiener filter for post-processing~\cite{nugraha2016multichannel}. For the purposes of the mixing procedure described in this section, there are four classes: speech, music, foreground effects, and background effects, but the foreground and background sounds are combined into a single submix in the final version of the DnR dataset. In order to ensure that a mixture can contain multiple full speech utterances and feature a sufficient number of onsets and offsets between the different classes, we decided to make each mixture 60 seconds long. We do not allow within-class overlap between clips, i.e., two music files will not overlap, but foreground and background sound effects can overlap. The number of files for each class is sampled % from a zero-truncated Poisson distribution with parameter $\lambda$. The values of $\lambda$ are chosen based on the average file length of each class, e.g., music and background effects tend to be longer (see Table~\ref{table:creation_params}). For speech files, we always include the entire utterance so that the corresponding transcription remains relevant, while for other classes, we randomly sample the amount of silence between clips of the same class, the clip length, and the internal start time of each clip. Using this mixing procedure, the ``all sources active'' frames account for $\approx 55\%$ of the DnR test set, the ``two sources'' frames for $\approx 32\%$, and the ``one source'' frames for $\approx 10\%$, leaving silent frames at $\approx 3\%$ (see Table~\ref{table:overlap_results} for more details). Regarding the relative amplitude levels across the three classes, after analyzing studies such as~\cite{chaudhuri2018ava} and informal mixing rules from industries such as motion pictures, video games, and podcasting, we found that relative levels remain fairly consistent: speech is generally at the forefront of the mix, followed by foreground sound effects, then music, and finally background ambiances. Table~\ref{table:creation_params} reports the levels used in the DnR dataset in loudness units full-scale (LUFS) \cite{grimm2010lufs}. To add variability while keeping a realistic consistency over an entire mixture, we first sample an average LUFS value for each class in each mixture, uniformly from a range of $\pm 2.0$ % around the corresponding Target LUFS. Then each sound file added to the mix has its individual gain further adjusted by uniformly sampling from a range of $\pm 1.0$. We base our training, validation, and test splits on those provided by each of the dataset building blocks. The number of test set mixtures is determined such that we exhaust all utterances from the LibriSpeech \textsc{test-clean} set twice. We then choose the number of training and validation set mixtures to correspond to a .7/.1/.2 split between training/validation/test, which is roughly in line with the split percentages for FMA (.8/.1/.1) and FSD50k (.7/.1/.2).
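As a concrete illustration of the sampling scheme above, the following sketch draws per-mixture file counts and per-file target loudness values; the constants mirror Table~\ref{table:creation_params}, but the code itself is illustrative and not taken from the actual DnR generation scripts.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng()
LAMBDA      = {"music": 7,     "speech": 8,     "sfx_fg": 12,    "sfx_bg": 6}
TARGET_LUFS = {"music": -24.0, "speech": -17.0, "sfx_fg": -21.0, "sfx_bg": -29.0}

def zero_truncated_poisson(lam):
    k = 0
    while k == 0:              # reject zero draws
        k = rng.poisson(lam)
    return k

def sample_mixture_levels():
    levels = {}
    for cls, lam in LAMBDA.items():
        n_files = zero_truncated_poisson(lam)
        # average class loudness for this mixture, then per-file jitter
        class_lufs = TARGET_LUFS[cls] + rng.uniform(-2.0, 2.0)
        levels[cls] = class_lufs + rng.uniform(-1.0, 1.0, size=n_files)
    return levels
\end{verbatim}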
In total, DnR consists of 3,406 mixtures ($\approx 57$ h) for the training set, 487 mixtures ($\approx 8$ h) for the validation set, and 973 mixtures ($\approx16$ h) for the test set, along with their isolated ground-truth stems. % \begin{table}[t] \scriptsize \centering \sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math} \caption{Parameters used in the DnR creation procedure.}\label{table:creation_params}\vspace{-.3cm} \begin{tabular}[t]{lcccc} \toprule &{Music}&{Speech}&{SFX-FG}&{SFX-BG}\\ \midrule $\lambda$ & $\phantom{-2}7\phantom{.0}$ & $\phantom{-1}8\phantom{.0}$ & $\phantom{-}12\phantom{.0}$ & $\phantom{-2}6\phantom{.0}$ \\ Target LUFS & $-24.0$ & $-17.0$ & $-21.0$ &$-29.0$\\ \bottomrule \end{tabular}\vspace{-.4cm} \end{table} \vspace{-.2cm} \section{Experimental validation} \label{sec:experiment} \vspace{-.1cm} \subsection{Setup} \vspace{-.1cm} We benchmark the performance of several source separation models in terms of scale-invariant signal-to-distortion ratio (SI-SDR)~\cite{leroux2019sdr} for the cocktail fork problem on the DnR dataset, both in the original 44.1 kHz version and in a downsampled 16 kHz version. Unless otherwise noted, we compute the SI-SDR on each 60 second mixture, and average over all tracks in the test set. \noindent{\bf XUMX and MRX models}: We consider single-resolution XUMX baselines with various STFT resolutions. We opt to cover a wide range of window lengths $L$ (between 32 and 256 ms) to assess the impact of resolution on performance. % For our proposed MRX model, we use three STFT resolutions of 32, 64, and 256 ms, which we found to work best on the validation set. We use XUMX$_\text{L}$ to denote a model with an $L$ ms window. We set the hop size to a quarter of the window size. For the MRX model, % we determine the hop size based on the shortest window. % To disentangle the contributions of the multi-resolution and multi-decoder features of MRX, we also evaluate an architecture adding MRX's multi-decoder to the best single-resolution model (XUMX$_\text{64}$), referred to as XUMX$_\text{64,multi-dec}$. This results in an architecture of the same size (i.e., same number of parameters) as our proposed MRX model. % In all architectures, each BLSTM layer has 256 hidden units and an input/output dimension of 512, and the hidden layer in the decoder has dimension 512. \noindent{\bf Other benchmarks}: % We also evaluate our own implementations of Conv-TasNet \cite{luo2019convTasNet} and a temporal convolution network (TCN) with mask inference (MaskTCN). MaskTCN uses a TCN identical to the one used internally by Conv-TasNet, but the learned encoder/decoder are replaced with STFT/iSTFT operations. For MaskTCN, we use an STFT window/hop of 64/16 ms, and for the learned encoder/decoder of Conv-TasNet, we use 500 filters with a window size of 32 samples and a stride of 16 at 16 kHz, and a window size of 80 samples and a stride of 40 at 44.1 kHz. All TCN parameters in both Conv-TasNet and MaskTCN follow the best configuration of~\cite{luo2019convTasNet}. Additionally, we evaluate Open-Unmix (UMX)~\cite{stoter19}, the predecessor to XUMX, by training a separate model for each source, without the parallel branches and averaging operations introduced by XUMX. We also explore a new multi-resolution UMX (MRU), which uses the same settings as the MRX model in Fig.~\ref{fig:xumx_mixed}, but features a single BLSTM stack and a single decoder and is trained separately for each source.
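For reference, since SI-SDR serves both as the evaluation metric and, for most of the models below, as the training loss, a minimal implementation following its definition in \cite{leroux2019sdr} is given below (our own illustrative code, not a library function):

\begin{verbatim}
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    # optimally scale the reference toward the estimate, then measure
    # the ratio of target energy to residual energy in dB
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    residual = estimate - target
    return 10 * np.log10(np.dot(target, target)
                         / (np.dot(residual, residual) + eps))
\end{verbatim}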
\noindent{\bf Training setup}: The Conv-TasNet, XUMX$_L$, XUMX$_\text{64,multi-dec}$, UMX, MRU, and MRX models all use SI-SDR \cite{luo2019convTasNet,leroux2019sdr} as the loss function, while MaskTCN uses the waveform-domain $L_1$ loss. % All models are trained on 9 s chunks, except MaskTCN, trained on 6 s chunks, and Conv-TasNet, trained on 4 s chunks at 16 kHz and 2 s chunks at 44.1 kHz; we found these values to lead to the best performance under our GPU memory constraints. All models % are trained for 300 epochs using Adam. The learning rate is initialized to $10^{-3}$, and halved if the validation loss does not improve for 3 epochs. % \vspace{-.2cm} \subsection{Results and Discussion} \label{sec:results} \vspace{-.1cm} \noindent{\bf Model comparisons}: Table~\ref{table:model_results} presents the SI-SDR of various models trained and tested on DnR, in addition to the no-processing condition (lower bound, using the mixture as estimate) and the oracle phase sensitive mask~\cite{erdogan2015psf} (upper bound). For each model, SI-SDR improvements are fairly consistent across source types, despite the differences in their relative levels in the mix, which can be seen in the ``No Processing'' SI-SDR. For both sampling rates, we observe that our proposed MRX model outperforms all single-resolution baselines on all source types. % This implies that the network learns to effectively combine information from different STFT resolutions to more accurately reconstruct the target sources. XUMX$_\text{64,multi-dec}$ further confirms this hypothesis: it performs nearly identically to XUMX$_\text{64}$, showing that the use of multiple decoders alone does not improve performance. We also observe that the single-source models (UMX, MRU) tend to perform comparably to the cross-source models (XUMX, MRX) for speech, but perform worse for music and SFX. We speculate that because music and SFX are quieter in the mix, % it is harder for the network to isolate them effectively without the support of the other sources, while louder sources (here, speech) do not benefit from joint estimation. \input{tables/results} \noindent{\bf Sampling rate comparisons}: Table \ref{table:sr_results} compares the average SI-SDR performance of the MRX model across sampling rates. In the 44.1 kHz column, we observe a reduction in SI-SDR for all source classes when upsampling the 16 kHz model output to 44.1 kHz (``Resampled''). This is to be expected, as all frequencies above 8 kHz are zero in the upsampled 44.1 kHz signal, but we see that these frequencies only contribute a small amount to the difference in SI-SDR scores, as there is comparatively little energy there. % In the 16 kHz column, we observe a minor performance gain in the ``Resampled'' row, where the 44.1 kHz model output is downsampled to 16 kHz, showing that the model can make use of information above 8 kHz to improve separation below 8 kHz. This result could be beneficial for transcription applications where ASR or sound event detection models are pre-trained at 16 kHz, but a front-end source separation model can obtain better-separated signals using 44.1 kHz input. \input{tables/sampling_rate} \input{tables/overlap} \noindent{\bf Overlap scenario comparisons}: We here compute metrics over 1 s segments to evaluate performance independently in regions where only certain source classes overlap. Metrics % such as SI-SDR or SDR \cite{vincent2006bss} are undefined for signals containing silent target and/or estimated sources.
This limitation is usually circumvented by disregarding the % problematic frames in the evaluation process \cite{stoter19} (i.e., frames containing one or more silent target and/or estimated sources). For example, in the MUSDB test set~\cite{rafii2017musdb}, it is reported that at least 45 minutes out of a total of 210 minutes of test data are systematically ignored for that reason~\cite{schulze2019weakly}. Although these regions with fewer active sources may be seen as less challenging, we believe it is important to also evaluate performance when not all sources are present, and we here consider all types of overlap. Table \ref{table:overlap_results} shows the overall results for each of the three sources in the seven possible overlapping scenarios. For regions where a source is not active in the ground truth, we report the predicted energy at silence (PES)~\cite{schulze2019weakly} to quantify the energy incorrectly assigned to a silent source. We note that speech has smaller PES values than music or SFX in Table~\ref{table:overlap_results}, even though it is the loudest source on average. SI-SDR values in the single-source cases are very high, especially for speech, showing that few artifacts are introduced. Among the two-source cases, we note that the SI-SDR improvement (SI-SDRi) is substantially lower for music and SFX (\{M,$\emptyset$,X\}), indicating that these two sets of varied sources are more difficult to separate from each other than from speech. \vspace{-.3cm} \section{Conclusion} \label{sec:conclusion} \vspace{-.2cm} In this paper, we formalized the task of three-stem soundtrack separation as the cocktail fork problem, and introduced DnR, a high-quality dataset built on top of three well-established sound collections: LibriSpeech (speech), FSD50K (SFX), and FMA (music). % We benchmarked several source separation algorithms on DnR and showed that our proposed multi-resolution model performed best. In the future, we plan to combine the separation models developed in this paper with speech recognition and sound classification systems for automatic caption generation of speech and non-speech sounds, and to explore remixing strategies that minimize perceptual artifacts~\cite{torcoli2021controlling}. \bibliographystyle{IEEEtran}
\section{Introduction} Cluster algebras and quiver mutation were introduced by Fomin and Zelevinsky \cite{FZ}, and (additive) categorification of such structures, often in terms of triangulated categories, has successfully contributed to the development of a rich theory, see e.g.\ the surveys by Keller~\cite{Kel3,Kel4} or Reiten~\cite{R}. Derksen-Weyman-Zelevinsky \cite{DWZ} introduced quivers with potential (QP) and the corresponding Jacobian algebras, and studied mutation of quivers with potential. Keller-Yang \cite{KY} studied the categorification of such mutations via Ginzburg dg algebras \cite{G}. One application of their categorification is to motivic Donaldson-Thomas invariants, via quantum cluster algebras \cite{K11}. Additive categorification is deeply related to classical tilting theory \cite{BMRRT,A}. Algebras related by tilting are derived equivalent, while (Jacobian) algebras related by mutation of quivers with potential are in general not. However, Keller-Yang constructed in \cite{KY} an equivalence between the derived category $\D(\Gamma(Q,W))$ of the Ginzburg dg algebra $\Gamma(Q,W)$ of a QP $(Q,W)$ and the derived category $\D(\Gamma(\widetilde{Q},\widetilde{W}))$ of a QP $(\widetilde{Q},\widetilde{W})$ obtained from $(Q,W)$ by a single mutation. The equivalence also restricts to the subcategories $\D_{fd}(\Gamma(Q,W))$ and $\D_{fd}(\Gamma(\widetilde{Q},\widetilde{W}))$ of dg modules with finite-dimensional homology. Note that these subcategories are 3-Calabi-Yau, by \cite{K8}. However, in general there is no canonical choice for such equivalences, basically because mutation of QPs is only well-defined up to a non-canonical {\em choice of decomposition} of a QP into a {\em trivial part} and a {\em reduced part} (see Section~\ref{subsec:KY}). We consider a special class of quivers with potential, namely those arising from (unpunctured) marked surfaces $\surf$. This class of examples was first introduced in cluster theory by Fomin-Shapiro-Thurston \cite{FST} and further studied by many authors, including Labardini-Fragoso \cite{LF}, who gave the interpretation in terms of the corresponding QPs. When studying the 3-Calabi-Yau categories and stability conditions, it is natural to decorate the marked surface $\surf$ with a set $\Tri$ of decorating points (which are zeroes of the corresponding quadratic differentials, cf. \cite{BS,QQ}). More details about motivation and background can be found in \cite{QQ}. Building on the prequels \cite{QQ,QZ2}, we prove a class of intrinsic derived equivalences that are compatible with Keller-Yang's and are stronger in this special case. More precisely, this class of equivalences implies the following main result. \begin{thmx}[see Theorem~\ref{thm:comp}]\label{thma} There is a unique canonical 3-Calabi-Yau category $\D_{fd}(\surfo)$ associated to a decorated marked surface $\surfo$. \end{thmx} Given a Ginzburg dg algebra $\Gamma$, one can consider the {\em spherical twist group} $\ST(\Gamma)$ of $\D_{fd}(\Gamma)$ in $\Aut\D_{fd}(\Gamma)$. In particular, for a decorated marked surface $\surfo$, we study the spherical twist group $\ST(\surfo)$ and the principal component $\Stap(\surfo)$ of the space of stability conditions on $\D_{fd}(\surfo)$ (see Section 5 for details). We then obtain the following, as an application of our main theorem. \begin{thmx}[Theorem~\ref{thmbb}]\label{thmb} The spherical twist group $\ST(\surfo)$ acts faithfully on the principal component $\Stap(\surfo)$ of the space of stability conditions on $\D_{fd}(\surfo)$.
\end{thmx} We give preliminary results and background in Section 2, describe explicitly Keller-Yang's equivalence on the finite-dimensional derived category in Section 3, prove our main result, Theorem~\ref{thma}, in Section 4, and give the background for and the proof of Theorem~\ref{thmb} in Section 5. Throughout the paper, a composition $fg$ of morphisms $f$ and $g$ means first $g$ and then $f$, while a composition $ab$ of arrows $a$ and $b$ means first $a$ and then $b$. Any (dg) module is a right (dg) module. \subsection*{Acknowledgements} The second author would like to thank A.~King and J.~Grant for interesting discussions. This work was supported by the Research Council of Norway, grant No.~NFR:231000. \section{Preliminaries}\label{sec:bg} \subsection{Decorated marked surfaces} Throughout the paper, $\surf$ denotes a \emph{marked surface} without punctures in the sense of Fomin-Shapiro-Thurston \cite{FST}. That is, $\surf$ is a connected compact surface with a fixed orientation and with a finite set $\M$ of marked points on the (non-empty) boundary $\partial\surf$ having the property that each connected component of $\partial\surf$ contains at least one marked point. Up to homeomorphism, $\surf$ is determined by the following data: \begin{itemize} \item the genus $g$; \item the number $|\partial\surf|$ of boundary components; \item the integer partition of $|\M|$ into $|\partial\surf|$ parts describing the number of marked points on each boundary component. \end{itemize} We require that \begin{gather}\label{eq:n} n=6g+3|\partial\surf|+|\M|-6 \end{gather} is at least one. A triangulation of $\surf$ is a maximal collection of non-crossing and pairwise non-homotopic simple curves on $\surf$, whose endpoints are in $\M$. It is well known that any triangulation of $\surf$ consists of $n$ simple curves (\cite[Proposition~2.10]{FST}) and divides $\surf$ into \begin{gather}\label{eq:Tri} \aleph=\frac{2n+|\M|}{3} \end{gather} triangles (\cite[(2.9)]{QQ}). \begin{definition}[{\cite[Definition~3.1]{QQ}}]\label{def:arcs} A \emph{decorated marked surface} $\surfo$ is a marked surface $\surf$ together with a fixed set $\Tri$ of $\aleph$ `decorating' points in the interior of $\surf$ (where $\aleph$ is defined in \eqref{eq:Tri}), which serve as punctures. Moreover, a (simple) \emph{open arc} in $\surfo$ is (the isotopy class of) a (simple) curve in $\surfo-\Tri$ that connects two marked points in $\M$, which is neither isotopic to a boundary segment nor to a point. \end{definition} A triangulation $\TT$ of $\surfo$ is a collection of simple open arcs that divides $\surfo$ into $\aleph$ triangles, each containing exactly one decorating point inside (cf. \cite[\S~3]{QQ}). We also have the notion of (forward/backward) flips of triangulations of $\surfo$, cf. Figure~\ref{fig:flips}. Denote by $\EG(\surfo)$ the exchange graph of triangulations of $\surfo$, that is, the oriented graph whose vertices are the triangulations and whose edges are the forward flips between them. From now on, we fix a connected component $\EGp(\surfo)$; whenever we speak of a triangulation of $\surfo$ below, we mean one in this component.
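For example, when $\surf$ is a disc ($g=0$, $|\partial\surf|=1$) with $|\M|=5$ marked points, i.e.\ a pentagon, formula \eqref{eq:n} gives $n=0+3+5-6=2$ and formula \eqref{eq:Tri} gives $\aleph=(2\cdot 2+5)/3=3$: any triangulation consists of two diagonals cutting the pentagon into three triangles, so $\surfo$ carries one decorating point in each of them.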
\begin{figure}[ht]\centering \begin{tikzpicture}[scale=.4] \path (-135:4) coordinate (v1) (-45:4) coordinate (v2) (45:4) coordinate (v3); \draw[NavyBlue,very thick](v1)to(v2)node{$\bullet$}to(v3); \path (-135:4) coordinate (v1) (45:4) coordinate (v2) (135:4) coordinate (v3); \draw[NavyBlue,very thick](v2)node{$\bullet$}to(v3)node{$\bullet$}to(v1)node{$\bullet$} (45:1)node[above]{$\gamma$}; \draw[>=stealth,NavyBlue,thick](-135:4)to(45:4); \draw[red,thick](135:1.333)node{\tiny{$\circ$}}(-45:1.333)node{\tiny{$\circ$}}; \end{tikzpicture} \begin{tikzpicture}[scale=1.2, rotate=180] \draw[blue,<-,>=stealth](3-.6,.7)to(3+.6,.7); \draw(3,.7)node[below,black]{\footnotesize{in $\surfo$}}; \draw[blue](3-.25,.5-.5)rectangle(3+.25,.5);\draw(3,1.5)node{}; \draw[blue,->,>=stealth](3-.25,.5-.5)to(3+.1,.5-.5); \draw[blue,->,>=stealth](3+.25,.5)to(3-.1,.5); \end{tikzpicture} \begin{tikzpicture}[scale=.4]; \path (-135:4) coordinate (v1) (-45:4) coordinate (v2) (45:4) coordinate (v3); \draw[NavyBlue,very thick](v1)to(v2)node{$\bullet$}to(v3) (45:1)node[above right]{$\gamma^\sharp$}; \path (-135:4) coordinate (v1) (45:4) coordinate (v2) (135:4) coordinate (v3); \draw[NavyBlue,very thick](v2)node{$\bullet$}to(v3)node{$\bullet$}to(v1)node{$\bullet$}; \draw[>=stealth,NavyBlue,thick](135:4).. controls +(-10:2) and +(45:3) ..(0,0) .. controls +(-135:3) and +(170:2) ..(-45:4); \draw[red,thick](135:1.333)node{\tiny{$\circ$}}(-45:1.333)node{\tiny{$\circ$}}; \end{tikzpicture} \caption{A forward flip} \label{fig:flips} \end{figure} \subsection{Quivers with potential and Ginzburg dg algebras}\label{sec:QP} Let $Q$ be a quiver without loops or oriented 2-cycles. A potential $W$ is a linear combination of cycles in $Q$. Denote by $Q_0$ the set of vertices of $Q$ and by $Q_1$ the set of arrows of $Q$. Denote by $s(a)$ (resp. $t(a)$) the source (resp. target) of an arrow $a$. Denote by $e_i$ the trivial path at a vertex $i\in Q_0$. Fix an algebraically closed field $\k$. All categories considered are $\k$-linear. Denote by $\Gamma=\Gamma(Q,W)$ the \emph{Ginzburg dg algebra (of degree 3)} associated to a quiver with potential $(Q,W)$, which is constructed as follows (cf. \cite{G,KY}): \begin{itemize} \item Let $\overline{Q}$ be the graded quiver whose vertex set is $Q_0$ and whose arrows are: \begin{itemize} \item the arrows in $Q_1$ with degree $0$; \item an arrow $a^*:j\to i$ with degree $-1$ for each arrow $a:i\to j$ in $Q_1$; \item a loop $e_i^*:i\to i$ with degree $-2$ for each vertex $i$ in $Q_0$. \end{itemize} The underlying graded algebra of $\Gamma$ is the completion of the graded path algebra $\k \overline{Q}$ in the category of graded vector spaces with respect to the ideal generated by the arrows of $\overline{Q}$. \item The differential $\diff$ of $\Gamma$ is the unique continuous linear endomorphism, homogeneous of degree $1$, which satisfies the Leibniz rule and takes the following values \begin{itemize} \item $\diff a = 0$ for any $a\in Q_1$, \item $\diff a^* = \partial_a W$ for any $a\in Q_1$ and \item $\diff \sum_{i\in Q_0} e_i^*=\sum_{a\in Q_1} \, [a,a^*]$. \end{itemize} \end{itemize} Denote by $\D(\Gamma)$ the derived category of $\Gamma$. We will focus on studying the finite-dimensional derived category $\D_{fd}(\Gamma)$ of $\Gamma$, which is the full subcategory of $\D(\Gamma)$ consisting of the dg $\Gamma$-modules whose total homology is finite dimensional. 
This category is 3-Calabi-Yau \cite{K8}, that is, for any pair of objects $L,M$ in $\D_{fd}(\Gamma)$, we have a natural isomorphism \begin{equation} \Hom_{\D_{fd}(\Gamma)}(L,M)\cong D\Hom_{\D_{fd}(\Gamma)}(M,L[3]) \end{equation} where $D=\Hom_\k(-,\k)$. Following \cite{FST,LF}, one can associate a quiver with potential $(Q_\TT,W_\TT)$ to each triangulation $\TT$ of $\surfo$ as follows: \begin{itemize} \item the vertices of $Q_\TT$ are indexed by the open arcs in $\TT$; \item each clockwise angle in a triangle of $\TT$ gives an arrow between the vertices indexed by the edges of the angle; \item each triangle in $\TT$ with three edges being open arcs gives a 3-cycle (up to cyclic permutation), and the potential $W_\TT$ is the sum of such 3-cycles. \end{itemize} Then we have the corresponding Ginzburg dg algebra $\Gamma_\TT=\Gamma(Q_\TT,W_\TT)$ and the 3-Calabi-Yau category $\D_{fd}(\Gamma_\TT)$. \subsection{Mutations and Keller-Yang's equivalences}\label{subsec:KY} Let $(Q,W)$ be a quiver with potential. For a vertex $k$ in $Q$, the \emph{pre-mutation} $\widetilde{\mu}_k(Q,W)=(\widetilde{Q},\widetilde{W})$ at $k$ is a new quiver with potential, defined as follows. The new quiver $\widetilde{Q}$ is obtained from $Q$ by the following two steps. \begin{enumerate} \item[Step 1] For any composition $ab$ of two arrows with $t(a)=s(b)=k$, add a new arrow $[ab]$ from $s(a)$ to $t(b)$. \item[Step 2] Replace each arrow $a$ with $s(a)=k$ or $t(a)=k$ by an arrow $a'$ with $s(a')=t(a)$ and $t(a')=s(a)$. \end{enumerate} The new potential is \[\widetilde{W}=\widetilde{W}_1+\widetilde{W}_2,\] where $\widetilde{W}_1$ is obtained from $W$ by replacing each composition $ab$ of arrows with $t(a)=s(b)=k$ by $[ab]$, and $\widetilde{W}_2$ is the sum of the 3-cycles of the form $[ab]b'a'$. Denote by $\widetilde{\Gamma}=\Gamma(\widetilde{Q},\widetilde{W})$ the corresponding Ginzburg dg algebra. Let $P_i=e_i\Gamma$ be the indecomposable direct summand of $\Gamma$ corresponding to a vertex $i$. Denote by $P_i^?$ a copy of $P_i$, where $?$ can be an arrow or a pair of arrows.
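To illustrate the pre-mutation, let $Q$ be the 3-cycle quiver $1\xrightarrow{a}2\xrightarrow{b}3\xrightarrow{c}1$ with potential $W=abc$. For $k=2$, Step 1 adds the arrow $[ab]\colon 1\to 3$ and Step 2 replaces $a$ and $b$ by $a'\colon 2\to 1$ and $b'\colon 3\to 2$, so that \[\widetilde{W}=\widetilde{W}_1+\widetilde{W}_2=[ab]c+[ab]b'a'=[ab](c+b'a').\] The 2-cycle $[ab]c$ disappears after passing to the reduced part (via the right-equivalence $c\mapsto c-b'a'$, see the discussion following Theorem~\ref{thm:KY} below), leaving the linear quiver $3\xrightarrow{b'}2\xrightarrow{a'}1$ with zero potential.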
The \emph{forward mutation} of $\Gamma$ at $P_k$ in $\per\Gamma$ is $\mu^\sharp_k(\Gamma)=\bigoplus_{i\in Q_0} \widetilde{P}_i$, where $\widetilde{P}_i=P_i$ if $i\neq k$, and $\widetilde{P}_k$ has the underlying graded space \[|\widetilde{P}_k|=P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)}\] with the differential \[d_{\widetilde{P}_k}=\begin{pmatrix} d_{P_k[1]}&0\\ \rho & d_{P^\rho_{s(\rho)}} \end{pmatrix}.\] \begin{construction}[\cite{KY}]\label{con:KY} There is a map of dg algebras \[f_{?}:\widetilde{\Gamma}\to\Rhom_\Gamma(\mu^\sharp_k(\Gamma),\mu^\sharp_k(\Gamma))\] constructed as follows, where $\xrightarrow{a}$ means the left multiplication by $a$: \begin{enumerate} \item for an arrow $\alpha\in Q_1$ with $t(\alpha)=k$, \begin{itemize} \item $f_{\alpha'}:P_{s(\alpha)}\to \widetilde{P}_k$ of degree 0 is given by \[P_{s(\alpha)}\xrightarrow{\begin{pmatrix} 0 \\ \delta_{\alpha,\rho} \end{pmatrix}} P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)} \] where $\delta_{\alpha,\rho}=1$ if $\alpha=\rho$ and $\delta_{\alpha,\rho}=0$ otherwise; \item $f_{\alpha'^\ast}:\widetilde{P}_k\to P_{s(\alpha)}$ of degree $-1$ is given by \[P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)}\xrightarrow{\begin{pmatrix} -\alpha e_k^\ast & -\alpha\rho^\ast \end{pmatrix}} P_{s(\alpha)}\] \end{itemize} \item for an arrow $\beta\in Q_1$ with $s(\beta)=k$, \begin{itemize} \item $f_{\beta'}:\widetilde{P}_k\to P_{t(\beta)}$ of degree 0 is given by \[P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)}\xrightarrow{\begin{pmatrix} \beta^\ast & \partial_{\rho\beta}W \end{pmatrix}}P_{t(\beta)}\] \item $f_{\beta'^\ast}:P_{t(\beta)}\to \widetilde{P}_k$ of degree $-1$ is given by \[P_{t(\beta)}\xrightarrow{\begin{pmatrix} -\beta \\ 0 \end{pmatrix}}P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)}\] \end{itemize} \item for a pair of arrows $\alpha,\beta\in Q_1$ with $t(\alpha)=k=s(\beta)$, \[f_{[\alpha\beta]}:P_{t(\beta)}\xrightarrow{-\alpha\beta}P_{s(\alpha)}\] and \[f_{[\alpha\beta]^\ast}:P_{s(\alpha)}\xrightarrow{0}P_{t(\beta)}\] \item for an arrow $\gamma$ in $Q_1$ not incident to $k$, $f_{\gamma}:P_{t(\gamma)}\xrightarrow{\gamma} P_{s(\gamma)}$ and $f_{\gamma^\ast}:P_{s(\gamma)}\xrightarrow{\gamma^\ast} P_{t(\gamma)}$; \item for a vertex $i\in Q_0$ different from $k$, $f_{e'^\ast_i}: P_i\xrightarrow{e^\ast_i} P_i$; \item $f_{e'^\ast_k}:\widetilde{P}_k\to \widetilde{P}_k$ of degree $-2$ is given by \[P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)} \xrightarrow{\begin{pmatrix} -e^\ast_k & -\rho^\ast \\ 0 & 0 \end{pmatrix}} P_k[1]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=k}P^\rho_{s(\rho)} \] \end{enumerate} \end{construction} The main result in \cite{KY} is the following derived equivalence. \begin{theorem}[{\cite[Proposition~3.5 and Theorem~3.2]{KY}}]\label{thm:KY} The map $f_?$ is a homomorphism of dg algebras. In this way, $\mu^\sharp_k(\Gamma)$ becomes a left dg $\widetilde{\Gamma}$-module. Moreover, the $\widetilde{\Gamma}$-$\Gamma$-bimodule $\mu^\sharp_k(\Gamma)$ induces a triangle equivalence $F=?\overset{L}{\otimes}_{\widetilde{\Gamma}}\mu^\sharp_k(\Gamma): \D(\widetilde{\Gamma})\to\D(\Gamma)$, with inverse $\mathcal{H}om_{\Gamma}(\mu^\sharp_k(\Gamma),?):\D(\Gamma)\to\D(\widetilde{\Gamma})$. \end{theorem} Introduced in \cite{DWZ}, the \emph{mutation} $\mu_k(Q,W)$ of $(Q,W)$ at $k$ is obtained from $(\widetilde{Q},\widetilde{W})$ by taking its reduced part $(\widetilde{Q}_{\mbox{red}},\widetilde{W}_{\mbox{red}})$.
That is, there is a right-equivalence between $(\widetilde{Q},\widetilde{W})$ and the direct sum of quivers with potential $(\widetilde{Q}_{\mbox{triv}},\widetilde{W}_{\mbox{triv}})\oplus(\widetilde{Q}_{\mbox{red}},\widetilde{W}_{\mbox{red}})$ such that $(\widetilde{Q}_{\mbox{triv}},\widetilde{W}_{\mbox{triv}})$ is trivial (in the sense that its Jacobian algebra is spanned by the trivial paths) and $(\widetilde{Q}_{\mbox{red}},\widetilde{W}_{\mbox{red}})$ is reduced (in the sense that $\widetilde{W}_{\mbox{red}}$ contains no 2-cycles). Here, the direct sum of two quivers with potential is a quiver with potential whose quiver is the union of the arrows in the two quivers and whose potential is the sum of the two potentials. A right-equivalence between two quivers with potential is an isomorphism between the (complete) path algebras of the quivers, fixing the trivial paths, which sends the first potential to the second. In general, the choice of such a right-equivalence is not unique. However, for the quiver with potential $(Q_\TT,W_\TT)$ associated to a triangulation $\TT$ of a decorated surface, the right-equivalence can be chosen to be the identity, which is a canonical choice. This is because any 2-cycle in the potential $\widetilde{W}$ of the pre-mutation $(\widetilde{Q},\widetilde{W})=\widetilde{\mu}_k(Q_\TT,W_\TT)$ shares no arrows with any other term in $\widetilde{W}$ (see Case 1 in the proof of \cite[Theorem~30]{LF}). So one can remove all of the 2-cycles from $\widetilde{W}$, and the arrows in these 2-cycles from $\widetilde{Q}$, to get the reduced part. This means that the mutation $\mu_k(Q_\TT,W_\TT)$ is a direct summand of the pre-mutation $\widetilde{\mu}_k(Q_\TT,W_\TT)$. Then there is a canonical quasi-isomorphism between $\widetilde{\Gamma}$ and $\Gamma(\mu_k(Q_\TT,W_\TT))$. Moreover, by \cite[Theorem~30]{LF}, $\mu_k(Q_\TT,W_\TT)$ is the same as $(Q_{\TT'},W_{\TT'})$, where $\TT'=f^\sharp_k(\TT)$ is the forward flip of $\TT$ w.r.t. $k$. Then we have a canonical quasi-isomorphism between $\widetilde{\Gamma}$ and $\Gamma_{\TT'}$, which makes $\mu^\sharp_k(\Gamma_\TT)$ a $\Gamma_{\TT'}$-$\Gamma_{\TT}$-bimodule. By Theorem~\ref{thm:KY}, we have the following notion. \begin{definition}[Keller-Yang's equivalence] Using the above notation, we call the triangle equivalence \[\kappa_{\TT'}^{\TT}:=?\overset{L}{\otimes}_{\Gamma_{\TT'}}\mu^\sharp_k(\Gamma_\TT):\D(\Gamma_{\TT'})\to \D(\Gamma_\TT)\] the \emph{Keller-Yang equivalence} from $\TT$ to $\TT'$. \end{definition} \section{Keller-Yang's equivalences on finite-dimensional derived categories}\label{sec:KYsurf} \subsection{Hearts and spherical objects}\label{sec:bi} A \emph{bounded t-structure} \cite{BBD} on a triangulated category $\hua{D}$ is a full subcategory $\hua{P} \subset \hua{D}$ with $\hua{P}[1] \subset \hua{P}$ such that \begin{itemize} \item if one defines \[ \hua{P}^{\perp}=\{ G\in\hua{D}\mid \Hom_{\hua{D}}(F,G)=0, \forall F\in\hua{P} \}, \] then, for every object $E\in\hua{D}$, there is a (unique) triangle $F \to E \to G\to F[1]$ in $\hua{D}$ with $F\in\hua{P}$ and $G\in\hua{P}^{\perp}$; \item for every object $M$, the shifts $M[k]$ are in $\hua{P}$ for $k\gg0$ and in $\hua{P}^{\perp}$ for $k\ll0$. \end{itemize} The \emph{heart} of a bounded t-structure $\hua{P}$ is the full subcategory \[ \h= \hua{P}^\perp[1]\cap\hua{P} \] and any bounded t-structure is determined by its heart. Note that any heart of a triangulated category is abelian \cite{BBD}.
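For example, $\D_{fd}(\Gamma)$ carries a canonical bounded t-structure whose heart $\h_\Gamma$ is equivalent to the category of finite-dimensional (nilpotent) modules over the Jacobian algebra $H^0(\Gamma)$, with simple objects the vertex simples $S_i$, $i\in Q_0$ (cf.~\cite{KY}); this is the canonical heart referred to below.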
Let $(\hua{T},\hua{F})$ be a torsion pair in a heart $\h$, that is, $\Hom_{\h}(\hua{T},\hua{F})=0$ and for any object $X\in\h$, there exists a short exact sequence $0\to T\to X\to F\to 0$ with $T\in\hua{T}$ and $F\in\hua{F}$. Then there are hearts $\h^\sharp$ and $\h^\flat$, called the forward/backward tiltings of $\h$ with respect to this torsion pair (in the sense of Happel-Reiten-Smal\o~\cite{HRS}). In particular, the forward (resp. backward) tilting is simple if $\hua{F}$ (resp. $\hua{T}$) is generated by a single rigid simple object in $\h$. See \cite[\S~3]{KQ} for details. The \emph{exchange graph} $\EG(\D)$ of a triangulated category $\hua{D}$ is the oriented graph whose vertices are all hearts in $\hua{D}$ and whose edges correspond to simple forward tiltings between them. Let $\TT$ be a triangulation in $\EGp(\surfo)$. Denote by $\EGp(\Gamma_\TT)$ the principal component of the exchange graph $\EG(\D_{fd}(\Gamma_\TT))$, that is, the connected component containing the canonical heart $\h_\TT$. Denote by \begin{gather}\label{eq:sph} \Sph(\Gamma_\TT)=\bigcup_{\h\in\EGp(\Gamma_\TT)}\Sim\h, \end{gather} the set of reachable spherical objects (cf. Definition~\ref{def:sph}), where $\Sim\h$ is the set of simple objects in $\h$. By \cite[Proposition~3.2 and (3.3)]{Q}, there is an isomorphism of oriented graphs \begin{equation}\label{eq:cong} \EGp(\surfo)\cong\EGp(\Gamma_\TT) \end{equation} which sends $\TT$ to the canonical heart $\h_\TT$. We denote by $\h_\TT^{\TT'}$ the heart corresponding to $\TT'\in\EGp(\surfo)$. \subsection{Koszul duality}\label{subsec:koszul} Let $\Gamma$ be the Ginzburg dg algebra associated to a quiver with potential $(Q,W)$. Let $\h$ be a heart obtained from the canonical heart by a sequence of simple tiltings. Denote by $S$ the direct sum of the non-isomorphic simples in $\h$. Consider the dg endomorphism algebra \begin{gather}\label{eq:REnd} \ee(S)=\Rhom_{\Gamma}(S, S). \end{gather} Since $S$ generates $\D_{fd}(\Gamma)$ (by taking extensions, shifts in both directions and direct summands), by \cite{Kel} (cf. also \cite[Section~8]{Kel2}), we have the following triangle equivalence: \begin{gather}\label{eq:DE} \xymatrix@C=4pc{ \D_{fd}(\Gamma) \ar[rr]^{ \Rhom_{\Gamma}(S, ?) } && \per\ee(S). }\end{gather} The homology of $\ee(S)$ is the Ext-algebra \[\E(\h):=\Ext^{\mathbb{Z}}_{\D_{fd}(\Gamma)}(S,S)=\bigoplus_{n\in\mathbb{Z}}\Hom_{\D_{fd}(\Gamma)}(S,S[n]).\] In general, one needs to consider a certain $A_\infty$-structure on $\E(\h)$ (which is induced from the potential $W$, see \cite[Appendix]{K8}) such that it is derived equivalent to $\ee(S)$. However, in the surface case, only the ordinary multiplication in the induced $A_\infty$-structure is non-trivial (see \cite[Lemma~A.2]{QZ2}). Hence, for any $\TT,\TT'\in\EGp(\surfo)$, there is a triangle equivalence \begin{gather}\label{eq:DE2} \xymatrix@C=4pc{ \D_{fd}(\Gamma_\TT) \ar[rr]^{ \Ext^\mathbb{Z}_{\Gamma_\TT}(S_\TT^{\TT'}, ?)\qquad } && \per\E(\h_\TT^{\TT'}), }\end{gather} where $S_\TT^{\TT'}$ is the direct sum of the non-isomorphic simples in the heart $\h_\TT^{\TT'}$. \subsection{Keller-Yang's equivalences on simples} Let $\Gamma$ be the Ginzburg dg algebra associated to a quiver with potential $(Q,W)$. Denote by $S_i$ the simple $\Gamma$-module corresponding to a vertex $i$ of $Q$.
There is a short exact sequence of dg $\Gamma$-modules \[ \xymatrix{0\ar[r] & \ker(\zeta_i)\ar[r] & P_i\ar[r]^{\zeta_i} & S_i\ar[r] & 0,} \] where $\zeta_i$ is the canonical projection from $P_i$ to $S_i$ and \[ \ker(\zeta_i)=\bigoplus_{\alpha:i\to j\in \overline{Q}_1} \alpha P_j \] with the induced differential. Therefore, $S_i$ has a cofibrant resolution (see \cite[Section~2.12]{KY} for the definition and properties of this notion) $\mathbf{p}S_i$ with underlying graded vector space \begin{equation}\label{eq:under} |\mathbf{p}S_i|=P_i[3]\oplus\bigoplus_{\rho\in Q_1:t(\rho)=i}P^\rho_{s(\rho)}[2]\oplus\bigoplus_{\tau\in Q_1:s(\tau)=i}P^\tau_{t(\tau)}[1]\oplus P_i \end{equation} and with the differential \begin{equation}\label{eq:diff} d_{\mathbf{p}S_i}=\left(\begin{smallmatrix} d_{P_i[3]} & 0 & 0 & 0 \\ \rho & d_{P_{s(\rho)}[2]} & 0 & 0\\ -\tau^\ast & -\partial_{\rho\tau}W&d_{P_{t(\tau)}[1]}&0\\ e_i^\ast& \rho^\ast & \tau & d_{P_i} \end{smallmatrix}\right) \end{equation} Note that any morphism from $S_i$ to $S_j$ in $\D_{fd}(\Gamma)$ is induced by a homomorphism of dg $\Gamma$-modules from $\mathbf{p}S_i$ to $S_j$. Hence each arrow in $\overline{Q}_1$ starting at $i$, or the trivial path $e_i$ at $i$, induces a morphism $\pi_\alpha$ in $\D_{fd}(\Gamma)$ starting at $S_i$ as follows: \begin{itemize} \item $\pi_{e_i}:S_i\to S_i$ is the identity, induced by the projection from $P_i$ to $S_i$; \item $\pi_\tau: S_i\to S_j[1]$ for $\tau:i\to j\in Q_1$ is induced by the projection from $P_{t(\tau)}[1]$ to $S_j[1]$; \item $\pi_{\rho^\ast}:S_i\to S_j[2]$ for $\rho:j\to i\in Q_1$ is induced by the projection from $P_{s(\rho)}[2]$ to $S_j[2]$; \item $\pi_{e^\ast_i}:S_i\to S_i[3]$ is induced by the projection from $P_i[3]$ to $S_i[3]$. \end{itemize} The morphisms $\pi_?$ above can be extended naturally to elements in $\Ext^{\mathbb{Z}}_{\D_{fd}(\Gamma)}(S,S)$. Moreover, they form a basis. \begin{proposition}[{\cite[Lemma~2.15 and its proof]{KY}}]\label{prop:basis} The morphisms $\pi_\alpha$, where $\alpha$ is a trivial path or an arrow in $\overline{Q}$, form a basis of $\E(\h_\Gamma)$, where $\h_\Gamma$ is the canonical heart. \end{proposition} Let $(\widetilde{Q},\widetilde{W})$ be the pre-mutation of $(Q,W)$ at a vertex $k$ and $\widetilde{\Gamma}$ the corresponding Ginzburg dg algebra. Let $F:\D(\widetilde{\Gamma})\to \D(\Gamma)$ be the triangle equivalence given in Theorem~\ref{thm:KY}. Denote by $\widetilde{S}_i$ the simple $\widetilde{\Gamma}$-module corresponding to $i\in \widetilde{Q}_0=Q_0$. \begin{construction}\label{con:sharp} We define objects $S_i^\sharp$, $i\in Q_0$, in $\D(\Gamma)$ as follows. For $i\neq k$, define $S^\sharp_i$ by the triangle \[S_i[-1]\xrightarrow{\pi_\rho[-1]}\bigoplus\limits_{\begin{smallmatrix} \rho\in Q_1\\ s(\rho)=i\\t(\rho)=k \end{smallmatrix}}S^{\rho}_{t(\rho)}\to S_i^\sharp\to S_i\] where $S^\rho_k$ is a copy of $S_k$; for $i=k$, define $S^\sharp_k$ to be $S_k[1]$. Note that for a vertex $j\in Q_0$, if there is no arrow from $j$ to $k$ then $S^\sharp_j=S_j$. \end{construction} By replacing the $P_j$'s by $\widetilde{P}_j$'s in \eqref{eq:under} and \eqref{eq:diff}, we get the cofibrant resolution $\mathbf{p}\widetilde{S}_i$ of $\widetilde{S}_i$. For $i\neq k$, by Construction~\ref{con:KY} (cf.
also the proof of \cite[Lemma 3.12]{KY}), we have that $F(\mathbf{p}\widetilde{S}_i)$ has the underlying graded space \[\begin{array}{cccccccccc} &P_i[3] \oplus\bigoplus\limits_{\begin{smallmatrix} \alpha\in Q_1\\s(\alpha)\neq k\\t(\alpha)=i \end{smallmatrix}}P^\alpha_{s(\alpha)}[2] \oplus \bigoplus\limits_{\begin{smallmatrix} a,b\in Q_1\\t(a)=s(b)=k\\t(b)=i \end{smallmatrix}}P^{a,b}_{s(a)}[2] \oplus\bigoplus\limits_{\begin{smallmatrix} c\in Q_1\\ s(c)=i\\ t(c)=k \end{smallmatrix}}P^c_k[3] \oplus\bigoplus\limits_{\begin{smallmatrix} p,q\in Q_1\\ s(p)=i\\t(p)=t(q)=k \end{smallmatrix}}P^{p,q}_{s(q)}[2]\\ \oplus&\bigoplus\limits_{\begin{smallmatrix} \beta\in Q_1\\s(\beta)=i\\t(\beta)\neq k \end{smallmatrix}}P^\beta_{t(\beta)}[1] \oplus\bigoplus\limits_{\begin{smallmatrix} l,g\in Q_1\\ s(l)=i\\ t(l)=s(g)=k \end{smallmatrix}}P^{l,g}_{t(g)}[1] \oplus\bigoplus\limits_{\begin{smallmatrix} h\in Q_1\\ s(h)=k\\ t(h)=i \end{smallmatrix}}P^h_k[2] \oplus\bigoplus\limits_{\begin{smallmatrix} x,y\in Q_1\\ t(x)=s(y)=k\\ t(y)=i \end{smallmatrix}}P^{x,y}_{s(x)}[1] \oplus P_i \end{array} \] with the differential \[ \left(\begin{smallmatrix} d_{P_i[3]}&\\ \alpha&d_{P^\alpha_{s(\alpha)}[2]}\\ ab&0&d_{P^{a,b}_{s(a)}[2]}\\ 0&0&0&d_{P^c_k[3]}\\ \delta_{p,q}&0&0&\delta_{c,p}q&d_{P^{p,q}_{s(q)}[2]}\\ -\beta^\ast&-\partial_{\alpha\beta}W&-\partial_{ab\beta}W&0&0&d^{\beta}_{P_{t(\beta)}[1]}\\ 0&-\partial_{\alpha lg}W&0&-\delta_{c,l}g^\ast&-\delta_{p,l}\partial_{qg}W&0&d_{P^{l,g}_{t(g)}[1]}\\ -h&0&0&0&0&0&0&d_{P^h_k[2]}\\ 0&0&-\delta_{a,x}\delta_{b,y}&0&0&0&0&-\delta_{h,y}x&d_{P^{x,y}_{s(x)}[1]}\\ e_i^\ast&\alpha^\ast&0&-c e_k^\ast&-pq^\ast&\beta&lg&-h^\ast&-\partial_{xy}W&d_{P_i} \end{smallmatrix}\right)\] On the other hand, as a dg $\Gamma$-module, $S^\sharp_i$ has a cofibrant resolution $\mathbf{p}S^\sharp_i$ whose underlying graded space is \[\begin{array}{rl} &P_i[3] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{\alpha}\in Q_1\\s(\widetilde{\alpha})\neq k\\t(\widetilde{\alpha})=i \end{smallmatrix}}P^{\widetilde{\alpha}}_{s(\widetilde{\alpha})}[2] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{h}\in Q_1\\ s(\widetilde{h})=k\\ t(\widetilde{h})=i \end{smallmatrix}}P^{\widetilde{h}}_{k}[2] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{\beta}\in Q_1\\s(\widetilde{\beta})=i\\t(\widetilde{\beta})\neq k \end{smallmatrix}}P^{\widetilde{\beta}}_{t(\widetilde{\beta})}[1] \oplus\bigoplus\limits_{\begin{smallmatrix} \sigma\in Q_1\\s(\sigma)=i\\t(\sigma)=k \end{smallmatrix}}P^\sigma_{k}[1] \oplus P_i\\ \oplus&\bigoplus\limits_{\begin{smallmatrix} \widetilde{c}\in Q_1\\ s(\widetilde{c})=i\\ t(\widetilde{c})=k \end{smallmatrix}}P^{\widetilde{c}}_k[3] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{p},\widetilde{q}\in Q_1\\ s(\widetilde{p})=i\\t(\widetilde{p})=t(\widetilde{q})=k \end{smallmatrix}}P^{\widetilde{p},\widetilde{q}}_{s(\widetilde{q})}[2] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{l},\widetilde{g}\in Q_1\\ s(\widetilde{l})=i\\ t(\widetilde{l})=s(\widetilde{g})=k \end{smallmatrix}}P^{\widetilde{l},\widetilde{g}}_{t(\widetilde{g})}[1] \oplus\bigoplus\limits_{\begin{smallmatrix} \tau\in Q_1\\ s(\tau)=i\\ t(\tau)=k \end{smallmatrix}}P^\tau_{k} \end{array}\] with the differential \[\left(\begin{smallmatrix} d_{P_i[3]}\\ \widetilde{\alpha}&d_{P^{\widetilde{\alpha}}_{s(\widetilde{\alpha})}[2]}\\ \widetilde{h}&0&d_{P^{\widetilde{h}}_{k}[2]}\\ 
-\widetilde{\beta}^\ast&-\partial_{\widetilde{\alpha}\widetilde{\beta}}W&-\partial_{\widetilde{h}\widetilde{\beta}}W&d_{P^{\widetilde{\beta}}_{t(\widetilde{\beta})}[1]}\\ -\sigma^\ast&-\partial_{\widetilde{\alpha}\sigma}W&0&0&d_{P^\sigma_{k}[1]}\\ e_i^\ast&\widetilde{\alpha}^\ast&\widetilde{h}^\ast&\widetilde{\beta}&\sigma&d_{P_i}\\ 0&0&0&0&0&0&d_{P_k^{\widetilde{c}}[3]}\\ \delta_{\widetilde{p},\widetilde{q}}&0&0&0&0&0&\delta_{\widetilde{c},\widetilde{p}}\widetilde{q}&d_{P^{\widetilde{p},\widetilde{q}}_{s(\widetilde{q})}[2]}\\ 0&\partial_{\widetilde{\alpha}\widetilde{l}\widetilde{g}}W&0&0&0&0&-\delta_{\widetilde{c},\widetilde{l}}\widetilde{g}^\ast&-\delta_{\widetilde{p},\widetilde{l}}\partial_{\widetilde{q}\widetilde{g}}W&d_{P^{\widetilde{l},\widetilde{g}}_{t(\widetilde{g})}[1]}\\ 0&0&0&0&\delta_{\sigma,\tau}&0&\delta_{\widetilde{c},\tau}e_k^\ast&\delta_{\widetilde{p},\tau}\widetilde{q}^\ast&\delta_{\widetilde{l},\tau}\widetilde{g}&d_{P_k^\tau} \end{smallmatrix}\right) \] We have a homomorphism of dg $\Gamma$-modules $\varphi_i:F(\mathbf{p}\widetilde{S}_i)\to \mathbf{p}S_i^\sharp$ as follows: \[ \varphi_i=\left(\begin{smallmatrix} 1&0&0&0&0&0&0&0&0&0\\ 0&\delta_{\alpha,\widetilde{\alpha}}&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&-\delta_{h,\widetilde{h}}&0&0\\ 0&0&0&0&0&\delta_{\beta,\widetilde{\beta}}&0&0&-\partial_{xy\widetilde{\beta}}W&0\\ 0&0&0&-\delta_{c,\sigma}e_k^\ast&-\delta_{p,\sigma}q^\ast&0&\delta_{l,\sigma}g&0&0&0\\ 0&0&0&0&0&0&0&0&0&1\\ 0&0&0&\delta_{c,\widetilde{c}}&0&0&0&0&0&0\\ 0&0&0&0&\delta_{p,\widetilde{p}}\delta_{q,\widetilde{q}}&0&0&0&0&0\\ 0&0&0&0&0&0&-\delta_{l,\widetilde{l}}\delta_{g,\widetilde{g}}&0&0&0\\ 0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right) \] Similarly, for $i=k$, $F(\mathbf{p}\widetilde{S}_k)$ has the underlying graded space \[P_k[4] \oplus\bigoplus\limits_{\begin{smallmatrix} \rho\in Q_1\\t(\rho)=k \end{smallmatrix}}P_{s(\rho)}^\rho[3] \oplus\bigoplus\limits_{\begin{smallmatrix} \gamma\in Q_1\\s(\gamma)=k \end{smallmatrix}}P_{t(\gamma)}^\gamma[2] \oplus\bigoplus\limits_{\begin{smallmatrix} w\in Q_1\\t(w)=k \end{smallmatrix}}P^w_{s(w)}[1] \oplus P_k[1] \oplus\bigoplus\limits_{\begin{smallmatrix} z\in Q_1\\ t(z)=k \end{smallmatrix}}P_{s(z)}^z \] with the differential \[\left(\begin{smallmatrix} d_{P_k[4]}\\ -\rho&d_{P^\rho_{s(\rho)}[3]}\\ -\gamma^\ast&-\partial_{\rho\gamma}W&d_{P^\gamma_{t(\gamma)}[2]}\\ w e_k^\ast&w\rho^\ast&-w\gamma&d_{P^w_{s(w)}[1]}\\ -e_k^\ast&-\rho^\ast&\gamma&0&d_{P_k[1]}\\ 0&0&0&\delta_{w,z}&z&d_{P^z_{s(z)}} \end{smallmatrix}\right) \] Then there is a homomorphism of dg $\Gamma$-modules $\varphi_k:F(\mathbf{p}\widetilde{S}_k)\to \mathbf{p}S^\sharp_k$, where $\mathbf{p}S^\sharp_k=\mathbf{p}S_k[1]$ has the underlying graded space \[P_k[4] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{\rho}\in Q_1\\t(\widetilde{\rho})=k \end{smallmatrix}}P_{s(\widetilde{\rho})}^{\widetilde{\rho}}[3] \oplus\bigoplus\limits_{\begin{smallmatrix} \widetilde{\gamma}\in Q_1\\s(\widetilde{\gamma})=k \end{smallmatrix}}P_{t(\widetilde{\gamma})}^{\widetilde{\gamma}}[2] \oplus P_k[1] \] with the differential \[\left(\begin{smallmatrix} d_{P_k[4]}\\ -\widetilde{\rho}&d_{P^{\widetilde{\rho}}_{s(\widetilde{\rho})}[3]}\\ \widetilde{\gamma}^\ast&\partial_{\widetilde{\rho}\widetilde{\gamma}}W&d_{P^{\widetilde{\gamma}}_{t(\widetilde{\gamma})}[2]}\\ -e_k^\ast&-\widetilde{\rho}^\ast&-\widetilde{\gamma}&d_{P_k[1]} \end{smallmatrix}\right) \] and the homomorphism \[\varphi_k=\left(\begin{smallmatrix} 1&0&0&0&0&0\\ 0&\delta_{\rho,\widetilde{\rho}}&0&0&0&0\\
0&0&-\delta_{\gamma,\widetilde{\gamma}}&0&0&0\\ 0&0&0&0&1&0 \end{smallmatrix}\right)\] It is straightforward to check that the above $\varphi_i$, $i\in Q_0$, are quasi-isomorphisms. Hence we have the following result. \begin{lemma} There are isomorphisms in $\D(\Gamma)$: \[F(\widetilde{S}_i)\xrightarrow{\varphi_i} S_i^\sharp.\] \end{lemma} In the rest of this subsection, we describe the images under $F$ of the morphisms between simples. \begin{construction} For any arrow $\mathfrak{a}:i\to k\in Q_1$ and any arrow $\mathfrak{b}:k\to j\in Q_1$, define \begin{itemize} \item $\pi^\sharp_{\mathfrak{a}'}:S^\sharp_k\to S^\sharp_i[1]$ to be the morphism from $S_k[1]$ to $S^\sharp_i[1]$ given by the identity from $S_k$ to $S^{\mathfrak{a}}_{t(\mathfrak{a})}$; \item $\pi^\sharp_{\mathfrak{a}'^\ast}:S^\sharp_i\to S^\sharp_k[2]$ to be the morphism from $S^\sharp_i$ to $S_k[3]$ given by $\pi_{e^\ast_k}:S^{\mathfrak{a}}_{t(\mathfrak{a})}\to S_k[3]$; \item $\pi^\sharp_{\mathfrak{b}'}:S_j^\sharp\to S_k^\sharp[1]$ to be $\pi_{\mathfrak{b}^\ast}:S_j\to S_k[2]$; \item $\pi^\sharp_{\mathfrak{b}'^\ast}:S_k^\sharp\to S_j^\sharp[2]$ to be $\pi_{\mathfrak{b}}[1]:S_k[1]\to S_j[2]$; \item $\pi^\sharp_{[\mathfrak{a}\mathfrak{b}]}:S^\sharp_i\to S^\sharp_j[1]$ to be the morphism from $S^\sharp_i$ to $S_j[1]$ given by $\pi_{\mathfrak{b}}:S^{\mathfrak{a}}_{t(\mathfrak{a})}\to S_j[1]$; \item $\pi^\sharp_{[\mathfrak{a}\mathfrak{b}]^\ast}:S^\sharp_j\to S^\sharp_i[2]$ to be the morphism from $S_j$ to $S^\sharp_i[2]$ given by $\pi_{\mathfrak{b}^\ast}:S_j\to S^\mathfrak{a}_{t(\mathfrak{a})}[2]$. \end{itemize} For any other arrow $\mathfrak{c}$ of $Q$, $\pi^\sharp_{\mathfrak{c}}$ and $\pi^\sharp_{\mathfrak{c}^\ast}$ are given by $\pi_{\mathfrak{c}}$ and $\pi_{\mathfrak{c}^\ast}$, respectively. \end{construction} \begin{proposition}\label{prop:simple} For any arrow $R:s\to t\in \widetilde{Q}_1$, we have the following commutative diagrams: \[\xymatrix{ F(\widetilde{S}_s)\ar[d]_{\varphi_s}\ar[r]^{F(\pi_R)}& F(\widetilde{S}_t[1])\ar[d]^{\varphi_t[1]}\\ S^\sharp_s\ar[r]^{\pi^\sharp_{R}}& S^\sharp_t[1] }\qquad \xymatrix{ F(\widetilde{S}_t)\ar[d]_{\varphi_t}\ar[r]^{F(\pi_{R^\ast})}& F(\widetilde{S}_s[2])\ar[d]^{\varphi_s[2]}\\ S^\sharp_t\ar[r]^{\pi^\sharp_{R^\ast}}& S^\sharp_s[2] }\] \end{proposition} \begin{proof} We lift the morphisms in the diagrams between simples to homomorphisms between their cofibrant resolutions. Then we only need to show that the difference of the two compositions in each diagram is null-homotopic. \begin{enumerate} \item The case $R=\mathfrak{a}'$ for some $\mathfrak{a}:i\to k\in Q_1$.
By definition, the morphism $F(\pi_{\mathfrak{a}'})$ is given by the map $F(\mathbf{p}\pi_{\mathfrak{a}'}):F(\mathbf{p}\widetilde{S}_k)\to F(\mathbf{p}\widetilde{S}_i[1])$, where \[ F(\mathbf{p}\pi_{\mathfrak{a}'})=\left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \delta_{c,\mathfrak{a}}&0&0&0&0&0\\ 0&\delta_{p,\mathfrak{a}}\delta_{\rho,q}&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&\delta_{l,\mathfrak{a}}\delta_{\gamma,g}&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&\delta_{w,\mathfrak{a}}&0&0 \end{smallmatrix}\right) \] So \[\varphi_i\circ F(\mathbf{p}\pi_{\mathfrak{a}'})= \left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ -\delta_{\mathfrak{a},\sigma}e_k^\ast & -\delta_{\mathfrak{a},\sigma}\rho^\ast & \delta_{\mathfrak{a},\sigma}\gamma&0&0&0\\ 0&0&0&\delta_{\mathfrak{a},w}&0&0\\ \delta_{\mathfrak{a},\widetilde{c}}&0&0&0&0&0\\ 0&\delta_{\mathfrak{a},\widetilde{p}}\delta_{\rho,\widetilde{q}}&0&0&0&0\\ 0&0&-\delta_{\mathfrak{a},\widetilde{l}}\delta_{\gamma,\widetilde{g}}&0&0&0\\ 0&0&0&0&0&0 \end{smallmatrix}\right) \] and \[ \mathbf{p}\pi^\sharp_{\mathfrak{a}'}\circ\varphi_k= \left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \delta_{\mathfrak{a},\widetilde{c}}&0&0&0&0&0\\ 0&\delta_{\mathfrak{a},\widetilde{p}}\delta_{\rho,\widetilde{q}}&0&0&0&0\\ 0&0&-\delta_{\mathfrak{a},\widetilde{l}}\delta_{\gamma,\widetilde{g}}&0&0&0\\ 0&0&0&0&\delta_{\mathfrak{a},\tau}&0 \end{smallmatrix}\right) \] Then the difference $\varphi_i\circ F(\mathbf{p}\pi_{\mathfrak{a}'})-\mathbf{p}\pi^\sharp_{\mathfrak{a}'}\circ\varphi_k=\theta\circ d+d\circ\theta$, where \[\theta=\left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&\delta_{\mathfrak{a},\sigma}&0\\ 0&0&0&0&0&\delta_{\mathfrak{a},z}\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0 \end{smallmatrix}\right) \] is of degree $-1$; hence this difference is null-homotopic. For the second diagram, note that \[F(\mathbf{p}\pi_{\mathfrak{a}'^\ast})=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ \delta_{\mathfrak{a},w}&0&0&0&0&0&0&0&0&0\\ 0&0&0&\delta_{\mathfrak{a},c}&0&0&0&0&0&0\\ 0&0&0&0&\delta_{\mathfrak{a},p}\delta_{q,z}&0&0&0&0&0 \end{smallmatrix}\right) \] and \[\mathbf{p}\pi^\sharp_{\mathfrak{a}'^\ast}=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\delta_{\mathfrak{a},\widetilde{c}}&0&0&0 \end{smallmatrix}\right) \] So \[\varphi_k[2]\circ F(\mathbf{p}\pi_{\mathfrak{a}'^\ast})=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\delta_{\mathfrak{a},c}&0&0&0&0&0&0 \end{smallmatrix}\right) \] and \[\mathbf{p}\pi^\sharp_{\mathfrak{a}'^\ast}\circ\varphi_i=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\delta_{\mathfrak{a},c}&0&0&0&0&0&0 \end{smallmatrix}\right) \] Then the difference is zero. \item The case $R=\mathfrak{b}'$ for some $\mathfrak{b}:k\to i\in Q_1$.
Note that \[\mathbf{p}F(\pi_{\mathfrak{b}'})= \left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ \delta_{\mathfrak{b},\gamma}&0&0&0&0&0&0&0&0&0\\ 0&0&\delta_{\mathfrak{b},b}\delta_{a,w}&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&\delta_{\mathfrak{b},h}&0&0\\ 0&0&0&0&0&0&0&0&\delta_{\mathfrak{b},x}\delta_{y,z}&0 \end{smallmatrix}\right)\] and \[\mathbf{p}\pi^\sharp_{\mathfrak{b}'}= \left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ \delta_{\mathfrak{b},\widetilde{\gamma}}&0&0&0&0&0&0&0&0&0\\ 0&0&\delta_{\mathfrak{b},\widetilde{h}}&0&0&0&0&0&0&0 \end{smallmatrix}\right) \] So \[\mathbf{p}\varphi_k[1]\circ\mathbf{p}F(\pi_{\mathfrak{b}'})=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ -\delta_{\mathfrak{b},\widetilde{\gamma}}&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\delta_{\mathfrak{b},h}&0&0&0 \end{smallmatrix}\right)\] and \[\mathbf{p}\pi^\sharp_{\mathfrak{b}'}\circ\varphi_i=\left(\begin{smallmatrix} 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ -\delta_{\mathfrak{b},\widetilde{\gamma}}&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\delta_{\mathfrak{b},h}&0&0&0 \end{smallmatrix}\right)\] Then the difference $\mathbf{p}\varphi_k[1]\circ\mathbf{p}F(\pi_{\mathfrak{b}'})-\mathbf{p}\pi^\sharp_{\mathfrak{b}'}\circ\varphi_i=0$. For the second diagram, note that \[F(\mathbf{p}\pi_{\mathfrak{b}'^\ast})=\left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \delta_{\mathfrak{b},h}&0&0&0&0&0\\ 0&\delta_{\mathfrak{b},x}\delta_{\rho,y}&0&0&0&0\\ 0&0&\delta_{\mathfrak{b},\gamma}&0&0&0 \end{smallmatrix}\right) \] and \[\mathbf{p}\pi^\sharp_{\mathfrak{b}'^\ast}=\left(\begin{smallmatrix} 0&0&0&0\\ 0&0&0&0\\ \delta_{\mathfrak{b},\widetilde{h}}&0&0&0\\ 0&\partial_{\widetilde{\rho}\mathfrak{b}\widetilde{\beta}}W&0&0\\ 0&0&0&0\\ 0&0&\delta_{\mathfrak{b},\widetilde{\gamma}}&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{smallmatrix}\right) \] So \[\varphi_i[2]\circ F(\mathbf{p}\pi_{\mathfrak{b}'^\ast})= \left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \delta_{\mathfrak{b},\widetilde{h}}&0&0&0&0&0\\ 0&\partial_{\rho\mathfrak{b}\widetilde{\beta}}W&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-\delta_{\mathfrak{b},\gamma}&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0 \end{smallmatrix}\right) \] and \[\mathbf{p}\pi^\sharp_{\mathfrak{b}'^\ast}\circ\varphi_k= \left(\begin{smallmatrix} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \delta_{\mathfrak{b},\widetilde{h}}&0&0&0&0&0\\ 0&\partial_{\rho\mathfrak{b}\widetilde{\beta}}W&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-\delta_{\mathfrak{b},\gamma}&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0 \end{smallmatrix}\right) \] Then the difference is zero. \item The remaining cases, $R=[\mathfrak{a}\mathfrak{b}]$ and $R\in Q_1$, are more straightforward to calculate. \end{enumerate} \end{proof} \subsection{Geometric interpretation of Keller-Yang's equivalence} Let $\surfo$ be a decorated marked surface. Denote by $\cA(\surfo)$ the set of simple closed arcs in $\surfo$. Here, closed arcs mean curves in $\surfo-\Tri$ connecting different decorating points. For an arc $\gamma$ in a triangulation $\TT\in\EGp(\surfo)$, its dual (w.r.t. $\TT$) is the unique closed arc (up to homotopy) which intersects $\gamma$ once and does not cross any other arcs in $\TT$. The dual of $\TT$, denoted by $\TT^*$, is defined to be the set of duals of arcs in $\TT$. For any oriented closed arc $\eta\in\cA(\surfo)$, there is an associated object $X_\eta=X^\TT_{\eta}$ in $\Sph(\Gamma_\TT)$ constructed in \cite[Construction~A.3]{QZ2}.
\begin{proposition}[{\cite[Theorem~6.6]{QQ}, \cite[Proposition~4.3]{QZ2}}]\label{pp:simple} There is a canonical bijection \begin{gather}\label{eq:X} \widetilde{X}_\TT\colon\cA(\surfo)\to\Sph(\Gamma_\TT)/[1] \end{gather} sending $\eta$ to $X^\TT_\eta[\mathbb{Z}]$. Moreover, this bijection is compatible with the isomorphism \eqref{eq:cong} in the following sense: if $\TT'$ is a triangulation in $\EGp(\surfo)$ with dual $\TT'^*$, then $\widetilde{X}_\TT(\TT'^\ast)$ is the set of shift orbits of the simples in $\h_\TT^{\TT'}$. \end{proposition} Further, for any two oriented closed arcs $\eta_1,\eta_2\in\cA(\surfo)$ with the same starting point $Z$, the oriented angle $\theta$ (in the clockwise direction) at $Z$ from $\eta_1$ to $\eta_2$ induces a morphism $$ \varphi^\TT(\eta_1,\eta_2)=\varphi(\eta_1,\eta_2):X_{\eta_1}\to X_{\eta_2}, $$ see \cite[Construction~A.5]{QZ2}. We have two useful lemmas. \begin{lemma}[{\cite[Corollary~A.9 and Lemma~3.3]{QZ2}}]\label{lem:A11} Let $\eta_i$, $i=1,2,3$, be oriented closed arcs which have the same starting point $Z$ and whose starting segments are in clockwise order at $Z$. Then \[\varphi(\eta_2,\eta_3)\circ\varphi(\eta_1,\eta_2)=\varphi(\eta_1,\eta_3).\] Moreover, this is the only case in which the composition of two morphisms of the form $\varphi(-,-)$ is nonzero. \end{lemma} \begin{lemma}[{\cite[Proposition~3.1 and Theorem~4.5]{QZ2}}]\label{lem:b} For any two closed arcs $\eta_1,\eta_2$ which do not cross each other in $\surf-\Tri$, the morphisms from $X_{\eta_1}$ to $X_{\eta_2}$ of the form $\varphi(-,-)$ form a basis of $\Ext^\mathbb{Z}(X_{\eta_1},X_{\eta_2})$. \end{lemma} We denote by $S_{\eta}$, $\eta\in\TT'^\ast$, the simples in $\h_\TT^{\TT'}$. By Proposition~\ref{prop:simple}, we have $S_\eta\in X^\TT_\eta[\mathbb{Z}]$, and hence Lemma~\ref{lem:b} yields the following. \begin{lemma}\label{lem:con} The morphisms $\{\varphi(-,-)\}$ form a basis of the Ext-algebra \[\E(\h_\TT^{\TT'}):=\Ext^{\mathbb{Z}}\left(\bigoplus\limits_{\eta\in\TT'^\ast}S_\eta,\bigoplus\limits_{\eta\in\TT'^\ast}S_\eta\right),\] and the multiplication on this basis is given by Lemma~\ref{lem:A11}. \end{lemma} It follows directly from the construction of $\varphi(-,-)$ and Proposition~\ref{prop:simple} that this basis gives a nice geometric model for Keller-Yang's equivalence. \begin{proposition}\label{prop:KY} For any two closed arcs $\eta_i$ and $\eta_j$ in $\TT'^\ast$ which have the same starting point, the image of $\varphi^{\TT'}(\eta_i,\eta_j)$ under the Keller-Yang equivalence $\kappa_{\TT'}^{\TT}$ is $\varphi^{\TT}(\eta_i,\eta_j)$. \end{proposition} \section{Intrinsic derived equivalences}\label{sec:KY} In this section, we first construct an intrinsic equivalence between the finite-dimensional derived categories associated to two triangulations (Construction~\ref{con:iota}). We then show that this equivalence is naturally isomorphic to the composition of any sequence of Keller-Yang equivalences connecting these two triangulations (Theorem~\ref{thm:comp}). This gives a proof of Theorem~\ref{thma}. \subsection{The construction}\label{sec:con} Fix a triangulation $\TT_0$ in $\EGp(\surfo)$ and let $\Gamma_0=\Gamma_{\TT_0}$. Let $\TT$ be any triangulation in $\EGp(\surfo)$. Recall that $\h_0^\TT:=\h_{\TT_0}^\TT$ is the heart in $\D_{fd}(\Gamma_0)$ corresponding to $\TT$, and $\h_\TT$ is the canonical heart in $\D_{fd}(\Gamma_\TT)$.
\begin{construction}\label{con:iota} By Lemma~\ref{lem:con}, there is an isomorphism between Ext-algebras \[ \iota_\TT\colon\E(\h_0^\TT)\xrightarrow{\sim}\E(\h_\TT), \] which sends $\varphi^{\TT_0}(\eta_1,\eta_2)$ to $\varphi^{\TT}(\eta_1,\eta_2)$ for any $\eta_1,\eta_2\in\TT^\ast$. \end{construction} As a result, we have an induced triangle equivalence $\Psi_\TT$ fitting into the following commutative diagram of equivalences \begin{gather}\label{eq:deeq} \xymatrix{ \D_{fd}(\Gamma_0)\ar@/_2pc/[rrrrrr]_{\Psi_\TT}\ar[rr]^{\Ext_{\Gamma_0}^\mathbb{Z}(S_{\TT_0}^{\TT},-)}&&\per\E(\h_0^\TT)\ar[rr]^{\quad\iota_\TT\quad}&& \per\E(\h_\TT)&&\D_{fd}(\Gamma_\TT)\ar[ll]_{\Ext_{\Gamma_\TT}^\mathbb{Z}(S_{\TT}^{\TT},-)} } \end{gather} Consider a sequence of forward/backward flips $$p\colon\TT_0\xrightarrow{}\TT_1\xrightarrow{}\cdots \xrightarrow{}\TT_m=\TT$$ and the associated sequence of Keller-Yang equivalences $$\D(\Gamma_{\TT_{0}})\xrightarrow{\;\kappa_{\TT_0}^{\TT_1}\;}\D(\Gamma_{\TT_{1}}) \xrightarrow{\;\kappa_{\TT_1}^{\TT_2}\;}\cdots\xrightarrow{\;\kappa_{\TT_{m-1}}^{\TT_m}\;}\D(\Gamma_{\TT_{m}})=\D(\Gamma_\TT).$$ Restricting to $\D_{fd}$, we obtain a triangle equivalence \begin{gather}\label{eq:deeq0} \Psi(p)=\kappa_{\TT_{m-1}}^{\TT_m}\circ\cdots\circ\kappa_{\TT_1}^{\TT_2}\circ\kappa_{\TT_0}^{\TT_1}\colon \D_{fd}(\Gamma_0) \xrightarrow{\quad\simeq\quad} \D_{fd}(\Gamma_\TT). \end{gather} \begin{theorem}\label{thm:comp} $\Psi_\TT$ and $\Psi(p)$ are naturally isomorphic to each other (denoted by $\Psi_\TT\sim\Psi(p)$), for any $\TT\in\EGp(\surfo)$ and any sequence of flips $p\colon\TT_0\to\TT$. \end{theorem} The remainder of this section is devoted to the proof of this theorem. As a result, we can denote the 3-CY category associated to $\surfo$ by $\D_{fd}(\surfo)$. \subsection{Compatibility/Proof of Theorem~\ref{thm:comp}}\label{sec:ind} We use induction on the number $m$ of flips in the flip sequence $p$, starting with the trivial case $m=0$ (i.e. $\TT_0=\TT$), where both equivalences are isomorphic to the identity. Now suppose that $\Psi_\TT\sim\Psi(p)$ for some $p$, and consider a flip $\mu_k\colon\TT\to\TT'$ and the flip sequence $p'=\mu_k\circ p$. Without loss of generality, assume $\mu_k$ is a forward flip. Fix/recall the following notation: \begin{itemize} \item $\TT=\{\gamma_i\},\TT^*=\{\eta_i\}$ and $\TT'=\{\gamma_i'\},(\TT')^*=\{\eta_i'\}$. Note that $\gamma_i'=\gamma_i$ for $i\neq k$. The local pictures of $\TT$ and $\TT'$ are shown in Figure~\ref{fig:WH} and the local mutation of the corresponding quiver is: \begin{gather}\label{eq:mutation} \xymatrix@C=2.3pc@R=2pc{ & 2 \ar[d]^{c} &&Q_\TT \ar@{=>}[r]^{\mu_k}&Q_{\TT'}&& 2 \ar@{<-}[d]_{c'}\\ 1\ar[ur]^{b} &k\ar[l]^a \ar[r]^e &3\ar[dl]^f &&& 1\ar@{<-}[dr]_{[ag]} &k\ar@{<-}[l]_{a'} \ar@{<-}[r]_{e'}&3\ar@{<-}[ul]_{[ec]}\\ &4\ar[u]^g &&&&& 4\ar@{<-}[u]_{g'} }.\end{gather} Note that $\eta_i$ might not exist for some $1\leq i\leq 4$ and some vertices might coincide.
\begin{figure}[ht]\centering \begin{tikzpicture}[xscale=-.4,yscale=.425] \path (4,3) coordinate (v2) (-4,3) coordinate (v1) (2,0) coordinate (v3) (0,5) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1, .18, .26, .34, .42, .5,.58, .66, .74, .82, .9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,-3) coordinate (v1) (-4,-3) coordinate (v2) (-2,0) coordinate (v3) (0,-5) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1, .18, .26, .34, .42, .5,.58, .66, .74, .82, .9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,-3) coordinate (v2) (-4,3) coordinate (v1) (2,0) coordinate (v3) (-2,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.13,.26,.39,.87,.74,.61} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,3) coordinate (v1) (4,-3) coordinate (v2) (2,0) coordinate (v3) (6,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1,.2,.3,.4,.5,.6,.7,.8,.9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (-4,-3) coordinate (v2) (-4,3) coordinate (v1) (-2,0) coordinate (v3) (-6,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1,.2,.3,.4,.5,.6,.7,.8,.9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \draw[NavyBlue,thin] (4,-3)node{$\bullet$}to(4,3)node{$\bullet$}to(-4,3)node{$\bullet$}to(-4,-3)node{$\bullet$}to(4,-3)to(-4,3); \draw[red,ultra thick](2,0)to(-2,0); \draw[red,thick] (-6,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to (-2,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}node[above]{$_Y$}to (2,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}node[below]{$_Z$}to (6,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}} (0,5)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to(2,0) (0,-5)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to(-2,0) (0,0)node[above]{$\eta_k$}(4.5,0)node[above]{$\eta_1$}(-4.5,0)node[above]{$\eta_3$} (1.5,3)node[above]{$\eta_2$}(-1.5,-3)node[below]{$\eta_4$}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=1.3, rotate=0] \draw[blue,->,>=stealth](3-.6,0)to(3+.6,0);\draw (3,-1.8); \end{tikzpicture}\quad \begin{tikzpicture}[xscale=.4,yscale=.425] \path (4,3) coordinate (v1) (-4,3) coordinate (v2) (2,0) coordinate (v3) (0,5) coordinate 
(v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1, .18, .26, .34, .42, .5,.58, .66, .74, .82, .9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,-3) coordinate (v1) (-4,-3) coordinate (v2) (-2,0) coordinate (v3) (0,-5) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1, .18, .26, .34, .42, .5,.58, .66, .74, .82, .9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,-3) coordinate (v2) (-4,3) coordinate (v1) (2,0) coordinate (v3) (-2,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.13,.26,.39,.87,.74,.61} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (4,3) coordinate (v1) (4,-3) coordinate (v2) (2,0) coordinate (v3) (6,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1,.2,.3,.4,.5,.6,.7,.8,.9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \path (-4,-3) coordinate (v1) (-4,3) coordinate (v2) (-2,0) coordinate (v3) (-6,0) coordinate (v4); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v3)(v2)}; \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=0] coordinates {(v1)(v4)(v2)}; \foreach \j in {.1,.2,.3,.4,.5,.6,.7,.8,.9} { \path (v3)--(v4) coordinate[pos=\j] (m0); \draw[blue!30!green!30, dashed,very thin] plot [smooth,tension=.3] coordinates {(v1)(m0)(v2)}; } \draw[NavyBlue,thin] (4,-3)node{$\bullet$}to(4,3)node{$\bullet$}to(-4,3)node{$\bullet$}to(-4,-3)node{$\bullet$}to(4,-3)to(-4,3); \draw[red,ultra thick](2,0)to(-2,0); \draw[red,thick] (-6,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to (-2,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}node[above]{$_Z$}to (2,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}node[below]{$_Y$}to (6,0)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}} (0,5)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to(2,0) (0,-5)node{$\bullet$}node[white]{\tiny{$\bullet$}}node{\tiny{$\circ$}}to(-2,0) (0,0)node[above]{$\eta_k$}(4.5,0)node[above]{$\eta_3$}(-4.5,0)node[above]{$\eta_1$} (1.5,3)node[above]{$\eta'_{2}$}(-1.5,-3)node[below]{$\eta'_{4}$}; \end{tikzpicture} \caption{A forward flip} \label{fig:WH} \end{figure} \item $\h_0^\TT$ and $\h_0^{\TT'}$ are the hearts with simples $\{S_{\eta_i}\}$ and $\{S_{\eta_i'}\}$ in $\D_{fd}(\Gamma_0)$ that correspond to $\TT$ and $\TT'$, respectively. Note that $\h_0^{\TT'}$ is the forward tilting of $\h_0^{\TT}$ w.r.t. $S_{\eta_k}$. 
\item $\h_\TT$ and $\h_{\TT'}$ are the canonical hearts, with simples $\{S_{i}\}$ and $\{S_i'\}$, in $\D_{fd}(\Gamma_\TT)$ and $\D_{fd}(\Gamma_{\TT'})$, respectively. \item Moreover, $\h_\TT^{\TT'}$ is the forward tilting of $\h_\TT$ w.r.t. $S_k$, with simples $\{S_i^\sharp\}$ in $\D_{fd}(\Gamma_\TT)$ (see Construction~\ref{con:sharp} for the construction of $S_i^\sharp$). \end{itemize} We shall prove $\Psi_{\TT'}\sim\Psi(\mu_k\circ p)=\kappa_{\TT}^{\TT'}\circ\Psi(p)$, where the latter is $\kappa_{\TT}^{\TT'}\circ\Psi_{\TT}$ by the inductive hypothesis. By definition, it suffices to show that $\kappa_{\TT}^{\TT'}\circ\Psi_{\TT}$ induces the isomorphism $\iota_{\TT'}$, which means that $\kappa_{\TT}^{\TT'}\circ\Psi_{\TT}$ preserves the morphisms of the form $\varphi^{\TT'}(-,-)$ induced by any angle in $\TT'^\ast$. By Proposition~\ref{prop:KY}, $\kappa_{\TT}^{\TT'}$ preserves such morphisms, so it suffices to show that $\Psi_{\TT}$ does as well. If there is no arrow in $Q_{\TT'}$ from $k$ to $i$, that is, $i\neq 2$ or $4$ in Figure~\ref{fig:WH}, then $\eta'_i=\eta_i$. Hence we only need to consider the angles between $\eta'_2$ and another arc (and similarly for $\eta'_4$). For the angles to $\eta'_2$ in $\TT'^\ast$, we have the following cases, up to dual (i.e. the case starting at $\eta'_2$): \begin{itemize} \item For an angle at $Y$ from $\eta_k$ to $\eta'_2$, by \cite[Proposition~3.1]{QZ2}, we have a triangle \[S'_{\eta_2}\xrightarrow{\varphi(\eta_2,\eta_k)} S'_{\eta_k}\xrightarrow{\varphi(\eta_k,\eta'_2)} S'_{\eta'_2}\to S'_{\eta_2}[1].\] As $\iota_\TT$ (and so $\Psi_\TT$) preserves $\varphi(\eta_2,\eta_k)$, we deduce that $\Psi_\TT$ preserves this triangle and hence $\varphi(\eta_k,\eta'_2)$. Note that when the number of arrows from $k$ to $2$ in $Q_{\TT'}$ is 2, i.e. the vertices $2$ and $4$ coincide, we need to add another copy of $S'_{\eta_k}$ to the second term of the triangle; however, the rest of the deduction and the conclusion are the same. \item For the angle at $Y$ from $\eta_3$ to $\eta'_2$, by Lemma~\ref{lem:A11}, we have $\varphi(\eta_3,\eta'_2)=\varphi(\eta_k,\eta'_2)\circ\varphi(\eta_3,\eta_k)$. As above, $\Psi_\TT$ preserves $\varphi(\eta_k,\eta'_2)$ and $\varphi(\eta_3,\eta_k)$, hence it preserves $\varphi(\eta_3,\eta'_2)$ too. \item Any angle to $\eta'_2$ in $\TT'^\ast$ based at the endpoint of $\eta'_2$ other than $Y$ factors through $\eta_2$ (i.e. decomposes). Again, we can prove that it is preserved in the same fashion. \end{itemize} \section{An application}\label{sec:app} \subsection{Calabi-Yau categories and spherical objects}\label{sec:DC} \begin{definition}\label{def:sph} A triangulated $\k$-category $\D$ is called \emph{$N$-Calabi-Yau} (or $N$-CY for short) if for any pair of objects $L,M$ in $\D$, we have a natural isomorphism \begin{equation}\label{eq:serre} \Hom_{\D}(L,M)\cong D\Hom_{\D}(M,L[N]) \end{equation} where $D=\Hom_\k(-,\k)$. Further, an object $S$ in an $N$-CY triangulated $\k$-category $\D$ is \emph{($N$-)spherical} if $\Hom_{\D}(S, S[i])=\k$ for $i=0$ or $N$, and $0$ otherwise. The \emph{twist functor $\phi_S$ of a spherical object} $S$ is defined by \begin{gather}\label{eq:phi} \phi_S(X)=\Cone\left(S\otimes\Hom^\bullet(S,X)\to X\right) \end{gather} with inverse \[ \phi_S^{-1}(X)=\Cone\left(X\to S\otimes\Hom^\bullet(X,S)^\vee \right)[-1]. \] \end{definition} Recall that $\D_{fd}(\Gamma)$ is the finite-dimensional derived category of $\Gamma$, for a Ginzburg dg algebra $\Gamma$.
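As a quick sanity check on this definition (a standard computation, not taken from this paper), one may apply the twist functor to the spherical object itself: since $\Hom^\bullet(S,S)=\k\oplus\k[-N]$, we get \[ \phi_S(S)=\Cone\bigl(S\oplus S[-N]\to S\bigr)\cong S[1-N], \] because the identity component of the evaluation map cancels the first copy of $S$. In particular, in the 3-CY case relevant below, $\phi_S(S)=S[-2]$.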
It is well-known that $\D_{fd}(\Gamma)$ is a 3-CY category. We also know that $\D_{fd}(\Gamma)$ admits a canonical heart $\zero$ generated by the simple $\Gamma$-modules $S_i$, for $i\in Q_0$, each of which is 3-spherical. Denote by $\ST(\Gamma)$ the spherical twist group of $\D_{fd}(\Gamma)$ in $\Aut\D_{fd}(\Gamma)$, generated by $\{\phi_{S_i}\mid i\in Q_0\}$. Further, the set of reachable spherical objects is \begin{gather}\label{eq:sph=st} \Sph(\Gamma)=\ST(\Gamma)\cdot\Sim\zero, \end{gather} which is equivalent to the definition in \eqref{eq:sph} (cf. \cite[Lemma~9.2]{QQ}). For $\D_{fd}(\surfo)$, we will use the notation $\Sph(\surfo)$ and $\ST(\surfo)$ instead. Furthermore, by \eqref{eq:cong}, we will not distinguish $\EGp(\surfo)$ and $\EGp(\Gamma_\TT)$. \subsection{Stability conditions}\label{sec:sc} Recall the definition of stability conditions as follows. \begin{definition}[{\cite[Definition~3.3]{B}}]\label{def:stab} A \emph{stability condition} $\sigma = (Z,\hua{P})$ on $\hua{D}$ consists of a group homomorphism $Z:K(\hua{D}) \to \kong{C}$, called the \emph{central charge}, and full additive subcategories $\hua{P}(\varphi) \subset \hua{D}$ for each $\varphi \in \kong{R}$, satisfying the following axioms: \begin{itemize} \item if $0 \neq E \in \hua{P}(\varphi)$ then $Z(E) = m(E) \exp(\varphi \pi \mathbf{i} )$ for some $m(E) \in \kong{R}_{>0}$, \item for all $\varphi \in \kong{R}$, $\hua{P}(\varphi+1)=\hua{P}(\varphi)[1]$, \item if $\varphi_1>\varphi_2$ and $A_i \in \hua{P}(\varphi_i)$ then $\Hom_{\hua{D}}(A_1,A_2)=0$, \item (HN-property) for each nonzero object $E \in \hua{D}$ there is a finite sequence of real numbers $$\varphi_1 > \varphi_2 > \cdots > \varphi_m$$ and a collection of triangles $$\xymatrix@C=0.8pc@R=1.4pc{ 0=E_0 \ar[rr] && E_1 \ar[dl] \ar[rr] && E_2 \ar[dl] \ar[rr] && ... \ \ar[rr] && E_{m-1} \ar[rr] && E_m=E \ar[dl] \\ & A_1 \ar@{-->}[ul] && A_2 \ar@{-->}[ul] && && && A_m \ar@{-->}[ul] },$$ with $A_j \in \hua{P}(\varphi_j)$ for all $j$. \end{itemize} \end{definition} A crucial result about stability conditions is that they form a complex manifold. \begin{theorem}[Bridgeland \cite{B}] All stability conditions on a triangulated category $\D$ form a complex manifold, denoted by $\Stab(\D)$; each connected component of $\Stab(\D)$ is locally homeomorphic to a linear sub-manifold of $\Hom_{\kong{Z}}(K(\D),\kong{C})$ via the map sending a stability condition $(Z,\hua{P})$ to its central charge $Z$. \end{theorem} We will study the principal component $\Stap(\surfo)$ of the space of stability conditions on $\D_{fd}(\surfo)$, that is, the connected component containing the stability conditions whose hearts are in $\EGp(\surfo)$. \subsection{Faithful actions}\label{sec:ff} \begin{lemma} An auto-equivalence $\varphi\in\Aut\D_{fd}(\surfo)$ acts trivially on $\Stap(\surfo)$ if and only if it acts trivially on $\Sph(\surfo)$. \end{lemma} \begin{proof} Since giving a stability condition is equivalent to giving a heart $\mathcal{H}$ together with a stability function $Z$ on $\mathcal{H}$ satisfying the HN-property in Definition~\ref{def:stab} (see \cite[Proposition~5.3]{B}), the following statements are equivalent: \begin{itemize} \item $\varphi$ acts trivially on $\Stap(\surfo)$; \item $\varphi$ acts trivially on the exchange graph $\EGp(\surfo)$; \item $\varphi$ acts trivially on all vertices and all edges of $\EGp(\surfo)$.
\end{itemize} As a heart in $\EGp(\surfo)$ is determined by its simples and the edges of $\EGp(\surfo)$ are labeled by simples of hearts, we deduce that $\varphi$ acts trivially on $\Stap(\surfo)$ if and only if it acts trivially on the set \[ \bigcup_{\mathcal{H}\in\EGp(\surfo)}\Sim\mathcal{H}.\] This set is $\Sph(\surfo)$ by \eqref{eq:sph}. \end{proof} \begin{theorem}\label{thmbb} The spherical twist group $\ST(\surfo)$ acts faithfully on $\Stap(\surfo)$. \end{theorem} \begin{proof} Choose any $\h_{\TT}\in\EGp(\surfo)$ corresponding to a triangulation $\TT$, and let $\phi\in\ST(\surfo)$. By \cite[Corollary~8.5]{KQ}, $\phi(\h_\TT)$ can be obtained from $\h_\TT$ by a sequence of tiltings. Hence $\phi(\h_\TT)=\h_{\TT'}$ for some $\TT'$, which is obtained from $\TT$ by the corresponding sequence of flips. Hence, $\phi$ can be realized as the composition of a sequence of Keller-Yang equivalences, and by Theorem~\ref{thm:comp}, $\phi$ is determined directly by $\h_\TT$ and $\h_{\TT'}$. If $\phi$ acts trivially on $\Stap(\surfo)$ or on $\Sph(\surfo)$, then $\phi(\h_{\TT})=\h_{\TT}$, i.e. $\TT'=\TT$, and the corresponding equivalence from Construction~\ref{con:iota} is the identity. Thus, $\phi$ is naturally isomorphic to the identity, as required. \end{proof} In \cite{QQ}, we have $\ST(\surfo)/\Aut_0\cong\BT(\surfo)$, where $\BT(\surfo)$ is the braid twist group of $\surfo$, and where $\Aut_0$ is the part of $\Aut\D_{fd}(\surfo)$ that acts trivially on $\Stap\D_{fd}(\surfo)$. Hence a consequence of the theorem above is the following. \begin{corollary} $\ST(\surfo)\cong\BT(\surfo)$. \end{corollary}
\section{Introduction} Deep learning-based image recognition studies have recently achieved very accurate performance in visual applications, e.g. image classification \cite{deep_vgg, deep_resnet, deep_densenet}, face recognition \cite{Luu_FG2011, Duong_ICASSP2011, Nguyen_2021_CVPR, duong2020vec2face, Quach_2021_CVPR}, image synthesis \cite{Duong_2017_ICCV, duong2016dam_cvpr, duong2019learning, duong2029cvpr_automatic, truong2021fastflow, duong2019dam_ijcv}, action recognition \cite{Truong_2022_CVPR, fi13080194}, and semantic segmentation \cite{Huynh_2021_CVPR, le2018segmentation}. \textit{However, these methods assume that the testing images come from the same distribution as the training images; therefore, these deep learning-based models are likely to fail on real data from new domains.} Hence, image recognition across domains plays an important role in addressing this problem and has become an active topic in the research community. Particularly, \textit{domain adaptation} \cite{i2i_adapt, adda_cvpr, udab_icml, DBLP:conf/aaai/ShenQZY18, Truong_2021_ICCV} has received much attention in computer vision. Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a label-free target domain. The knowledge from the source domain is learned and transferred to the target domain in a supervised or unsupervised manner. Specifically, domain adaptation tries to minimize the difference between the deep feature representations of the source and target domains by minimizing the distance between the source and target distributions \cite{DBLP:conf/aaai/ShenQZY18, adda_cvpr, udab_icml}. These prior works have indicated the importance of the discrepancy between data distributions across domains. Hence, the principal approach to the domain adaptation problem is to transform the feature distributions so that the target feature distributions become closer to the source feature distributions, and then to apply the classifier learned on the source domain to the target domain. In our paper, we also take this intuition into account and propose a novel framework that minimizes the differences between the source and target feature distributions. Particularly, we approach the domain adaptation problem via optimal transport distances. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/figure1.png} \caption{Optimal Transport-Based Adaptation. The Gromov-Wasserstein distance helps to align and associate target features to source features.} \vspace{-7mm} \label{fig:figure_1} \end{figure} Optimal Transport (OT) has become an active topic in recent years since it has various applications in domain adaptation \cite{pmlr-v89-redko19a, 7586038, NIPS2017_0070d23b}, generative models \cite{pmlr-v70-arjovsky17a, Deshpande_2018_CVPR, bunne2019, Truong_2021_ICCV_righ2talk}, shape matching \cite{7053911, 5457690}, etc. OT distances are used to compute the distance between two probability distributions and are known under several names, such as the $p$-Wasserstein (Earth Mover), Monge-Kantorovich, and Gromov-Wasserstein distances. Theoretically, OT provides a way of inferring correspondences between two distributions by leveraging their intrinsic geometries. One of the well-known OT distances is the Wasserstein distance, which provides a way to measure the discrepancy between two probability distributions; a minimal one-dimensional sketch is given below.
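In the one-dimensional case, the optimal coupling between two equal-size empirical samples simply matches sorted points, so the $p$-Wasserstein distance has a closed form. The following NumPy sketch is our own illustration of this fact (the function name is ours, not from any released code):
\begin{verbatim}
import numpy as np

def wasserstein_1d(xs, xt, p=2):
    """p-Wasserstein distance between two equal-size 1D samples.
    In 1D the optimal plan matches the i-th smallest source point
    to the i-th smallest target point, so sorting suffices."""
    xs, xt = np.sort(xs), np.sort(xt)
    return np.mean(np.abs(xs - xt) ** p) ** (1.0 / p)

# Example: two Gaussian samples that differ by a mean shift of 2,
# for which the 2-Wasserstein distance is approximately 2.
rng = np.random.default_rng(0)
print(wasserstein_1d(rng.normal(0, 1, 1000), rng.normal(2, 1, 1000)))
\end{verbatim}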
The Wasserstein distance is widely used for domain adaptation since it can help to mitigate the differences between the source and target feature domains. However, the Wasserstein distance has a serious limitation: it is not practical when a meaningful metric across domains cannot be defined. In other words, if two feature domains are unaligned, we cannot directly compare or measure two data points. To address this problem, we propose a new approach that leverages the Gromov-Wasserstein distance in deep feature spaces to compare two distributions in different domains. \textbf{Contributions of this Work:} To solve the problem defined above, we propose the use of recent advanced deep learning approaches to deal with limited training samples. We present a novel optimal transport loss with domain adaptation, integrated into a deep convolutional neural network (CNN), to train a robust insect classifier. The most recent domain adaptation methods are based on adversarial training \cite{adda_cvpr, udab_icml} that minimizes the discrepancy between source and target domains. However, minimizing the distance between feature distributions in different domains is not practical due to the lack of a feasible metric across domains. In particular, defining a metric that is compatible with both domains and satisfies the desired properties in both of them (e.g. features of different classes should be distinguished and features of the same class should be close) is not a trivial task. Prior metrics (e.g. adversarial loss, KL divergence, Wasserstein distance, etc.) usually cannot sufficiently satisfy this property. Moreover, these current methods ignore the feature distribution structures between source and target domains. To address these issues, we propose a novel optimal transport distance, specifically the Gromov-Wasserstein (GW) distance, that allows comparing features across domains while aligning feature distributions and maintaining the feature structures between source and target domains. In addition, since the computation of the GW distance is costly due to solving a non-convex quadratic assignment problem, we present a fast approximation of the GW distance based on the 1D-GW distance. Table \ref{tab:method_summary} summarizes the properties of our proposed method compared to other current domain adaptation methods. Through intensive experiments on the MNIST, MNIST-M, IP102, and VisDA datasets, we show that our proposed method can help to improve the performance of domain adaptation methods. \begin{table*}[!t] \small \centering \caption{Comparisons in the properties between our proposed approach and other recent methods, where \xmark represents \textit{not applicable} properties.
Gaussian Mixture Model (GMM), Probabilistic Graphical Model (PGM), Convolutional Neural Networks (CNN), Adversarial Loss ($\ell_{adv}$), Log Likelihood Loss ($\ell_{LL}$), Cycle Consistency Loss ($\ell_{cyc}$), Discrepancy Loss ($\ell_{dis}$), Cross-Entropy Loss ($\ell_{CE}$), and Sliced Gromov-Wasserstein Loss ($\ell_{SGW}$).} \begin{tabular}{ c|c|c|c|c|c} & \begin{tabular}{@{}c@{}}\textbf{Domain}\\ \textbf{Modality} \end{tabular} & \textbf{Network Structures}& \begin{tabular}{@{}c@{}}\textbf{Loss}\\ \textbf{Functions}\end{tabular}& \begin{tabular}{@{}c@{}}\textbf{End-to-End}\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{Target-domain} \\ \textbf{Label-free} \end{tabular} \\ \hline FT \cite{feature_transfer_learning} & Transfer Learning & CNN & $\ell_{2}$ & \cmark & \xmark \\ \hline \hline UBM \cite{ubm_speaker} & Adaptation & GMM & $\ell_{LL}$ & \xmark & \cmark \\ DANN \cite{udab_icml} & Adaptation & CNN & $\ell_{adv}$ & \cmark & \cmark \\ CoGAN \cite{cogan} & Adaptation & CNN+GAN & $\ell_{adv}$ & \cmark & \cmark \\ I2IAdapt \cite{i2i_adapt} & Adaptation & CNN+GAN & $\ell_{adv}+\ell_{cyc}$ & \cmark & \cmark \\ ADDA \cite{adda_cvpr} & Adaptation & CNN+GAN & $\ell_{adv}$ & \cmark & \cmark \\ MCD \cite{mcd_adaptaion} & Adaptation & CNN+GAN & $\ell_{adv} + \ell_{dis}$ & \cmark & \cmark \\ ADA \cite{generalize-unseen-domain} & Generalization & CNN & $\ell_{CE}$ & \cmark & \cmark \\ E-UNVP \cite{e_unvp} & Generalization & PGM+CNN & ${\ell_{LL}} + {\ell_{CE}}$ & \cmark & \cmark \\ \textbf{OTAdapt} & \textbf{Adaptation} & \textbf{CNN + GAN} & $\boldsymbol{\ell_{adv}} + \boldsymbol{\ell_{SGW}}$ & \cmark & \cmark \end{tabular} \label{tab:method_summary} \vspace{-6mm} \end{table*} \section{Related Work} \textbf{Domain adaptation} is a technique in machine learning, especially with CNNs, that aims to learn a concept from a source dataset and perform well on target datasets. Deep convolutional networks have been used for segmentation, classification, and recognition in many visual applications by learning good features from the given datasets. Moreover, the representations learned by deep convolutional networks are often reused on other datasets. However, these representations may not generalize well to new datasets due to domain shift. It is possible to mitigate this problem by fine-tuning, but given the large number of parameters employed by deep networks, it is challenging to acquire ample labeled data. The main goal of domain adaptation is to reduce the discrepancy between the source and target feature distributions by guiding feature learning. Many works on domain adaptation have been published recently. Their main aim is to learn from a source data distribution and improve the performance of the model on a different target data distribution, i.e., to reduce the domain shift between the source and the target domain. In \cite{DBLP:journals/corr/TzengHDS15}, the method maximizes a domain confusion loss to learn a domain-invariant representation for both source and target domains. The correlation between classes learned in the source domain is transferred to the target domain so that the relationship between classes is maintained. Tzeng et al. \cite{adda_cvpr} proposed domain adaptation using discriminative feature learning and adversarial learning for the unsupervised setting. At first, a source encoder is trained in a supervised manner. Then, adversarial adaptation is used to train the target network.
Here, the discriminator, which compares the source and target domains, fails to recognize the difference between them; thus, during testing, the trained target model together with the source classifier classifies the target images. Similarly, \cite{udab_icml} proposed a unified framework that learns from labeled and unlabeled data at the same time. Ber et al. \cite{ber2020domain} presented a novel method for unsupervised domain adaptation which is suitable for imbalanced and overlapping datasets and also works under label and conditional shifts. Luo et al. \cite{luo2021relaxed} identified the label-domination problem in a natural and widespread conditional GAN framework for semi-supervised domain adaptation; they also proposed Relaxed cGAN, which addresses the label-domination problem by carefully designing the modules and loss functions, obtaining state-of-the-art performance on the Digit, DomainNet, and Office-Home datasets. Zhang et al. \cite{zhangadversarial} proposed a novel method called Adversarial Continuous learning in unsupervised Domain Adaptation (ACDA). The proposed model confuses the domain discriminator by adversarially learning high-confidence examples from the target domain; a deep correlation loss is also proposed to ensure that consistency with the predictions is maintained. Sener et al. \cite{Sener:2016:LTR:3157096.3157333} proposed a unified model for learning transferable representations and inferring target labels for unsupervised domain adaptation. \textbf{Optimal Transport} has been widely used to compute the distance between two probability distributions and was first introduced in the middle of the 19th century. Optimal transport has several applications in image processing (e.g. color transfer between images) and computer graphics (e.g. shape matching). Recently, OT has gained much attention from the computer vision research community, becoming a major metric in learning generative models \cite{pmlr-v70-arjovsky17a, Deshpande_2018_CVPR, bunne2019} and in domain adaptation \cite{pmlr-v89-redko19a, 7586038, NIPS2017_0070d23b}. However, OT suffers from several issues, most notably computational efficiency: computing OT distances (e.g. Wasserstein, Gromov-Wasserstein) requires a large computational cost since it involves solving assignment problems, which are NP-hard in the general case. Recently, several prior works introduced novel methods to quickly approximate OT distances using sliced approaches \cite{SW_distance, NEURIPS2019_f0935e4c, vay_sgw_2019}. In our approach, we also adopt the sliced approach to quickly approximate the Gromov-Wasserstein distance. \section{The Proposed Method} \label{sec:the_proposal} \begin{figure*}[!t] \centering \includegraphics[width=1.5\columnwidth]{figures/new_framework.png} \vspace{-4mm} \caption{(a) The proposed framework. (b) An example of SGW in the 3D space, projected to a line by a projection $\Delta$. The solution for this projection is the anti-identity mapping.} \vspace{-6mm} \label{fig:training_process} \end{figure*} In this section, we introduce our proposed method for unsupervised domain adaptation based on the Sliced Gromov-Wasserstein distance. In unsupervised domain adaptation, we assume the source image $\mathbf{x}_s \in \mathbf{X}_s$ and source label $\mathbf{y}_s \in \mathbf{Y}_s$ are drawn from a source domain distribution $p_{I_s}(\mathbf{x}_s, \mathbf{y}_s)$.
Similarly, the target image $\mathbf{x}_t \in \mathbf{X}_t$ is drawn from $p_{I_t}(\mathbf{x}_t)$, and the target label $\mathbf{y}_t$ is unknown. Fig. \ref{fig:training_process}(a) illustrates the proposed method. Our method aims to minimize the gap between the source and target distributions. The adversarial game against the discriminator aligns the source and target feature representation distributions produced by the feature extractor, while the Sliced Gromov-Wasserstein distance helps to associate features from the target domain to the source domain. In other words, we try to learn a feature representation for the target domain that can reuse the classifier trained on the source domain. Let $\mathcal{F}$ be the feature extractor, $\mathcal{C}$ be the classifier, and $\mathcal{D}$ be the discriminator. \textbf{Network Backbone.} Our network consists of two subnetworks: a backbone network $\mathcal{F}$ and a classifier $\mathcal{C}$. Particularly, we choose standard networks in our experiments, i.e. LeNet \cite{lenet_ref}, ResNet-50 \cite{deep_resnet}, and VGG-16 \cite{deep_vgg}, as the backbone of the source and target networks. The classifier $\mathcal{C}$ consists of a fully connected layer followed by a softmax layer. It should be noticed that the network structures of the source and target can be different as long as the feature representations of the source and target domains have the same number of dimensions. The discriminator $\mathcal{D}$ is designed as a stack of two fully connected layers followed by the Leaky ReLU activation. Unsupervised domain adaptation for image classification can be formulated as follows: \begin{equation} \min_{\mathcal{F}, \mathcal{C}} \left[\mathop{\mathbb{E}}_{\mathbf{x}_s, \mathbf{y}_s}\mathcal{L}_s(\mathbf{x}_s, \mathbf{y}_s; \mathcal{F}, \mathcal{C}) + \mathop{\mathbb{E}}_{\mathbf{x}_t}\mathcal{L}_t(\mathbf{x}_t; \mathcal{F})\right] \end{equation} where $\mathcal{L}_s$ is the supervised loss on the source domain, defined as \begin{equation} \mathcal{L}_s(\mathbf{x}_s, \mathbf{y}_s; \mathcal{F}, \mathcal{C}) = -\sum_{k=1}^{c}\mathds{1}_{k=\mathbf{y}_s}\log\mathcal{C}_k(\mathcal{F}(\mathbf{x}_s)) \end{equation} with $\mathcal{C}_k(\cdot)$ denoting the predicted probability of class $k$ among the $c$ classes, and $\mathcal{L}_t$ is the unsupervised loss defined on the target domain. Let $\mathbf{f}_s \sim p_s(\mathbf{f}_s)$ and $\mathbf{f}_t \sim p_t(\mathbf{f}_t)$ be the features extracted from the source image $\mathbf{x}_s$ and the target image $\mathbf{x}_t$ by the feature extractor $\mathcal{F}$, respectively. To adapt the knowledge from the source domain to the target domain, we align the source and target feature representations by minimizing the gap between $p_s$ and $p_t$. This can be addressed by adversarial training. The domain discriminator $\mathcal{D}$ classifies whether a feature $\mathbf{f}$ comes from the source or the target domain, and is optimized by the adversarial loss \begin{equation} \begin{split} \min_{\mathcal{D}} \mathop{\mathbb{E}}_{\mathbf{x}_s, \mathbf{x}_t}\left[-\log \mathcal{D}(\mathcal{F}(\mathbf{x}_s)) - \log [1 - \mathcal{D}(\mathcal{F}(\mathbf{x}_t))]\right] \end{split} \end{equation} Next, we adapt knowledge from the source domain to the target domain through the feature extractor $\mathcal{F}$.
On target images, the feature extractor $\mathcal{F}$ is optimized according to the adversarial loss \begin{equation} \begin{split} \label{eqn:adv_loss_target} \mathcal{L}_{adv}(\mathbf{x}_t; \mathcal{F}) = -\log\mathcal{D}(\mathcal{F}(\mathbf{x}_t)) \end{split} \end{equation} Domain adversarial training helps to minimize the distance between the source and target distributions; however, it is insufficient for three reasons: (1) adversarial training aligns the two distributions without guaranteeing the correct mapping of each class, (2) this approach fails when a meaningful metric across domains cannot be defined, and (3) the adversarial loss ignores the topology of the feature distributions of the two domains. To address these issues, we adopt an optimal transport distance, i.e. the Gromov-Wasserstein distance, to mitigate the issues caused by misaligned domains. \textbf{Gromov-Wasserstein Distance.} Let $\pi$ be a correspondence map such that $p_s$ and $p_t$ are the marginal distributions of $\pi$. The distance between the two distributions $p_s$ and $p_t$ across domains can be formulated as \begin{equation} \label{eqn:GW_distance} GW_2^2(c_{p_s}, c_{p_t}, p_s, p_t) = \min_{\pi \in \Pi(p_s, p_t)}J(c_{p_s}, c_{p_t}, \pi) \end{equation} where \begin{equation}\label{eqn:J_distance} J(c_{p_s}, c_{p_t}, \pi) = \sum_{i,j,k,l}|c_{p_s}(\mathbf{f}_s^i, \mathbf{f}_s^j) - c_{p_t}(\mathbf{f}_t^k, \mathbf{f}_t^l)|^2\pi_{i,k}\pi_{j,l} \end{equation} and $c_{p_s}, c_{p_t}$ are the distances in their respective spaces; in our method, we utilize squared Euclidean distances, i.e. $c_{p_s}(\mathbf{f}_s^i, \mathbf{f}_s^j) = ||\mathbf{f}^i_s - \mathbf{f}_s^j||_2^2$ and $c_{p_t}(\mathbf{f}_t^k, \mathbf{f}_t^l) = ||\mathbf{f}^k_t - \mathbf{f}_t^l||_2^2$. The GW distance aims to match pairs of features with similar intra-domain distances: the pair distance $c_{p_s}(\mathbf{f}_s^i, \mathbf{f}_s^j)$ is associated with $c_{p_t}(\mathbf{f}_t^k, \mathbf{f}_t^l)$ when the two distances are similar and the transport coefficients $\pi_{i,k}$ and $\pi_{j,l}$ are high. As shown in Eq. \eqref{eqn:GW_distance}, we only need to know the intra-distances within each domain, without defining any metric across the two domains. In particular, the Euclidean distance is used as the intra-distance of each domain since it is invariant to permutations, rotations, and translations; this invariance property allows GW to align complex feature domains. In addition, the correspondence (transportation) map $\pi$ describes the association between source and target features, which helps to guarantee the correct mapping of each class between the two domains. Also, the term $|c_{p_s}(\mathbf{f}_s^i, \mathbf{f}_s^j) - c_{p_t}(\mathbf{f}_t^k, \mathbf{f}_t^l)|^2$ in Eq. \eqref{eqn:J_distance} enforces the constraint that the topologies of the feature distributions of the two domains be similar. Fig. \ref{fig:mnist_mnistm_distribution}(B) illustrates the aligned feature distributions of the source and target domains when using the Gromov-Wasserstein distance. However, solving Eq. \eqref{eqn:GW_distance} directly is costly, since it requires optimizing a non-convex quadratic program with time complexity $O(n^3)$. Instead of directly solving the GW distance, we adopt the Sliced Gromov-Wasserstein (SGW) distance \cite{vay_sgw_2019}, whose time complexity is substantially lower; a sketch of the sliced computation is given below, and the underlying 1D solution is derived in the next paragraph.
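To make the sliced computation concrete, the following NumPy sketch is our own illustration (assuming equal-size feature batches; the function names are ours). For clarity it forms the full $n\times n$ intra-domain distance matrices, i.e. $O(n^2)$ per slice, rather than the faster evaluation discussed below:
\begin{verbatim}
import numpy as np

def gw_1d(xs, xt):
    """1D Gromov-Wasserstein cost between equal-size samples: after
    sorting, the optimal permutation is the identity or the
    anti-identity, so only these two candidates are compared."""
    xs, xt = np.sort(xs), np.sort(xt)
    cs = (xs[:, None] - xs[None, :]) ** 2      # intra-source distances
    best = np.inf
    for yt in (xt, xt[::-1]):                  # identity / anti-identity
        ct = (yt[:, None] - yt[None, :]) ** 2  # intra-target distances
        best = min(best, np.mean((cs - ct) ** 2))
    return best

def sliced_gw(fs, ft, L=200, seed=0):
    """Average the 1D GW cost over L random 1D projections."""
    rng = np.random.default_rng(seed)
    deltas = rng.standard_normal((L, fs.shape[1]))
    deltas /= np.linalg.norm(deltas, axis=1, keepdims=True)
    return np.mean([gw_1d(fs @ d, ft @ d) for d in deltas])
\end{verbatim}
In the actual training pipeline, the same computation would be performed on mini-batch features in a differentiable framework so that its gradient can update the feature extractor; the NumPy version above only illustrates the mechanics.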
Similar to the Sliced Wasserstein distance \cite{SW_distance}, features are projected from the high-dimensional space to a 1D space, and the GW distance is then solved in the 1D space. As results on the Quadratic Assignment Problem show \cite{vay_sgw_2019}, solving GW in the 1D space is efficient and effective. Therefore, Eq. \eqref{eqn:GW_distance} in the 1D space can be formulated as follows, \begin{equation} \small \begin{split} \label{eqn:GW_1D} GW_2^2(c_{p_s}, c_{p_t}, p_s, p_t,\Delta)=\min_{\sigma}\frac{1}{n^2}\sum_{i,j}|c_{p_s}&(\mathbf{\bar{f}}_s^i, \mathbf{\bar{f}}_s^j) \\ &- c_{p_t}(\mathbf{\bar{f}}_t^{\sigma(i)}, \mathbf{\bar{f}}_t^{\sigma(j)})|^2 \end{split} \end{equation} where $\sigma$ is a one-to-one mapping $\{1,...,n\} \rightarrow \{1,...,n\}$, $\mathbf{\bar{f}}$ is the projection of a feature $\mathbf{f}$ onto the 1D space, and $\Delta$ is a projection matrix. Fortunately, if the source and target projected features are sorted in increasing order, the solution for $\sigma$ is either the identity mapping $\sigma(i) = i$ or the anti-identity mapping $\sigma(i) = n + 1 - i$. Therefore, Eq. \eqref{eqn:GW_1D} can be computed in $O(n\log(n))$, where $n$ is the number of data points. Fig. \ref{fig:training_process}(b) illustrates an example of solving GW in the 1D space. \textbf{Sliced Gromov-Wasserstein Distance.} As with the Sliced Wasserstein (SW) distance, the main idea is to project features from the high-dimensional space to 1D spaces, where computing the distance is simple, and then to average these distances. Applying the same strategy to GW, the SGW distance can be defined as follows, \begin{equation}\label{eqn:sgw_loss} \begin{split} \mathcal{L}_{SGW}(\mathbf{x}_s, \mathbf{x}_t) &= SGW(c_{p_s}, c_{p_t}, p_s, p_t)\\ &=\int_{\Delta \in \mathbb{S}^{d-1}}GW_2^2(c_{p_s}, c_{p_t}, p_s, p_t,\Delta) d\Delta \\ &\approx \frac{1}{L}\sum_{i=1}^LGW_2^2(c_{p_s}, c_{p_t}, p_s, p_t, \Delta_i) \end{split} \end{equation} where $d$ is the dimension of the source and target feature spaces and $L$ is the number of random projections $\Delta_i$ used to approximate the integral over the unit sphere $\mathbb{S}^{d-1}$. In our experiments, we set the number of projections $L$ to $200$. The time complexity of computing the SGW distance is $O(Ln\log(n))$. Finally, the total loss for the target feature extractor is the weighted sum of the adversarial loss (Eq. \eqref{eqn:adv_loss_target}) and the SGW loss (Eq. \eqref{eqn:sgw_loss}): \begin{equation} \label{eqn:total_loss_g} \mathcal{L}_t(\mathbf{x}_t; \mathcal{F}) = \lambda_{adv}\mathcal{L}_{adv}(\mathbf{x}_t; \mathcal{F}) + \lambda_{SGW}\mathcal{L}_{SGW}(\mathbf{x}_s, \mathbf{x}_t) \end{equation} where $\lambda_{adv}$ and $\lambda_{SGW}$ are the control weights for $\mathcal{L}_{adv}$ and $\mathcal{L}_{SGW}$, respectively. \section{Experiments} \label{sec:experiments} \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{figures/ExamplesDigits.png} \vspace{-2mm} \caption{Examples of the MNIST and MNIST-M datasets.} \label{fig:mnist_dataset} \end{figure} \begin{table}[!t] \small \centering \vspace{-4mm} \caption{Ablative experiment results (\%) on the effectiveness of the adversarial loss ($\mathcal{L}_{adv}$) and the Sliced Gromov-Wasserstein loss ($\mathcal{L}_{SGW}$).
We evaluate our proposed method in the cases of MNIST $\to$ MNIST-M and MNIST-M $\to$ MNIST.} \begin{tabular}{|c|c|c|} \hline Methods & MNIST $\to$ MNIST-M & MNIST-M $\to$ MNIST \\ \hline Pure-CNN & 58.49\% & 98.45\% \\ \hline $\mathcal{L}_{adv}$ Only & 64.77\% & 63.26\% \\ \hline $\mathcal{L}_{SGW}$ Only & 65.72\% & 99.06\% \\ \hline $\mathcal{L}_{adv} + \mathcal{L}_{SGW}$ & \textbf{68.56\%} & \textbf{99.19\%} \\ \hline \end{tabular} \vspace{-4mm} \label{tab:effect_loss} \end{table} In this section, we first show the impact of our proposed method compared to other methods in Sec. \ref{sec:ablation_study}. In these experiments, we consider MNIST as the source dataset and MNIST-M as the target dataset. The proposed method is also benchmarked on different network structures, i.e. LeNet \cite{lenet_ref}, VGG \cite{deep_vgg}, and ResNet \cite{deep_resnet}. Finally, we show the advantages of our method for cross-domain insect pest recognition on the IP102 dataset \cite{IP_102_dataset} in Sec. \ref{label:insect_results}. In our experiments, the accuracy metric is used to compare our method with prior approaches. \subsection{Ablation Studies} \label{sec:ablation_study} This ablation study compares our method against other domain adaptation methods. In these experiments, MNIST and MNIST-M are used as the source and target datasets, respectively. Fig. \ref{fig:mnist_dataset} illustrates samples of the MNIST and MNIST-M datasets. We compare our proposed method against Pure-CNN, ADDA \cite{adda_cvpr}, ADA \cite{generalize-unseen-domain}, TCA \cite{TCA_method}, SA \cite{SA_method}, DAN \cite{DAN_method}, UNVP, and E-UNVP \cite{e_unvp}. \begin{table}[!b] \centering \vspace{-7mm} \caption{Experimental results on MNIST $\to$ MNIST-M.} \label{tab:mnist2mnistm} \begin{tabular}{|c|c|c|} \hline Method & MNIST & MNIST-M \\ \hline Pure CNN & 99.33\% & 58.49\% \\ \hline SA \cite{SA_method} & 90.80\% & 59.90\% \\ \hline DAN \cite{DAN_method} & 97.10\% & 67.00\% \\ \hline TCA \cite{TCA_method} & 78.40\% & 45.20\% \\ \hline ADA \cite{generalize-unseen-domain} & 99.17\% & 60.02\% \\ \hline ADDA \cite{adda_cvpr} & 99.29\% & 63.39\% \\ \hline UNVP \cite{e_unvp} & 99.30\% & 59.45\% \\ \hline E-UNVP \cite{e_unvp} & \textbf{99.42\%} & 61.70\% \\ \hline \textbf{OTAdapt} & 99.19\% & \textbf{68.56\%} \\ \hline \end{tabular} \end{table} \textbf{Hyper-parameter Settings:} During training, the batch size and the learning rate are set to $128$ and $0.0002$, respectively. For the control weights $\lambda_{adv}$ and $\lambda_{SGW}$ in Eq. \eqref{eqn:total_loss_g}, we set $\lambda_{adv} = \lambda_{SGW} = 1.0$. Each training process runs for $10$ epochs. We use image sizes of $32\times32$ for LeNet and $64\times64$ for VGG and ResNet. As shown in Table \ref{tab:effect_loss}, the proposed $\mathcal{L}_{SGW}$ and $\mathcal{L}_{adv}$ each help to improve the accuracy of the network on the target dataset, and when both are adopted the performance is improved further. Table \ref{tab:mnist2mnistm} presents our results compared to other methods; LeNet is used for all methods in this table. As shown in the results, our method achieves state-of-the-art performance and improves the accuracy of the model from $58.49\%$ to $68.56\%$ on the MNIST-M dataset. The experimental results show that, with our approach, the performance of the model is improved on the color images (MNIST-M).
However, although the model generalizes to the new color image domain, there is a minor decrease in its performance on the gray-scale images (MNIST). \textbf{Deep Network Structures.} This experiment evaluates the robustness and consistent improvement of our method with common deep networks, including LeNet, VGG, and ResNet. The proposed method consistently outperforms the stand-alone deep network (Pure-CNN). As shown in Table \ref{tab:networks}, the proposed method improves accuracy on MNIST-M by $10.07\%$, $4.31\%$, and $3.21\%$ using LeNet, VGG, and ResNet, respectively. \textbf{Sample Distributions.} Fig. \ref{fig:mnist_mnistm_distribution} illustrates the feature distributions of MNIST (source dataset) and MNIST-M (target dataset) with and without domain adaptation. Features of the 10 classes, extracted from the testing sets of MNIST (blue points) and MNIST-M (green points), are projected into 2D space by the t-SNE method. As shown in Fig. \ref{fig:mnist_mnistm_distribution}(A), the features of MNIST-M are not well distributed. Meanwhile, the features of MNIST and MNIST-M visualized in Fig. \ref{fig:mnist_mnistm_distribution}(B) are well aligned. \begin{table}[!t] \small \centering \caption{Experimental results $(\%)$ when using SGW in various common CNNs on MNIST $\to$ MNIST-M.} \label{tab:networks} \begin{tabular}{|c|c|c|c|} \hline \textbf{Networks} & \textbf{Methods} & \textbf{MNIST} & \textbf{MNIST-M} \\ \hline \multirow{2}{*}{LeNet} & Pure-CNN & \textbf{99.33\%} & 58.49\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & 99.19\% & \textbf{68.56\%} \\ \hline \multirow{2}{*}{VGG} & Pure CNN & 98.91\% & 60.95\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & \textbf{99.00\%} & \textbf{65.26\%} \\ \hline \multirow{2}{*}{ResNet} & Pure CNN & 98.97\% & 64.23\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & \textbf{99.31\%} & \textbf{67.44\%} \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \vspace{-4mm} \includegraphics[width=0.45\textwidth]{figures/domain_distribution.png} \vspace{-4mm} \caption{Feature distributions of MNIST and MNIST-M.} \vspace{-6mm} \label{fig:mnist_mnistm_distribution} \end{figure} \subsection{Insect Pest Recognition} \label{label:insect_results} \begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{figures/ExamplesIP102.png} \caption{Examples of the IP102 dataset.
The images in the source domain and target domain are captured in nature and in laboratories, respectively.} \label{fig:ip102_dataset} \vspace{-6mm} \end{figure} \begin{table}[!t] \small \centering \caption{Experimental results $(\%)$ when using SGW in various common CNNs on the Insect Pest Dataset (IP102).} \label{tab:insect_pets} \vspace{-3mm} \begin{tabular}{|c|c|c|c|} \hline \textbf{Networks} & \textbf{Methods} & \textbf{Nature} & \textbf{Laboratory} \\ \hline \multirow{2}{*}{VGG} & Pure CNN & 48.33\% & 47.04\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & \textbf{50.54\%} & \textbf{50.35\%} \\ \hline \multirow{2}{*}{ResNet} & Pure CNN & 53.05\% & 50.96\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & \textbf{55.51\%} & \textbf{53.87\%} \\ \hline \multirow{2}{*}{DenseNet} & Pure CNN & 58.82\% & 58.70\% \\ \cline{2-4} \multirow{2}{*}{} & \textbf{OTAdapt} & \textbf{62.42\%} & \textbf{62.32\%} \\ \hline \end{tabular} \vspace{-7mm} \end{table} \begin{table}[!b] \footnotesize \centering \vspace{-6mm} \caption{Experimental Results ($\%$) on the Office-31 Dataset (A: Amazon, W: Webcam, D: DSLR).} \vspace{-3mm} \resizebox{.5\textwidth}{!}{ \begin{tabular}{|c|c c|c c|c c|} \hline & A $\to$ W & A $\to$ D & W $\to$ A & W $\to$ D & D $\to$ A & D $\to$ W \\ \hline \hline GFK \cite{6247911} & 58.60\% & 50.70\% & 44.10\% & 70.50\% & 45.70\% & 76.50\% \\ \hline MMDT \cite{hoffman2013efficient} & 64.60\% & 56.70\% & 47.70\% & 67.00\% & 46.90\% & 74.10\% \\ \hline TCA \cite{TCA_method} & 72.70\% & 74.10\% & 60.90\% & $-$ & 61.70\% & $-$ \\ \hline DAN \cite{DAN_method} & 78.60\% & 80.50\% & 62.80\% & $-$ & 63.60\% & $-$ \\ \hline \hline VGG & 63.64\% & 71.23\% & 67.21\% & 65.37\% & 72.54\% & 68.67\% \\ +\textbf{OTAdapt} & \textbf{75.32\%} & \textbf{73.83\%} & \textbf{72.37\%} & \textbf{73.69\%} & \textbf{75.48\%} & \textbf{74.57\%} \\ \hline \hline ResNet & 61.55\% & 62.44\% & 74.87\% & 69.23\% & 71.63\% & 63.80\% \\ +\textbf{OTAdapt} & \textbf{73.33\%} & \textbf{73.29\%} & \textbf{77.78\%} & \textbf{75.50\%} & \textbf{78.21\%} & \textbf{73.46\%} \\ \hline \hline DenseNet & 67.42\% & 62.85\% & 65.35\% & 68.35\% & 42.06\% & 72.20\% \\ +\textbf{OTAdapt} & \textbf{78.49\%} & \textbf{77.51\%} & \textbf{77.78\%} & \textbf{74.39\%} & \textbf{79.27\%} & \textbf{73.46\%} \\ \hline \end{tabular} } \label{tab:office_exp} \end{table} \textbf{IP102 Dataset:} The IP102 dataset is a benchmark dataset for insect pest recognition \cite{IP_102_dataset}. In particular, it includes more than $75000$ images belonging to $102$ different categories collected from the Internet. In the taxonomic system of IP102, there are 8 types of crops damaged by insect pests, namely Rice, Corn, Wheat, Beet, Alfalfa, Vitis, Citrus, and Mango. Based on how the images were collected, we divide this dataset into two domains serving as the source and target domains. The source domain is a set of images collected in nature, in particular on farms and outdoors. Meanwhile, the target domain images are captured in laboratories. Fig. \ref{fig:ip102_dataset} illustrates examples of the source and target domains of the IP102 dataset. In this experiment, the proposed method is evaluated on the Insect Pest Dataset (IP102) \cite{IP_102_dataset} with common deep network structures. We use an image size of $224\times224$, and the batch size and learning rate are set to $128$ and $0.0002$, respectively.
Table \ref{tab:insect_pets} shows the results of our proposed method with various deep network structures on the IP102 dataset. The results show that our proposed method improves the recognition performance on the target domain, specifically by $3.31\%$, $2.91\%$, and $3.62\%$ on VGG, ResNet, and DenseNet, respectively.

\subsection{Office-31 and VisDA 2017 Experiments}
\textbf{Office-31 Dataset}: The Office-31 dataset is a benchmark dataset for domain adaptation \cite{saenko2010adapting}. In particular, this dataset includes 31 object categories in three domains, i.e., Amazon, DSLR, and Webcam. All 31 categories are objects commonly seen in office environments; example images from the three domains are shown in Fig. \ref{fig:office31}. The Amazon domain contains a total of 2,817 images, with 90 images per class on average. The DSLR domain contains 498 low-noise, high-resolution images. Finally, the Webcam domain contains a total of 795 low-resolution ($640 \times 480$) images. The proposed method is evaluated on the Office-31 dataset with common deep neural network architectures. In this experiment, we use images of size $224\times224$; the batch size is $128$ and the learning rate is set to $0.0001$. Table \ref{tab:office_exp} shows the results of our proposed method with various deep network architectures, along with the baselines Geodesic Flow Kernel (GFK) \cite{6247911}, Max-Margin Domain Transforms (MMDT) \cite{hoffman2013efficient}, TCA \cite{TCA_method}, and DAN \cite{DAN_method}. This experiment demonstrates that our proposed method achieves better recognition performance and outperforms the other domain adaptation techniques.

\textbf{VisDA 2017:} We have also evaluated our approach on the VisDA dataset \cite{visda2017}. The source domain is a collection of synthetic images, whereas the images in the target domain are real photos. We compare our results with DAN \cite{DAN_method} and DANN \cite{udab_icml}. As shown in Table \ref{tab:visda}, our approach outperforms these baselines. We also conduct an ablation study to illustrate the contribution of each proposed component. In particular, with the adversarial loss ($\mathcal{L}_{adv}$) alone, the accuracy is $68.97\%$; with the Gromov-Wasserstein loss ($\mathcal{L}_{SGW}$) alone, it reaches $70.53\%$; and combining the two losses further improves it to $71.88\%$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.40\textwidth]{figures/office31_sample.png}
\vspace{-3mm}
\caption{Examples of the Office-31 dataset.}
\vspace{-6mm}
\label{fig:office31}
\end{figure}
\begin{table}[!b]
\centering
\vspace{-6mm}
\caption{Experimental results on VisDA 2017.}
\label{tab:visda}
\vspace{-3mm}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{2}{|c|}{Methods} & VisDA \\ \hline
\multicolumn{2}{|c|}{Source Only} & 52.40\% \\ \hline
\multicolumn{2}{|c|}{DAN \cite{DAN_method}} & 51.62\% \\ \hline
\multicolumn{2}{|c|}{DANN \cite{udab_icml}} & 57.40\% \\ \hline
\multirow{3}{*}{\textbf{OTAdapt}} & $\mathcal{L}_{adv}$ & 68.97\% \\ \cline{2-3}
 & $\mathcal{L}_{SGW}$ & 70.53\% \\ \cline{2-3}
 & $\mathcal{L}_{adv} + \mathcal{L}_{SGW}$ & \textbf{71.88\%} \\ \hline
\end{tabular}
\end{table}
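To make the role of the Gromov-Wasserstein term in the ablation concrete, the following minimal sketch computes such a discrepancy between two mini-batches of features with the POT library. It illustrates the structure of the loss under stated assumptions (random placeholder features, uniform sample weights, squared-Euclidean intra-domain costs) and is not our exact implementation. Note that only intra-domain cost matrices are required, so no cross-domain metric needs to be defined.
\begin{verbatim}
# Sketch: Gromov-Wasserstein discrepancy between feature
# batches, computed with the POT library (pip install pot).
# The random features stand in for mini-batch deep features.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 256))  # source-domain features
tgt = rng.normal(size=(64, 256))  # target-domain features

# GW compares metric structures, so only intra-domain cost
# matrices are needed -- no cross-domain metric is required.
C1 = ot.dist(src, src); C1 /= C1.max()
C2 = ot.dist(tgt, tgt); C2 /= C2.max()

p = ot.unif(len(src))  # uniform weights over source samples
q = ot.unif(len(tgt))  # uniform weights over target samples

gw = ot.gromov.gromov_wasserstein2(C1, C2, p, q,
                                   loss_fun="square_loss")
print("GW discrepancy:", gw)
\end{verbatim}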
\section{Conclusions}
In this paper, we have presented a novel domain adaptation method that utilizes the optimal transport distance. Our proposed method is able to compare and align feature distributions across domains, whereas previous methods usually fail when a meaningful metric across domains cannot be defined. Through the experiments on MNIST and MNIST-M, we have shown that our method consistently improves performance across various deep network structures and outperforms other methods. Experiments on IP102, Office-31, and VisDA further demonstrate the effectiveness of our method on classification tasks.

\noindent \textbf{Acknowledgment:} This work is supported by the NSF Small Business Innovation Research (SBIR) program, the Chancellor’s Innovation Fund, and SolaRid LLC.

\bibliographystyle{IEEEtran}